Why Kotlin Is Better Than Whatever Dumb Language You're Using<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: arial; font-size: 11pt; white-space: pre-wrap;">Ah, clickbait. Where would the internet be without it? The answer will shock you!</span></div>
<b style="font-weight: normal;"><br /></b>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">But seriously, I didn't mean to insult your favorite language… much. After all, your language of choice is probably getting better at a glacial pace. Right? If your language isn't dead, then it's gradually getting better as they release updates to it.</span></div>
<b style="font-weight: normal;"><br /></b>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">How slowly, though? Well... If the language you're using happens to be Java, then you've no doubt realized that by the time Java becomes a really good language, you'll be dead. Loooong dead. I know we don't like to contemplate our own mortality, but when you plot the trajectory of Java from its birth 20+ years ago to its full knee and hip replacement with Java 8, you can't help but wonder, "Am I going to be stuck with this for literally the rest of my life? What if this is as good as it gets?"</span></div>
<b style="font-weight: normal;"><br /></b>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">Anyhoo, I ran across the old language question again because I finally tried my hand at Android development. I have an iOS client for my old game </span><a href="http://reddit.com/r/wyvernrpg" style="text-decoration: none;"><span style="background-color: transparent; color: #1155cc; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: underline; vertical-align: baseline; white-space: pre-wrap;">Wyvern</span></a><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">, and I decided somewhat recently to take the plunge and write an Android version. I didn't realize that it would turn into a language question (as in, "What the hell am I doing with my life?") But then, if you've done any Android programming at all, you'll know that this is in fact a </span><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: italic; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">burning</span><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> question in Android-land.</span></div>
<b style="font-weight: normal;"><br /></b>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">My first attempt at doing Android was last summer, and my god it sucked. I mean, they warned me. Everyone warned me. "The APIs are terrible", they all said. I can't say I wasn't warned.</span></div>
<b style="font-weight: normal;"><br /></b>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">How terrible could they </span><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: italic; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">be</span><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">, though? It's just Java, right?</span></div>
<b style="font-weight: normal;"><br /></b>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 700; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">Legacy Yuck</span></div>
<b style="font-weight: normal;"><br /></b>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">Unfortunately -- for long complicated legacy reasons that nobody cares about -- some of Android's core APIs really are bad. I mean baaaaad bad. Shut the book, take a deep breath, and go out for coffee bad. The warnings were spot on.</span></div>
<b style="font-weight: normal;"><br /></b>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">It's a mixed bag, though. A lot of their APIs are just ducky. I found plenty of things that are hard in iOS and easy in Android. Product flavors, the Downloads service, findViewById(), the Preferences activity, etc. There is a ton of stuff in Android that has no equivalent at all in iOS, so in iOS you wind up writing gross hacky code or building elaborate libraries to work around it.</span></div>
<b style="font-weight: normal;"><br /></b>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">But! There's a big "But". </span><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">When you're learning and writing for Android, everyone focuses on the bad APIs</span><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">, for the same reason that when you're in traffic you focus on the red lights, not the green lights. You tend to judge your commute by how many red lights it has.</span></div>
<b style="font-weight: normal;"><br /></b>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">And Android has some pretty big red-light APIs. Fragments, for example, are a well-known Flagship Bad API in Android. In fact the entire lifecycle is maddeningly awful, for both Activities and Fragments. iOS is living proof that it didn't have to be that bad. There's no defending it. It's so bad that when I tried it for the first time last summer, I just gave up. Threw in the towel. Screw it, I said to myself -- I'll hire someone to do this port, someday.</span></div>
<b style="font-weight: normal;"><br /></b>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">And I didn't look at Android programming again for another half a year.</span></div>
<b style="font-weight: normal;"><br /></b>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 700; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">Rescued by Russians</span></div>
<b style="font-weight: normal;"><br /></b>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">I kept hearing about this new-ish programming language for the JVM and Android called Kotlin. From Russia, of all places. More specifically, from JetBrains, the makers of the world-famous IntelliJ IDEA IDE, whose primary claim to fame is its lovely orange, green, purple and black 'Darcula' theme.</span></div>
<b style="font-weight: normal;"><br /></b>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><img height="375" src="https://lh6.googleusercontent.com/eNcWzzfFCrHOrcZXnPRuCJbET7RgsnMzTyFNDtgrCdvrKoW5nbZUbcAScGVPi7Cp1rLG4HcsH4BH2kC6vDCQ237A8-VVBSy97PWQKhMjuiOuf3ogNSpsc8QnnDYO_FIEoiPkVlVI" style="-webkit-transform: rotate(0.00rad); border: none; transform: rotate(0.00rad);" width="624" /></span></div>
<b style="font-weight: normal;"><br /></b>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> Figure 1: A thousand-year-old vampire expressing his excitement over Java 8.</span></div>
<b style="font-weight: normal;"><br /></b>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">So why is it called Kotlin? Well, there's a clear play on incrementing the 'J' in Java. Beyond that, one can only assume that 'Kremlin', 'Khrushchev' and 'KGB' were already taken, probably by UC Berkeley. So they did the next best thing and named it after a Russian military base. It's not a bad name, though. You get used to it.</span></div>
<b style="font-weight: normal;"><br /></b>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">Last year I noticed that Kotlin had a fair amount of buzz. Not hype, just... buzz. People were low-key buzzing about it. So, sure, whatever, I took a look, just like I've done for fifty or a hundred other languages in the past 15 years, on my Quest to Replace Java with Anything Reasonable. </span></div>
<b style="font-weight: normal;"><br /></b>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 700; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">Kotlin first impressions</span></div>
<b style="font-weight: normal;"><br /></b>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">When I first looked at Kotlin, I honestly didn't think there was any chance I'd use it in real life, not even the remotest possibility. I was just window shopping. First glance? Nothing immediately wrong with it. It's clean and modern. If anything it felt almost hipsterish in its adoption of all the latest new trends in language design. But there are oodles of languages like that. Just look at Rust. Another solid, appropriately-named language that almost nobody uses. How "good" a language is doesn't really matter from an adoption standpoint.</span></div>
<b style="font-weight: normal;"><br /></b>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">Kotlin came across as strangely </span><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><i>familiar</i></span><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">, though, and eventually I realized it's because it looks like Swift -- which I was slow to notice because my iOS app is in Objective C for irritating legacy reasons. And of course now I know that's backwards: Kotlin predates Swift by several years, so it's more accurate to say that </span><a href="http://nilhcem.com/swift-is-like-kotlin/" style="text-decoration: none;"><span style="background-color: transparent; color: #1155cc; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: underline; vertical-align: baseline; white-space: pre-wrap;">Swift is like Kotlin</span></a><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">.</span></div>
<b style="font-weight: normal;"><br /></b>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">But none of this made me want to sit down and </span><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><i>use</i></span><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> it. Kotlin was just another decent-looking language, and as a working stiff, I didn't really feel like putting in the effort to learn it well enough to do anything real.</span></div>
<b style="font-weight: normal;"><br /></b>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 700; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">From Kotlin Experiment to Java Expatriate</span></div>
<b style="font-weight: normal;"><br /></b>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">I don't remember exactly when or how I fell in love with Kotlin. I can tell you I sure as hell wasn't expecting it.</span></div>
<b style="font-weight: normal;"><br /></b>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">As best I can remember, my players had been begging me to do an Android version of my game. It launched to the Apple App Store in December, and within a few weeks, tons of old fans emerged to tell me that they couldn't play unless it was on Android. So, despite my swearing off Android "forever", I decided I'd better give it one more try. But </span><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: italic; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">something</span><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> had to change -- I wasn't going to be able to stomach the vanilla Android Java programming language experience. I needed a framework or whatever, to ease the pain.</span></div>
<b style="font-weight: normal;"><br /></b>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">In mid-January I did a quick-and-dirty evaluation and decided to try Kotlin, which also targets the Android Dalvik and Art runtimes. I think my evaluation was equal parts (a) Kotlin buzz, (b) wishing I'd written my iOS app in Swift, and (c) Kotlin had some sort of clever Android DSL called Anko, which I never wound up using, but which initially piqued my interest.</span></div>
<b style="font-weight: normal;"><br /></b>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">So I took it for a test drive. And within maybe four or five weeks, just like that, I was rewriting my 20-year-old game </span><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: italic; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">server platform</span><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> in Kotlin. One month of using Kotlin and I was </span><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">sold</span><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">. I mean, I'm not knocking Scala or any of those other languages, but for an ordinary working clod like me, Kotlin is </span><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: italic; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">perfect</span><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">. I just want street food, you know? 
Scala is nice but it's just too fancy for me, all frog legs and calf brains and truffled snails. I'm too blue-collar to use Clojure or Scala or any of those guys.</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><br /></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">It only took maybe 3 days to learn Kotlin well enough to start busting out code, fully aware that I didn't know what the hell I was doing, but knowing the language and IDE were doing a great job of keeping me out of trouble anyway.</span></div>
<b style="font-weight: normal;"><br /></b>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">And once I was a bit fluent, well, wow. I'm so jaded that I didn't think it was possible to love a language ever again, but Kotlin is just gorgeous. Everything you write in it feels like you made something cool. I've certainly felt that way with other languages before. But most of them had </span><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><i>really </i></span><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">steep learning curves. Kotlin is just butter: Tailor-made for us Java programmers who are still sort of scratching our heads over Java 8's parallel streaming filterable collecting scheduled completable callbacking futuring listening forking executor noun kingdom. Kotlin gives you all the same power -- substantially more, actually, with its coroutines support -- but makes it way easier to </span><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><i>say stuff. </i>Java 8 lets you say interesting things, but you have to do it with a mouthful of sand.</span></div>
<b style="font-weight: normal;"><br /></b>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">I think a fair share of why Kotlin is so easy to pick up, though, owes to its IDE support. The IDE support for pretty much every other JVM or Android language (besides Java) tends to be bolted on by a couple of community volunteers. Whereas Kotlin is made by world-class IDE vendors, so right from the start it has the best tooling support </span><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: italic; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">ever</span><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">. How many languages can you name that were built with IDE support from the ground up? Languages don't usually evolve like that; in fact many language designers outright eschew IDEs. (Hi Rob!) The only other one I can think of offhand is C# -- and C# is easily one of the best languages on earth, hands-down.</span></div>
<b style="font-weight: normal;"><br /></b>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">The upshot of being an IDE-first language is that you can type pretty much anything even </span><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><i>approximately</i></span><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> correct in a Kotlin buffer, and the IDE will gently tell you what you meant to type. Heck, you can even paste in Java code and it'll convert it for you automatically. If you like Java's IDE support, well, I'm pleased to report that Kotlin has pushed that experience to unprecedented levels. Even ex-Microsoft engineers tell me, "I used to think Visual Studio was the unbeatable flagship IDE, but IntelliJ is actually better!" I mean, I don't know Visual Studio, so I'm just relating what they say. But I'm betting IDEA is at least on par with VS.</span></div>
<b style="font-weight: normal;"><br /></b>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">Of course, I always need to switch over to Emacs to get real work done. IntelliJ doesn't like it when you type fast. Its completions can't keep up and you wind up with half-identifiers everywhere. And it's just as awful for raw text manipulation as every other IDE out there. So you need to use both. The Emacs Kotlin support is unfortunately only so-so right now, but presumably it'll improve over time. I constantly switch back and forth between Emacs and IntelliJ and I'm getting by. Good enough for now.</span></div>
<b style="font-weight: normal;"><br /></b>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">And there you have it. I spent over a decade searching for a language to replace Java. I mean I looked </span><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><i>hard. </i> Ironically,</span><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> it was only after I gave up that it finally came along. Go figure. Kudos to JetBrains for an amazing achievement.</span></div>
<b style="font-weight: normal;"><br /></b>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 700; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">Android: Kotlin's Killer App</span></div>
<b style="font-weight: normal;"><br /></b>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">It's nigh-impossible for any new language to get traction these days. That's not to say there are no new languages. There are neat new ones almost every year! </span><span style="font-family: "arial"; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">But nobody will ever, <i>ever</i> use them. It's hard bordering on impossible. The language market is fully saturated. The </span><span style="font-family: "arial"; font-size: 11pt; font-style: italic; vertical-align: baseline; white-space: pre-wrap;">only</span><span style="font-family: "arial"; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> way a new language can make a big splash -- and I think this has been true for at least ten, maybe twenty years -- is for it to have a "killer app". It needs a platform that everyone wants to use so badly that they're willing to put up with learning a new language in order to program on that platform.</span></div>
<b style="font-weight: normal;"><br /></b>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">It turns out the perfect killer app here -- and this brings us full circle -- is Android's crappy Red Light APIs. When you're zooming along the road in Android-land, every time you hit an API that stops you in your tracks, you curse the platform. It doesn't actually matter how many </span><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: italic; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">good</span><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> APIs Android has, as long as there are sufficiently many bad ones to make you pause and look around for big solutions.</span></div>
<b style="font-weight: normal;"><br /></b>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">And boyo, do big "solutions" ever abound in Android. For starters, there are a bunch of Java annotation processors, which are a sure sign there's a language problem afoot. And there are a bunch of mini-frameworks like (say) Lyft's Scoop. There are even full-on departures from Android: React Native, Cordova, Xamarin, Flutter and so on. Make no mistake -- people are looking for alternatives.</span></div>
<b style="font-weight: normal;"><br /></b>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">When you have a big gap like that, there's an opportunity for a language-based solution. And unsurprisingly, the full-on departures are all based around specific languages that aren't Java.</span></div>
<b style="font-weight: normal;"><br /></b>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">Kotlin's competitive advantage, though, is that it's </span><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: italic; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">not</span><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> a full-on departure. It's completely 100% interoperable and even interminglable with Java, almost (though not quite) to the extent that C++ was to C. Kotlin feels like an evolutionary step. You can just start mixing it right into your existing Android project, right there in the same directories, and call back and forth without batting an eyelash.</span></div>
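To make the "mix it right in" claim concrete, here's a minimal sketch of that interop: one Kotlin file calling plain java.util classes directly, with no wrappers or bindings in between. (The values here are made up for illustration; the interop mechanics are real.)

```kotlin
import java.util.Collections

fun main() {
    // A plain java.util.ArrayList -- the actual Java class, not a wrapper.
    val scores = java.util.ArrayList<Int>()
    scores.add(42)
    scores.add(7)

    // Calling a static Java method directly from Kotlin.
    Collections.sort(scores)
    println(scores)             // prints [7, 42]

    // Kotlin stdlib extension functions work on Java collection types, too.
    println(scores.maxOrNull()) // prints 42
}
```

The same thing works in the other direction: Java code can call Kotlin functions and classes without ceremony, which is what makes file-at-a-time migration possible.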
<b style="font-weight: normal;"><br /></b>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">All the other big Android platform contenders force you to learn and use a completely different language and platform, each with its own paradigms and idioms and quirks. Kotlin just lets you program Android like regular old working-class Android programmers do. It's all the same APIs, but they're somehow </span><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: italic; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">better</span><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> now. It feels an order of magnitude better.</span></div>
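Android SDK calls won't run off-device, so here's the same effect sketched with plain JDK types instead: the `apply` scope function and SAM conversion are two of the mechanisms that make familiar Java APIs feel lighter in Kotlin. (Illustrative only, not an Android snippet.)

```kotlin
fun main() {
    // apply {} turns setter-style Java configuration into a single block.
    val sb = StringBuilder().apply {
        append("Hello")
        append(", ")
        append("Android")
    }
    println(sb.toString())  // prints Hello, Android

    // SAM conversion: a Java interface (Runnable) from a bare lambda --
    // the same trick that shrinks listener boilerplate on Android.
    val task = Runnable { println("task ran") }
    task.run()              // prints task ran
}
```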
<b style="font-weight: normal;"><br /></b>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">I was first in line to throw the Android book at the wall and give up last summer, but now with Kotlin I'm finding Android programming is, dare I say it -- enjoyable? I think this suggests that Android's "bad" APIs weren't all that bad to begin with, but rather that Java has been masking Android's neat functionality with a bunch of Java-y cruft.</span></div>
<b style="font-weight: normal;"><br /></b>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">Kotlin manages to help you route around just about all of Android's Red Lights, and turns the experience into something that on the whole I now find superior to iOS development. Well, at least for Objective-C -- I'm sure Swift is awesome. Because it's like Kotlin!</span></div>
<b style="font-weight: normal;"><br /></b>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 700; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">What I specifically like about Kotlin</span></div>
<b style="font-weight: normal;"><br /></b>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">Well now, the </span><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: italic; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">specifics</span><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> will be another large write-up, so I'll have to do a separate post. Here I'll just mention a few high-level generalities.</span></div>
<b style="font-weight: normal;"><br /></b>
<br />
<ul style="margin-bottom: 0pt; margin-top: 0pt;">
<li dir="ltr" style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; list-style-type: disc; text-decoration: none; vertical-align: baseline;"><div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 700; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">It works like Java.</span><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> It's not "weird" like Clojure or Scala (and let's face it, they're both pretty weird.) You can learn it quickly. It was obviously designed to be accessible to Java developers.</span></div>
</li>
<li dir="ltr" style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; list-style-type: disc; text-decoration: none; vertical-align: baseline;"><div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 700; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">It's safer than Java.</span><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> It provides built-in support for many things that are handled in Java these days with annotation processors -- override checking, nullability analysis, etc. It also has safer numeric conversion rules, and although I'm not sure I like them, I have to appreciate how they force me to think about all my number representations.</span></div>
</li>
<li dir="ltr" style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; list-style-type: disc; text-decoration: none; vertical-align: baseline;"><div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 700; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">It's interoperable with Java.</span><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> And I mean their interop is </span><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: italic; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">flawless</span><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">. I've seen too many JVM languages go down in flames because you couldn't subclass, I dunno, a static inner class of a nonstatic inner class, or whatever weird-ass edge case you needed at the time. Kotlin has made Java interop a top priority, which means migration to Kotlin can be done incrementally, one file at a time.</span></div>
</li>
<li dir="ltr" style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-variant-caps: normal; font-variant-east-asian: normal; font-variant-ligatures: normal; font-variant-position: normal; font-weight: 400; list-style-type: disc; text-decoration: none; vertical-align: baseline;"><div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-weight: 700; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">It's succinct.</span><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> I'm a bit of a golfer, I'll be honest. All else being equal, I like shorter programs that do the same thing, if they're clear enough. Kotlin makes for a great round of golf. On average I find it to be about 5-10% shorter than the equivalent Jython code (which is sort of my gold standard), while remaining more readable and </span><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><i>far</i></span><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> more typesafe.</span></div>
</li>
<li dir="ltr" style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; list-style-type: disc; text-decoration: none; vertical-align: baseline;"><div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 700; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">It's practical.</span><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> Kotlin allows multiple classes per file, top-level functions, operator overloading, extension methods, type aliasing, string templating, and a whole bunch of other bog-standard language features that for whatever reason Java just never adopted even though everyone wanted them.</span></div>
</li>
<li dir="ltr" style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; list-style-type: disc; text-decoration: none; vertical-align: baseline;"><div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 700; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">It's evolving fast.</span><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> For instance they just launched coroutine support, which is going to provide the foundation for async/await, generators and all your other favorite non-threaded concurrency features.</span></div>
</li>
<li dir="ltr" style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; list-style-type: disc; text-decoration: none; vertical-align: baseline;"><div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 700; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">It's unashamed.</span><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> Kotlin often borrows great ideas from other languages, and doesn't try to hide it. They'll say, "We liked C#'s generics, so we did it that way." I like that.</span></div>
</li>
<li dir="ltr" style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; list-style-type: disc; text-decoration: none; vertical-align: baseline;"><div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 700; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">It's got DSLs.</span><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> No DSL should ever be created without </span><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: italic; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">serious</span><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> consideration of the alternatives -- but a DSL done well can be a powerful tool. Look at Gradle's DSL, for instance, in comparison to the thousands of lines of XML in a typical Maven project. Kotlin makes that kind of thing easy.</span></div>
</li>
<li dir="ltr" style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; list-style-type: disc; text-decoration: none; vertical-align: baseline;"><div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 700; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">It's got one hell of an IDE.</span><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> Lately I've taken to writing new files in Emacs, which lets me bust out a ton of code very quickly, code which just happens to be full of horrible errors. And then I open it in IntelliJ and hit Alt-Enter like 50 times while the IDE fixes everything for me. It's a great symbiosis.</span></div>
</li>
<li dir="ltr" style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 700; list-style-type: disc; text-decoration: none; vertical-align: baseline;"><div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 700; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">It's fun.</span><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> Kotlin is just plain fun. Maybe it's subliminal advertising, since their keyword for declaring methods is </span><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: italic; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">fun</span><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">. But it's somehow turned me from a surly professional programmer into a hobbyist again.</span></div>
</li>
</ul>
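A few of those bullets, sketched as one hypothetical file (the names are invented; the language features are real): multiple top-level declarations, data classes, extension functions, null safety, and string templates.

```kotlin
// Multiple declarations per file, no wrapper class required.

// Data class: equals/hashCode/toString/copy generated for you.
data class Player(val name: String, val level: Int)

// Extension function -- bolt a method onto String without subclassing.
fun String.shout() = uppercase() + "!"

// Null safety: a Player? can't be dereferenced without ?. or a null check.
fun describe(p: Player?): String =
    p?.let { "${it.name} (level ${it.level})" } ?: "nobody"

fun main() {
    val p = Player("rhialto", 20)
    println(describe(p))        // prints rhialto (level 20)
    println(describe(null))     // prints nobody
    println("kotlin".shout())   // prints KOTLIN!
    println(p.copy(level = 21)) // prints Player(name=rhialto, level=21)
}
```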
<b style="font-weight: normal;"><br /></b>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">Anyhoo, you get the idea. I'm packed up and moving into a new neighborhood called Kotlin</span><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">. I've raved about other languages plenty of times before, but never once, not </span><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: italic; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">ever</span><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">, did I rewrite any of my precious Java game server code in any of them. But here I am, busily rewriting everything in Kotlin as fast as I can.</span></div>
<b style="font-weight: normal;"><br /></b>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">I know a few other programmers who've also full-on converted to Kotlin. Most of them beat me to it by at least a year or two. We buzz about it sometimes. "Kotlin makes programming fun again," we tell each other. The funny thing is, we hadn't fully grasped that programming had become </span><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><i>non-fun </i></span><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">until we tried Kotlin. It takes you back to when you were first learning programming and everything seemed achievable.</span></div>
<b style="font-weight: normal;"><br /></b>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">Once again, big kudos to JetBrains. They've done an amazing job with this language. I am hats-off impressed.</span></div>
<b style="font-weight: normal;"><br /></b>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">Is Kotlin better than whatever dumb language you're using? I think so. Certainly so, if the language you're using happens to be Java. If your daily routine involves coding in Java, I think you'll find Kotlin is an unexpected breath of fresh air. Let me know what you think!</span></div>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: italic; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">Disclaimer: These are my own personal opinions based on personal Android development, and are not endorsed in any way by my employer nor JetBrains.</span></div>
<h2>
The Monkey and the Apple</h2>
It's been a while!<br />
<br />
I took a couple of years off blogging because I felt I didn't have much left in the way of interesting things to say. So I've just been programming, and studying, and learning this and that. I've been doing a bit of Cloud development, and I taught myself iOS development, and after years in the Google cocoon I poked my head out and learned how people do things in the real world with open source technologies.<br />
<br />
And lo at long last, after some five years of tinkering, I finally have something kind of interesting to share. I wrote a game! Well, to be more precise, I took an old game that I wrote, which I've perhaps mentioned once or twice before, and I turned it into a mobile game, with a Cloud backend.<br />
<br />
It has been waaaay more work than I expected. Starting with a more-or-less working game, and tweaking it to work on Cloud and mobile -- I mean, come on, how hard can it be, really? Turns out, yeah, yep, very hard. Stupidly hard. Especially since out of brand loyalty I chose Google's cloud platform, which 3 or 4 years ago was pretty raw. And let's face it, iOS APIs have evolved a ton in that timeframe as well. So even as "recently" as 2013 I was working with some pretty immature technology stacks, all of which have improved by leaps and bounds since then.<br />
<br />
And now I have all <i>sorts</i> of stuff to share. Definitely enough for a series of blog posts. But I also have less time than before, because it's all happening in my non-copious spare time, all late nights and weekends. And running an MMORPG is a fearsome task in its own right.<br />
<br />
Incidentally, I've just opened the game up for <strike>beta testing</strike>. So if you want to try it out while you read along, visit <a href="http://ghosttrack.com/" target="_blank">http://ghosttrack.com </a>to request an invite. <b>(Edit, 12/13/16 -- BETA IS NOW CLOSED.) </b>You'll need an iPhone, iPad, or iPod running iOS <strike>10.2</strike> <i>(Edit: 9.2!) </i>or later. I'd love to do Android and PC, but there's only one of me. For now.<br />
<br />
So where do I start? I guess the logical thing to do would be to start at the beginning, but screw all that, I'm starting with the monkey.<br />
<h2>
The Monkey</h2>
<br />
I had the following conversation with my wife the other day. It went something like:<br />
<br />
<b>Wifey</b>: Baby, I lost all my stuff!<br />
<br />
<i>(A lot of our conversations have started this way since April, give or take, when she started playing the game.)</i><br />
<br />
<b>Me</b>: What stuff baby?<br />
<br />
<b>Wifey</b>: (<i>wailing</i>) All the stuff I had in my house!! I dropped it all there and then boom, it disappeared, right in front of my eyes. I was watching it and then five seconds later it was gone. This happened before, and I didn't want to tell you because I wasn't sure, but I just saw it! It happened!<br />
<br />
<b>Me</b>: OK baby I'll come look.<br />
<br />
<b>Wifey</b>: See? It was right there! I had a lot of good stuff there and it's gone!<br />
<br />
<b>Me</b>: (<i>looking around)</i> I believe you baby.<br />
<br />
<b>Me</b>: (<i>looking around some more</i>) I think... I think the Monkey did it.<br />
<br />
<b>Wifey</b>: What monkey? What!? That monkey took my stuff?<br />
<br />
<b>Me</b>: (<i>checking</i>) Yep. It picked it all up and it's carrying it now.<br />
<br />
Sure enough, her pet monkey had picked up all her precious loot and valuables. But while she stared in disbelief at this unexpected betrayal, I was worrying about how I was going to get her stuff back. Because there were two problems.<br />
<br />
First, the monkey wasn't killable via combat, since I had marked pet creatures in your personal home as non-attackable. I don't know if that was the best decision ever, but it seemed reasonable at the time. And second, there was a chance that if I pulled out the big guns and killed it myself, for instance by invoking its <span style="font-family: "courier new" , "courier" , monospace;">kill()</span> method directly at runtime, its inventory (her loot) would be replaced with the default monkey inventory of bananas and fur, or whatever I'd given them.<br />
<br />
I'm pretty sure that in most states, accidentally replacing your wife's hard-earned treasure with bananas and bits of fur is legal grounds for divorce. So I was in a bit of a pickle.<br />
<br />
<b>Wifey</b>: How are you going to get it back? I can't believe that monkey! Can you just make it drop it?<br />
<br />
<b>Me</b>: Well I could, but the AI will just immediately pick everything up again.<br />
<br />
<b>Wifey</b>: I worked hard for that stuff! I can't even remember what I had! An amulet, a sword, a girdle, all kinds of stuff!<br />
<br />
<b>Me</b>: Don't worry, baby. Tap on the monkey, you can see it carrying everything. Just give me a second to figure it out.<br />
<br />
The picture below shows the predicament. Wifey is the naga warrior, I'm the old guy in the blue pajamas, just like in real life, and the monkey is barely visible 2 squares below her, by the Japanese shoji screen.<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgPZsovmyTzYbQU-oAtIE9_9xH0TlWW-cWRcHWqNiAur689tQM0jhd-tKqDwUTKnz8Vb__8cpgGNu7LIQ7GtyVSxjF-JBpNlrZmIy-yGE0KlfieLViSVNpez0Q7L3zE571sICJgXA/s1600/nagalinh_house.png" imageanchor="1"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgPZsovmyTzYbQU-oAtIE9_9xH0TlWW-cWRcHWqNiAur689tQM0jhd-tKqDwUTKnz8Vb__8cpgGNu7LIQ7GtyVSxjF-JBpNlrZmIy-yGE0KlfieLViSVNpez0Q7L3zE571sICJgXA/s1600/nagalinh_house.png" /></a><br />
<br />
Every creature in the game has an event queue and a command processor, and can respond to generally the same set of commands as players. Normally the AI decides what commands to give a monster, but you can inject your own commands under the right circumstances, or even take control for a while (e.g. with the Charm Monster spell). So I decided I'd try to command the monkey to give me the items, one at a time.<br />
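The shape of that design -- each creature owning a command queue that the AI normally feeds, with a back door for injected commands -- can be sketched roughly like this. (A hedged Kotlin sketch of the pattern described, not the game's actual code; all the names are invented.)

```kotlin
// Each creature owns a command queue; the AI enqueues commands on each
// tick, but commandNow() lets something else inject a command immediately.
fun interface Command {
    fun execute(target: Creature): String
}

class Creature(val name: String) {
    private val queue = ArrayDeque<Command>()

    fun enqueue(cmd: Command) { queue.addLast(cmd) }          // normal AI path
    fun commandNow(cmd: Command): String = cmd.execute(this)  // injected command

    // One game tick: run the next queued command, if any.
    fun tick(): String? = queue.removeFirstOrNull()?.execute(this)
}

fun main() {
    val monkey = Creature("monkey")
    monkey.enqueue { c -> "${c.name} picks the loot back up" } // what the AI wants
    // An injected command runs right away, ahead of the AI's queue:
    println(monkey.commandNow { c -> "${c.name} gives you the amulet" })
    println(monkey.tick())
}
```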
<br />
The game is written mostly in Java, but a good portion (maybe 25%) is written in Jython, which is an implementation of the Python language on the Java virtual machine. And, usefully, Jython has eval and exec functions for interactive code evaluation. So I opened up my command interpreter and went to work.<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">> exec monsters = me().map.monsterList</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">Ok</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">> eval monsters</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">[monkey]</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">> exec monkey = monsters.iterator().next()</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">Ok</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">> eval monkey</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">Monkey</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">> eval monkey.inventory</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">The MonsterInventory contains:</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> - bit of fur (0.06 lb)</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> - bone (1.5 lb)</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> - Amulet of Acid Resistance (0.12 lb)</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> - bag (0.5 lb)</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> ...</span><br />
<br />
So far, so good. I had a reference to the monkey in my interpreter, and I was seeing Wifey's stolen valuables, plus the expected monkey inventory. Now for the coup de grâce.<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">> eval monkey.commandNow("give amulet to rhialto")</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">None</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">Monkey gives you Amulet of Acid Resistance.</span><br />
<br />
Woot! Success. I had to keep chasing the monkey around, since offhand I couldn't think of a way to make it stand still. In retrospect I could have paralyzed it, or set its AI to the stationary AI used for immobile monsters. But I couldn't be bothered to look up how right then, since I hadn't ever been in a situation quite like this before, and my wife was alternating between indignation, amused disbelief, and near panic over her stolen stuff.<br />
<br />
I had a mechanism for getting the items back, so I just followed the monkey and issued those <span style="font-family: "courier new" , "courier" , monospace;">commandNow()</span> instructions.<br />
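In Jython terms, the retrieval loop was morally equivalent to the following sketch (the inventory and command plumbing here are invented for illustration):

```python
# Hypothetical sketch of the chase-and-retrieve loop: one
# commandNow()-style instruction per stolen item. Names are made up.
def retrieve_stolen(inventory, stolen, issue_command):
    issued = []
    for item in inventory:
        if item in stolen:
            cmd = "give %s to rhialto" % item
            issue_command(cmd)       # would be monkey.commandNow(cmd)
            issued.append(cmd)
    return issued

commands = retrieve_stolen(
    ["bit of fur", "bone", "Amulet of Acid Resistance", "bag"],
    stolen={"Amulet of Acid Resistance"},
    issue_command=lambda cmd: None)
print(commands)  # ['give Amulet of Acid Resistance to rhialto']
```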
<br />
Unfortunately, and to my lasting surprise, the monkey started ignoring me after the fourth or fifth item. It continued about its business, but it would not give me any more items. I still don't know exactly why, since this was such an edge case scenario. I have a large toolchest of utilities and commands for manipulating player inventories and map contents. But it's rare that you need to command a monster to give you stuff. Normally you get a monster's inventory the old-fashioned way. You pay the iron price.<br />
<br />
I was irked by the monkey's emergent nonchalance, so I pulled out the hammer:<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">> eval monkey.kill(me())</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">Ok</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">You killed monkey.</span><br />
<br />
Here, <span style="font-family: "courier new" , "courier" , monospace;">me()</span> is passed as the attacker.<br />
<br />
As I feared, the corpse's inventory was completely empty, because I clear out the inventory for pet monsters, to prevent abuses where you just create them in builder mode, take their stuff, and make infinite cash.<br />
<br />
So I busted out the <span style="font-family: "courier new" , "courier" , monospace;">clone()</span> command and manually recreated the rest of the missing items as best I could, and gave them all back to Wifey. I'm pleased to report that this story had a happy ending.<br />
<h2>
How to Make a Game</h2>
<br />
You look at a game like Wyvern, which is at its heart just a tiles game like the old roguelikes, and maybe a bit of a MUD, and you think, gosh, I could do that. And you can!<br />
<br />
Off the top of my head, you will need:<br />
<br />
* A cloud computing platform such as AWS, Azure or GCP.<br />
* Xcode and a Mac, or else Android Studio and a whatever-you-want.<br />
* A programming language and a compiler. These days, I would go with Kotlin.<br />
* A service for browsing and licensing music and sound effects. They have those now.<br />
* A service for hiring contractors for artwork and maybe level design. They have those too!<br />
* A hosted datastore, because seriously it's 2016, don't administer your own.<br />
* A source hosting and bug tracking service, such as Bitbucket.<br />
* A good lawyer and a good accountant.<br />
* About twenty years.<br />
<br />
Ha! I kid. I've only put about ten years into it, spread over the past twenty years. I started in 1996; my character Rhialto will turn 20 on March 1st. But in terms of total person-years, it's in the hundreds, largely due to area design contributed by dozens of passionate volunteers.<br />
<br />
It's amazing how much stuff we have access to since I started this project back in 1996. Back then, I had Java, and we're not talking about Java 8 with fancy lambdas and streams. We're talking Java 1.0.2, with that O'Reilly book with the referees on the front. You had to roll everything yourself back then. Uphill both ways, in the snow. <i>(Actually the game started in C++ in 1995, but I migrated to Java and never looked back.)</i><br />
<br />
I went through something of a life crisis in 2004, after 8 years of working on the game, because my productivity had tanked as the code base grew, and I wanted it back. So I stopped working on the game, for the most part, and went on a Grail-like quest to find a good language -- a quest that was ultimately unsuccessful, although I learned a lot and got some good rants out of it all.<br />
<br />
Nowadays, though, sheesh. Between GitHub and its infinite supply of high-quality libraries, cloud providers and their hosted services, Stack Overflow and its infinite supply of answers to just about every question you'll ever encounter, and all the services available for payments and contractors and everything else you could want -- I mean, it's a startup's dream come true. I am <i>way</i> more productive than I was in 2003. All I needed was a time machine.<br />
<br />
Even so, a game like this -- which, despite its simple appearance, is a true MMORPG with surprising depth -- is basically an infinite amount of work. Lifetimes of work. So you have to practice triage, time management, and stress management.<br />
<br />
But really, anyone can do it. You just gotta <i>want</i> it bad enough.<br />
<h2>
Apple vs. Android</h2>
<br />
I have enough material for lots of blog posts now, and I'd love to spend some time exploring Google's Cloud Platform (GCP) in future articles. I've learned a thing or two about Android as well. And Kotlin is absolutely entrancing. I haven't used it yet for anything serious, but it's one of the best new languages to emerge in a long, long time. Would love to talk more about Kotlin at some point.<br />
<br />
For today, though, I thought I'd offer a few musings on Apple, iOS, and their store ecosystem. I'm still no expert, and I don't do anything iOS-related at work. This is just some very personal impressions I've collected while making the game.<br />
<br />
A lot of people have asked me why I did my first mobile client in iOS rather than Android. The answer is monetization. iOS is straight-up easier to monetize. Android has cultivated a frugal audience, through both marketing and hardware choices, and that cultivation has been a success: Android users tend to be frugal. That doesn't mean they don't spend money, but it does mean they're more cautious about it. I have friends who've done simultaneous iOS/Android releases for their apps, and invariably the iOS users outspend the Android users by anywhere from 4:1 to 10:1 -- anecdotally, to be sure, but a little Googling is enough to support just about any confirmation bias you like. So I picked iOS.<br />
<br />
When I started, I didn't know Objective-C, and I started <i>just</i> a couple of months before Swift came out. But by then I was far enough into development, and wary enough from prior experiences with new languages, that I opted to continue in Obj-C. Obviously today I'd do it in Swift, and if I weren't always so time-constrained, I could even start introducing Swift class-by-class. Swift is cool. It actually reminds me a lot of Kotlin. I think language designers must have some sort of clique these days, whether they know it or not.<br />
<br />
What about iOS? Well, there's not much to it. And that's a <i>good thing</i>. It feels familiar, at least if you've done any sort of UI programming at all in your career. I've done some Java AWT and Swing, some Microsoft MFC and whatnot, some X-windows work, some web programming, whatever -- rarely anything major, but I've tinkered with UI throughout my career. Frontend UI is a skill every engineer should have, even if it's just one framework that you know well.<br />
<br />
Coming from all those frameworks, I had certain expectations, and they were 100% met by iOS. It has an MVC framework, and you add views and subviews, and you have all the hooks and lifecycle events you'd expect -- after learning Objective-C, I'd say it was only about four days (thanks to the excellent Big Nerd Ranch book) before I was able to start cranking out reams of code by copying it all directly from Stack Overflow, as is tradition.<br />
<br />
Why am I making such a big deal about iOS's almost boring familiarity? Because Android is the exact opposite of intuitive and familiar. I've gotta be a little careful here, since I recently joined the Android tools team at Google, and I don't want to throw anyone under the bus. They did the best they could with the environment and situation they had when they started back in the early 2000s, which featured phones that didn't even have memory -- they just had "address/contact slots". It was awful. They did fine.<br />
<br />
But now, thanks to Moore's Law, even your wearable Android or iOS watch has gigs of storage and a phat CPU, so all the decisions they made turned out in retrospect to be overly conservative. And as a result, the Android APIs and frameworks are far, far, FAR from what you would expect if you've come from literally any other UI framework on the planet. They feel alien. This <a href="https://www.reddit.com/r/androiddev/comments/2hlw20/am_i_retarded_or_android_development_is_a_mess/" target="_blank">reddit thread</a> pretty well sums up my early experiences with Android development.<br />
<br />
So as much as I'd love to make an Android client for my game, it's not going to happen for a while. Plus I'm still iterating heavily on the iOS UI, so I might as well wait until it stabilizes a bit.<br />
<br />
That's enough about Android for today. I always gauge how edgy my blog posts are by how likely I am to get fired over them, and my little indicator is redlining, so let's move back to Apple and iOS.<br />
<h2>
Apple: The Good, the Bad, and the Ugly</h2>
<br />
Objective-C isn't so bad. They have continued iterating on it, so even though its string handling is comically verbose, and it has no namespacing, and there are tons of other modern features missing, the language is pretty capable overall. It has generics, literal syntax for sets/dictionaries/arrays, try/catch/finally macros, extremely well-implemented lambdas with proper closure capturing (unlike nearly every other non-functional language out there), properties, and many other modern conveniences. The syntax is awful, and it can get pretty weird when you're bridging to the C APIs, but on the whole you wind up writing less code than you'd think.<br />
<br />
In fact, various bloggers have measured it, and if I recall correctly, the consensus is that Android Java is about 30% more verbose than Objective-C. Which is pretty counterintuitive, because the Java language itself, verbose as it may be, is <i>clearly</i> less verbose than Objective-C. What's happening here is that iOS has such good APIs, you wind up needing to write a lot less code to get your job done.<br />
<br />
So Obj-C isn't bad, and Swift looks really good. The APIs are good, the documentation is solid, and Apple is aggressively deprecating crummy old APIs (like <span style="font-family: "courier new" , "courier" , monospace;">UIAlertView</span>) in favor of better-designed ones. Everything is still there in the system, and you can see generations of whole layers of API access dating all the way back to the old NeXT computers from the late '80s (heck, everything in iOS starts with NS, for NeXTStep).<br />
<br />
But you don't have to use most of that stuff, because Apple has been constantly layering on new APIs that modernize it all. Unlike, you know... some other, uh, people. >.><br />
<br />
Xcode is pretty good. It used to be bad, but now it's not bad at all. Yes, it crashes more than I'd like, and yes, its refactoring support is abysmal. It's no Visual Studio. But "pretty good" is good enough. Because all I'm doing is copying code from Stack Overflow, really I have no shame whatsoever, and Xcode works great for that. It even formats it for me. Who am I to complain? Besides, I use Emacs for any serious editing.<br />
<br />
So for the Good, we have the languages, the APIs, and the tools. What about the Bad?<br />
<br />
Well, Apple's review process is really, really, <b>really</b> long and convoluted. Sure, I can totally understand why. They have millions of developers trying to shoehorn crap into their store, and they are trying to make a strong quality stand. But it means you're in for a wild ride if you're making anything more complicated than a flashlight app.<br />
<br />
First you have to go through their checklist of roughly seventeen thousand rules, and make sure you have addressed each of them, since all of them can result in a veto. And wouldn't you know it, I checked <i>exactly</i> sixteen thousand, nine hundred and ninety-nine of those rules very carefully, so my app was rejected. Because they don't mess about. You have to follow all of them.<br />
<br />
The story of my app's rejection is epic enough for an opera, but in a nutshell, Apple requires that all apps support ipv6-only networks. But none of the major Cloud providers supported ipv6 at the time of my submission, in late September. You're pretty well covered if you're just doing HTTP(S), but if you use sockets you're hosed. My game uses direct TCP/TLS connections to my cloud instances, so it didn't work on an ipv6-only network, and my app was kicked to the curb like so much garbage. At least they did it quickly.<br />
<br />
After some technical consideration, I did the only logical thing, and got on my knees and begged them for an exception, because what am I gonna do? Some cheesy hack with an ipv6 tunnel provider to a fixed IP address on a single instance? Well, yeah, that's exactly what I was going to do, if push came to shove, just to get through the review. Even though it's completely non-scalable. Desperate times.<br />
<br />
Fortunately, after a mere six weeks, and me finally sending them an angry-ish (but still cravenly and begging) note asking WTH, they granted me the exception for 1 year, backdated so it was really only 11 months, but whatevs. I was approved!<br />
<br />
<b>The Ugly</b><br />
<br />
Just kidding, haha joke's on me, I was NOT approved. Because when they gave me the exception, they also threw in a major feature request. Lordy. It's almost like they're a monopoly or something. They didn't like that my game required you to sign in via a social network -- Facebook, Google, or Twitter for now, since those are the sign-in SDKs that I've managed to wire up so far. So they asked me to implement Wyvern Accounts.<br />
<br />
Sigh. I was so relieved that I got the exception, I didn't fight it. I called back to ask if it was OK to require an email address, for account/password recovery functionality (but also because I use the email address to tie your characters together), and they said that was fine.<br />
<br />
So I went to work, even though my game had already been in Alpha for six weeks too long, and I implemented Wyvern accounts. New database table, new web service, new API service, new UI screens for registration and account creation and password resetting, new plumbing for passing credentials to the server, blah blah blah. God dammit, the nerve of them to ask for such a big feature.<br />
<br />
A week later when it was all finished, I realized FB/Google/Twitter all have minimum age requirements (all 13 years minimum because of COPPA), so I had been protected until Apple threw their curveball at me. Now I need underage reporting and god knows what else. Still working through it with the lawyers.<br />
<br />
I'd go back to Plan B (in iOS-land, B is for Begging), except that I actually sort of agree with Apple that I need this feature. Not everyone is on a social network. For example, there is an uncontacted tribe deep in the South American rainforest who are not on Facebook yet, although I believe they are still eligible for Amazon Prime. And also some of my alpha testers were struggling with it. I guess there are a lot of people who not only don't have GMail, but they prefer not to sign up for a free account. I can only assume the NSA is responsible for this phenomenon. But it means I most likely need Wyvern accounts.<br />
<br />
Another reason I sort of need Wyvern accounts is that Facebook, Google and Twitter all have very different philosophies about email-verification APIs.<br />
<br />
Facebook's philosophy is, roughly: "What's a few API calls between friends?" They have quotas, but they're so high that I won't have to worry about them for years.<br />
<br />
Google's philosophy is, roughly: "Our APIs should scale with your business." They have quotas, but they're so high that I won't have to worry about them for years.<br />
<br />
Twitter's philosophy is, roughly: "Go fuck yourself." I exhaust their tiny quotas every day, even after adding credential caching so that players only re-validate once every 8 hours or so. Even though I only have a few dozen, maybe fifty regular players right now, only a handful of which use Twitter. Their quotas are comically, absurdly low.<br />
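That credential-caching layer, by the way, is nothing fancy: conceptually it's just a TTL cache in front of the provider's validation call. A minimal sketch, with hypothetical names rather than my actual server code:

```python
import time

class CredentialCache:
    """TTL cache sketch for third-party sign-in validation results, so
    each player re-validates at most once per window. Hypothetical
    names; the game's real plumbing surely differs."""

    def __init__(self, ttl_seconds=8 * 3600, clock=time.time):
        self.ttl = ttl_seconds
        self.clock = clock
        self._cache = {}  # token -> (validated_at, result)

    def validate(self, token, remote_check):
        now = self.clock()
        hit = self._cache.get(token)
        if hit and now - hit[0] < self.ttl:
            return hit[1]               # fresh: no API call, no quota
        result = remote_check(token)    # the expensive provider call
        self._cache[token] = (now, result)
        return result

calls = []
cache = CredentialCache(clock=lambda: 0)
ok1 = cache.validate("token-123", lambda t: calls.append(t) or True)
ok2 = cache.validate("token-123", lambda t: calls.append(t) or True)
print(ok1, ok2, len(calls))  # True True 1
```

With an 8-hour TTL, a player who logs in repeatedly costs you one quota-consuming API call per window instead of one per login -- which still wasn't enough to survive Twitter's limits.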
<br />
So I'm probably going to have to yank Twitter out before launch, which would limit people to FB and Google sign-in. And that seems like... not enough options. I don't like having to maintain my own accounts, but I think I'm pretty well stuck with it.<br />
<br />
The takeaway (well, other than "don't use Twitter APIs"), is that Apple can jerk you around pretty much all they want, and you'd better like it. You should basically prepare for a long review.<br />
<br />
<b>Back to The Good</b><br />
<br />
Despite the bumps in the (long) road so far, some stuff has been great. TestFlight, which is Apple's beta testing system, is working nicely for me. It provides me with crash reports which have identified half a dozen real issues so far. The sign-up is a snap, and they'll let me have up to 2000 testers, which will help me make sure my stuff scales, if I can get that many.<br />
<br />
And their review turnaround time has been pretty good. It generally takes about a day, in my experience. I'm not sure why they require a manual review for my Beta builds, after they've already approved me for the actual store launch. And every build requires another 1-3 day review. But I've got a pipeline going, and I'm pleased overall with how straightforward it has been.<br />
<br />
I'm really worried about In-App Purchases. I offer them in my game (though it's definitely not pay-to-play), but Apple's testing for IAP leaves a lot to be desired. You have to sandbox it, and this requires setting up separate accounts. It's not possible to enable production IAP (with real money) before the actual launch. But their sandbox environment makes it really easy to screw up a transaction, after which your device will prompt you for a store login every 5 minutes for the rest of your miserable life, and likely into the hereafter. It's a mess.<br />
<br />
You can sort of enable IAP in Beta/TestFlight, but it's <i>free</i>, which means players would be able to acquire millions of coins for free, and it would require me to perform a full reset of everyone back to level 1 before launch. I'm trying to avoid that.<br />
<br />
So I have no idea if my IAP really works. I got it working in the sandbox at one point, and I'm hoping it works in prod, but until I'm confident that it's working, I'm going to have to charge for my app, to forestall the possibility of a massive meltdown from casual players (tourists, basically) eating up my server resources. I don't want to get a gigantic bill from Google. So I need to limit the growth as best I can for a while.<br />
<h2>
Going Forward</h2>
<br />
Building this game has been a lot of fun. I've learned more from doing this project than from anything I've ever done that was work-related, at any job I've had. Something about having to do a big project yourself forces you to pay attention to everything in a way that you rarely have to do at a corporation.<br />
<br />
I don't know if it's going to be a hit. Statistically, probably not. But I have some pretty darn loyal players. The game was down for <i>five years</i> (2011-2016), and when I brought it back up, a hundred or so old timers appeared out of nowhere. Many of them had to purchase iOS devices just to play, but they splurged. And they started playing insane hours. The ratio of 7-day-active to concurrent players has been crazy. In the old days it was about 100:1, so on a server that could comfortably support 100 concurrent players, I'd typically have about 10k 7-day actives. It was self-limiting because I only had one server back then.<br />
<br />
With these alpha testers, the ratio has been about 4:1. They're playing upwards of 8 hours a day, around the clock. And they're all over the world -- I have testers in Japan, England, New Zealand, Spain, Nigeria, Toronto, east coast, west coast, I forget where they're all from. But we're talking about a group of only about 60-70 regulars, so the diversity is quite remarkable.<br />
<br />
This <a href="http://wyvernsource.com/2016/10/06/remembering-players-letter-to-rhialto/" target="_blank">letter they wrote me back in 2012</a>, when the game went down so I could port it to Cloud, gives a pretty good sense of how much people like it.<br />
<br />
I'll report back in a few months and let you know how the launch went. Meantime, if you want to play, visit <a href="http://ghosttrack.com/">http://ghosttrack.com</a>. Hope to see you online!<br />
<br />
Rhialto<br />
<div>
<br /></div>
Steve Yegge<br />
<br />
<h2>
The Borderlands 2 Gun Discarders Club</h2>
<i>(October 8, 2012)</i><br />
<em>This is basically a review of, and a pros/cons rant about, Borderlands 2. If you're not into it, just don't read it! I'll write about stuff you like some other time. Maybe.</em><br /><br />
So!<br /><br />
I'm not the kind of person to say "I told you so." Noooo. Never. Well, never, <em>unless</em>, of course, I get to say it loudly, within hearing of a biggish stadium full of people. Which I can.<br /><br />
So here goes: <b>I told you so</b>. Toldya toldya toldya.<br /><br />
My predictions from my previous post, "The Borderlands Gun Collectors Club", all came <em>completely</em> 100% true, with Hyperionesque accuracy, Jakobsian impact, Maliwaney inflammatoryness, Tedioric blasting and surprisingly, even Vladofish speed. I made out like a Bandit.<br /><br />
I predicted, as you may recall, that (A) it'd be a great game ("duh"), (B) they'd screw up the token economy because they only partly understand it, and (C) as a direct result of B, players would gradually head back to Borderlands.<br /><br />
Three weeks after the release, I had my dreaded first "I really don't want to throw this gun away, but I have NO GODDAMN ROOM FOR IT, <b>THANK YOU RANDY PITCHFORK</b>" gun-discarding experience. And my reaction was, predictably, to think seriously about either creating a mule character or going back to play BL1.<br /><br />
I mean, I knew I'd have this reaction, but I failed to predict how amazingly fast it would happen. A week playing the game, another week on playthrough 2, a final week finishing all the optional side quests, and then boom -- the farming is fundamentally broken, so let's go play something else. But I don't <em>waaaaant</em> to! Why did they have to get this wrong? Why did I have to be so predictively correct? Argh!<br /><br />
Let me make this really simple and clear. You remember that famous exchange in <a href="http://www.imdb.com/title/tt0057012/">Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb</a>:<br /><br />
<b>Dr. Strangelove:</b> Of course, the whole point of a Doomsday Machine is <em>LOST</em> if you <b>keep</b> it a <b>SECRET</b>! Why didn't you tell the world, EH?<br />
<b>Ambassador de Sadesky:</b> It was to be announced at the Party Congress on Monday. As you know, the Premier loves surprises.<br /><br />
Well, if Dr. Strangelove were alive to play BL2 today, he'd have said:<br /><br />
<b>Dr. Strangelove:</b> Of course, the whole point of 87 Bazillion Guns is <em>LOST</em>, if you <b>don't let people keep them</b>! Why didn't you add more bank slots, EH?<br /><br />
I mean, at least Ambassador de Sadesky had a somewhat plausible excuse. But Gearbox has been thinking over this whole endgame-farming thing for, oh, probably eight years or more. How the hell did they arrive at the conclusion that "We should have 87 bazillion guns, and you personally should be able to keep, like, twelve of them!"<br /><br />
There's only one possible answer: they are clueless. I mean, don't get me wrong: they're also lovable, brilliant, passionate, technically astounding, and outright <em>visionary</em>. But they're also bumbling and clueless. They're the neighborhood kid who catches a lizard and thinks it's really cool, and it <em>is</em> cool, except he puts it in a box and it dies.<br /><br />
I'll do this as a Good, Bad and Ugly post, just so you know it's, like, balanced. If I were just gushing a bunch of fanboy praise, you know as well as I do that it wouldn't be as credible. You have to hear the bad with the good.<br /><br />
But ugh, they were so <em>close</em>. So close! The game is so amazing!<br /><br />
Maybe they'll cut my second prediction in half, and release a DLC or patch within 6 months that gets the folks who drifted back to Borderlands to start collecting in BL2 again.<br /><br />
That, or maybe I'll contribute to the BL2 player-file-editor project. Modding is stupid, stupid, stupid; it's 12 year olds advertising that they are, in fact, mentally and emotionally really twelve years old. The mindset there is so juvenile that it pains me to admit that sometime in my distant past, decades ago, I probably would have thought that way myself. (That is to say: "Hey, I'm going to <em>mod</em>, because it's <em>not allowed</em> so it must be <em>cool</em>, and even though a <em>fugging gorilla</em> could figure out how to do it, and it takes any <em>hint</em> of challenge out of the game and makes me look like I was <em>severely shaken</em> as a baby, but I'm going to go ahead and show off my modded guns as if I'm some sort of <em>super</em>-gorilla." Yeah, that mindset.)<br /><br />
But modding will, to a very limited extent, help work around Gearbox's cluelessness by letting legit endgame farmers have <em>a place to put all their fugging produce</em>.<br /><br />
Very limited, mind you. I'll tell you how it <em>should</em> be in the Ugly section. I'll talk about how to fix this situation, and, well, if Gearbox doesn't understand the fix, you can be sure some other upstart game company <em>will</em> understand it, and it'll all be the upstart's limelight soon enough. A few more years of screwing this up, and as far as Gearbox goes we'll be, like, "hey, remember Diablo?"<br /><br />
Let's hope they get the message. Randy, guys, please -- <em>get the message!</em><br /><br />
<b>The Good -- no, wait -- The GREAT</b><br /><br />
Actually we should do Great first, then Good/Bad/Ugly. Because BL2 is a capital-G Great game. Best game ever? Well, no. Nobody's gonna top RDR for a while. But <em>dayum</em>, BL2 is a great game. It's actually a great test of whether you're an idiot, because if you don't like it or it doesn't appeal to you... well, you might not be an <em>idiot</em> per se; there's bound to be <em>some</em> explanation... I guess. In theory. There <em>could</em> be some other reason than you being an idiot, however improbable.<br /><br />
Anyhoo, let's get The Big Question out of the way: Is Borderlands II better than Borderlands I? Well, the answer depends on whether you think The Empire Strikes Back was a better movie than Star Wars. It's also the answer to "Should I play BL2 if I haven't played BL1?" If you think watching The Empire Strikes Back after Star Wars yields an acceptably awesome cinematic experience -- which it probably does -- then yeah, play BL2 first. Go for it!<br /><br />
The Lucasian comparison runs pretty deep. Borderlands is dry and dusty, has dome-like dwellings, introduces cute talking robots, features fully armored imperial bad guys with bad aim, and has a slow (but interesting) story arc up to a really dramatic finish in the last act. Whereas Borderlands 2 is lush and fast-paced and story-thick and incest-ridden and stuff, just like Ep. V. Let's just hope they don't carry the metaphor to the third installment, unless of course they want to have Patricia Tannis dressed like Princess Leia as the kept-plaything of some huge talking Thresher, in which case they have my blessing.<br /><br />
BTW, as an aside, and my wife agrees 100% -- all this chatter about Lilith vs. Maya vs. Moxxi is just outright silly. The answer is: Tannis. Followed, we think, by Helena Pierce, eye or no eye.<br /><br />
Anyway, where were we. Oh yeah, The GREAT. Where to begin?<br /><br />
The story is awesome. Burch was amazing. Tiny Tina is incredibly awesome. Other Burch, also amazing. Handsome Jack is so awesome that I found myself rooting for him most of the time. Threshers are way more awesome than they looked in the previews. The AIs are uniformly great, even when they occasionally make the mobs cower in the corner as if you're the Blair Witch. I don't mind. And the guns, oh the guns, they are beautiful and fascinating and a joy to behold.<br /><br />
The voice acting is awesome, with one noteworthy exception: Axton was mis-cast. He looks like Captain America (or Thor, or whoever, take your pick), and he possesses the competent, modest, sexily reserved flawed-hero look of Captain America (or Thor, or whoever, take your pick). But his voice and dialogue are pure Jack Black at his cheesiest. Oops. Oh well. But the rest of them are cool. Can't say much without giving the plot away, but everyone's voice acting was great, and Tiny Tina stole the show. Well her, and the Goliaths.<br /><br />
The game balance is exquisite. There have been some missteps, naturally, and people are now vying to slay the much-vaunted raid boss Terramorphous in the fewest number of milliseconds with 100% legit gear. But on the whole the balance is superb.<br /><br />
My only cause for complaint is that the game balance feels far less serendipity-prone than Borderlands often was. They made BL2 so balanced that most of the time the stuff you find is really pretty boring. No one manufacturer shines above all the others, nor is one worse than the others (though I wasn't much of a fan of Jakobs or Pangolin, by and large). All the weapon types are about equally good. It feels as if they tightened the loot-rarity bell curve so they could keep the difficulty progression smooth. The game always felt challenging, albeit without ever descending into Survival Horror territory -- there is always enough ammo around to encourage exploration.<br /><br />
And when you do find the occasional legendary item -- I found only four of them during my first two complete playthroughs -- it will last you a good ten levels. Oranges are the new Pearls. They got this right, I think, and only in the difficult-to-balance endgame did they encounter any issues. In short, the game is challenging in a good way.<br /><br />
They kept the cel shading. Yay. Cel shading helps them avoid the Uncanny Valley where most other games reside today -- they look more and more realistic without actually looking, you know, realistic. The Borderlands franchise embraces the graphic-novel look, and it's always stylish and fresh. Plus they don't cel-shade a lot of stuff: water, ice, atmospheric effects, weapon effects, and so on, which makes for some eye-popping moments. And just in case the poetic beauty of their rendering approach is lost on you, they also include some actual eye pops.<br /><br />
They kept the humor. Oh, did they ever. I'd find myself giggling at 3am until I was snorting and wheezing. The humor runs the whole gamut, from the coarse and obvious to the surprisingly subtle. I love the Dr. Zed vending machines (and the voice acting, and Zed's character in general -- he may be my favorite.) And I almost lost consciousness from laughing when I realized, after my second weapon swap, exactly what The Bane's curse was. Oh man, that one almost killed me. That whole mission was extraordinarily well-designed. And of course Claptrap is funny as always. Crazy Earl, too.<br /><br />
Not to mention the talking weapons and armor. I <em>love</em> my Hyperion auditing sniper -- haven't discarded it even though I can't really justify a precious inventory slot for it. There's a lot of genuinely funny stuff in this game.<br /><br />
But I think the Best Humor award has to go to the bad guys. Handsome Jack has his moments, but it's really the bandits that steal the show. Just when you think you've heard them say everything, they'll surprise you. Crazed psycho bandits running at you screaming that Pluto is still a planet, or complaining about how goddamn cold it is outside, or reciting Hamlet... it just never gets old. Bandit humor has quickly become one of the legendary defining hallmarks of the Borderlands experience.<br /><br />
No question about it: this game has it all. It's a huge, sprawling open-world game with an engaging story, superb balance, exciting game mechanics, outstanding writing, and absolutely unparalleled replay value.<br /><br />
The greatness of the game pretty much dwarfs everything else I say here. It's a worthy successor to Borderlands, and at this point it's already become one of the most important franchises in gaming history.<br /><br />
I know the folks at Gearbox love this game and they want to keep refining it, though, so I'll weigh in my $0.02 on how it could be even more awesome next time around -- hopefully as soon as the next DLC.<br /><br />
<b>The Good</b><br /><br />
The game has some aspects that I feel <em>bordered</em> on greatness without actually achieving it.<br /><br />
The biggest issue I have with the overall world design is that it's a theme park. No other word for it. It's a <em>cool</em> theme park, and I do love me my theme parks -- I'm a Disney Vacation Club member and we go to theme parks several times a year, rain or shine. But some sort of dynamic was going on -- maybe an overreaction to the misguided criticism of the dry dustiness of BL1 (which is about as valid a criticism as saying "Star Wars had too much sand!") -- that made them overcompensate a little with the paint gun.<br /><br />
So even though the game is often beautiful, the colors are often too saturated. When they get it right, it's nothing short of stunning. The Southern Shelf and Sawtooth Cauldron are standout examples. Both juxtapose bandit shantytowns with a rugged natural beauty -- but it's a beauty with a relatively subdued palette, dominated by just one or two colors.<br /><br />
Some of the man-made places are gorgeous too -- Opportunity City comes to mind, and the Friendship Gulag. But they, too, have dominant primary colors or motifs that shape and define the visual experience into something unique and refined.<br /><br />
In several other locations they went a little overboard. I'm not sure if it's the cel-shading adding visual clutter (as seemed to be the case in Fink's Slaughterhouse and in Sanctuary), or if it was actual clutter (Thousand Cuts comes to mind), or if they just went a little overboard with the fully-saturated paint gun (Wildlife Preserve, maybe, or Tundra Express). Whatever the reason, the game winds up looking a tad overdone in places. Still awesome, yes, but with colors that clash rather than harmonizing. They need to follow the basic color-matching advice from, say, Vogue or Cosmopolitan: Any three colors go together, and any more than that looks like a peacock shitting rainbows. I'm pretty sure it was Cosmopolitan who said that.<br /><br />
The theme-park quality goes beyond the color scheme. A lot of the areas feel bowl-shaped and directly connected to other equally bowl-shaped areas with <em>completely different</em> styling. So it feels a bit like you're walking from Adventureland to Tomorrowland to Fantasyland.<br /><br />
And there's a lot of... homage, let's say... to other games. It felt almost like they had other-game and/or other-movie envy, even though Borderlands is a game to be envied all on its own. So there's a dash of Skyrim (The Highlands), some Red Dead Redemption (Lynchwood), some Jurassic Park (Wildlife Preserve), and other maybe-unnecessary tributes. And Sanctuary reminds me way too much, ironically, of the towns in Rage.<br /><br />
So the overall world design lacked a certain cohesiveness of vision that was present in Borderlands I. It feels on the one hand like they were trying to elicit a Tolkienesque or Homeresque journey from humble beginnings, increasing in scope, and ultimately walking into the heart of Mordor. There's a teeny bit of that going on. But it also feels like whoever had that vision was crushed by the weight of game directors all clamoring for unrelated themed areas to show off their... their what, I don't know. Just to show off.<br /><br />
On the whole, though -- coming from a guy who likes theme parks -- they did a really bang-up job of creating a theme park. The individual areas all have their own distinct personality. Some of them even have world-class atmosphere. The Fridge, the Bloodshot Stronghold and Ramparts, Overlook and the Highlands, and several other areas are really memorable. And the Arid Nexus Badlands were... well, that's my favorite area of the game overall, for reasons I can't go into, but wow.<br /><br />
My vote for Overall Best Area Design, though, goes to the Caustic Caverns. This area stood head and shoulders above the rest of the game, in the sense of being <em>new</em> -- who the hell has ever seen anything like that before? -- and <em>creepy</em>. I can only remember one or two times in my 35-year gaming history where I felt the sinking "I am on the WRONG side of the train tracks" feeling that I had upon entering the Nether Hive. The whole area gave me a new-found respect for -- and dread of -- the Dahl Corporation, who I hope will be the villains of some upcoming installment. Oh, and the, uh, mission I can't give spoilers about, but it takes you to the top floor in the Caverns -- that was hands-down the best side quest of the game.<br /><br />
My third favorite location, after the Badlands and the Caustic Caverns, was Lynchwood. I'm a sucker for that sort of thing. It didn't make any sense AT ALL -- it was a gratuitous anachronism in a game that thrives on anachronisms. But I loved it. Robbing the bank and getting out of town before the posse came: that was straight-up inspired writing. I loved the Lynchwood mini-boss and that whole plot line; I loved the Marshall's announcements; I loved the whole thing. Lynchwood may not have made much sense in the larger story, but it was unquestionably awesome.<br /><br />
Anyway, let's face it: the game is a theme park. Not that this is bad! It's Good. But I'd argue that it's not Great. I think true greatness necessitates a uniformity of vision that admits no room for tongues planted too firmly acheek. BL3 is going to have to make some hard choices about whether to be good or great.<br /><br />
Good is OK, though. Nothing wrong with Good.<br /><br />
<b>The Bad</b><br /><br />
I'll try to keep this short. Mostly this is stuff that could be addressed in a straightforward way in a patch or DLC.<br /><br />
There was no explanation as to why NPCs don't get to use the New-U stations. Just sayin'. They'd better retcon that in next time.<br /><br />
No in-game explanation of the Golden Key chest in Sanctuary, so I (like half the rest of the civilized world) used both my golden keys right away without realizing what they were. In retrospect I don't think it matters, since having awesome weapons is probably more useful early in the game than later on. But it should have been a conscious choice, and I, like half the rest of the civilized world, was pretty pissed off to find that I'd squandered my keys without so much as a warning dialog.<br /><br />
They changed it so you can't open the menu if you're not on the ground -- that is, when you're jumping, or falling, or climbing a ladder, or being flung through the air by external forces (e.g. geyser, grenade), or stuck atop an enemy you had the misfortune to land on. This is hugely screwed up, so I can only imagine they did it as a last-ditch workaround for a no-holds-barred showstopper Christmas-won't-happen bug, and it'll get fixed in an upcoming release. That, or they hate their customers and think they're scum. Time will tell.<br /><br />
This change did help me understand that one of the habits I'd truly come to enjoy in BL1 was jumping and then opening the menu while in mid-air. Seriously. It was fun. I did it on purpose, all the time. I can't really articulate why, but it was exhilarating. It's as if they took away my childhood with that one simple dick move. I sure hope it was a last-resort thing that they plan to fix.<br /><br />
Inventory management has taken a turn for the worse overall. Yeah, it looks slick, but when has Gearbox ever been about "looks slick" over playability? I mean, no cutscenes, right? (Or at least no cheesy prerendered ones -- they do all their cuts right there in-game, and you can usually walk away from them.)<br /><br />
In BL2 they put a ton of work into the look-and-feel of inventory management, but they failed to nail the usability. In an RPG, even a quasi-RPG like Borderlands, inventory management is all-important. So maybe it's their shooter background at work here. I dunno. But there are a lot of serious wtfs going on. Examples:<br /><br />
* When you want to compare item A to other items, and you eventually navigate to item B, then close the let's-compare transaction, it leaves the selection on item B. Last I checked, this is not the way rational thought works in any product designed by human beings with good intentions.<br /><br />
* You can mark items as "favorites", and then... nothing. You can't sort on them or do anything useful with them. But, alas, you CAN sell them, without any warnings or indicators that you just sold an item you'd marked as a favorite. So I have accidentally sold some really, really important shit, and only realized a few areas later, when it was too late to go back and buy them back. This has happened at least four or five times in my so-far 2.5 playthroughs of the game. That's too many for an experienced gamer. It means they have a UI problem.<br /><br />
* You can mark stuff as "trash", and sell it all at once. Except that's stupid. Everything should be trash by default. Most of the items you pick up ARE trash -- that is a natural outcome of the tightness of their rarity bell curve. If they had done the whole favorite/trash thing correctly, you'd only need to think about it at all when you picked up a blue-or-better weapon, at which point you could mark it as a favorite to prevent accidental sale. This, friends at Gearbox, would be less error prone AND less effort. Argh.<br /><br />
* There's still no way to "buy all" for ammo. Also, like in BL1, there are two concurrent views of your ammunition while you're shopping: the store's selection and your inventory levels. And, like in BL1, the two have unaccountably different sort orders. So as you move the selection cursor down the store's selection, the inventory cursor jumps around unpredictably. I can't believe they did this two games in a row.<br /><br />
* Unlike in BL1 (I think), when you're buying ammo by mashing buttons (because there's no bulk-buy function), you can <em>very easily</em> scroll past the grenades and start buying shit you didn't want while you're mashing the buttons. Again, no warnings, no "are you sure?", so it's really easy not to notice until a few load levels later.<br /><br />
* Like in BL1, they don't sort insta-health at the top of Zed's vending machines, which means if you run up to a machine to buy health in a firefight -- which is much more common now that they've eliminated portable health vials -- or if you're just not paying very close attention because your dog just knocked over your glass of water, then you stand a good chance of buying some expensive class mods and maybe not noticing. I've done this too. In all seriousness, sorting insta-health at the top is OBVIOUS, so only gross negligence can explain how it was done wrong two games in a row.<br /><br />
To be sure, they got a few inventory-management things right that were messed up in BL1. You can now compare items while shopping -- w00t! And the weapon cards show all the data rather than truncating. The item sorting makes a little more sense. The "examine this item" is REALLY cool, and I love just zooming and panning on my items to marvel at the intricate designs. But on the whole it was a step backwards, and it makes me very sad.<br /><br />
Other bad stuff... let's see. They still only let you quick-wield 4 weapons even though modern games all give you 8 slots on a wheel. In a game like BL, with elemental resistances and radically different opponent AIs, 4 slots just isn't enough. You need to be able to carry at least two different "weapon builds" with you. I don't care if we have to purchase them or work our way up, but we need more than 4 equipped-weapon slots. As things stand, swapping out weapons interrupts the otherwise smooth game flow and makes it sort of a drag. <em>Especially</em> when they don't let you open the menu mid-air. Jesus. How can the graphics be so beautiful, and the story so awesome, and the combat so smooth, but the inventory management is so screwed up? Is it different teams? What's going on here?<br /><br />
Let's see, what else, what else... oh yeah. On the PS3 version, every time you set your controller down it triggers a nuclear explosion. No, really. Well, it does if you're playing Axton with the middle skill tree. It's a side-effect of having switched the ability/grenade buttons with the zoom/fire buttons. I haven't made up my mind on this one; overall I think they probably made the right choice, but it reminds me of the Fable II days when you'd try to buy something from a blacksmith, hit the wrong button, destroy his house and send everyone screaming from the village for hours. It's not really ha-ha funny, at least not at the time.<br /><br />
Their bulk-vacuum function still sucks, so to speak. Actually the "interact with stuff" button hasn't changed behaviorally since BL1 in any significant ways. It still has all the old problems, and maybe some new ones.<br /><br />
For starters, they still have the horrible misfeature that holding the "pick up" button, which is used about 87 bazillion times per game session for bulk vacuuming, has different behavior if you do it on a weapon. What it does in that case is <em>grab it and wield it</em>, even though 9999 times out of ten thousand, the weapon in question is a piece of loot-crap destined for a vending machine. Way to optimize for that 1 in 10,000 case, Gearbox. Moreover, way to keep it around for game 2.<br /><br />
The vacuum button still does a piss-poor job of actually vacuuming. And they've added "auto-vacuum", which does an equally piss-poor job of auto-vacuuming. I can't tell you how many times I've been standing there with 4 hit points, examining the item-card for a health vial on the ground, obscured only by the "pick up" text because it's within reach, thinking "um, why am I able to read this?"<br /><br />
They really need to fix it so that every replenishing item in a ten-foot radius from your character automatically zooms to you no matter what. Otherwise it devolves into a guessing-game as to whether their algorithm will be smart enough, and of course when the gameplay is fast and furious, you have to guess conservatively -- which defeats the entire purpose of having the feature. Picking stuff up -- heck, being <em>near</em> anything -- degenerates into a button-mashfest.<br /><br />
And unless it's just my imagination, it feels like the pick-up button is <em>less</em> responsive than it was in BL1. When you open a chest, there is a nontrivial window during which the items <em>appear</em> to be grabbable, but pressing the button has no effect. You have to jab at it for up to half a second, maybe a second before the game says "aw fuck, that's right, I told them 'Pick Up' and they're pressing the button, so maybe they're actually trying to, you know, pick that shit up."<br /><br />
Is it really that hard to detect that the button is already down, once the items are actually grabbable?<br /><br />
And of course the whole cycle gets repeated twice per container, because the game is just as likely to ignore your button-press to <em>open</em> the container.<br /><br />
The last "Bad" line-item I'll whinge about is that although the game seems really generous about accuracy, they're real bitches about who died first, when you and the last nearby enemy expire at the same time. To illustrate how forgiving they are overall: you can be using a sniper from a thousand yards away, and pull the trigger when the cursor's kinda pretty far away from the mob's head, and it'll explode way more often than probability would dictate that it should. Very gratifying! No complaints here! And they're also really nice when it comes to landing jumps that you didn't quite hit, unlike in many other games. In general the game is pretty forgiving about controller accuracy.<br /><br />
But if you die "at the same time" as an enemy (i.e. it happens within ~100-200 ms before or after), you go into a fight-for-your-life bleedout. Sometimes it seems very, very clear that the enemy died second, but the game didn't actually realize it, and penalizes you. It seems unfair. To avoid any suspicions of stupidity on the part of the detection algorithm, it would be nice if they'd give you a half-second window AFTER the last enemy dies before your own death results in a bleedout. Hell, even 300ms would be nice. There are already plenty of legitimate situations for aggravating bleedouts -- the classic one being when the enemy shoots you and then walks around the corner. So I don't think it'll cause a balance problem to add the short grace period I'm proposing.<br /><br />
I know for a fact that there are some issues with the code that detects whether an enemy is dead. Several of the missions have resulted in me sitting around for a long, long time (several minutes) after the mission was obviously over, except the game couldn't figure out that it was over. Examples include the last round of the Natural Selection Annex, where I just wandered around the arena hoping the game would finally notice I'd won, and the plant-the-flag Sawtooth mission, where <em>twice</em> the last enemy disappeared minutes before the Slab King noticed I was victorious.<br /><br />
So I suspect there's a race condition here, in which you can legitimately die <em>before</em> the last opponent, but the game doesn't notice, and you bleed out while shaking your fist at the unfairness of it all. Why go there? Just put in a short grace period, and make sure it's really really clear that you died <em>after</em> the opponent -- often from a long-fused grenade, I've noticed. Then there's no cause for questioning the game code itself, which undermines player confidence in the fairness and quality of the engine.<br /><br />
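For what it's worth, the grace period I'm proposing is a tiny amount of logic. Here's a rough sketch in Python -- every name here is hypothetical, since I obviously haven't seen Gearbox's engine code:

```python
# Sketch of the proposed post-kill grace period (all names hypothetical).
# If the player dies within GRACE_PERIOD seconds after the last enemy
# fell, count it as a win instead of forcing a bleedout.

GRACE_PERIOD = 0.5  # half a second of slack; even 0.3 would help

def should_bleed_out(player_death_time, last_enemy_death_time):
    # Enemies still alive: normal fight-for-your-life bleedout.
    if last_enemy_death_time is None:
        return True
    # Player died inside the grace window after the last kill: no bleedout.
    return player_death_time > last_enemy_death_time + GRACE_PERIOD
```

With something like this in place, the ~100-200 ms simultaneous-death cases all resolve in the player's favor, and the only bleedouts left are the unambiguous ones.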
That's about it for the Bad. Inventory management woes, no menu while jumping, bulk-vacuum issues, and bleedout race conditions. That's pretty good, all things considered. Why not just fix them all in a patch, and make it perfect?<br /><br />
<b>The Ugly</b><br /><br />
There's only one Ugly in BL2, and it's a big one. The Ugly is that for no reason whatsoever -- negative reason in fact; it's flat-out anti-reason -- they don't give you enough bank slots to make farming fun for more than a few days.<br /><br />
They put an <em>astounding</em> amount of effort into the endgame mechanic, folks. This was not some casual design thing for them. They put in hooks for new raid bosses, tons of one-off unique legendary weapons with custom artwork and code, and a plethora of design decisions to prevent any one raid avenue from dominating the endgame. They made it so that every one of the dozens of bosses and mini-bosses has its own legendary that it can drop, so that farming is distributed across most of the locations in the game, which breaks up the monotony. They even added formal item-twinking across characters, amazingly enough.<br /><br />
But for all that, it's fundamentally broken. And what's more, they have this huge, gaping problem with a substance called Eridium (I'm sure they're sick of hearing about this by now), which lets you buy a limited number of carrying-capacity upgrades, including bank slots. So players are already simultaneously crying out for an Eridium-sink and more bank capacity. I mean, they should have seen this coming months ahead of their code freeze. It would have taken maybe 3 days of engineering and testing effort to make it so that Earl could sell you increased bank capacity at usury rates, even a geometric progression. And it would have been fine. Everyone would have been satisfied.<br /><br />
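To make that Eridium-sink idea concrete, here's the kind of geometric price curve I mean -- a quick Python sketch with invented numbers; Gearbox would obviously tune the constants:

```python
# Hypothetical Eridium pricing for bank-slot upgrades: a geometric
# progression, so each upgrade costs double the last (numbers invented).

BASE_COST = 4  # Eridium for the first bank-slot upgrade
GROWTH = 2     # doubling keeps late upgrades usuriously expensive

def upgrade_cost(upgrades_already_bought):
    return BASE_COST * GROWTH ** upgrades_already_bought

def total_spent(n_upgrades):
    # Total Eridium sunk after n upgrades: 4 + 8 + 16 + ...
    return sum(upgrade_cost(i) for i in range(n_upgrades))
```

With those made-up constants, the tenth upgrade alone would run 2,048 Eridium. That's an Eridium sink.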
Here's the thing, though. It's not just about capacity. If Gearbox wants to do this Right, by which I mean pull their heads out and do something that nobody in the game industry has ever done before, what they really need to do is give players a database.<br /><br />
That's what we want, really. You make 87 bazillion guns, and let us collect them? Well then we're going to want hundreds and hundreds, maybe thousands of guns in our collections. Not twenty, or whatever stupidly low number you've given us. That just spawns modding and mule characters and leaving the game altogether -- any outlet for the collection pressure; players will use them all.<br /><br />
What BL1 needed was a way for you to effectively manage a collection of a thousand guns. What if you want to look at all your Mashers? Or all your weapons by type, or by elemental damage, or by manufacturer? I'm not asking for a data warehouse here, or for some fancy text-based console-query UI. I mean, *<em>I</em>* would use it, but obviously we want to keep this mainstream. <br /><br />
If you start by formulating the basic problem as: "How do I manage a collection of a thousand guns," then your UX guys should be able to come up with something acceptable. No — you know what? Fuck acceptable. They should be able to come up with something <em>awesome</em>, something in keeping with the innovation and forward-looking badassery that we've all come to associate with Gearbox and Borderlands.<br /><br />
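Mechanically, "give players a database" doesn't have to mean SQL. Even a simple filter-and-sort layer over the collection would do it. A rough Python sketch, with a made-up Gun record -- none of these fields are Gearbox's actual data model:

```python
# Hypothetical gun records plus the filter/sort queries a collection UI
# would need: all your Mashers, all your fire SMGs, and so on.
from dataclasses import dataclass

@dataclass
class Gun:
    name: str
    maker: str
    weapon_type: str
    element: str
    level: int

def query(guns, maker=None, weapon_type=None, element=None):
    """Filter the collection on any combination of fields, best level first."""
    hits = [g for g in guns
            if (maker is None or g.maker == maker)
            and (weapon_type is None or g.weapon_type == weapon_type)
            and (element is None or g.element == element)]
    return sorted(hits, key=lambda g: g.level, reverse=True)
```

Slap a controller-friendly facet UI on top of that and you have a gun collector's dream.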
Ironically, BL1 was better at this -- a LOT better. Of all the inventory "improvements" introduced in BL2, the only one that improves gun collecting as a hobby is the "examine this gun in 3D" feature.<br /><br />
I imagine I'm going to do exactly what I (and everyone else) did in BL1, which is to figure out how to modify the bank-slots and inventory-slots counters in the player save files, and hope like hell that you guys can actually scale up to something reasonable without crashing or locking us out or triggering some other godawful poison-pill.<br /><br />
But just having a lot of slots is only a tiny part of the picture. Gearbox has created a gun-collector's game, but they haven't given us a way to collect guns. How messed up is that? <br /><br />
I think it's pretty messed up.<br /><br />
All this talk about BL1 has given me a major case of nostalgia. I love BL2, but I think I need to kill some Drifters to pull me out of this funk. I remember when I finally reached the point with Brick where I could walk around the sand dunes and mow down drifters -- on foot -- and live to tell the tale. BL2 doesn't have any moments like that, not yet. Nothing you had to work for like that, anyway. It took <em>months</em> of gun collecting before I was that badass in BL1.<br /><br />
And I remember looking through that sepia-tinted window on entering T-Bone Junction, that window filled with promise of adventure, seeing those rowboats suspended over a forty-foot drop to the salt sand, with the scorching wind blowing the makeshift wind socks tied to the Lucasian architecture. I remember hearing Knoxx give his reports to Admiral Mikey, Mr. Shank asking if I thought I was being <em>stealthy</em>, Athena barking her ludicrous military-speak to me -- a merc -- and Thirsty the Midget asking if I could turn the power back on in the Brandywine.<br /><br />
I remember. And I think it's time to head back. I knew this would happen. I knew Gearbox would screw us on the gun collecting, and I knew sooner or later it'd be back to Knoxx and the Armory and Crawmerax.<br /><br />
I just didn't realize it'd happen so <em>fast</em>.<br /><br />
Sigh.
<h2>The Borderlands Gun Collector's Club</h2>
<em>March 12, 2012</em>
<style type="text/css">
.dropcap {
font-weight:bold;
font-size:120px;
float:left;
padding:0;
margin:-4px 5px 0px 0px;
position: relative;
background-color:transparent;
line-height:0.9;
}
.note {
color: #4169e1;
font-style: italic;
}
</style>
<br />
<table><tbody>
<tr><td width="20%"></td><td><em>Craw is so damn frustrating!!! He and his sidekicks have killed me so many times that I think I am starting to get sore in real life....arghhh need better weapon!! He will die though, oh yes he will die and I will do the brick dance around his stupid purple corpse. --<a href="http://forums.gearboxsoftware.com/showthread.php?t=97845&page=6">dedbydwn</a></em>
<br />
<br />
<em>At the end of it all, Borderlands is well presented, but get under the {incomprehensible mumble} polish, it's just a dull, spare, nuts-and-bolts shooter with some unnecessarily good writing bridging the slow process of watching numbers steadily increase. A game for the kind of person who takes pictures of his car's odometer whenever it clocks another thousand miles. </em><br />
<em>--<a href="http://www.escapistmagazine.com/videos/view/zero-punctuation/1448-Borderlands">Zero Punctuation</a></em>
<br />
<br />
<em><b>Diablogenarian</b>: Someone who played Diablo as a kid, still waiting for Diablo III on his 80th birthday.</em></td></tr>
</tbody></table>
<br />
<br />
<span class="note"><b>Editor's Note:</b> I totally did not write this post. My friend did. Hi-*her* name is Chaz...mina. So for all you dear people waiting on me for "stuff", whether it's Wyvern or js2-mode or Amazon War Stories or Talent42 or PVOTU or the Effective Emacs movie screenplay or whatever: you can rest easy, confident in the knowledge that I am working 24x7 on your personal needs. I would never DREAM of playing Borderlands all day long for months on end. So *please* stop stalking me. Thank you!</span>
<br />
<br />
<h2>
Predictions</h2>
<br />
<span class="dropcap">G</span>earbox gets it. Well, sort of. I mean, it's kinda hard to tell.
<br />
<br />
I predict that Borderlands 2 is going to be an awesome game... but that's a pretty lame prediction, isn't it? We already <i>know</i> they "get it" at least that much. Gearbox knows how to make an awesome game.
<br />
<br />
Afterwards, though, I predict that everyone will immediately go back to playing Borderlands 1. Now <em>that</em> is a prediction for you.
<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://pcmedia.ign.com/pc/image/article/100/1007246/borderlands-20090724015041249_640w.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="225" src="http://pcmedia.ign.com/pc/image/article/100/1007246/borderlands-20090724015041249_640w.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><i>Looking back fondly at Borderlands 2</i></td></tr>
</tbody></table>
<br />
And then Gearbox will sit back and scratch their heads for an uncomfortably long time. Long enough to where you start wondering in all seriousness whether it's lice or something. But then after about a year, a very scratchy year, they'll release a DLC that <em>finally</em> gets people to stop playing Borderlands 1, over 5 years after its initial release. Even then it will be a slow transition, because they were too damned successful with the first game. People are living there now.
<br />
<br />
But I'm not convinced that Gearbox understands why. The evidence is mixed.
<br />
<br />
Today's topic: <span style="color: #aa0000;">the Magic Recipe for creating addiction.</span> Hell, it doesn't even matter if it's a game or not. If you want your website or product to be addictive, the recipe is really simple. I'll even share it with you. Gearbox somehow stumbled or fumbled into the recipe. But nothing they've said publicly since then indicates that they understand all the ingredients that went into creating it -- including a few all-important rats and cockroaches that fell in while they weren't looking.
<br />
<br />
Another prediction: there's going to be an MMO version of Borderlands someday. And when it happens, you will be able to find me there. I will renounce every last shred of my real-world identity and go live there until I'm finally whacked by one of my helpful stalkers.
<br />
<h2>
Wait, dude, hang on -- I played Borderlands. It was OK, but nothing to go cryin' to Mom about</h2>
Well, no. No, you didn't really play Borderlands. What you played was more like a teaser demo. If you made it through the main storyline and killed the huge vaginalien (complete with Japanese-style tentacles) in the comically misnamed "Vault", but <em>then you put the game away</em> -- well, my friend, I'm sad to tell you that you missed out. All you got was the merest sniff of what the game had in store. And it smelled like... well, the ending was a little fishy, if you catch my drift.
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://images.wikia.com/borderlands/images/a/a9/The_Destroyer.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="233" src="http://images.wikia.com/borderlands/images/a/a9/The_Destroyer.jpg" width="400" /></a></div>
<br />
See, you have to play the whole thing all the way through <em>twice</em>. Not with different characters, either. You have to play the <em>same exact character</em>, with all the loot and skills you built up on the first run-through. The second time around, the enemies will sort-of kind-of be leveled to you except not really. Not yet anyway.
<br />
<br />
OK then, what about after you finish it the second time? Ha! Still not done. Yes, it's true that you now have more than a sniff -- you're up to a taste. MMmmmm. But THEN you have to purchase at least two of the DLCs -- Knoxx for sure, and arguably Moxxi just for the bank -- and play those too. Now you're getting close to, uh, penetrating the secrets of Borderlands. (This appalling metaphor is 100% Gearbox's fault. I accept no responsibility.)
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiGH2HpvXfUd2UBwoE4RofZ9RhDt6cmXVXNpxcFVoxd5cjjIlkLpmX-lp26XZ9ZwoBg8pOdMnnVw6kMaGf9ArBVWdZ65L-fEnQ7_SnT2xUDcnOKCyEV6YybmT-ZxYQxq_2t2yJU/s1600/Mad+Moxxi.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="231" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiGH2HpvXfUd2UBwoE4RofZ9RhDt6cmXVXNpxcFVoxd5cjjIlkLpmX-lp26XZ9ZwoBg8pOdMnnVw6kMaGf9ArBVWdZ65L-fEnQ7_SnT2xUDcnOKCyEV6YybmT-ZxYQxq_2t2yJU/s400/Mad+Moxxi.jpg" width="400" /></a></div>
<br />
Finally, after all that -- in order to <em>truly</em> appreciate the greatness of Borderlands -- you have to stumble across not one, but TWO bugs in the game. Bugs without which everyone on earth would have basically forgotten the game by now, and it'd be a lovely historical footnote like BioShock or Fallout 3. A game to be played, sure, maybe a couple of times even, but shelved in the end. As opposed to the situation we have today, which is thousands of people playing it on- and offline around the clock, almost 4 years after the release.
<br />
<br />
If you didn't experience the end-game, then you didn't really play Borderlands. Paraphrasing <a href="http://borderlands.wikia.com/wiki/Marcus">Marcus Kincaid</a>, <i>"A Borderlands player without a lobster grudge is just a guy with a game."
</i><br />
<h2>
So What's Borderlands, you ask?</h2>
Oh, man. My sincere apologies to the six million or so of you who've played the game (according to Gearbox). But I guess some people didn't get the memo.
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://www.gamersunity.de/img/sys/2009-45/thumbs/borderlands-wallpaper-5.625-391.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="250" src="http://www.gamersunity.de/img/sys/2009-45/thumbs/borderlands-wallpaper-5.625-391.jpg" width="400" /></a></div>
<br />
Borderlands is a multi-platform 3D shooter/RPG released in 2009 by Gearbox, published by 2K. It was a surprise hit, since it had no prior franchise titles. It featured an open game world developed apparently from scratch using a modded Unreal Engine 3. The game won numerous awards and garnered solid (though not earth-shattering) reviews. Following its release, over the next 18-odd months, Gearbox released four purchasable Downloadable Content (DLC) add-on packs. Together the four DLCs add up to an experience approximately the same size and scope as the original game. Which was decently big.
<br />
<br />
Borderlands is not an MMO. It's 4-player co-op ("PvE", player-versus-environment). Which makes it a bit like the old Diablo games, except that in Diablo "co-op" meant "You log in and shout 'Hi Everybody!' and someone kills you instantly and takes every last goddamned shred of a possession you ever owned while mocking your ancestry." In Borderlands the online co-op play is far more civil. At worst, abusers might violate polite social convention, whereas in Diablo it was more like the Geneva Convention. Fortunately these days that kind of behavior tends to be confined to Xbox Live, where it's so similar to Microsoft's internal culture that they haven't noticed anything unusual about it.
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://www.atomicgamer.com/screenshots/game-3465/82069-800.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="245" src="http://www.atomicgamer.com/screenshots/game-3465/82069-800.jpg" width="400" /></a></div>
<br />
Borderlands plays a little like an Old West cowboy adventure, complete with massively overpowered six-shooters. Except it's in a sci-fi-ish scenario set on some backwater planet at the edge of nowhere -- a planet that a dozen or so arms manufacturers have exploited fully and are now using as a big trash heap, one largely overrun by bandits and hungry local fauna.
<br />
<br />
The game is distinguished by its use of a 3D rendering technique called cel shading, which imparts a cartoonish and often timeless look. Cel shading is somewhat controversial, but in Borderlands Gearbox somehow managed to make it gritty and ultimately appealing to the hardcore gaming crowd sending out their totally hardcore reviews from Mom's Basement Central.
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://images.wikia.com/borderlands/images/6/69/Borderlands-Spiderant.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="225" src="http://images.wikia.com/borderlands/images/6/69/Borderlands-Spiderant.jpg" width="400" /></a></div>
<br />
Borderlands is also distinguished by its superb production values. It has solid voice acting (verging on greatness in the DLCs), outstanding character design, intriguing area design, memorable visuals, satisfying sound and music, clever writing, a generally smooth frame rate, and an acceptably low bugscape -- well, at least in single-player mode. We'll get to the Borderlands multiplayer connectivity shit sandwich in a little bit. But even then it's one of the tastiest poop meals you can spend your hard-earned money on.
<br />
<br />
Oh, and Borderlands has <a href="http://www.youtube.com/watch?v=UVCOoQIDrt0">Claptraps</a>. Claptraps alone give Borderlands enough character to make it franchise-worthy. To be fair, the game is pretty light on story and character development -- even for a shooter, where the bar can't get much lower. But they've set up just enough tantalizing back-story to give them plenty of space to develop these things properly in sequels. And in the meantime, Borderlands really delivers in the Claptrap department.
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://www.co-optimus.com/images/upload/image/2009/borderlands_claptrap.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="225" src="http://www.co-optimus.com/images/upload/image/2009/borderlands_claptrap.jpg" width="400" /></a></div>
<h2>
Writing: good. Cut scenes: arghhh.</h2>
Another standout feature of Borderlands is its heavy bias towards fun, and towards actual, you know, <em>gameplay</em>. Which makes it unlike a lot of other titles today, which lamely try to buy you off with a bunch of nonplayable prerendered cutscenes, sometimes made even worse by embarrassingly juvenile dialog and character designs (hello BioWare, Lionhead, EA).
<br />
<br />
Folks: It's OK for the subject matter to be juvenile. It's OK for the characters to be juvenile. It's OK for the target audience to be juvenile. <b>But it's a fucking train wreck when the <em>writers</em> are juvenile</b>, because they'll alienate everyone above their own level of sophistication -- a demographic that just so happens to have the most disposable income to spend on games. I could rant about this for hours, but I can already tell this post is going to be huge.
<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg0qm_jxGmOiPxBpPgE7GvG_GFWiheppXPZZTwe1J6OlnhDuH0FjSja4A98yqNCKuTYf28zufzKHOjGp-79O6qibszmYKikSdxhwsRDNPNRJYfh_OBrP6JsqpwikSTCr9qIwANF/s1600/ass_effect.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="250" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg0qm_jxGmOiPxBpPgE7GvG_GFWiheppXPZZTwe1J6OlnhDuH0FjSja4A98yqNCKuTYf28zufzKHOjGp-79O6qibszmYKikSdxhwsRDNPNRJYfh_OBrP6JsqpwikSTCr9qIwANF/s400/ass_effect.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">"<a href="http://freodom.blogspot.com/2011/11/mass-effect-2-is-white-supremacist-game.html" target="_blank">Ass Effect 2</a>"</td></tr>
</tbody></table>
<br />
Ah, me. Anyway, Borderlands has good writing. Or as the internationally celebrated and occasionally intelligible game critic Ben "Yahtzee" Croshaw of "Zero Enunciation" fame put it, "unnecessarily good writing". Ben evidently didn't care for Borderlands, but then again he didn't make it to the endgame. Which is good for the rest of us, because now he can find time to shit all over Mass Effect 3. At least I hope he does.
<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://images.wikia.com/zeropunctuation/images/8/8a/Borderlands_1.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="223" src="http://images.wikia.com/zeropunctuation/images/8/8a/Borderlands_1.png" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><i>Ben and his friends playing Borderlands</i></td></tr>
</tbody></table>
<br />
So where were we -- oh yeah, cutscenes. Gearbox knew they had a damn good game right out of the starting gate, so they didn't need to try to bluff up an artificial sense of money's-worthiness by padding it out with massive cutscenes. So the few cutscenes they DO have are all (a) short and (b) full of awesome. As it should be.
<br />
<br />
Don't get me wrong. Borderlands isn't perfect. There are precious few games in history that can make that claim. And heck, there's nothing wrong with a few imperfections. They can sometimes give a game more character! In fact it is in precisely that fun-loving spirit of character-inducing imperfection that Borderlands features several <a href="https://www.google.com/search?q=Borderlands+vaginas">truly colossal vaginas</a>. I'm not just talking about the guy who implemented their driving physics, either. Good guess though.
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjFiJzLeSUHo_hBaKVN_yEKpiqPWxvHC430MX4jMoOnF0WJtEHcMF5sepQgneJbfc5q-mOOjZNQeKhlTAsBspXEa1Q2mSMNpBIfycKS1_777bISge0pROCCSPqqROTFFf-D3_zxUQ/s1600/rakk_hive.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="230" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjFiJzLeSUHo_hBaKVN_yEKpiqPWxvHC430MX4jMoOnF0WJtEHcMF5sepQgneJbfc5q-mOOjZNQeKhlTAsBspXEa1Q2mSMNpBIfycKS1_777bISge0pROCCSPqqROTFFf-D3_zxUQ/s400/rakk_hive.jpg" width="400" /></a></div>
<br />
But whatever. <b>None</b> of the awesome qualities or quirks of Borderlands really matters in the long run.
<br />
<br />
In the end, above all else, Borderlands will be remembered as the <b>only</b> single-player title since the Diablo games to capture the fun of a Diabloesque loot system. For almost ten years people tried and failed, and then Gearbox finally came along out of nowhere and got it more or less right.
<br />
<h2>
Is Borderlands an RPG?</h2>
Borderlands claims to be part shooter, part RPG. The RPG claim was IMO a minor marketing mistake, since people have preconceived notions about what it means, and Borderlands isn't an exact match. In fact I didn't play the game when it came out because I couldn't figure out what kind of game it was. It was only when I'd completely run out of stuff to play -- I, <a href="http://borderlands.wikia.com/wiki/Chaz#Trivia" target="_blank">Chazmina</a>, that is, and totally not that Stevey guy who's busy doing stuff for you -- that I started playing through old "Game of the Year" titles looking desperately for something that didn't suck. And it took me a while to see why they felt they could get away with calling it an RPG.
<br />
<br />
It turns out that Borderlands has classes and skill trees and skill points and experience points and levels and specializations and support for parties of adventurers ("vault hunters") with mixed and complementary skillsets. With that in mind, it's easy to see why 2K marketed it as part-RPG, knowing the game like they did.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://media.giantbomb.com/uploads/0/6414/1172208-borderlands_lilith_build_super.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="301" src="http://media.giantbomb.com/uploads/0/6414/1172208-borderlands_lilith_build_super.jpg" width="400" /></a></div>
<br />
But it might have been better to let the critics and players arrive at that conclusion, because it's missing many of the other elements people have come to expect of RPGs -- elements such as "the decisions I make affect the plot outcome", "I have meaningful, stateful, persistent interactions with individual NPCs", "I put on my robe and wizard hat", "People mock me in real life", and all the other things we've come to expect from role-playing games and gamers.
<br />
<br />
I think they might have done better initially by just marketing how fun it was. Instead it took a slow-burning word-of-mouth campaign before the sales really took off.
<br />
<h2>
You mentioned "fun"? I like "fun".</h2>
Borderlands is huge on the fun factor. You hardly realize it as you play it the first or even the second time, but the team at Gearbox put a lot of stock in <b>fun</b>.
<br />
<br />
For comparison, just look over at <a href="http://www.rage.com/gate/?return=%2F" target="_blank">Rage</a>, an Id Software title that came out last year. Rage is named for the emotion that new players feel when, after an hour of gameplay, they die for the first time and discover the game has no auto-save system. Rage (let's be honest here) copied a lot from the Borderlands crib sheet. Or they tried. But unfortunately all Id knows how to do well is graphics. So of course Rage has startlingly high frame rates and graphics that are more realistic than looking outside your basement window. As you play it you're all "wow man this is... uh, very <em>real</em>." But it winds up being a disappointing (though gorgeous) slogfest.
<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://www.gameguideblog.com/wp-content/uploads/2011/10/Rage-2011-Wallpaper-1024x640.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="250" src="http://www.gameguideblog.com/wp-content/uploads/2011/10/Rage-2011-Wallpaper-1024x640.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><i>Almost as pretty as Red Dead Redemption</i></td></tr>
</tbody></table>
<br />
Which is no surprise, since Id lost their divining rod for "fun" many, many years ago. You wind up playing through the game as a chore, out of nostalgia or professional respect. And today, just a few months after its release, nobody's playing Rage anymore. The fun it offers is ephemeral: typical fire-and-forget mediocre-shooter fun. It's really sad to watch Id sinking into irrelevance. Maybe they should just focus on selling their engine. Rage offers nothing at all in the addiction department, so if they were trying to mimic the success of Borderlands they did a piss-poor job of it.
<br />
<div class="separator" style="clear: both; text-align: -webkit-auto;">
<br /></div>
Then you have your RPGs. A lot of RPG-ish games these days like to focus on "realistic immersion", but it's hard to get right. RPG developers are always falling into this trap of trying to add "just enough" realism. But it's a slippery slope, and they add Weapon Repair and Realistic Ammo Limits and Bizarre Inventory Restrictions (<em>"you can carry 60kg of usable stuff, and 20 metric shit-tons of components"</em>) and Walking for Hours and all this other un-fun stuff. It's like they're trying to add a touch of Survival Horror to the game, but it just winds up making the gameplay irritating. RPG developers: let fun take precedence over realism, for cryin' out loud.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://vaystudios.com/img/Defined32-Close.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="210" src="http://vaystudios.com/img/Defined32-Close.png" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><i>Realism doesn't matter if the game is <b>fun</b></i></td></tr>
</tbody></table>
<br />
One developer who got the mix right for exactly one title: Bethesda's Fallout 3 was a perfect blend of gritty survival realism, shooter/looter joy and fiddly RPG mechanics. Their crowning achievement was the Dunwich Building, although probably only if you're a Lovecraft fan. But even if you missed that frightening little side mission, or didn't get to it until you were insanely overpowered (the downfall of every Bethesda game in history, and they still don't fucking get it right in "I'm so grossly overpowered that I now let my horse deal with any pesky dragons" Skyrim), Fallout 3 is still an amazing game. (And New Vegas was an amazingly good <em>attempt</em> at being a great game, so it gets partial credit.)
<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://www.wired.com/images_blogs/gamelife/images/2008/11/25/fallout3_dogmeat.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="147" src="http://www.wired.com/images_blogs/gamelife/images/2008/11/25/fallout3_dogmeat.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><i><br /></i></td></tr>
</tbody></table>
It's no wonder everyone wishes for a cross between Borderlands and Fallout. Think of it: all the cerebral fun and horror of Fallout 3 combined with the visceral fun and humor of Borderlands. It can be done! Someone will do it. We might all be Diablogenarians by then, but it'll happen someday.
<br />
<br />
Anyhoo...
<br />
<h2>
Fun Isn't Enough</h2>
If you spend enough time in Borderlands you start to develop a picture of their design meetings:
<br />
<br />
<i>"Hey, wouldn't it be cool if we had, like, angry midgets?"
</i><br />
<i><br /></i><br />
<i>"Uh, don't they prefer to be called Little Dudes?"
</i><br />
<i><br /></i><br />
<i>"Not on Pandora. On Pandora they prefer to be called aaaaAAarguhgh as they shoot your ass with a shotgun so big that it throws them onto their backs."
</i><br />
<i><br /></i><br />
<i>"Woah! There's no *possible* way Legal will let it through, but it does sound fun! Let's go with it for now."
</i><br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://www.bushmackel.com/pics/borderlands_midget.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="223" src="http://www.bushmackel.com/pics/borderlands_midget.jpg" width="400" /></a></div>
<br />
It's like Gearbox decided to take the FUN knob and turn that thing hard right until it breaks off, and ship whatever the hell emerges from that decision. And it works. The game is <em>unrelentingly</em> unrealistic from a typical RPG viewpoint, but you wind up forgiving and forgetting almost everything because the fun factor is through the roof.
<br />
<br />
Here's the problem, though: <b>fun isn't enough to create addiction</b>. Hell, if you want the latest proof, go play <a href="http://www.bulletstorm.com/home" target="_blank">Bulletstorm</a>. It is without question the highest fun-density ever packed into any game, EVER. Bulletstorm makes Borderlands look like a season-length National Geographic documentary series about kittens. But Bulletstorm blows your entire fun wad in one 10-hour sitting, and then it's over. You'll probably play it one more time, because you'll be thinking "Did I <em>seriously</em> just do all that shit? Must... do... again!" But after the second playthrough, which is 100% identical to the first playthrough, you realize that's it. It's over. You wish there were more, but it basically kicks you out. You're done. Move on to the next game.
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://www.gameranx.com/images/wallpapers/bulletstorm/12905324361080pbulletstorm_8.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="225" src="http://www.gameranx.com/images/wallpapers/bulletstorm/12905324361080pbulletstorm_8.jpg" width="400" /></a></div>
<br />
Fun alone won't keep people playing your game -- both offline and online -- <em>four years later</em>. Nope. In fact Fun has very little to do with the Magic Recipe for Addiction. All Fun can do for you is get people in the door.
<br />
<br />
We all know that MMOs keep players coming back day after day, long after the players have ceased to have any semblance of "fun" (at least in the usual sense of the word) while they're playing. Let's review how they do it. Incorporate these simple rules into your boring game, or into your shitty website that everyone is calling a "ghost town", and you just might salvage something halfway decent out of the mess you've made.
<br />
<h2>
The Mechanics of Addiction</h2>
I could write a big full-featured post on this topic, but that would be realistic and totally not fun. So I'll just dump the highlights on you.
<br />
<a href="http://1.bp.blogspot.com/_Zme48fcULcQ/TCyflGvmrJI/AAAAAAAAAFo/6EXJYjGODzM/s320/smiley-classroom.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="http://1.bp.blogspot.com/_Zme48fcULcQ/TCyflGvmrJI/AAAAAAAAAFo/6EXJYjGODzM/s320/smiley-classroom.jpg" /></a><br />
A <em>Token Economy</em> is any system in which you are awarded meaningless but highly visible "tokens" for Good Behavior -- that is, for the behavior the creator of the system is trying to provoke in you.
<br />
<br />
<b>True But Apparently Little-Known Fact</b>: Token economies are among the most <a href="http://en.wikipedia.org/wiki/Token_economy">powerful drivers</a> of human behavior. They're used in grade schools, prisons, mental institutions and the military to incent people to act in certain ways. And it works, boy howdy does it ever work. Once they start handing out those gold stars, you'd shoot your own grandmother to get one.<br />
<br />
Some companies think they have the whole Token Economy thing figured out, so they create a Badge system or (equivalently) a Trophy system. Badge systems are what stupid people do when they think they've figured out Token Economies.
<br />
<br />
Hey, don't shoot the messenger here. I'm just reporting facts. It's what I'm known for.
<br />
<br />
<b>Token economies need *scored* tokens.</b> That's why badge systems are lame. They can never generate the addictive pull because there's no high-score list possible, other than the overall badge count. The count itself can be reasonably addictive if there are enough players -- think "number of Facebook friends". But that's Weaksauce Flavored Sauce Substitute <em>(Note: contains no actual Weaksauce)</em> compared to the addiction levels achievable by having multiple token categories, multiple high-score lists, and a tight bell curve for token rarity.
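To make the badge-count-vs-scored-tokens distinction concrete, here's a minimal sketch in Python. Everything in it -- the category names, the score distribution, the function names -- is my own invention for illustration, not anything from an actual game: token values cluster in a tight bell curve, so the rare right-tail drops are the ones worth chasing, and every token category gets its own high-score list.

```python
import random
from collections import defaultdict

def drop_token(category, rng):
    """Award a scored token. Scores are normally distributed (a tight
    bell curve): most drops land near 100, and anything much past 130
    is a rare trophy. A badge system would just return `category`."""
    return (category, max(1, int(rng.gauss(100, 10))))

def leaderboards(players):
    """Build one high-score list per token category. More lists means
    more chances for any given player to rank somewhere."""
    boards = defaultdict(list)
    for name, tokens in players.items():
        totals = defaultdict(int)
        for category, score in tokens:
            totals[category] += score
        for category, total in totals.items():
            boards[category].append((total, name))
    return {cat: sorted(entries, reverse=True)
            for cat, entries in boards.items()}

rng = random.Random(42)
players = {
    name: [drop_token(rng.choice(["pistols", "snipers", "shields"]), rng)
           for _ in range(20)]
    for name in ["Chazmina", "Billy", "Marcus"]
}
boards = leaderboards(players)
for cat, ranking in sorted(boards.items()):
    print(cat, "->", [name for _, name in ranking])
```

The point of the sketch: a flat badge count supports exactly one ranking, while scored tokens in multiple categories support as many rankings as you care to slice.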
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://upload.wikimedia.org/wikipedia/commons/e/e9/South_African_military_decorations_-_1975.gif" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="320" src="http://upload.wikimedia.org/wikipedia/commons/e/e9/South_African_military_decorations_-_1975.gif" width="156" /></a></div>
Some token economies let you purchase the tokens at a high cost. Some token economies even have physical tokens. Disney <a href="http://www.ebay.com/sch/i.html?&rt=nc&_nkw=disney+pin&_ipg=200&_sop=3">understands this</a>. So does <a href="http://www.ebay.com/sch/i.html?&rt=nc&_nkw=louis+vuitton&_ipg=200&_sop=3">Louis Vuitton</a>. Scroll through a few pages and try to figure out where those prices are coming from.<br />
<br />
Token economies are fragile. If Billy breaks into the teacher's desk after hours and starts handing everyone fistfuls of gold stars, they become worthless and the economy collapses, irrecoverably. In high-end fashion handbag terms, counterfeit products threaten to destroy the token value. In game terms: game balance is hard, and getting it wrong can tank the economy.
<br />
<br />
All this is just another way of saying that <b>rarity creates desirability</b>. It's hardwired into the human brain. There are multiple complementary parallelizable exploitable ways of creating rarity. Most people don't get this, though, and consequently they create nifty products and systems that ultimately fail to achieve any kind of stickiness: creations destined to be nothing but flashes in the pan.
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://www.us-coin-values-advisor.com/image-files/rare-coins-index-last-12-mos-main-page.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="209" src="http://www.us-coin-values-advisor.com/image-files/rare-coins-index-last-12-mos-main-page.jpg" width="320" /></a></div>
If you don't already know all this stuff better than I do, then you know fuck-all about creating addiction, and it's no wonder your product's badge system isn't generating adoption or stickiness or 7-day actives or any of that other shit you're measuring.
<br />
<br />
It irritates me to the point of boiling rage that I have to explain this stuff -- that the people most companies put in charge of mission-critical initiatives are so completely fucking clueless, to the detriment of their companies <em>and</em> all the rest of us. So I'll stop here before I have a heart attack. You either get it, or you don't.
<br />
<br />
Gearbox gets it.
<br />
<br />
Well, sort of. I mean, it's kinda hard to tell. They definitely get <em>part</em> of it.
<br />
<h2>
Disallowing jumping is what Stupid Designers do</h2>
I need to relax a bit, so I'm going to time-out here for an utterly incongruous digression. This section has absolutely nothing to do with the rest of the post. But it has to be said.
<br />
<br />
<em>Jumping is fun</em>. Period. End of story. If playing your game involves manipulating a humanoid ragdoll in three dimensions, and it doesn't support jumping, then you suck. No, don't go pointing at Zelda. Zelda gets a bye because it's *Zelda* for christ's sake. But Zelda is un-fun exactly to the extent that it fails to support jumping, except off ledges which is kinda OK but not really true jumping.<br />
<br />
Practically the first thing everyone tries in a game is jumping. If the game doesn't let you jump, then people enter a Fuck You mode that can be hard (possible, but hard) to overcome.
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://thecontrolleronline.com/wp/wp-content/uploads/2011/12/SSX.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="280" src="http://thecontrolleronline.com/wp/wp-content/uploads/2011/12/SSX.jpg" width="400" /></a></div>
<br />
You kinda don't want your players to enter Fuck You mode. Just sayin'. Yeah, I'm going out on a limb here, but I'll contend that it's probably a good idea <em>not</em> to make a game that puts people in Fuck Everything About This mode. If you're not exactly sure what that mode looks like, well, it looks like the <a href="http://www.youtube.com/watch?v=LVEPMTihlE0&feature=results_main&playnext=1&list=PL6AF59CB16D7A2A4A">Acornfilms Dead Rising 2 review</a>. Which I <i>heartily</i> recommend watching in its entirety, but for the impatient the most relevant section is from <a href="http://www.youtube.com/watch?v=LVEPMTihlE0&t=6m20s" target="_blank">6:20-6:45</a>.
<br />
<br />
If a game doesn't let you jump over a foot-high obstacle, then -- that's right, you've got the idea now -- Fuck This Game. It might be possible to recover and get people to enjoy it anyway, but you're working against a bad first impression. How fucking hard can it really be, game developers?
<br />
<br />
Borderlands (of course) lets you jump pretty high, on account of low gravity. In contrast with Rage, which lets you do this pathetic little fart-jump that accomplishes nothing except making you feel even more sorry for Id than you already felt -- and let's face it, you do feel pretty sorry for them.
<br />
<br />
Metroid -- now THAT was a game that let you *jump*. Metroid got a whole lot of things right. But right off the bat they got jumping right. You can jump <em>really high</em> in Metroid. And that's before you find the mods that make your jumping really start to kick ass. Like Borderlands, the Metroid franchise focuses on <b>fun</b> over <b>realism</b>, and on gameplay over lame cut scenes.
<br />
<br />
In fact Metroid is even better than Borderlands in some aspects, such as the important aspect of not springing a Surprise Vagina on you at the end of the game. (<em>No</em>, Samus doesn't count. Jeez people!)
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://obsoletegamer.com/wp-content/uploads/2011/11/metroid_ending.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="231" src="http://obsoletegamer.com/wp-content/uploads/2011/11/metroid_ending.jpg" width="400" /></a></div>
<br />
And of course let's not forget Super Mario Galaxy, critically acclaimed as one of the greatest games ever created -- and it was basically a feature-length exercise in fancy new jumping physics and camerawork.
<br />
<br />
Make no mistake: jumping <em>puzzles</em> aren't for everyone. Especially when the camera management fucking blows so hard that it singlehandedly sinks the game at review-time, before it's even launched (hello and goodbye, <a href="http://en.wikipedia.org/wiki/Epic_Mickey">Epic Mickey</a>). Jumping puzzles are definitely not guaranteed to be a slam-dunk in the Fun department. It comes down partly to personal taste and partly to execution quality in the game's design and mechanics.
<br />
<br />
But <em>everyone</em> likes jumping.
<br />
<br />
I'm not saying games where you can't jump can't be cool. I'm just saying jumping is fun. In case you care.
<br />
<h2>
The Gun Collector's Club</h2>
We're back on topic! w00t!
<br />
<br />
Most of the people playing Borderlands today are <b>collecting guns</b>. There's no other good explanation for what's going on. The game isn't particularly social -- communication is highly limited without mics, which most players don't use. The mission replay value is modestly high, but there's only so much any game can do before familiarity and boredom set in. The only tried-and-true way to keep people coming back is with a token economy.
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://i241.photobucket.com/albums/ff181/Bhall_69/BorderlandsGuns.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="267" src="http://i241.photobucket.com/albums/ff181/Bhall_69/BorderlandsGuns.jpg" width="400" /></a></div>
<br />
Even supposedly pure-social, non-gaming environments are based on token economies. This is true of every successful ecosystem -- even Facebook -- because people crave status and recognition, and those needs generally derive from actions and events that are countable and rankable. Most sites that have succeeded in becoming addictive have an obvious token economy: karma, or star ratings, or anything along those lines to encourage users to keep contributing content. Sites that don't weave at least one token economy into their fabric are left wondering why nobody's showing up to their party. Or more accurately, people show up but they don't see any reason to stick around.
<br />
<br />
In the gaming world, traditional RPG-style experience points (XP) are countable, so a lot of games have global high-score lists for experience. But the smarter designers divide up their lists by in-game demographics, geography and other differentiators so that even if you have no hope of climbing the global high score lists, you can be one of the best in your area or specialization. The more subdivisions the merrier.
<br />
<br />
Unfortunately Borderlands caps XP -- you stop earning it when you hit max level. So no XP addiction for YOU.
<br />
<a href="http://images.wikia.com/borderlands/images/8/88/Weaponcolors.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="163" src="http://images.wikia.com/borderlands/images/8/88/Weaponcolors.jpg" width="200" /></a><br />
Instead, the primary addictive tokens in Borderlands are the guns. There are some other, weaker kinds of tokens in the game -- for instance there are collectible items dropped by Claptraps in the last DLC, and some of them are ultra-rare. So there's a small group of collectors off hunting those. But it's not as addictive as hunting for guns for a number of important reasons.<br />
<br />
Let's look a little closer.
<br />
<h2>
Ingredients for Addiction</h2>
Any game can <em>overlay</em> any number of token economies. There doesn't have to be just one. You can create token economies targeted at every kind of player, in the simplistic <a href="http://en.wikipedia.org/wiki/Bartle_Test">Bartle Test</a> sense of "kind".<br />
<br />
If you want to hook in <b>explorers</b>, just keep track of visited areas, missions completed and other countable explore-ish actions taken. If you want to hook in the <b>PvPers</b>, make a bunch of arenas, then keep track of a bunch of statistics about kills. (Um, or <a href="http://www.theverge.com/gaming/2012/3/10/2859950/diablo-iii-will-ship-when-its-not-done">not</a>, if you're Diablo III.) If you want to catch <b>socializers</b> in your web, keep stats on followers, likes/dislikes and all that happy social shit. For <b>builders</b>, keep stats on what areas they've built and how popular they are. Etc. You rope people in by counting stuff that they like to do and reporting it somehow -- preferably to everyone.
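The whole "count stuff they like to do and report it" pattern really is that mechanical. Here's a toy sketch of it -- all the stat names, players, and subdivisions are invented for illustration:

```python
# A token economy reduced to its skeleton: count whatever each player
# type likes doing, then rank it -- globally and per subdivision, so
# more players get a list they can top. All names here are invented.
from collections import defaultdict

class TokenEconomy:
    def __init__(self):
        self.stats = defaultdict(lambda: defaultdict(int))  # stat -> player -> count
        self.group = {}  # player -> subdivision (region, class, whatever)

    def record(self, player, stat, n=1):
        self.stats[stat][player] += n

    def leaderboard(self, stat, group=None):
        rows = self.stats[stat].items()
        if group is not None:
            rows = [(p, c) for p, c in rows if self.group.get(p) == group]
        return sorted(rows, key=lambda pc: -pc[1])

eco = TokenEconomy()
eco.group.update({"alice": "EU", "bob": "US", "carol": "EU"})
for player, kills in [("alice", 3), ("bob", 7), ("carol", 5)]:
    eco.record(player, "arena_kills", kills)

print(eco.leaderboard("arena_kills"))        # bob tops the global list
print(eco.leaderboard("arena_kills", "EU"))  # carol still gets to be #1 somewhere
```

Note the subdivision trick in action: bob wins globally, but carol tops the EU list -- the more slices, the more winners.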
<br />
<br />
Token economies also can be created from (or emerge naturally from) big, complicated rules systems that are heavy on memorization and light on deductive reasoning. Obvious examples include the National Football League, the Linux operating system, the underground music scene, the comic book scene, your neighborhood Bible-study group and other paper-and-dice RPGs.<br />
<br />
Whenever a community framework of <i>any kind</i> -- gaming, sporting, social, technical, whatever -- is based on a huge system of fiddly rules and trivia to memorize, it attracts <b>mavens</b> who derive satisfaction and status from their knowledge of the system.
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://www.youtube.com/watch?v=zng5kRle4FA"><img border="0" height="282" src="http://thenerdstalgia.com/wp-content/uploads/2011/06/Screen-shot-2011-06-10-at-4.09.15-PM-350x248.png" width="400" /></a></div>
<br />
Mavens create the pulse of a community. They set the beat. Some mavens generate new content. Others specialize in documentation. Some become critics and provide valuable reviews. Some become entrepreneurs within the economy and act as vendors, facilitators, go-betweens, fences or even thieves, depending on what's possible in the framework. Some mavens become hipsters and try to make the community seem more exclusive and prestigious by virtue of being insufferable (but ultimately valuable) dickheads.
<br />
<br />
A community only needs a small percentage of its members to be mavens in order to grow and thrive.<br />
<br />
If you count and score people's actions, and then stack-rank them in a set of high score lists, it sends the addictive pull soaring. Sometimes the stack ranking is even built right into the system. If you're a Freemason, then being a Master Mason is way better than being a lowly Apprentice. Oh sure, go ahead and laugh it up over their silly ranking system. Then go back to your day job and worry some more about your next promotion.
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://www.caterpillar.org.uk/warning/btn32.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="280" src="http://www.caterpillar.org.uk/warning/btn32.gif" width="320" /></a></div>
<br />
Explicit stack-ranking is such a critical ingredient that without it the addiction dish pretty much fails. It's a catalyst. It's yeast for the tasty Token Loaf.<br />
<br />
But in token economies with <i>physical</i> tokens -- not just counts of actions or connections, but distinct physical or virtual items that you can collect and accumulate -- stack ranking isn't enough by itself. You also need <b><em>display cases</em></b>. Because -- follow my reasoning carefully here -- what the fuck good is collecting things if you can't show off your collection?
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://www.prlog.org/10162399-the-uniform-display-case-us-army-version.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="348" src="http://www.prlog.org/10162399-the-uniform-display-case-us-army-version.jpg" width="400" /></a></div>
<br />
Actually even without display cases collecting can still be fun, because it scratches that collector's itch, which has its roots in the fundamental pattern-recognition activity our brains engage in to survive. People can always work around the social issue by talking about their collections in forums or whatever.
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://www.prguitarman.com/photos/2010/Stuff/Pokemon/Collection/img255.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="320" src="http://www.prguitarman.com/photos/2010/Stuff/Pokemon/Collection/img255.jpg" width="231" /></a></div>
But letting people show off their collections makes it a whole different ball game. Communities will <em>always</em> find a venue for showing off their collections, even if you're too stupid to provide one for them. But if you feature it directly in the system, it concentrates everyone's focus within the system, making it inherently stickier. (In the sense of "measurable consecutive hours spent on the site.") If, on the other hand, you force people to wander off to eBay or a random forum to share and trade their collectibles, then you're letting your revenue stream walk out the door.
<br />
<br />
A display case in a game can be as simple as allowing you to look at someone else's inventory. Seriously, how hard is that, Gearbox? Players want this feature so badly that every day they risk losing their best items by dropping them and picking them up again just so others can see them flash by, you lovable dumb fuckers! How can you not know this by now?
<br />
<br />
A token display case can be <em>anything</em> and <em>anywhere</em>, as long as it has the player's name and hopefully pic attached to it somewhere.
<br />
<br />
Oh yeah. Pics. Fuck me, now there's a side-rant for you.
<br />
<br />
<b>TL;DR:</b> <b>Personalization is another <em>highly</em> key ingredient for addiction.</b> Everyone wants to make their avatar stand out as a unique reflection of their own personal bad taste. And everyone wants a fucking profile page. Christ, now we're getting into shit that's so painfully obvious that it makes my eyes twitch, but most companies <em>still</em> don't seem to get it.
<br />
<br />
In any case I won't talk too much about it today, except to observe that no matter how well you support custom avatars, you could be doing more, and it <em>will</em> make people happier. Doesn't even matter what you do, as long as it's more personalizable.
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://www.blogcdn.com/www.joystiq.com/media/2007/01/evil_mii-425.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="285" src="http://www.blogcdn.com/www.joystiq.com/media/2007/01/evil_mii-425.jpg" width="400" /></a></div>
<br />
And even a little bit of personalization is infinitely better than nothing. Even the cookie-cutter new <a href="http://reckoning.amalur.com/">Kingdoms of Amalur</a> is roughly a 1.5 on a 1-10 scale here -- but at least it's not a zero. Game designers have been offering personalization for at least <i>a thousand years</i>, but Borderlands is like a 0.2, god dammit. They let you change your shirt and hair color. Whoop.<br />
<br />
But let's just move on; we've got bigger fish to fry.
<br />
<h2>
The Borderlands Recipe</h2>
OK, we're finally ready to look at what's driving people to play Borderlands four years after the launch, and three and a half years after almost any other game would have disappeared into the annals.
<br />
<br />
We'll take a look at what Gearbox did right, and then I'll bag in a mean but friendly way on Gearbox for being total jackasses, and then we'll have cake.
<br />
<br />
The Borderlands token economy is based on <em>items</em>, 90% of which are guns. Weapons in games are an especially powerful kind of token because they're self-reinforcing: the better your weapons, the easier it becomes to collect more of them.
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://images1.wikia.nocookie.net/__cb20100311122005/borderlands/images/a/a0/46_Liquid_Penetrator.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="300" src="http://images1.wikia.nocookie.net/__cb20100311122005/borderlands/images/a/a0/46_Liquid_Penetrator.jpg" width="400" /></a></div>
<br />
Borderlands has a really clever system for generating randomized guns. The guns -- made by around a dozen manufacturers -- have a whole bunch of variable parameters: damage, range, accuracy, rate of fire, magazine size, reload speed, zoom, sway, recoil, bullet speed and "elemental" effects. Some rarer guns also have fancy custom effects -- stuff like increased critical-hit damage, ricocheting bullets, increased blast radius, multiple projectile effects, unusual bullet trajectories, you name it.<br />
<br />
Even disallowing a bunch of overpowered or nonsense combinations, when you multiply out all these dimensions you have between 8 and 17 million possible guns, depending on who's counting. That's a pretty good spread.<br />
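For the curious, here's roughly how that combinatorial explosion works. The per-dimension counts below are made up -- Gearbox's real part tables aren't public -- but multiplying out dimensions like this is exactly how you land in "millions of guns" territory:

```python
# Back-of-the-envelope count of possible guns, using invented
# per-dimension counts (not Gearbox's actual numbers).
from math import prod

dimensions = {
    "manufacturer": 12,
    "body": 6,
    "grip": 6,
    "barrel": 6,
    "magazine": 5,
    "scope": 5,
    "accessory": 10,  # the fancy custom effects
    "element": 6,     # none, fire, shock, corrosive, explosive, ...
    "material": 3,
}

raw = prod(dimensions.values())
print(f"{raw:,} raw combinations before culling the nonsense ones")
```

Whether you end up at 8 million or 17 million depends entirely on which combinations you disallow as overpowered or nonsensical -- which is presumably why nobody agrees on the count.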
<br />
If all you want to do is finish the game and move on, then you don't need to think much about guns. You only need a handful at most -- one each for short-range, medium-range and long-range, and maybe a couple of elemental effects for enemies that require them. Guns are leveled, so every once in a while you'll want to hit a BioShock-style vending machine and upgrade. But you can get through the game with nothing but a good submachine gun, a combat rifle and maybe a sniper rifle.
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEigyCM-iPItfRhBryrL93OWBz6b5UFsj66WfQw7vD0EzHNWTOYQEXhEFDLEsjZm0HXpjigqdY99fk4DOJ0mMOp81FpNQgVniHKwffKmXJzMc7_p6LO3Tfl3U1q831FJPStk9Omr/s1600/It+R+a+Vending+Machine!.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEigyCM-iPItfRhBryrL93OWBz6b5UFsj66WfQw7vD0EzHNWTOYQEXhEFDLEsjZm0HXpjigqdY99fk4DOJ0mMOp81FpNQgVniHKwffKmXJzMc7_p6LO3Tfl3U1q831FJPStk9Omr/s400/It+R+a+Vending+Machine!.JPG" width="400" /></a></div>
<br />
To avoid running out of ammo in the final vaginal assault, I suppose it's a good idea to have one weapon in each of the game's seven ammunition categories. But even that's not strictly necessary, because there are guns and class mods that regenerate ammo for you, plus a fair amount of ammo just lying around in most places. Borderlands is definitely not Survival Horror. There's <em>plenty</em> of ammo.
<br />
<br />
So a lot of people (including me, the first time around) just notice in a vague way that their guns seem to be getting better as the enemies are getting stronger, and that's about it.
<br />
<br />
During my second playthrough a year later, I was surprised to learn online that each gun has half a dozen distinct components. A gun's abilities derive from the assembly of its grip, body, barrel, scope and so on -- each of which has its own rarity and special effects. You can actually look at a gun from afar and figure out a ton about it.
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://images3.wikia.nocookie.net/__cb20091118161802/borderlands/images/5/52/Rocket-Grenade_Launcher.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="271" src="http://images3.wikia.nocookie.net/__cb20091118161802/borderlands/images/5/52/Rocket-Grenade_Launcher.jpg" width="400" /></a></div>
<br />
Borderlands guns follow a normal distribution, with most guns being roughly average, but with occasional outliers that can be very powerful. Following Diablo's convention, the guns are color-coded by rarity: white, green, blue, purple, yellow, orange and dark orange, the latter being the most powerful. You don't start encountering random orange weapons until late in the game, and you quickly learn to stop and examine them closely whenever you find one.
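If you want to picture the rarity scheme mechanically, here's a toy model. The tier cutoffs are invented; only the shape matches the game -- most drops average, outliers vanishingly rare:

```python
# Toy rarity roll: tier cutoffs expressed as percentiles of the gun
# "power" distribution. The numbers are invented for illustration.
import random

TIERS = [(0.00, "white"), (0.50, "green"), (0.80, "blue"),
         (0.93, "purple"), (0.97, "yellow"), (0.99, "orange"),
         (0.999, "dark orange")]

def roll_color(rng):
    # A gun's power percentile; thresholding a uniform quantile at these
    # cutoffs is equivalent to thresholding the underlying normal at the
    # matching z-scores.
    q = rng.random()
    color = "white"
    for cutoff, name in TIERS:
        if q >= cutoff:
            color = name
    return color

rng = random.Random(1)
drops = [roll_color(rng) for _ in range(10_000)]
counts = {name: drops.count(name) for _, name in TIERS}
print(counts)
```

Run it and you'll see the pyramid: thousands of whites and greens at the bottom, a handful of dark oranges at the top -- which is why you stop dead whenever one drops.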
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://images.wikia.com/borderlands/images/f/f0/Borderlands_2009-11-01_00-24-57-42.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="300" src="http://images.wikia.com/borderlands/images/f/f0/Borderlands_2009-11-01_00-24-57-42.png" width="400" /></a></div>
<br />
Aaaaaand... that's pretty much it. That's the Borderlands recipe. At any rate, that's what they designed and set out to build. And it's not too bad. I'd say their recipe was blue, maybe bluish purple in the first release. Pretty potent.
<br />
<br />
But then, through a series of deliberate improvements and happy accidents, their token-economy recipe ripened into a deep, lush orange.
<br />
<h2>
Power-Up #1: Playthrough 2.5</h2>
A little-advertised and little-understood feature of the initial release is that there are <b>three</b> very different playthroughs of the game, each with its own signature characteristics.
<br />
<br />
The first playthrough is the one everyone's familiar with. You start off level 1 without so much as a gun to your name, and by the time you wax the big beaver you're around level 35. Your best weapons are blue and purple, you've worked your way through about half the skill tree, and you're just starting to get comfortable with the system. And then -- boom, it's over.
<br />
<br />
They don't even give you any money or anything. Just a mercifully brief verbal thank-you from the most gratuitously annoying "Guardian Angel" in history -- not just gaming history, but <em>all</em> history, period. All she ever fucking says is lame variations of: "Now is the time for you to go do whatever it is you were planning to do next. Ta!" The real prize you get for finishing Borderlands is that she <em>finally</em> shuts the hell up.
<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://i.neoseeker.com/mgv/368201-chautemoc/201/22/borderlands_20100815_15362747.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="250" src="http://i.neoseeker.com/mgv/368201-chautemoc/201/22/borderlands_20100815_15362747.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><i>"Now is the time for you to mute the fucking game for a while."</i></td></tr>
</tbody></table>
<br />
After that you can still go back and do all the side-quests you skipped, but it's pointless because you're now vastly overpowered. Not only is there no challenge, but the loot enemies drop is the same level they are, so there's no benefit. So at that point most people call it a day and move on.
<br />
<br />
<a href="http://images.wikia.com/borderlands/images/b/b0/BadMutha_Maniac2.PNG" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="http://images.wikia.com/borderlands/images/b/b0/BadMutha_Maniac2.PNG" /></a>But Borderlands gives you the unusual option of playing the whole story again in what they call "Playthrough 2". You keep your loot, your experience, your skills, everything. All that resets are the quests and areas-unlocked. In Playthrough 2, the enemies are higher-level, and they get a <a href="http://borderlands.wikia.com/wiki/Midget#Nomenclature" target="_blank">little more creative with the enemy names</a>. But the balance is still a little dodgy, so initially you blaze through the missions, and enemy levels don't catch up with you for a while. When they do, it gets tough in a hurry.
<br />
<br />
I should note that in Borderlands, even though there are 69 possible player levels, there is a HUGE difference in power between any two levels. If you're level 37 fighting a level 38 bad guy, expect it to be a tough fight. If the bad guy is 2 levels up, expect pain. 3 levels up: expect death. 4 levels up: you, Sir, are on the wrong fucking side of the train tracks.
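To make that steepness concrete, here's a hypothetical damage-scaling curve. The per-level factor is pulled out of thin air, not datamined from the game -- the cliff shape is the point:

```python
# Invented falloff: you deal a bit more than half damage per level the
# enemy has on you. The 0.55 constant is a guess for illustration.
def damage_multiplier(player_level, enemy_level):
    gap = enemy_level - player_level
    return 1.0 if gap <= 0 else 0.55 ** gap

for gap in range(5):
    pct = 100 * damage_multiplier(37, 37 + gap)
    print(f"enemy +{gap} levels: you deal {pct:.0f}% damage")
```

At +1 you're down to 55%, and by +4 you're around 9% -- which matches the "wrong fucking side of the train tracks" experience pretty well.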
<br />
<br />
It winds up being self-balancing, because you breeze through the easy parts and then spend most of your time butted right up against the edge of what you personally can handle. If the main quest starts getting too hairy, you can detour to some side quests until you level up. You can even pretend you're doing it for the exploration value.
<br />
<br />
But the sudden sharp drop in enemy difficulty at the beginning of Playthrough 2 is a bit weird, and probably causes a lot of people to abandon it. It's only if you persist that enemies eventually catch up with you.
<br />
<br />
<b>Here's the weird (and important) part:</b> once you finish the main storyline quest the second time, the game enters a magical and totally undocumented mode that players call <b>"Playthrough 2.5"</b>. In P2.5, all enemies are automatically advanced to level 48-52, with 50 being the max player level unless you buy the DLCs.<br />
<br />
In P2.5, the enemies are all by definition around as tough as you are, which means the loot they drop is as good as you'll find. So the game becomes both challenging <i>and</i> rewarding.
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://newbreview.com/wp-content/uploads/2009/10/borderlands-002.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="222" src="http://newbreview.com/wp-content/uploads/2009/10/borderlands-002.jpg" width="400" /></a></div>
<br />
Um. But. OK... I know what you're thinking here. Why the ever-loving <em>fuck</em> wouldn't they make it challenging and rewarding <em>to begin with</em>? Why make you play through the whole game <em>twice</em>?
<br />
<br />
Well, that's a complicated question. Players like to feel a sense of increasing power as they progress through the game. If enemies always level up with you (which they do, in many games), then the question becomes <em>"Why the fuck am I leveling up in the first place?"</em> There are a gajillion ways to tackle this problem, all of them slightly unsatisfying. It's sort of the core design problem of RPGs.
<br />
<br />
Gearbox decided to let you have it both ways. The first time through, you power up slightly faster than your enemies, although never really in an unbalanced way. Later, once the loot-hunting end-game starts, enemies level up so you can feel challenged again and find better loot.
<br />
<br />
Why they make you play the game <em>twice</em> before it kicks into end-game mode is anyone's guess. Maybe it's because they wanted you to exhaust the skill tree first, and they'd already balanced it to get you halfway there on the first playthrough.
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://www.blogiversity.org/blogs/willburns1/Screen%20shot%202009-10-22%20at%2012.15.24%20PM.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="295" src="http://www.blogiversity.org/blogs/willburns1/Screen%20shot%202009-10-22%20at%2012.15.24%20PM.jpg" width="400" /></a></div>
<br />
<br />
Whatever the reason, you have to play the Borderlands main storyline <em>twice</em> before it kicks into the P2.5 endgame. But at least it's there. <b>The game simply could not be addictive without it.</b> You have to have a good loot story for the looting to be any fun. And you can't just hand it out -- players have to work for it.<br />
<br />
All in all, the Borderlands initial release had a pretty decent story for the endgame. But "pretty decent" doesn't explain why four years later, at 3am on a random weekday, I can log into PSN -- well, assuming they're up, which is iffy, and assuming I still have enough money in the bank to pay my electricity bill after Sony gave my credit card info away, equally iffy -- and find unlimited players to adventure with.
<br />
<br />
Their system needed a few more power-ups before it would become the fully-matured beast that everyone will go back to playing the instant they finish Borderlands 2.
<br />
<h2>
Power-Up #2: The Bank</h2>
After a while, players started complaining loudly that they didn't have enough inventory slots.
<br />
<br />
And that's a valid complaint. Borderlands inventory slots have essentially no effect on game balance, particularly later in the game. Items have no size or weight, so it really is just a slot system. Towards the end of the game, you're only using the slots for hauling crap to vending machines to sell it. Fewer slots just forces players to do more trips. And in the endgame, inventory is <em>only</em> used for storing collectible guns that you'll never use, but you need a place to stash them.
<br />
<br />
So the endgamers were right -- Gearbox didn't give them enough slots.
<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://hosting11.imagecross.com/image-hosting-21/358Borderlands-2009-11-23-23-05-06-56.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="250" src="http://hosting11.imagecross.com/image-hosting-21/358Borderlands-2009-11-23-23-05-06-56.png" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><i>"Honey, I'll uh, go get us a shopping cart."</i></td></tr>
</tbody></table>
<br />
That's kind of weird, when you think about it. Recall that Gearbox is focused on fun, not "realistic immersion", so arbitrarily limiting your inventory slots is kind of a boneheaded overture to the world of "grown-up" (and hence not-so-fun) RPGs. If your inventory contents actually affected the game balance in any meaningful way, then slot limiting would make sense. But the game's money-economy is laughably unbalanced, with high-level characters able to max out at 2 billion dollars legitimately in a few hours, blow it all by jumping off cliffs (more on that later), and make it all back again in another few hours.
<br />
<br />
So the number of inventory slots doesn't matter. Moreover their usefulness is seriously limited by the UI, which as Ben Croshaw rightly pointed out, forces you to spend inordinate amounts of time just scrolling around. More inventory items just makes it worse.
<br />
<br />
I suspect the folks at Gearbox were saddled with their preconceived RPG baggage, and they didn't think the issue through very well. It *is* satisfying (and balancing) to increase your carrying capacity as you progress through the main game. But lack of capacity is crippling in the endgame -- which is where everyone winds up in pretty short order, no matter how rich and detailed the regular game might be.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://www.mobygames.com/images/shots/l/480418-borderlands-mad-moxxi-s-underdome-riot-windows-screenshot.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="225" src="http://www.mobygames.com/images/shots/l/480418-borderlands-mad-moxxi-s-underdome-riot-windows-screenshot.png" width="400" /></a></div>
<br />
<br />
So people bitched about it. Gearbox listened, and responded by <b>(a)</b> giving you a bank with not enough slots (max 42), and <b>(b)</b> adding a few new Claptrap rescue missions, each with a <em>random</em> chance of increasing your inventory slots. That's right, random. Hellooo, farming. And we're talking the absolute worst kind of shit-slog farming, where if the random chance doesn't trigger, you have to physically shut down the game before it saves your progress, then start the mission all over again after logging back in.
<br />
<br />
<a href="http://www.whitegadget.com/attachments/xbox-forum/52041d1300923090-borderlands-x-box-borderlands.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="320" src="http://www.whitegadget.com/attachments/xbox-forum/52041d1300923090-borderlands-x-box-borderlands.jpg" width="215" /></a>Gearbox got so carried away with setting up farming for their primary tokens that they started making people farm non-tokens too: absolutely essential shit that you need in order to participate in the token economy at all.
<br />
<br />
<b>Farming non-tokens is hella Not Fun.</b> <i>Forcing</i> people to do it is flat-out fucking retarded. So the very first thing everyone does when they figure out the endgame is download <a href="http://blmodding.wikidot.com/" target="_blank">WillowTree</a> -- a nifty open-source unsanctioned warranty-voiding player-file editor -- and bump up their inventory slots. Some people bump it to 72, the maximum value obtainable in legitimate (albeit heavily farmed) gameplay. Others shrug and say Fuck It, Gearbox is being fucking stupid here, gimme 999 slots and let me deal with the gray hairs as I scroll around painfully.
<br />
<br />
It's shit like this that makes me think that Gearbox only <em>kind</em> of gets it. But whatever; inventory and bank slots are fixable out of band with WillowTree, so it's not the end of the world.
<br />
<h2>
Power-Up #3: Pearlescents</h2>
Remember the Borderlands weapon color-coding scheme, where white means <em>shit</em> and orange means <em>the shit</em>? Well, your inventory sorts them by color, with better colors being higher up and thus more easily accessible.
<br />
<br />
Check this out: in the original Borderlands release there was a bug where some weapons were generated with off-scale rarity, so the game didn't know what color to assign them and they defaulted to white. Nobody at the time knew that they weren't orange. But there they were, right up at the top, leering at you with their big white grin like they knew somethin' you didn't.
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://images1.wikia.nocookie.net/__cb20091105064127/borderlands/images/6/6c/Guncolorrarity.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="301" src="http://images1.wikia.nocookie.net/__cb20091105064127/borderlands/images/6/6c/Guncolorrarity.jpg" width="320" /></a></div>
<br />
These weapons were the subject of intense debate on the forums, and in the information vacuum they soon became shrouded in mystique. Players began calling them "Pearlescents". They of course became highly collectible.
<br />
<br />
Gearbox paid attention, and hit on the absolutely brilliant and game-changing idea of <b>formally supporting Pearlescents in their third DLC release</b>. They fixed the glitch and simultaneously introduced a tiny category of cyan-colored super-legendary weapons, one per manufacturer, with fancy names and fancy effects. And yes, they were actually called <a href="http://borderlands.wikia.com/wiki/Pearlescent">Pearlescents</a>.
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://images.wikia.com/borderlands/images/8/8c/7ac64594.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="http://images.wikia.com/borderlands/images/8/8c/7ac64594.png" width="400" /></a></div>
These cyan weapons are ultra-rare. Insanely rare. Stupidly rare. <em>Irritatingly</em> rare. Taaaaaantalizingly rare. A lot of players never get close enough to sniff one of them. They go to all the potential drop points, and they shoot all the right bad guys, again and again, and after weeks on end they still may not have spotted one.
<br />
<br />
If you believe the Big Dogs at the fashion houses, the art houses, the back-rooms at Harry Winston's, the auction blocks at Sotheby's -- if you believe that rarity creates desire, then with this move Gearbox gave their player base a cyan-veined fucking priapism.
<br />
<br />
Borderlands gun collectors aren't dicking around with purples and oranges, noooooes. Rare as some of them are, it would never have been enough to keep the game thriving for four-plus years. Borderlands gun collectors are after the Pearlescents (or "Pearls" for short).
<br />
<br />
Funny thing is, as weapons they're not even that great, for the most part. The power ranges of the different colors have significant overlap, so only the top 15% or so of oranges are better than the top purples. And only around the top 10% of all Pearlescents are better than the top oranges. So even when you find a Pearl you probably won't use it.
<br />
<br />
Doesn't matter. They're rare, and special, so people go looking for them.
<br />
<h2>
Power-up #4: The Armory Bug</h2>
Gearbox admits (again and again) that they fucked up the ending of the main game. Right from the opening scene they start promising you a treasure-filled Vault, and you chase it the entire game, only to find that "Vault" was just a euphemism for Hentai Boss Monster, or maybe some twisted metaphor for realizing you were abused as a child, or some shit like that.
<br />
<br />
They get it now. They're sorry.
<br />
<br />
And they made up for it. After putting out the obligatory and awesome zombie expansion pack ("<a href="http://borderlands.wikia.com/wiki/The_Zombie_Island_of_Dr._Ned" target="_blank">The Zombie Island of Dr. Ned</a>"), they released an epic DLC titled <a href="http://borderlands.wikia.com/wiki/General_Knoxx" target="_blank">The Secret Armory of General Knoxx.</a><br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://oyster.ignimgs.com/ve3d/images/06/57/65764_BorderlandsTheSecretArmoryOfGeneralKnoxx-01_normal.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="223" src="http://oyster.ignimgs.com/ve3d/images/06/57/65764_BorderlandsTheSecretArmoryOfGeneralKnoxx-01_normal.jpg" width="400" /></a></div>
<br />
This DLC has a <em>very</em> different style and feel from the main game: an open road through the middle of a dusty sunken sea, with off-ramps to dunes infested with towering spiders, windbitten bandit camps, thriving midget colonies, junkyard strip clubs, corporate military installations, and the most hilariously awesome gay prison in gaming history.
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://bulk2.destructoid.com/ul/165414-/Knoxx4-620x.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="225" src="http://bulk2.destructoid.com/ul/165414-/Knoxx4-620x.jpg" width="400" /></a></div>
<br />
The plotline involves overthrowing a surprisingly well-written general named Knoxx who's been consigned to the planet and -- to his lasting chagrin -- reports directly to a five-year-old named Admiral Mikey. Knoxx is charming and funny and deadly, and the DLC taken as a whole is one of the best-realized and most memorable mini-worlds ever produced in a video game.
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://bulk.destructoid.com/ul/165414-/Knoxx6-620x.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="225" src="http://bulk.destructoid.com/ul/165414-/Knoxx6-620x.jpg" width="400" /></a></div>
<br />
The *entire* Borderlands endgame takes place in this DLC. It is the grand finale, the steady-state home base of operations for gun collectors everywhere. Even though there was one more DLC afterwards ("<a href="http://borderlands.wikia.com/wiki/Claptrap's_New_Robot_Revolution" target="_blank">Claptrap's New Robot Revolution</a>"), with a setting and story every bit as brilliant and distinctive as the previous two, it has ultimately failed to capture much long-term attention from endgamers because it has no Pearlescents. It <em>does</em> have its own new token economies, such as <a href="http://borderlands.wikia.com/wiki/What_a_party!" target="_blank">collecting rare robot parts</a>, but it's missing power-ups 4 and 5, so it's just not as much fun.
<br />
<br />
Looking again at the "Secret Armory" part of the Knoxx title, we see that Gearbox has once again promised a big vault full of loot -- but this time, they deliver. In a big way. The storyline winds up giving you three separate runs through Knoxx's armory, timed at two and a half minutes each, during which you can grab all the loot you can carry.
<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://images.wikia.com/borderlands/images/f/f1/FOV101_armory_1.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="250" src="http://images.wikia.com/borderlands/images/f/f1/FOV101_armory_1.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><i>The Marcus Kincaid "Super Sweep" looting spree</i></td></tr>
</tbody></table>
<br />
That's cool and all; it's blatant penance for the first game, and with it all is forgiven. But let's face it: 150 seconds isn't a very long shopping spree, even if you get three of them. Knoxx's Armory is <em>huge</em>. It's a four-story warehouse with elevators and ramps and movable platforms and crate-mazes, with the loot-chests scattered everywhere. It's got around 120 chests in all, with 20 being the super-rare kind that have a zillion-to-one chance of generating Pearls, but it's so vast that on any given run you'll only get to a small fraction of the loot.
<br />
<br />
Which means the Armory, as nice as it is story-wise, isn't such a great bargain for collectors. Or it <em>wouldn't</em> have been, except for the infamous "Armory Glitch".
<br />
<br />
It turns out -- and I'm one of the people who stumbled on this bug completely by accident -- that it's possible to fall through the floor at a particular spot on the way in, landing directly in the armory without arming the 2.5-minute timer, and then you can loot every single chest at your leisure.
<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://i666.photobucket.com/albums/vv21/KSerge83/Borderlands%20Armory%20Glitch/Paydirt.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="300" src="http://i666.photobucket.com/albums/vv21/KSerge83/Borderlands%20Armory%20Glitch/Paydirt.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><i>After falling through the crack, it's a 40-foot drop into the Armory</i></td></tr>
</tbody></table>
It's the guiltiest gaming pleasure ever. You're transformed into a kid who can sneak into the Chocolate Factory while the oompa loompas are sleeping. And you can do it again the next night, every night, forever.
<br />
<br />
The bug is actually present throughout the game. At pretty much any edge joining elevated flat surfaces in Borderlands you can wiggle around and fall through the crack. Most of the time it's an incredible annoyance -- for instance you'll fall into an elevator shaft whose sole purpose is spawning bad guys, and the only way out is to kill yourself with grenades or exit the game.
<br />
<br />
But the bug turns the Armory into a Farmery, and that, friends, is the super-MSG that makes the Borderlands end-game recipe the most successful of its kind since Ye Olde Diabloe back before you were born.
<br />
<br />
Strictly speaking I suppose "farming" is the wrong word for it. It's not really farming if the vegetables can kill you. I think when you have to slug your way into a vault through a bunch of bad guys, and each time you run a genuine risk of dying, they call it "grinding". So fine, it's the Secret Grindery.
<br />
<br />
The "Armory glitch", as they call it on the forums, is the gateway drug that hooks you in and leads you inevitably to the Lobster Safari, up up up to the high mountaintop arena where you'll spend the rest of your free time forever and ever. You'll need to make a hundred or so runs on the armory before you're well-equipped enough to face the Lobster. But during that few weeks you'll be as happy as a heavily armed kid in a four-story military candy store.
<br />
<h2>
Power-Up #5: Crawmerax the Invincible and the "Ledge Glitch"</h2>
So here we are at the climax. We made it! There should be a popcorn vendor here.
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://images.wikia.com/borderlands/images/d/da/Hit_the_weak_spot_small.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="225" src="http://images.wikia.com/borderlands/images/d/da/Hit_the_weak_spot_small.jpg" width="400" /></a></div>
<br />
Crawmerax the Invincible is a whale-sized one-eyed purple people eater that Gearbox introduced in DLC3 <em>specifically</em> for advanced multi-player co-op. He's introduced in a late side-mission accurately titled <b>"You. Will. Die."</b> You're not supposed to be able to kill him by yourself. He's intended as a challenge for groups of heavily-armed, experienced Vault Hunters who've grown weary of existence and want to die like they're playing Demon's Souls.
<br />
<br />
Crawmerax's location is heavily advertised with big road signs reading "Secret Final Boss Monster: 20 km", "Secret Final Boss Monster: exit now". When you arrive the earth shakes. Inside his lair is a cavernous "staging area" with some ammo/health vending machines, a few mutilated corpses, miscellaneous hastily dumped construction equipment, and an elevator ascending directly into the solid rock overhead. It looks as if the military has built just enough infrastructure to let all comers try to slay the beast, and so far no luck.
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://images.wikia.com/borderlands/images/6/68/FOV101_craw_1.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="250" src="http://images.wikia.com/borderlands/images/6/68/FOV101_craw_1.jpg" width="400" /></a></div>
<br />
The second critical Borderlands bug is the "ledge glitch". Like the Armory falling-through-the-floor glitch, it's a bug that makes it possible to farm Crawmerax... sort of. Its success rate is low enough that it can seem like he's farming YOU. The glitch involves running like mad to a sheer drop on the left, diving into a small nook just below the arena floor, squatting down, facing the corner like you're the Blair Witch, and hoping you survive the acid barf until he decides whether you're unreachable or you're his next ledge meal. It can go either way.
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://images.wikia.com/borderlands/images/b/bf/Craw_banner.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="100" src="http://images.wikia.com/borderlands/images/b/bf/Craw_banner.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><i>"I'll have the bisque."</i></td></tr>
</tbody></table>
*IF* all goes as planned, and that's a pretty big if, then he starts roaring and displaying, but he won't attack you as long as you stay put. At that point it's <em>still</em> nontrivial to kill him, because one of his six vulnerable spots is on his back, facing away from you. There are whole strategy guides devoted to hitting that spot. I can tell you it's no picnic.
<br />
<br />
But the payoff. Oh, the payoff.
<br />
<br />
Players call Craw the "Lobster Piñata" on account of the vast amount of high-quality loot he drops when he dies. Except he doesn't really "drop" the loot so much as <em>explode</em> multicolored items in all directions, and for several loooong seconds, tokens simply rain from the sky.
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://i45.tinypic.com/zva7w9.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="250" src="http://i45.tinypic.com/zva7w9.jpg" width="400" /></a></div>
<br />
<br />
If raiding the Armory is like refined highbrow shopping at Saks Fifth Avenue, then Crawmerax's death is like a bomb going off in a Toys R Us. It's like Santa's Sleigh plowing into the Hindenburg. It's absolutely spectacular to watch.
<br />
<br />
In multiplayer mode after the piñata explodes, everyone scrambles around like cockroaches in three distinct waves: first a mad dash to try to spot a Pearlescent, which <em>might</em> happen every forty or fifty kills, then a light circular sprint to check out the orange weapons, and finally a smooth comb through every item to look for upgrades and rare finds.
<br />
<br />
If there's nothing good, many players start dropping their Pearlescents and then "pretending" to find them by picking them up again. This is incredibly risky, as the co-op mode is <em>extremely</em> buggy when it comes to inventory manipulation, but people do it anyway because Gearbox didn't give them any other way to show off their prizes.
<br />
<br />
Then it's back to the cave entrance, out-and-back to make him respawn (thank you Gearbox for that, btw), and the hunt starts all over again. I've seen multiplayer runs on Crawmerax take anywhere from 20 seconds to over an <em>hour</em> of nonstop chaos, depending on luck and everyone's equipment. Heck, last night four of us tried six or eight times and just gave up. He's that tough.<br />
<br />
The "ledge glitch", which allows single players to have a chance of killing him, is a <b>critically important</b> part of the addictive cycle. Many players are reluctant to jump into online co-op play because they feel underequipped and under-experienced. Killing Crawmerax over and over can gradually improve your gear and confidence to the point where you're ready to try the co-op version.
<br />
<h2>
The Path of Least Resistance</h2>
Game developers always worry about players getting stuck in endgame ruts, going after the same boss monsters again and again for months or years. Players will eventually discover the easiest way to advance in the token economy, at which point all other routes become wasted effort.
<br />
<br />
The Diablo III team at Blizzard claims in interviews that they're all angsty about this, that it keeps them up at night. They're thinking waaaay too fucking hard about it. They need to man up and launch. People seem to like grinding OK, and the alternative of making all paths equally difficult is an impossible problem. Even if you made the game self-tuning via dynamic feedback loops, people would be pissed because it's nondeterministic, with stuff getting randomly nerfed or powered up without warning.
<br />
<br />
So just launch already.
<br />
<br />
Gearbox waited until DLC3 to introduce a grinder path, and it's interesting that they have <em>two</em> of them -- the Armory and Crawmerax. What's even more interesting is that they're approximately equal in terms of payoff over time. A legit player with decent equipment can raid the armory two, maybe three times an hour, and probably see an average of 8 oranges per run. The same player could ledge-glitch Crawmerax maybe 4 or 5 times an hour, and see maybe 4 oranges per run.<br />
<br />
In terms of legendary weapons per hour, the two grinds are surprisingly close (~20/hour), so it comes down to a matter of personal preference. Crawmerax will get you killed more often, but he seems to have a higher drop-rate for Pearlescents. Most players probably graduate from Armory runs to Crawmerax runs to multiplayer Crawmerax runs, and are eventually (after months of upgrades) able to "solo" him without using the ledge glitch.<br />
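The back-of-the-envelope comparison works out like this (all numbers are my rough eyeballed estimates from above, not measured drop rates):

```python
# Rough grind-path comparison. Inputs are the eyeballed figures from
# this post, so treat the outputs as ballpark, not gospel.

def legendaries_per_hour(runs_per_hour, oranges_per_run):
    """Expected legendary (orange) drops per hour for a grind path."""
    return runs_per_hour * oranges_per_run

# Armory: 2-3 raids per hour, averaging ~8 oranges per raid
armory = legendaries_per_hour(2.5, 8)     # 20.0

# Crawmerax via ledge glitch: 4-5 kills per hour, ~4 oranges per kill
crawmerax = legendaries_per_hour(4.5, 4)  # 18.0

print(armory, crawmerax)  # both land right around ~20/hour
```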
<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://images.wikia.com/borderlands/images/9/93/Legendary_weapons_-_Crawmerax_-_graph_OBY.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="195" src="http://images.wikia.com/borderlands/images/9/93/Legendary_weapons_-_Crawmerax_-_graph_OBY.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><i>This dude had way too much free time. I'm jealous.</i></td></tr>
</tbody></table>
<br />
Regardless of which path you prefer, the whole treadmill exists to try to collect rare guns. And we're talking guns that you can't really show anyone else, except indirectly via screenshots, <a href="http://www.youtube.com/watch?v=pA8oUm4Wgwo">videos</a>, or using the desperate gamble of dropping them in a live game. Players can usually tell which weapon you're wielding by its looks and firing pattern, but they can't see its stats.<br />
<br />
In the end, the underground collection scene is a little surprising for existing at all. It was half intentional and half accidental.<br />
<br />
I don't hear Gearbox talking about this stuff in interviews.<br />
<br />
We'll see.<br />
<br />
<h2>
Epilogue</h2>
Yeah, well, none of it prolly matters, since rumor has it they're finally going to announce the Diablo III release date.
<br />
<br />
Borderlands 2, we hardly knew ye.
<br />
<br />
<h2>
Appendix: Don't Do This</h2>
Hi-ho, since we're on the subject of Dos and Don'ts for creating an optimal gaming experience, I feel obligated to highlight some of the bigger fuck-ups present in Borderlands. I do this in the sincere hope that there's still time to fix them in the sequel. It's probably still a year away from launch, as I write this, so there's hope.
<br />
<br />
<b>"It's like White Christmas":</b> The Borderlands game world is a rich, thriving, multicultural melting-pot with thousands of white settlers and exactly one black guy. <em>One</em>. And he's from offworld. Jesus H. Teddy Fucking Roosevelt Christ on a sidecar, Gearbox -- that's *not* what we meant when we asked for a "token" economy. We were all horribly embarrassed to be members of the human race when Joss Whedon's Firefly series pulled this stunt, featuring a fully Asianized future without any actual Asians in it. And we thought: "Gosh, well, at least <em>now</em> nobody's ever gonna make <em>that</em> particular douchewit mistake again." What. The. Fuck.<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://farm4.static.flickr.com/3094/2547277032_7dae47e0cf.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="179" src="http://farm4.static.flickr.com/3094/2547277032_7dae47e0cf.jpg" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><i>Firefly cast. Sigh.</i></td></tr>
</tbody></table>
<b>Hot buttons:</b> Having only four equipped-weapon slots in a world where <a href="http://lmgtfy.com/?q=weapon+wheel" target="_blank">EVERYONE ELSE</a> has figured out that you can squeeze in eight on a wheel -- that's pretty fucking disappointing.
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://www.zeldainformer.com/images/news/Zelda_Skyward_Sword_1014_04.bmp" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="223" src="http://www.zeldainformer.com/images/news/Zelda_Skyward_Sword_1014_04.bmp" width="400" /></a></div>
<br />
<b><br /></b><br />
<b>Networking:</b> After spending years developing and polishing the single-player game, they apparently outsourced the multiplayer lobby and connectivity code directly to Sony's network-security team. What's wrong with Borderlands networking? Well, um, let's see... how about "everything". Yes, that sounds accurate. It crashes all the fucking time, randomly eats your precious inventory items, fails to give you even the slightest insight into what any given party is doing until you actually join their game, randomly goes full amnesiac about which character you were using and logs you in as a level-1 unequipped n00b... the list goes on.<br />
<br />
Once you're actually connected to a game and your party is firing on all cylinders, the experience is usually pretty smooth, as long as you <em>never</em> change anything in your equipped inventory slots. But finding and connecting to a reasonable group of players is just a miserable shit sandwich.
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://i34.photobucket.com/albums/d133/dancedance9823/3-2.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="223" src="http://i34.photobucket.com/albums/d133/dancedance9823/3-2.jpg" width="400" /></a></div>
<br />
<b><br /></b><br />
<b>Money S(t)inks:</b> Next time around they need a money sink. I know I ranted against too much immersive realism, but if you're going to make money THAT readily available, why bother with it at all? People have so much money in the endgame that it's customary to leave the Crawmerax arena after he's dead by <em>killing yourself</em>, since it respawns you about a hundred feet closer to the exit than if you take the elevator teleporter. When players are choosing to pay a hundred million dollars to save ten seconds of running, it's a good bet your game has a money problem.
<br />
<br />
<b>Invisible Wallet:</b> Oh, and speaking of money, you can't tell how much you have, nor how much anything costs, because they only have seven-digit cash displays. By the end of the game every price over $10M is displayed as 9999999 dollars, and your wallet reads 9999999 dollars. You can only see how much money you have when you die, and it tells you you lost 150323855 dollars (the max you can lose at once: 7% of 2^31-1).
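For the skeptics, that oddly specific number really does fall out of the arithmetic (the 7%-of-max-wallet cap is the figure claimed above; here's a quick sanity check):

```python
# Sanity-checking the "Invisible Wallet" numbers from the paragraph above.
INT32_MAX = 2**31 - 1     # 2147483647: wallet stored as a signed 32-bit int
DISPLAY_CAP = 10**7 - 1   # 9999999: the most a seven-digit HUD can show

# Death penalty is capped at 7% of the maximum possible wallet.
max_death_loss = int(INT32_MAX * 0.07)
print(max_death_loss)     # 150323855 -- exactly the figure you see on death
```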
<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://images.wikia.com/borderlands/images/0/03/PPZ470_Detonating_Cobra_OBYF.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="240" src="http://images.wikia.com/borderlands/images/0/03/PPZ470_Detonating_Cobra_OBYF.png" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><i>"So how much?"</i></td></tr>
</tbody></table>
<br />
<b>Midgets:</b> You should be able to play as one. It would be utterly bad-ass. Haven't you guys read <em>Game of Thrones?</em>
<br />
<em><br /></em><br />
<h2>
FAQ</h2>
<span class="Apple-style-span" style="font-size: large;"><em><br /></em></span><br />
<span class="Apple-style-span" style="font-size: large;"><em>Q: Aren't you being a little hard on Gearbox's vagina?</em>
</span><br />
<br />
Hey, it's not <em>that</em> little. Nobody's ever complained bef... oh, I misread that.
<br />
<br />
<span class="Apple-style-span" style="font-size: large;"><em>Q: Doesn't modding ruin the economy?</em>
</span><br />
<br />
I mentioned that if Billy breaks into the teacher's drawer and distributes all the Gold Stars then they become valueless. And I also mentioned that there's a player-file editor called WillowTree, one that happens to be capable of creating any weapon the game can generate -- and many combinations that the game does NOT generate because they are overpowered and/or nonsensical.
<br />
<br />
<a href="http://www.infinitemonkeyproductions.net/wow_forum/borderlands_reditem_stats.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="223" src="http://www.infinitemonkeyproductions.net/wow_forum/borderlands_reditem_stats.png" width="320" /></a>So of course there are modders. And they show up with their modded weapons, and they throw them around in big piles, daring people to come on over to the Dark Side, to grow up and git yerself a <i>reeeeeal</i> gun.
<br />
<br />
Surprisingly, it doesn't seem to hurt anything. In fact in co-op Crawmerax missions it's common for someone to throw out a pile of modded shields of invincibility, and for at least one player to wear one. Otherwise it's too easy for everyone to die simultaneously, causing Craw to return to full strength. Even with a super-shield, you can easily be blasted off a cliff and die anyway, so it's not a <em>total</em> cheat. A high percentage of players have apparently decided that overpowered shields are part of the acceptable core infrastructure for otherwise legit gun-collecting. I <em>think</em> this implies that Gearbox didn't make their top-end legit shields powerful enough.
<br />
<br />
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody>
<tr><td style="text-align: center;"><a href="http://pics.livejournal.com/boev_machin/pic/000a10q9/s640x480" imageanchor="1" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" height="320" src="http://pics.livejournal.com/boev_machin/pic/000a10q9/s640x480" width="216" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><i>Krom's Canyon. Just 'cuz.</i></td></tr>
</tbody></table>
I'm pretty sure if Borderlands were an MMO, with persistent highly-visible score lists and real money on the line (because for MMOs there is <em>always</em> an exchange rate between game currency and real-world currency, whether the publisher likes it or not), then yeah: the modding would fuck everything up.
<br />
<br />
<b>But the Borderlands token economy is only just barely alive.</b> Gearbox hasn't done a damn thing to increase its addictive pull since releasing The Secret Armory of General Knoxx, the DLC that introduced all three of the critical enhancements that cemented the economy. So it's all basically honor-system. And the players are surprisingly honorable. You can tell immediately if -- and to what extent -- people are abusing the mods system. There are a lot of players out there who stay on the legit side to keep it sustainably challenging. Hell, some of them don't even use the ledge glitch. At least not when people are watching.
<br />
<br />
You can also tell that the modders are no longer having fun, because their behavior becomes bizarre -- the kind of shit you normally only see at the end of a Bethesda game, where once you've leveled up high enough you get completely bored and start jumping off cliffs naked to try to air-kill boss monsters with a single swing of a broomstick.
<br />
<br />
<span class="Apple-style-span" style="font-size: large;"><em>Q: Are you going to <a href="http://dearrhialto.wordpress.com/" target="_blank">bring Wyvern back</a>?</em>
</span><br />
<br />
Patience. I'm working on it.<br />
<br />
<br />
<i>Special thanks to Andrew Wilson for proofreading this post and making me take out stuff I'd regret.</i><br />
<i><br /></i><br />
<i>Oh, and I almost forgot -- Thank You, everyone at Gearbox, for creating such an incredible game!</i>Steve Yeggehttp://www.blogger.com/profile/14812997485690838920noreply@blogger.com21tag:blogger.com,1999:blog-13674163.post-27862850873902897732011-07-27T12:33:00.001-07:002011-07-27T13:29:48.953-07:00Hacker News Fires Steve YeggeI woke up this morning...ish... to discover that Hacker News had finally had enough of me being at Google, so they forced me into early retirement.<br /><br />On Monday I was honored to be able to deliver a <a href="http://www.youtube.com/watch?v=vKmQW_Nkfk8">keynote talk at OSCON Data</a>. In the talk, I announce at the end that I am quitting a <span style="font-weight:bold;">project</span> that I had very publicly signed up for, one that I am not passionate about and don't personally think is very important to the human race. Though others clearly do, and that's a legitimate viewpoint too.<br /><br />But the power of suggestion can make you see and hear something entirely different. If, for instance, someone tells you that I gave the talk wearing a gorilla suit, then when you watch it, <span style="font-style:italic;">I will magically appear to be wearing a gorilla suit</span>. It's actually a gray jacket over a black shirt, but you will <span style="font-style:italic;">perceive</span> the jacket as the back-hair of a male silverback gorilla! 
And to be honest the talk could have benefited from the judicious application of a gorilla suit, so no harm there.<br /><br />Similarly, if someone on Hacker News posts that "<a href="http://news.ycombinator.com/item?id=2811818">Steve Yegge quits Google in the middle of his speech</a>" and links to the video, then you will watch the video, and when I say the word "<span style="font-weight:bold;">project</span>" at the <span style="font-style:italic;">end of my speech</span>, a magical Power of Suggestion Voice-Over will interrupt -- in a firm manly voice totally unlike my own quacking sounds -- with "Gooooooogle". And then you will promptly sink into a 15-minute trance so that the voice-over can occur in the <span style="font-style:italic;">middle of my speech</span> where Hacker News said it happened, instead of 96.7% of the way through the talk where it actually happened.<br /><br />I am going to harness this amazing Power of Suggestion, right here, right now. Here goes.<br /><br /><span style="font-weight:bold;">You are going to come work at Google! You are going to study up, apply, interview, and yes, you are going to work there! And it will be the most awesome job you've ever had or ever will have!</span><br /><br />I hope for your sake that this little experiment works, because Google is frigging awesome, and you'll love it here. And they'll be happy to have you here. It's a match made in heaven, I'm tellin' ya. It might take you a couple tries to get in the door, because Google's interview process -- what's the word I'm looking for here -- ah yes, their process <span style="font-style:italic;">sucks</span> at letting in all the qualified people. They're trying to get better at it, but it's not really Google's fault so much as the fault of interviewers who insist that you're not qualified to work there unless you are <span style="font-style:italic;">exactly like them</span>.<br /><br />Of course, there are interviewers like that wherever you go. 
The real problem is the classic interview process, which everyone uses and which Google hasn't innovated on, not really. It's like deciding whether to marry someone after four one-hour dates that all happen on the same day in a little room that looks kind of like a doctor's office except that the examining table is on the wall.<br /><br />The reason I haven't been blogging lately is that working at Google is so awesome that I just don't feel like doing anything else. My project is awesome, the people are awesome, the work environment is over-the-top-crazy-awesome, the benefits are awesome, even the corporate mission is awesome. "Organize the world's hardline goods in little brown boxes delivered straight to your doorstep" -- that's an awesome mission, yeah?<br /><br />Wait, sorry, that was a flashback to the Navy or something. "<span style="font-weight:bold;">Organize the world's information</span>" -- that's the one. It's a mission that is changing the course of human events. It is slowly forcing governments to be more open, forcing corporations to play more fairly, and helping all of us make better decisions and better use of our time.<br /><br />In that vein, the part of my brain that makes Good Decisions was apparently broken a few weeks ago, when I allowed myself to be cajoled into working on something that I wasn't passionate about. I am an eternal optimist, and I figured I could teach myself to be passionate about it. And I tried! I spent a few weeks pretending that I was passionate about it -- that's how I got through my Physics classes in college with A grades, so I know it's a mental trick that can sometimes work.<br /><br />But then I wrote my OSCON Data speech, in which I basically advise everyone to start working on important problems instead of just chasing the money. 
Or at the very least, go ahead and chase the money in the short term, but <span style="font-style:italic;">while you are doing that</span>, prepare yourself to help solve real problems.<br /><br />And after writing the speech I realized I'd completely failed to follow my own advice. I'm getting old and I only have so many "big projects" left that I can actually participate in. So in my mind it's a complete cop-out for me to take the easy path and work on a project that my company is excited about but I am not.<br /><br />Now, as it happens, I am in fact working on a very cool project at Google. It's not important in the same sense that curing cancer or getting clean water to impoverished cities are important. But it's a project that has the potential to revolutionize software development, and NOT through some new goddamn dependency-injection framework or web framework or other godawful embarrassing hacky workaround for a deficient programming language. No. It is a project that aims to turn source code -- ALL source code -- from plain text into Wikipedia. I've been on it for three and a half years, and I came up with the idea, and the team running with the idea is fantastic. The work may not be directly important, but it is an <span style="font-style:italic;">enabler</span> for important work, much like scaling infrastructure is an enabler.<br /><br />So I am happy to continue working on that project for now. Yes, at Google. I may even blog it up at some point. But I'm very serious about brushing up on my math and statistics, some of which I haven't applied directly in 20 years, and start focusing on machine learning problems. Particularly, if I may be so fortunate, the problem of curing cancer. I may not be able to participate directly for a few years, as I need to keep working and paying the bills just like you. 
But I'm studying hard -- I started up again a few days ago -- and I've demonstrated to myself quite a few times that if I do anything daily for a few years I can get pretty good at it.<br /><br />Anyway, I'm late for work. Isn't that nice? I like the sound of it. It has a nice ring to it: "I'm late... for my <span style="font-style:italic;">job</span>."<br /><br />So come work with me! Unless you are curing cancer, of course.<br /><br /><hr><br /><br /><b>eBay Patents 10-Click Checkout</b> (July 22, 2011)<br /><br />San Jose, CA (<span style="font-weight:bold;">Reuters</span>) — Online auctions cartel eBay (NASDAQ: EBAY) and its collections and incarceration arm PayPal announced that on July 21, 2011, the two companies had jointly been awarded United States Patent No. 105960411 for their innovative 10-click “Buy It Now” purchasing pipeline.<br /><br />The newly-patented buying system guides users through an intuitive, step-by-step process of clicking “Buy It Now”, entering your password, logging in because they signed your sorry ass out again, getting upsold shit you don’t want, continuing to your original destination, accepting the default quantity of 1 (otherwise known as “It”), committing to buy, clicking "Pay Now", entering a <span style="font-style:italic;">different</span> password than your first one, clicking "Log In" <span style="font-style:italic;">again</span> god dammit, declining to borrow money from eBay’s usury department, reviewing the goddamn purchase details since by now you’ve completely forgotten what the hell you were buying, and finally confirming the god damned payment already.<br /><br />The 10-click checkout system, known colloquially as 10CLICKFU -- which many loyal users believe stands for “10 Clicks For You” -- was recently awarded top honors by the National Alliance of Reconstructive Hand Surgeons. 
10CLICKFU incorporates a variable number of clicks ranging from eight to upwards of fifteen, but eBay’s patent stipulates that any purchasing system that lies to you at least nine times about the “Now” part of “Buy It Now” is covered by their invention.<br /><br />The patent award came as a surprise to many analysts, since several of eBay’s related patent attempts had been rejected on the basis of prior art. In one well-publicized filing, eBay had tried to patent a purely decorative, non-operational “Keep me signed in” checkbox, but Sony’s PlayStation Network already had one just like it. And another eBay patent claim for excruciating page load times was rejected because the iPad App Store is still loading.<br /><br />But eBay’s boldest and potentially furthest-reaching patent attempt was for “100% Inaccurate Button Text”. The invention claim was based on several of their UI elements, but rested primarily on the “Buy It Now” button, which eBay claims contains enough inaccuracies to render it "complete bullshit." Their patent was rejected by the US Patent Office review committee on the grounds that the Firefox browser’s “Do this automatically from now on” checkbox has been complete bullshit for over fifteen years. eBay says they will appeal the ruling because the checkbox is not technically a button.<br /><br />eBay’s spokesperson Paula Smugworth announced that eBay will continue to innovate on ways to remind their users that monopolies can do whatever the hell they want. “Not that eBay is a monopoly,” she added. “But if we <span style="font-style:italic;">were</span> a monopoly, then we could do whatever the hell we wanted. 
I’m just sayin’.”<br /><br />eBay’s stock rose on the news, driven largely by anonymous shill bidders.<br /><br /><hr><br /><br /><b>Haskell Researchers Announce Discovery of Industry Programmer Who Gives a Shit</b> (December 1, 2010)<br /><br />The worldwide Haskell community met up over beers today to celebrate their unprecedented discovery of an industry programmer who gives a shit about Haskell.<br /><br />On Wednesday, researchers issued a press release revealing that 27-year-old Seth Briars of North Carolina, a Java programmer at Blackwater accounting firm Ross and Fordham, actually gives a shit about Haskell.<br /><br />"Mr. Briars has followed every single one of our press releases for years," the press release stated. "Probably even this one."<br /><br />Haskell researcher Dutch Van Der Linde explained how they had stumbled on the theoretical possibility of Briars and his persistent interest in Haskell. "We knew that there are precisely 38 people who give a shit about Haskell," said Van Der Linde, "because every Haskell-related reddit post gets exactly 38 upvotes. It's a pure, deterministic function of no arguments -- that is, the result is independent of what we actually announce. But there are only 37 of us on our mailing list, so we figured there was a lurker somewhere."<br /><br />"That, or it was an off-by-1 error not detectable by our type system," Van Der Linde added. "But we don't, uh, like to dwell on, I mean with good unit testing practices we can, um... sorry, I need to get some water."<br /><br />As Van Der Linde stumbled off in a coughing fit, his fellow researcher Bonnie MacFarlane outlined their basic dilemma: "Finding a person who gives a shit about Haskell is an inherently NP-complete computer science problem. 
It's similar in scope and complexity to the problem of trying to find a tenured academic who didn't have the bulk of his or her work done by uncredited graduate students. So even though we suspected Briars existed, we needed a strategy to smoke him out."<br /><br />She explained the trap they set for Briars: "We crafted a fake satirical post lampooning Haskell as an unusable, overly complex turd -- a writing task that was emotionally difficult but conceptually trivial. Then we laced the post with deeper social subtext decrying the endemic superficiality and laziness of global industry programming culture, to make ourselves feel better. Finally, each of us upvoted the post, which was unexpectedly contentious because nobody could agree on what the reddit voting arrows actually mean."<br /><br />"And then we waited to see who, if anyone, would give a shit," she said.<br /><br />MacFarlane concluded, "Our elegant approach didn't work, so we hired a Perl hacker to go dig up the personal details on all 38 accounts that had ever upvoted a Haskell post, and the only one we didn't know was Seth Briars. So we reached out to him, and thankfully so far he hasn't threatened to sue us."<br /><br />Briars says he is pleased to have been recognized for his apparently unique shit-giving about Haskell. "I've been giving a shit about Haskell for as long as I can remember. I follow all their announcements and developments closely, just in case I ever get the urge to use the language for something someday."<br /><br />"It's a beautiful, elegant language," Briars observed as he busied himself cleaning a fingernail. "You'd be hard-pressed to find a more expressive and composable core. And they've made astounding advances over the years in performance, interoperability, extensibility, tooling and documentation."<br /><br />"I'm kind of surprised I'm the only person on earth who gives a shit about it," Briars continued. 
"I'd have thought there would be more people following the press releases closely and then not using Haskell. But they all just skip the press releases and go straight to the not using it part."<br /><br />"People see words like <em>monads</em> and <em>category theory</em>," Briars continued, swatting invisible flies around his head for emphasis, "and their Giving a Shit gene shuts down faster than a teabagger with a grade-school arithmetic book. I'm really disappointed that more programmers don't get actively involved in reading endless threads about how to subvert Haskell's type system to accomplish basic shit you can do in other languages. But I guess that's the lazy, ignorant, careless world we live in: the so-called 'real' world."<br /><br />Haskell researcher Javier Escuella remains hopeful that one day they may be able to double or even triple the number of industry programmers who give a shit about Haskell. "I believe the root cause of the popularity problem is Haskell's lack of reasonable support for mutually recursive generic container types. If we can create a monadic composition-functor wrapper that is perceived as sufficiently sexy by hardened industry veterans, then I think we will see an uptick in giving a shit, possibly as much as a full extra person."<br /><br />Haskell aficionado Harold MacDougal is not quite as sanguine as his colleague Escuella. "I doubt Haskell will ever be appreciated by the uneducated natives of this industry. As exciting as it is, the discovery of Briars should be considered an anomaly, and not as a sign that more people will ever give a shit. Programmers only seem to pay attention to things when there is humor involved."<br /><br />"We do have an experimental humor monad," added MacDougal. "But it doesn't seem to be getting much adoption. 
Haskell fans just don't see the need for it."<br /><br /><hr><br /><br /><small><b>MORE NEWS</b></small><br /><br />Previous article: <b>Perl Community Debating Adding Monads</b><br />The Perl lists are brimming with discussions about the value of adding monads to Perl. "We don't really know what they do, but it doesn't make sense <em>not</em> to have something in Perl," said Perl hacker Landon Ricketts. <a href="http://www.perlmonks.org/?node_id=620692">Read more</a><br /><br />Next article: <b>Microsoft to Introduce Mutually Recursive Error Messages</b><br />Software giant Microsoft announced today the launch of their new REDRUM platform, an elegant system that allows Windows system error messages to shuffle blame around indefinitely by using continuation-passing. <a href="http://oreilly.com/catalog/9781565923560">Read more</a><br /><br /><hr><br /><br /><b>Wikileaks To Leak 5000 Open Source Java Projects With All That Private/Final Bullshit Removed</b> (July 28, 2010)<br /><br />EYJAFJÖLL, ICELAND — Java programmers around the globe are in a panic today over a Wikileaks press release issued at 8:15am GMT. Wikileaks announced that they will re-release the source code for thousands of Open Source Java projects, making all access modifiers 'public' and all classes and members non-'final'.<br /><br />Agile Java Developer Johnnie Garza of Irvine, CA condemns the move. "They have no right to do this. Open Source does <em>not</em> mean the source is somehow 'open'. That's my code, not theirs. 
If I make something private, it means that no matter how desperately you need to call it, I should be able to prevent you from doing so, even long after I've gone to the grave."<br /><br />According to the Wikileaks press release, millions of Java source files have been run through a Perl script that removes all 'final' keywords except those required for hacking around the 15-year-old Java language's "fucking embarrassing lack of closures."<br /><br />Moreover, the Perl script gives every Java class at least one public constructor, and turns all fields without getters/setters into public fields. "The script yanks out all that @deprecated shit, too," claims the controversial announcement.<br /><br />Longtime Java programmer Ronnie Lloyd of Austin, TX is offended by the thought of people instantiating his private classes. "It's just common sense," said Lloyd, who is 37. "If I buy you a house and put the title in your name, but I mark some of the doors 'Employees Only', then you're not allowed to open those doors, even though it's your house. Because it's really my house, even though I gave it to you to live in."<br /><br />Pacing and frowning thoughtfully, Lloyd continued: "Even if I go away forever and you live there for 20 years and you know <em>exactly</em> what's behind the doors — heck, even if it's a matter of life and death — plain old common sense still dictates that you're never, <em>ever</em> allowed to open them for any reason."<br /><br />"It's for your own protection," Lloyd added.<br /><br />Wesley Doyle, a Java web developer in Toronto, Canada is merely puzzled by the news. "Why do they think they need to do this? Why can't users of my Open Source Java library simply shake their fists and curse my family name with their dying breaths? That approach has been working well for all the rest of us. Who cares if I have a private helper function they need? 
What, is their copy/paste function broken?"<br /><br />Wikileaks founder Julian Assange, who coined the term "Opened Source" to describe the jailbroken open-source Java code, fears he may be arrested by campus security at Oracle or possibly IBM. Assange said: "Today the Eclipse Foundation put out a private briefing calling me a 'non-thread-safe AbstractKeywordRemovalInitiatorFactory'. What the fuck does that even mean? I fear for my safety around these nutjobs."<br /><br />The removal of '@deprecated' annotations is an especially sore issue for many hardworking Java developers. "I worked hard to deprecate that code that I worked hard to create so I could deprecate some other code that I also worked hard on," said Kelly Bolton, the spokesperson for the League Of Java Programmers For Deprecating The Living Shit Out Of Everything.<br /><br />"If people could keep using the older, more convenient APIs I made for them, then why the fuck would they use my newer, ridiculously complicated ones? It boggles the imagination," Bolton added.<br /><br />The Eclipse CDT team was especially hard-hit by the removal of deprecation tags. Morris Baldwin, a part-time developer for the CDT's C++ parsing libraries, says: "We have a policy of releasing entire Java packages in which every single class, interface and method is deprecated right out of the box, starting at version 1.0."<br /><br />"We also take careful steps to ensure that it's impossible to use our pre-deprecated code without running our gigantic fugly framework," the 22-year-old Baldwin added. "Adding public constructors and making stuff non-final would be a serious blow to both non-usability <em>and</em> non-reusability."<br /><br />The Agile Java community has denounced the Wikileaks move as a form of terrorism. "It was probably instigated by those Aspect-Oriented Programming extremists," speculates Agile Java designer Claudia Hewitt, age 29. 
"I always knew they wanted to use my code in ways I couldn't predict in advance," she added.<br /><br />Many Java developers have vowed to fight back against the unwelcome opening of their open source. League of Agile Methodology Experts (LAME) spokesperson Billy Blackburn says that work has begun on a new, even more complicated Java build system that will refuse to link in Opened Source Java code. The new build system will be released as soon as several third-party Java library vendors can refactor their code to make certain classes more reusable. Blackburn declined to describe these refactorings, claiming it was "none of y'all's business."<br /><br />Guy Faulkner, a 51-year-old Python developer in Seattle, was amused by the Wikileaks announcement. "When Python developers release Open Source code, they are saying: Here, I worked hard on this. I hope you like it. Use it however you think best. Some stuff is documented as being subject to change in the future, but we're all adults here so use your best judgment."<br /><br />Faulkner shook his head sadly. "Whereas Java developers who release Open Source code are saying: Here, I worked hard on this. I hope you like it. But use it exactly how I tell you to use it, because fuck you, it's my code. I'll decide who's the goddamn grown-up around here."<br /><br />"But why didn't they write that Perl script in Python?" Faulkner asked.<br /><br /><hr><br /><br /><small><b>MORE NEWS</b></small><br /><br />Previous article: <b>San Francisco Airport Announces That All Restrooms Near You Are Now Deprecated</b><br />The SFO port authority announced today that all airport restrooms located anywhere near you are now deprecated due to "inelegance". The newer, more elegantly designed restrooms are located a short 0.8 mile (1.29 km) walk from the International Terminal. 
<a href="http://www.flysfo.com/web/page/atsfo/passenger-serv/fam-serv/">Read more</a><br /><br />Next article: <b>Eclipse Sits On Man's Couch, Breaks It</b><br />New Hampshire programmer Freddie Cardenas, 17, describes the incident: "We invited Eclipse over for dinner and drinks. Eclipse sat down on our new couch and there was this loud crack and it broke in half. Those timbers had snapped like fuckin' matchsticks. Then my mom started crying, and Eclipse started crying, and I ran and hid in my bedroom." <a href="http://www.eclipse.org/community/news/eclipsenews.php">Read more</a><br /><br /><hr><br /><br /><b>Blogger Finger</b> (July 15, 2010)<br /><br />Well! I've sure had a nice relaxing blog-free year. No worries, no haters, no Nooglers wandering by my office and staring at me through the window as if they expect me to crap in my hand and hurl it at them. Not that I wasn't tempted.<br /><br />Nope, it's just been peace and quiet and reading and coding and practicing my guitar and stuff. It's been awesome.<br /><br />And now that everyone's completely forgotten who I am, or whatever exactly I'd said that made them feel all butthurt inside -- as measured by my incoming email rate, which is finally near-zero -- I figure it's probably safe to get back in the water.<br /><br />I'm not really sure what my plans are going forward, other than staying employed at Google until the day comes when I need one of their comfy, brightly-colored caskets. Other than that, my plans are flexible. I'm feeling downright leisurely at the moment.<br /><br />I realize now that I was trying way too hard to change the world via blogging, and it made me care maybe just a <em>little</em> too much. This was bad for my mental and emotional health. Caring is fine. Lots of things are worth caring about. 
Very few of them merit sacrificing your health.<br /><br />Fortunately during my ad-hoc sabbatical I was able to gain some new perspectives by distancing myself a bit from the constant storm going on in the tech world.<br /><br />One nice perspective I gained is this: <b>There is nothing on this earth that can make everyone happy</b>. Reddit is a huge, living, breathing demonstration of this, since essentially no reddit post ever goes above maybe 80% approval, and a "good" post seems to hover around 65%-70% liked.<br /><br />That made me feel better about the haters. Haters abound. They're just a fact of life, part of the human condition. There's no need to waste energy hating haters.<br /><br />Another perspective I gained was that decorating your mansion with works of art you know nothing about is amazingly rewarding, as long as you can mix it up by leaping across rooftops and assassinating bad guys and hanging with your buddy Leonardo. I swear, if they ever make a movie about my life, the handsome and dashing actor who plays me, when asked on his deathbed which of life's pleasures had given him the greatest happiness, will say something cheesy that makes the audience ooh and aww with appreciation, but it'll be total Hollywood bullshit, because what I really will have said was "gaming".<br /><br />Yet another perspective I gained is that I now actually agree with everyone who complained that my blog posts were too long. Reddit has ruined my attention span for online material. There seems to be no such thing as too frequent, but there's definitely such thing as too long. So I'll be better about that.<br /><br />I used to have this pet theory that the length of my blogs is a big part of why they've been noticed at all. I mean, look at <a href="http://www.timecube.com/">this dude</a>. 
If he'd written only one or two crazy things, he'd be just another nutjob, but by dint of almost superhuman persistence he's managed to get the <em>entire world</em> to laugh at him.<br /><br />I was sort of aiming for getting people to laugh <em>with</em> me, but I used the same basic recipe as Time Cube Dude. And the formula seemed to be working, modulo the haters.<br /><br />However, Dave Barry -- my Personal Childhood Hero (66% liked!) -- always wrote his columns in chunks of 800 words, even if it necessitated inserting filler words such as "booger" and "legislative session" into his articles about wine tasting or car engines or bat guano, or whatever it was that caught his fancy that week.<br /><br />Overall it seems likely that post-length is less important than factors such as quality, consistency, passion, relevance, and legislative booger session.<br /><br />So I was originally thinking of writing up to a maximum of 800 words today, and I'm at about 500 now, but I've been really successful at my Not Caring Too Much Initiative, so... later! Nice chatting with ya. Cheerio!<br /><br /><b>HAHAHA DISREGARD THAT</b>...<br /><br />Just to ensure this post isn't <em>entirely</em> devoid of content I'll share something important that I learned last year.<br /><br />Here's what I learned: after Carpal Tunnel Syndrome, the second most common hand ailment is known as <a href="http://en.wikipedia.org/wiki/Trigger_finger">Trigger Finger</a>.<br /><br />Its more formal name is <em>digital tenovaginitis stenosans</em>, which is ancient Latin for "electronic hand inflamed vagina without writing", which I believe is why most people prefer to call it Trigger Finger.<br /><br />I have it, you know. Trigger Finger, I mean, not an inflamed vagina.<br /><br />Although Trigger Finger is "idiopathic", a fancy word meaning that doctors don't have a fucking clue what causes it, it is widely known in musical circles as a musician's injury. 
It happens to musicians who overpractice, usually in preparation for a recital, performance or recording session.<br /><br />I found all this out <em>after</em> being diagnosed with it.<br /><br />It is not idiopathic in my case. I have the benefit of hindsight, and I know exactly what caused it. It turns out that if you play a certain right-hand arpeggio on a classical guitar enough times -- where "certain arpeggio" here refers to Heitor Villa-Lobos' Etude No. 1, and "enough times" is approximately 650,000 times in a 5-month period<sup><a href="#footnote1">[wtf?]</a></sup> -- you acquire Trigger Finger. That's not <em>precisely</em> what I was playing, but it'll serve.<br /><br />Trigger Finger is a painful, debilitating, demoralizing injury. I highly recommend not letting it happen to you. Your body will begin telling you when it's time to ease up on the practice sessions. Listen to your body when it says that.<br /><br />As for specifics, there's not much to tell. My hand started hurting. Then it hurt real bad for a month. Ibuprofen and cold/hot packs didn't help. It got steadily worse. Even quitting guitar altogether for another month didn't help. I could no longer use my right hand, and it was beginning to feel permanent. I wasn't even sure why it was happening. I was terrified and I began to despair.<br /><br />My Google doctor was great. She referred me to a specialist -- a hand surgeon. I told her I didn't really want to see a hand... S-word. I could barely say it aloud. She reassured me that seeing a specialist didn't necessarily mean surgery. They might have other tricks up their sleeves. So I decided to brave it.<br /><br />My first trip to the specialist only took about 15 minutes. She listened to my disoriented bleating, asked me a few questions, gently felt my hand here and there, and informed me that I had Trigger Finger. She said she was going to give me a cortisone shot. She was pulling out a giant needle as she told me this. 
It just sort of materialized from under the table, the way a knife appears in a bar fight. It was a very large needle. She explained calmly that the cortisone is a steroid that stays wherever you inject it. They use it on athletes to reduce inflammation from certain injuries.<br /><br />Then she stuck the giant needle all the way into the base of my right middle finger and squeezed. Compared to the pain of my trigger finger, the injection felt like a mosquito bite.<br /><br />She told me that I'd start feeling better in a week, and in a month I'd be pretty much all cleared up. If not, I should come back and see her for more treatment. And no, I wasn't going to lose my hand.<br /><br />It was kind of weird, but on my way back to my car I think someone had been cutting onions in the elevator. A lot of onions. In the last fifteen minutes my whole life had been handed back to me with an almost casual lack of concern. I was overwhelmed with onions.<br /><br />A month later I was back to see her. The cortisone had helped a lot. I gave it a 66% approval rating. She said she could give me another shot, or do surgery. This time I'd done my homework. I elected for surgery. That was back in September. It was an interesting story in its own right, but the upshot is she did great. And then after that there was a lot of physical therapy.<br /><br />I typed the word "September" in the previous paragraph three times before I got it right. The word doesn't even have letters that need my right middle finger. My right hand, which had shaped itself into an unusable, agonized claw between March and July, is still afraid to flex and extend my middle finger. It's up to, oh, a 95% approval rating now, which I believe is phenomenally successful. Who could ask more of a hand surgery? It could have been much worse. Much much.<br /><br />But that last 5% is rough. The haters in my hand are a constant reminder of the old pain. 
When I type, or play piano or guitar, my right-hand fingers twist and curl in elaborate, incomprehensible dances to avoid a pain that is for the most part no longer there.<br /><br />Yep, I think it'll be easier to keep my blog posts shorter going forward.<br /><br />Ironically, some good came out of the experience. I've switched to a sustainable new guitar style and a new repertoire, one I enjoy greatly. And I now pay much more attention to economy of motion in my typing. And I spend more time finding pain-saving Emacs shortcuts. It makes me wonder what I might have achieved had I focused on it sooner.<br /><br />Surgery notwithstanding, on the whole I still think it was a great year. The year was in fact more complicated and more painful than I've let on here, but that's life for ya.<br /><br />And now that I'm rested up, I believe I'm ready to start tech blogging again... in moderation, anyway. The rest and relaxation and research did wonders for me. I used to have a lot of open, long-standing concerns about the future of programming and productivity, but my sabbatical last year finally brought me some <a type="amzn" asin="1934356336">clojure</a>.<br /><br /><hr><br /><br /><a name="footnote1">[1]</a> Talk about caring too much. I may explain this 650k figure in a future blog post if I can ever get over my embarrassment.<br /><br /><hr><br /><br /><b>A programmer's view of the Universe, part 3: The Death of Richard Dawkins</b> (May 18, 2009)<br /><br />We're getting close to the end of my blog. After today's entry, I only have three left to write. After that, I'll only blog anonymously or (more likely) not at all.<br /><br />This is part three of five in my "Programmer's View of the Universe" series. 
I struggled for a while with how best to introduce the ideas in this installment, and ultimately opted for a short story.<br /><br />This is a science fiction short story. It's different from many other sci-fi stories in that it is set in the "near future", but it has realistic schedule estimates. So unlike 1984, 2001, The Singularity is Near and all the other sci-fi stories that grossly underestimated their project durations, this one is set 1000 years in the future. I.e., right around the corner.<br /><br />The story is disrespectful to pretty much everyone in the world. It will create a fantastic shit storm. This is probably a good time to point out that I don't speak for my employer. <em><font color="brown">[Edit: Yay for fiction! Apparently marking something as fiction placates people. Nice to know.]</font></em><br /><br />The story is 18 pages (PDF from Google Docs print preview). That's not unusual for my blog, but I went ahead and published it as a standalone document.<br /><br />I'd encourage you to enjoy it, but I'm old and embittered enough to know better. You probably shouldn't even read it. Just wait for someone to summarize it for you.<br /><br />Installments 4 and 5 will not be short stories; they will be regular old blog rants. In them I will further develop these ideas, and I will also attempt to clear up any gross misconceptions about the story, of which there are bound to be many.<br /><br />My final blog-rant entry is the only one I care about anymore. I've been working on it so hard that my fingers have started to fail. It's been tons of fun, aside from the chronic pain. It's about a neat programming language, and Emacs, and lots of other stuff. I can't wait!<br /><br />Oh yeah. Here's the <a href="http://docs.google.com/Doc?id=ddv7939q_20gw8h9pcx">link to the story</a>. I've never done a read-only Google Docs link before, so it's probably broken. Or editable. I don't know. We'll see.<br /><br />This week is going to suck. 
People are going to be mad. Maybe I should take a vacation and come back when the whining is finished. Can someone email me and let me know when it's all blown over?<br /><br /><hr><br /><br /><b>Have you ever legalized marijuana?</b> (April 9, 2009)<br /><br />Over the holidays I read a neat book called <a type="amzn" asin="006135323X">Predictably Irrational: The Hidden Forces That Shape Our Decisions</a>, by Dan Ariely. The book is a fascinating glimpse into several bizarre and unfortunate bugs in our mental software. These bugs cause us to behave in weird but highly predictable ways in a bunch of everyday situations.<br /><br />For instance, one chapter explains why bringing an uglier version of yourself to a party is guaranteed to get you more attention than other people who are arguably better-looking than you are. I personally do this all the time, except that I'm usually the ugly one. The same principle explains a ploy used by real-estate agents to get you to buy ugly houses.<br /><br />Another chapter explains the bug that causes you to be a packrat, and shows why you desperately hold on to things you own, even if you know deep down that they would rate lower than pocket lint on eBay.<br /><br />In any case, well, good book. I'm going to harsh on it a teeny bit here, but it's only one tiny part towards the end, one that actually has little to do with the rest of the research presented in the book. I still highly recommend it. It's only about a 4- or 5-hour read: beyond the reach of most social-network commenters, perhaps, but you can probably handle it just fine.<br /><br />So: about that harshing. Dan Ariely, who seems like a pretty fascinating guy in his own right, independent of his nifty book, says something that's kinda naïve towards the end. It doesn't <em>seem</em> naïve at all when you first read it. 
But naïve it is.<br /><br />Towards the end of the book — and I apologize here, since my copy is on loan to a friend at the moment, and you can't search inside the book on Amazon.com no-thanks to the book's publisher, so I can't double check the exact details — but towards the end, Dan works himself into a minor frenzy over what seems like a neat idea about credit cards.<br /><br /><b>Credit Card Buckets</b><br /><br />Dan's idea is simple and appealing: let's partition credit limits into "buckets". People are always maxing out their credit cards, and it leads to all sorts of financial misery, since the rates are always by definition just epsilon short of legal usury, so most people can never, ever pay down the debt.<br /><br />Dan's idea is more or less as follows: you divide up your credit card available balance into "buckets", where each bucket represents a type of expense. You might, for instance, have a bucket for rent and utilities, a bucket for alimony, a bucket for chocolate and desserts, a bucket for sports and leisure activities, a bucket for dining out, a bucket for home improvement, a bucket for groceries, and a bucket for discretionary spending, or "misc".<br /><br />Each bucket would have its own credit limit, and the sum of all the individual limits would be your credit limit. Let's assume you pay off your credit card entirely every month — not typical, but it simplifies the explanation. Each month, you'd have a certain amount of money to spend on each bucket, and you would not be allowed to spend more than the limit for that purchase type.<br /><br />OK, so that's the way I understood the proposal. There were several pages devoted to it, as I recall, so I may have missed a few nuances here and there, but I'm hoping I've captured the essence of it.<br /><br />As an aside, I read a short diatribe many years ago by a working mom whose kids were always asking her why they couldn't spend more money on entertainment purchases like video games. 
She was having trouble getting through to them, so one month she took her paycheck, went to a bank, and got it issued to her in 1-dollar bills. She took the bills home and piled them up on the table in front of her kids, who were amazed at the giant pile of money she had made. She then went through her budget with them, stacking the bills into piles by expense type: this many dollars for rent, this many for utilities, this many for groceries, this many for soccer uniforms, etc. At the end there were only a couple of dollars left, and the kids soberly realized that they needed to wait about 20 years and then start downloading games illegally online.<br /><br />Anyway, Dan's idea was kind of similar. In order to train consumers not to overspend in a given category, leading to overall overspending, they would be able to opt-in to a program that partitioned their credit, and presumably it would lead to much wiser, more deliberate spending.<br /><br />It seemed logical enough to me! It sounds similar to calorie-counting: I've found that explicitly keeping track of my calorie consumption each day does wonders for lowering my overall calorie intake. Along the same lines, I am 100% sure that if I had an explicit budget, rather than just a vague gut feel for how much I'm spending, then I would spend less money each month. We don't really have a proverb for this concept, but we do for the opposite: "out of sight, out of mind". If your budget is in plain sight, well then...<br /><br />Plus the idea of having <em>types</em> in my credit-card accounting was all the more alluring since I'm a programmer, and I "get" the idea of types. Types are great. Dan is effectively suggesting a strongly-typed approach to credit-card spending, so the programmer in me was all for it.<br /><br /><b>Evil Banks (as if there's any other kind)</b><br /><br />Unfortunately, this story has a sad, bitter ending. 
Normally I <em>would</em> want to add: "like all overly-strong-typing scenarios", but that would just be mean. So I'm not saying it!<br /><br />Dan goes on to explain that banks are far too evil, or at any rate far too self-serving to implement his incredibly cool idea. He actually took the idea to at least one bank, meeting with their board of directors and presenting the idea. How cool is that?<br /><br />Dan says the bank executives seemed to like his idea, and indicated that it might be a great way to get new customers. Credit cards are all pretty much the same (i.e., loan sharks), so they need to find ways to differentiate themselves. A nifty credit-bucketing program seemed like something marketing could run with. They all said they'd look into it and see about maybe implementing it.<br /><br />And then... nothing. They never implemented the idea! Not even a little prototype of it. Nothing.<br /><br />Dan theorized that profit margins, as always, are the culprit here. Even if banks could potentially sign up more customers on the promise of better spending control, there's a fundamental problem here, which is that credit cards make money for the banks <em>based on spending</em>. If consumers aren't spending as much, the banks won't make as much profit!<br /><br />Banks make money off credit cards in at least three ways: they charge the merchant a fee at point of sale, they charge you interest on the loan, and they charge you fees such as the overdraft fee when you inadvertently overspend your limit. All those ways require you to make purchases, and the last way actually requires you to overspend your limit — exactly what Dan's idea is trying to prevent!<br /><br />I suppose a truly evil bank might look at buckets as an opportunity to screw you on fees for each individual bucket. 
But Dan seems to think that on the whole, the fear of decreased margins — induced by the suddenly more rational consumer spending — is what is preventing banks from implementing his idea.<br /><br />At the time I was reading the book, I thought, well gosh, I <em>hate</em> banks. In fact, I don't even <em>use</em> a bank — I now use an investment brokerage that has banking services on the side. You don't have to be rich to do this, and it saves you from ever having to walk into a bank again. And if you choose this route, then whenever you walk into a bank you will immediately be struck by what amazing ghettos they are: little brick buildings with little vaults holding your little dollars, little lines to talk to little tellers who provide you with little help... they're awful. They stink. I detest banks; I've found the whole notion loathsome for at least ten years before hating them became globally fashionable a few months ago.<br /><br />So yeah. Dan had me. The banks are evil. That's why they aren't implementing his idea. Case closed.<br /><br /><b>The little winged nagging programmer angel on my shoulder</b><br /><br />So just like Memento Guy in L.A. Confidential, my mind wouldn't let the case close forever. (Why can I remember Rollo Tomasi but not the actor's name? Oh wait, Guy Pearce. Him.)<br /><br />For the next couple of weeks, Dan's situation replayed in my head like a bad song. I myself have given presentations to boards of executives in the past, usually presentations that had come to naught, and I felt a certain empathy with him.<br /><br />As it happens, I also served time in Amazon.com's Customer Service Tools group for four years, leading the group for the latter half of my stay, and I know a thing or two about credit-card processing. Not a whole lot, but definitely a thing or two.<br /><br />And I'm a programmer. Just like you. (You might not know it yet, but you are. 
Trust me.)<br /><br />The programmer part of me started wondering: how would I implement Dan's idea? What would it take to add "bucketization" to credit cards?<br /><br />And the programmer part of me started to get a sinking feeling in the pit of his... uh, its... stomach. It got the chills. And a fever. At the same time. Why? Because <em>it</em>, by which I mean "a part of me that wishes I could forget it", has been on software projects like that before.<br /><br />The little nagging voice in my head started enumerating all the things you would need to do, like counting so many sheep. First I imagined I worked at the bank, some poor schmuck of a programmer wearing a suit, working banker's hours and golfing every day at 3pm. So, you know, pros and cons. Then I imagined my boss coming in and saying: "Steve, we gotta implement Buckets. The board just approved it. Make it happen. Yesterday."<br /><br />Aw, crap. OK, what to do. First, we need a spec. So, like, I ask my boss a few preliminary questions:<br /><br /><ul> <li>Can customers control the buckets, or are they fixed?</li> <li>If fixed, how many are there? What are their names?</li> <li>Let's assume for the remaining questions that they are NOT fixed, since a predefined set of buckets would be "insanely stupid" and rejected by customers. So, how many buckets can a customer make? Min and max?</li> <li>Can customers give the buckets names? If not, do they have to use numbers?</li> <li>What characters can they use in the name? What's the maximum length? If we need to truncate the name in a printed statement, how do we truncate it?</li> <li>Can a customer change their buckets mid-month?</li> <li>Can a customer change their buckets between months? What if their balance is nonzero? Can they transfer balance between buckets?</li> <li>Can a customer change the name of a bucket? Do names have to be unique?</li> <li>Exactly <em>how</em> does a customer name a bucket? Online? Over the phone? By snail mail forms? 
Talking to a bank teller? All of the above?</li> <li>Same question for all other configuration settings. How? Where?</li> <li>Do credit-card customer service reps have to know about the buckets? How much do they have to know? <em>(hint: everything)</em> Is there training involved? <em>(hint: yes)</em></li> <li>Do the customer-service tools have to be redesigned to take into account this bucketization?</li> <li>What about the bank's customer self-service website?</li> <li>What about the phone interactive voice-response tree?</li> <li>What about the software that sends email updates to the customer?</li> <li>What about the software that generates printed billing statements? How exactly does it represent the buckets, the individual spending limits and balances, the carry-overs from month to month, the transfers, the charge-backs, the individual per-bucket fees?</li> <li>What about the help text on the website? What about the terms and conditions? What about the little marketing pamphlets? Should they try to explain all this shit, or just do some hand-waving?</li> <li>Can a customer insert a new bucket into the list? How are the credit limits of the remaining buckets re-allocated? What if adding a new bucket puts one or more of the older buckets over the limit? Do we charge fees? Do we tell the customer they're about to be charged a fee right before they create the bucket? Is it, like, OK/Cancel? Do we send them a follow-up email telling them they just fucked themselves over? What exact wording do we use?</li> <li>Can a customer delete a bucket? What if there's money in it? What if it's overdrawn? How do we represent the overdraft fee in the database? How do we show the deletion event in their bill?</li> <li>Can a customer merge or consolidate buckets?</li> <li>What if a customer has an emergency situation, plenty of limit in other buckets, and they really really need to charge to a couple of buckets, but they want to avoid an overdraft fee? What do they do? 
Are the buckets mandatory or discretionary?</li> <li>How the hell do we even tell if they're buying "chocolate", anyway? The vendor doesn't tell us the purchase type. How do we know how to charge the right bucket? What if it's ambiguous? What if the buckets overlap? Does the customer need a point-of-sale interface for deciding which bucket to put the charge in? Can they do "separate checks" and split the charge into several buckets?</li> <li>Where are you going? Answer me!</li> <li>WHAT THE EVER-LOVING *FUCK* ARE YOU PEOPLE SMOKING? HUH? HAVE YOU EVEN THOUGHT ABOUT THIS PROJECT FOR MORE THAN A MILLISECOND? THE SPEC FOR THIS PROJECT WILL BE 5,000 PAGES! IT WILL TAKE THOUSANDS OF MAN-YEARS TO IMPLEMENT, AND *NOBODY* WILL UNDERSTAND HOW IT WORKS OR HOW TO USE IT, EVEN IF WE SOMEHOW MANAGE TO LAUNCH IT! IT'S FRIGGING IMPOSSIBLE! IT'S INSANE! __YOU__ ARE INSANE! I QUIT! NO, WAIT, YOU'RE FIRED! ALL OF YOU! AAAAAAAAAUUUGH!</li> </ul><br />The little nagging white-robed behaloed programmer whispering in my ear was getting pretty goddamned irritating at this point. And it asked a LOT more questions than the ones in the list above, which is merely a representative sample. My stress level began approaching what I might call "Amazon levels", and I don't even work there anymore. Thank God.<br /><br />But for all its downsides (e.g. as voiced by my brother Mike, who was in the Navy on an aircraft carrier in the Persian Gulf during the 1990s Gulf War, and later worked at Amazon, and declared after four years that Amazon was _way_ more stressful), Amazon did teach me a valuable lesson, namely that YOUR IDEA IS INSANE. It doesn't even matter what it is. It's frigging insane. You are a nut case. 
Because anything you try to do at Amazon these days involves touching a thousand systems, all of which are processing gazillions of transactions a second, and you want to completely redo the database schema, and <em>you don't even know the answers to these fucking questions, DO YOU?</em><br /><br />I suppose I should think of it as a valuable experience. If nothing else, I understand Complexity in a way most people will, mercifully, never have to.<br /><br />Anyway, I hope I've imparted the basic flavor of my thinking after having been totally bought into Dan's idea. Here's how I (now) envision the days just after Dan's meeting with the bank executives:<br /><br /><b>Day 1:</b> <em>(executives)</em>: Managers, we'd like you to look into this incredible new idea from Dan Ariely. We think it could revolutionize consumer credit-card spending in a way that makes everyone love us and sign up for our services, dramatically increasing both our profit margin and our overall customer satisfaction. And it's an incredibly simple idea!<br /><br /><b>Day 2:</b> <em>(managers)</em>: Programmers, project managers and marketers, we'd like you to flesh out this idea from On High, and give us some time estimates. We all know we only have a budget of about 2 months for ideas from the Board this year, so let's try to make it fit. How long will it take?<br /><br /><b>Day 3:</b> <em>(grunts)</em>: A billion years. We quit. Fuck you.<br /><br /><b>Day 4:</b> <em>(managers)</em>: Executives, we think it will take about 3 years. It's surprisingly hard. We wouldn't be able to do anything else at our current staffing levels. Should we move forward?<br /><br /><b>Day 5:</b> <em>(executives)</em>: Three <em>years</em>? Good lord! For a project that _might_ increase our profit margins and customer satisfaction, but could also cause customers to be so confused that we have to triple our Customer Service headcount? We don't think that sounds... well, reasonable. 
Although it would be a very interesting experiment, it's simply too expensive for us to attempt. Should we tell Dan? Well... it might be patentable, and we might be able to get around to it <em>someday</em> if there's a sudden glut of programmer expertise, so... maybe we'd better just sit on it for now. Who's up for golf?<br /><br />In reality I'm sure it went down a <em>little</em> different from that. For instance, they may have had the Day 5 meeting late, and then gone to a strip club instead of a driving range.<br /><br />But I'm pretty sure that aside from the mundane details, that's exactly how it went down. Because that kind of shit happened at Amazon pretty much every week I was there, for almost seven years. (And astonishingly, we actually managed to launch at least half those crazy ideas, by burning through people like little tea lights. But that's another story. Plus, no bank can execute like Amazon can. Banks just don't have the culture for it. Bless 'em.)<br /><br /><b>Legalization</b><br /><br />So. I'm two glasses of wine into this whine. I'm going to go get a third glass, mark the calories off in my spreadsheet, and then wrap up. If you don't know what's coming by now, then you're pretty stupid, but on the plus side you're an amazingly fast reader, so I'll go through the motions anyway.<br /><br />&lt;gets third glass&gt;<br /><br />It doesn't actually matter what your stance is on the legalization of marijuana, for purposes of this little essay. You could be radically opposed to it on religious, moral, or "parental" grounds. Or you could be so radically in favor that you've been laughing hysterically and rubbing your hands together incessantly ever since you started reading this post. If you know what I mean. 
Or you could be somewhere in between, moderate and yet open-minded.<br /><br />It doesn't matter.<br /><br />This blog is about <em>complexity</em>, the bugbear that haunts software developers, program managers, project managers, and all other individuals associated with trying to launch new software projects and services.<br /><br />Dan Ariely would have made a great VP (that is, Vice President). If you think that legalizing marijuana is a black-and-white, let's just decide it and get the frigging thing legalized once-and-for-all issue, then you too have some VP blood in you.<br /><br />VPs have what my brother Mike refers to as "Shit's Easy Syndrome".<br /><br />You know. As in, shit's easy. If it's easy to imagine, then it's easy to implement. Programming is just turning imagination into reality. You can churn through shit as fast as the conscious mind can envision it. Any programmer who can't keep up is an underperformer who needs to be "topgraded" to make room for incredible new college hires who can make it happen, no matter what "it" happens to be, even if they have to work 27 hours a day, which of course they can because by virtue of being new college hires, they have no social lives and no spouses or significant others, and they probably smoke a lot of crack from being in the dorms so they can stay awake for weeks at a time.<br /><br />That's the kind of programmer we need at our venerable institution. And we are completely anti-slavery, for the record.<br /><br />Shit's Easy syndrome is, well, pretty easy to acquire. Heck, you don't even have to be a VP. Directors sometimes get it if they stay away from the code for too long.<br /><br />As for the rest of us, well, we ought to know better. 
YOU, as a frequenter of reddit and a programmer (wannabe or actual, it doesn't matter), <em>you</em> ought to know better.<br /><br />Let's ask our little naggy angel: what would it take to legalize marijuana?<br /><br />I don't know the answer, and I'm certainly no expert, but I've been on enough projects like this to know how to start asking the right questions.<br /><br /><b>What exactly do you mean... "legalization"?</b><br /><br />So... one minor, teeny-weeny almost insignificant caveat before I continue: I have smoked marijuana (and inhaled it, deeply) on more occasions than I can count. And yet I'm almost undoubtedly smarter than your kid that you're so goddamned worried about. I skipped three grades (3rd, 7th and 8th), entered high school at age 11 and graduated at age 14, took A.P. courses, had stellar SAT scores, was a U.S. Navy nuclear reactor operator, went to the University of Washington and earned a Computer Science degree, worked at major corporations like Amazon.com and Google for many years as a senior staff engineer and/or senior development manager, and now I'm an internationally famous blogger.<br /><br />I don't usually dwell on that, but today it's relevant. It's relevant because I've smoked a LOT of pot, and I dare you to prove that it has impaired me in any scientifically detectable way. We would debate, and you would lose; nevertheless I <em>double-dog dare you</em>.<br /><br />So, well, sure... from <em>that</em> perspective, yeah, I'm in favor of legalization. The laws are stupid. Legalize it, already. For cryin' out loud. Jeez.<br /><br />However, from a <em>programmer's</em> perspective (and keep in mind that I was also an Engineering Project Lead at Geoworks for 3 years, a Technical Program Manager at Amazon for a year, a Senior Development Manager at Amazon for about 5 years, and now I'm a plain-vanilla programmer with 3.5 years at Google, so I've done it all), the idea gives me the chills. 
And a fever.<br /><br />Because laws are pretty much like programs. You have to specify their behavior at almost (not quite, but almost) the same level of detail, using a language that's almost as crappy as our programming languages today — English. Or whatever your native language is: it sucks too. If you don't believe me, ask a lawyer. Or try to write a technical spec in your native tongue that the programmers don't ultimately poke full of holes.<br /><br />Aw, don't try. Don't even bother. Just trust me on this one. Today's natural languages are completely unsuitable for specificity, and "legalese", as much as we all love to ridicule it, is our collective best effort to permit being logical, specific, and unambiguous.<br /><br />I have more respect for The Average Reddit Commenter than I have for, well, the average commenter in any other forum, period, assuming that "period" is stevey-legalese for "except for LTU, news.ycombinator and their ilk, mumble mumble."<br /><br />But the Average Reddit Commenter has gone too far. Everyone these days, when debating the merits and demerits of marijuana legalization, seems to have completely overlooked the fact that it's HARD. It's a project of vast, nearly unimaginable complexity.<br /><br />Think about it. What kinds of laws do we have about alcohol and tobacco? Is it just one law each, saying "it's legal" or "it's illegal?" Of course not, and you're insulted that I asked such an obviously rhetorical question, yet intrigued by my line of reasoning. Admit it! How is marijuana similar to alcohol? How is it different? How is it similar and different to tobacco?<br /><br />Let's let the little angel ask a few preliminary questions, just to see where it takes us, shall we?<br /><br /><ul> <li>Is it legal to drink alcohol in a TV commercial? No? OK, what about marijuana, then? Can you smoke it in a commercial? Can you SHOW it? Can you talk about it? 
Can you show marijuana smoke at a party, without anyone actually being seen smoking it? Can you recommend its use to children under the age of 9? What exactly are the laws going to be around advertising and marijuana?</li> <li>Do we let everyone out of prison who was incarcerated for possession and/or sale of marijuana? If not, then what do we tell them when they start rioting? If so, what do we do with them? Do we subsidize halfway houses? Do we give them their pot back? How much pot, exactly, do they need to have possessed in order to effect their judicial reversal and subsequent amnesty? A bud? An ounce? A cargo ship full?</li> <li>Is it legal to sell, or just possess? If the latter, then how do we integrate the illegality of selling it into the advertising campaigns that tell us it's legal to own it?</li> <li>If it's legal to sell it, WHO can sell it? Who can they sell it to? Where can they sell it? Where can they purchase it? Are we simply going to relax all the border laws, all the policies, all the local, state and federal laws and statutes that govern how we prioritize policing it? All at once? Is there a grandfather clause? On what _exact_ date, GMT, does it become legal, and what happens to pending litigation at that time?</li> <li>Are we going to license it? Like state alcohol liquor licenses, of which there are a fixed number? What department does the licensing? How do you regulate it? Who inspects the premises looking for license violations, and how often? What, exactly, are they looking for?</li> <li>Is it OK to smoke marijuana at home? At work? In a restaurant? In a designated Pot Bar? On the street? Can you pull out a seventeen-foot-long water bong and take a big hit in the middle of a shopping mall, and ask everyone near you to take a hit with you, since it's totally awesome skunkweed that you, like, can't get in the local vending machine? If it's not OK, then why not?</li> <li>Can you drive when you're stoned? What's the legal blood-THC level? 
Is it state-regulated or federal-regulated? For that matter, what is the jurisdiction for ALL marijuana-related laws? Can states override federal rulings? Provinces? Counties? Cities? Homeowners associations?</li> <li>What exactly is the Coast Guard supposed to do now? Can illegal drug smugglers just land and start selling on the docks? Are consumers supposed to buy their marijuana on the street? What happens to the existing supply-chain operations? How are they taxed? Who oversees it?</li> <li>Can you smoke marijuana on airplanes? Can airplanes offer it to their customers in-flight? Is it regulated in-flight more like tobacco (don't get the smoke in other people's faces) or alcohol (imbibe as you will, as long as you don't "appear intoxicated"?) What about marijuana brownies? Are you allowed to eat it in areas where you're not allowed to smoke it?</li> <li>Can an airplane captain smoke pot? A ship captain? A train conductor? The driver of a car? An attendee at a Broadway musical? A politician in a legislative session? What is the comprehensive list of occupations, positions and scenarios in which smoking pot is legal? What about eating pot? What about holding it? What about holding a pot plant? What about the seeds?</li> <li>Speaking of the seeds, are there different laws governing distribution, sale and possession of seeds vs. plants vs. buds vs. joints? If so, why? If not, why not?</li> <li>What laws govern the transportation of marijuana in any form into or out of countries where it is still illegal? What policies are states able to enact? Is it OK under any circumstances for a person to go to jail over the possession or use of marijuana? If so, what are those circumstances?</li> <li>Are there any laws governing the use of marijuana by athletes? U.S. military personnel? Government employees? Government contractors? U.S. ambassadors, in title or in spirit? What are our extradition laws? 
What do we do about citizens who are subject to the death penalty in countries like Singapore for the possession of sufficient quantities of what we now consider to be legal substances?</li> <li>What about derivatives? Are the laws the same for hashish? How do we tell the difference? What if someone engineers a super-powerful plant? How do the new laws extend to a potential spectrum of new drugs similar to THC?</li> <li>For driving and operating machinery, do we have legal definitions that are the equivalent of blood-alcohol percentage, and if so, what are these definitions? How do we establish them? How do we figure out what is actually dangerous? How do we test for these levels? When they are established, do we put up signs on all roadways? Do we update the Driver's Education materials? How do we communicate this change to the public?</li> <li>How does legalization impact our public health education programs? Do they have to immediately retract all campaigning, advertising and distributed literature that mentions marijuana? How does legalization interact with the "Say no to drugs" programs? Do we need extra education to differentiate between a drug that is now legal (but wasn't before) and drugs that are still illegal? What's our story here? What about other drugs that are even less addictive and/or less intrusive than marijuana?</li> <li>Monsanto is eventually going to sue the living shit out of <em>someone</em> for using genetically-engineered pot seeds. Can they sue individuals with a single plant in their windowsill? <em>(answer: yes)</em> Will Oprah step in and help that beleaguered individual? <em>(answer: we'll see!)</em></li> </ul> I'm not an expert, and in fact I've gone to extra-special effort to avoid all possibility of being accused of having researched this subject. I know NOTHING about it.<br /><br />But the questions, they're bugging me. How the hell do we implement all this? Sure, it's "legal" in Amsterdam, or so they say. 
I've never been there, and I suspect their laws are way too vague for the overly-litigious United States of America.<br /><br />I hope it's obvious that we can't say "it's just like tobacco" (it's not) or "it's just like alcohol" (it's not), or (God help us) "it's just like doing alcohol and tobacco together, so take the intersection of their laws".<br /><br />Marijuana, whether you like the idea of legalizing it or not, is a <em>project</em>. It requires an <em>implementation</em>, and the implementation is a lot like that of a software project. The US federal government is analogous to "The Company", and the states are analogous to "The Teams" that comprise the company. Some of them have free time; some do not. Some of them agree with the overall goal; some do not. And every single minuscule little detail has to be worked out, and written up, and voted on, and approved, and then specified, and implemented, and enforced.<br /><br />And there will be bugs. And loopholes. And unexpected interactions. The best-laid plans will go awry.<br /><br />People will die. It's a certainty. Some people are going to die as a direct consequence of legalization of marijuana. I don't like it, and you don't like it, and most of us would probably argue that it shouldn't hold up legislation or legalization indefinitely... but we have to take it into account. Because if it's YOU who dies, smashed to death on the iceberg by your skunkweed-stoned ship captain, you're going to be REALLY pissed off. I guarantee it.<br /><br />Shit is NOT easy. Remember that. Shit is NOT easy. If you think it's easy, then you are being naïve. You are being a future VP. Don't be that way.<br /><br />Try to think about how you would implement it. Yourself. If your boss came to you and said: "Make it happen. Yesterday." Have you ever legalized marijuana?<br /><br />I haven't. But I wouldn't want to be the people who decide how to legalize it. Their asses are majorly on the line. 
Even more than ours were at Amazon.<br /><br />My advice: give it some time. Hell, give _Obama_ some time. Whether you still like him or not, he's not a frigging King, he's a President. He can't make stuff happen overnight by waving his magical sceptre. He just can't. I don't know what you were thinking, but "overnight" is a pipe dream, and "a few months" is <em>definitely</em> "overnight", in presidential terms.<br /><br />Moral of the story: Shit is <em>not</em> easy. Stuff takes time. Months. Years. Decades. It's OK! You'll still be here. Count your calories. Exercise. And you'll still be here to see it all happen.<br /><br />Patience. It's a wonderful thing. I can't wait!Steve Yeggehttp://www.blogger.com/profile/14812997485690838920noreply@blogger.com136tag:blogger.com,1999:blog-13674163.post-27460322547192412592009-03-12T23:06:00.000-07:002009-03-13T03:51:35.115-07:00Story TimeSo I've got all these fancy blog posts planned. More than planned, actually — they're well underway. But it's also been a busy couple of months, so nothing's really ready yet.<br /><br />To make my schedule even worse, I kind of sort of got myself a little bit addicted to the writings of this one blogger. Normally I can't frigging stand blogs, including my own. Everyone always asks me what blogs I read, and I reply: "Um... do books count?" Which of course is greeted with blank stares, since nobody seems to read books anymore these days. Such a shame.<br /><br />Anyway, recently I tried another fling with reddit. 
I'm always getting addicted to reddit for a little while, and then coming off it cold-turkey for a month or two while I reassess whether I want to be spending my life reading pun threads and upmodded lolcat pictures and conspiracy theories.<br /><br />Not to mention the comments, which can individually sometimes be quite cool, but in aggregate only make sense when you read them aloud with the voice of the Comic Book Guy from the Simpsons: "I upvoted you for the appropriate uses of 'its' and 'their' in the link title, but downvoted you because your link actually appeared on a little-known German social networking site several hours ago. I feel it is important that you understand that this is not a zero-vote of abstention, but rather a single upvote and a single downvote cancelling one another out." If you read the comments with CBG's voice, a lot of them make a whole lot more sense.<br /><br />No matter how many times I quit, in the end I always wind up sneaking a peek at the reddit home page in a moment of weakness, and before long I'm hooked again, following the adventures of memes about narwhals and how is babby formed and all this other stuff I have zero context for, but occasionally it's hysterically funny. I'm guessing this is more or less what heroin is like. Some days I don't even shower.<br /><br />In any case, my latest reddit flirtation ultimately led to a secondary and more severe addiction to the writings of some dude who goes by the name "davesecretary". Yeah, him. He's a fucking genius. I started reading his old re-posted stories, got hooked and basically blacked out and lost about three days where nobody knew where the hell I was. Eventually I made my way to his stories about his trip to China. At that point I became actively envious of someone for I think the second or maybe third time in my adult life. I'm just not the jealous type normally, but damn that guy can <em>write</em>. He has a real gift.<br /><br />So now I'm flat-out jealous. 
I'm not going to make a secret of it, either, and pretend he's started some trend and I'm just some lesser light jumping on the bandwagon or some lame shit like that. No, he writes way better than I do, and I'm just going to copy him blatantly, like Terry Brooks rewriting Lord of the Rings line-by-line badly as Shitstorm of Shannara.<br /><br />I had actually already been thinking of publishing some true stories. In fact before I stumbled on davesecretary's tales, I had written down the story below about my brother Dave taking out the garbage, and I had been planning more. But Dave Secretary's stories were like the Great Pyramids next to the mud pueblo of my own ambition. He's like a force of nature; stories just surround the guy like gnats, and he spews out the funniest, most interesting possible versions of them imaginable.<br /><br />So while I work on my more technically ambitious blogs-in-progress, I thought I'd kick off a series of light reading. It's all true stories. None of it is anywhere near as cool as trying to find a box bigger than Kyle's goddamn cereal box, and nothing in them is anywhere near as funny as the line "Iodine, Dave." But at least these are my own stories, and they're kinda fun to write down once you get going on them. I probably have about a hundred stories like the ones here. I'll start with ten random stories that managed to get themselves written first, in no particular order.<br /><br />I hope you like at least one or two of them. If not, well, I'll read your complaints out loud as CBG, so we'll be even.<br /><br /><hr><br /><br />So this one time when I'm in my early 20s, a fragile time during which I'm as fat as a walrus, I'm spending way too much time thinking about the name of the Star Wars character "Dash Rendar". He's some random Star Wars dude, except he's not from one of the actual movies. 
He's only in the aftermarket books or games or toys or whatever.<br /><br />I have no idea where I'd heard the name "Dash Rendar", but I can tell you I absolutely loathed it. Lucas names are always a little hard to believe, but this one was just too much. I guess when you're that fat it's easy to get irritated by little things. At that point in my life I was genuinely annoyed that the famous one in the family hadn't been little Pete Rendar or Floyd Rendar or whoever else with a normal name. No. It just had to be frigging Dash.<br /><br />I'm telling you all this so you can understand my frame of mind for what happened next.<br /><br />I'm coming up the elevator from the underground garage because I'm too obese to take the stairs, and I'm in a surly mood on account of this stupidly-named character Dash Rendar. I'm trying to understand basic human nature here: how could anyone, even a nine year old, suspend disbelief for such an idiotic name?<br /><br />I'm all alone in the elevator, so on the spur of the moment I decide to pretend to be a nine year old and see if it makes any difference. So I make this muscle pose and say in my most ominous nine-year-old voice: "I am DASH RENDAR!"<br /><br />Right then the elevator doors open and fucking Dash Rendar walks in. I am not making this up. If you lined up ten thousand guys and asked a hundred people to point to the one whose name in real life was most probably Dash Rendar, they'd all point to this dude.<br /><br />He of course witnessed my little announcement, pose and all, and all I could do was stand there slack-jawed, trying to think of a clever and succinct way to say: "I was just sort of trying to understand what kind of idiot would call himself Dash Rendar, and itjustsortofrequiredmetoawfuck." Dash was staring at me with a look like he'd just stepped in dog shit, except when he lifted up his foot he'd found me there instead. 
I was embarrassed enough to wish I would die spontaneously, but not quite enough for it to actually happen. So instead I just walked out of the elevator and never came back there again. Ever.<br /><br />Afterwards I definitely established some new rules about what kinds of things I'll try in elevators.<br /><br /><hr><br /><br />My brother Dave used to work as a waiter at Applebees. This was back about 2 years after he'd graduated high school as a varsity football player, after which he'd settled into a life of comfort and no small quantity of pizza, and he'd blimped up to about 260 pounds. But it happened so fast that he wasn't entirely in touch with how much he actually weighed.<br /><br />One day before the restaurant opened they were having the usual employee meeting in the back room, where everyone stood in a circle while the manager ran them through the day's specials and whatever else waiters need to know for the day. Dave was sweating and getting a little fidgety, so he found a chair and sat down.<br /><br />The next thing that happened was a loud BANG, and everyone looked over at Dave, who was sitting on the floor with his mouth open in a big round 'O'. He had completely flattened the chair. It had once been a strong industrial solid metal chair, but now all four of its legs were sticking in different directions like a baby deer crushed under a UPS truck.<br /><br />Everyone was a little stunned, and the boss felt like he needed to say something to break the awkward silence, so he said delicately: "Gee, Dave, looks like it might be time to start cutting back a bit, eh?"<br /><br />Dave said later (after losing like 100 pounds) that at the time he didn't know what the FUCK the boss was talking about, and all he could think was: This chair is DEFECTIVE!<br /><br /><hr><br /><br />So that reminds me of this time I was watching the Seattle Seahawks play at their (at the time) brand-new stadium. 
We had sweet seats on the 50 yard line, in the second row back, and we were watching a pretty awesome game between the Seahawks and I think it was the Bears or the Giants. I forget which, but it was definitely one of the two. Probably the Bears, so let's go with that.<br /><br />Anyway, the row of seats in front of me was occupied by some Bears fans. There were about twelve of them, and every single one looked to weigh well over four hundred pounds. It was seriously the fattest row of people I've ever seen. They were squeezed up against each other in this tangle of arms and legs because they couldn't fit in their assigned seats that had been designed for ordinary three-hundred pound fatass football fans.<br /><br />Every time the Bears had a good play, they would all stand up and roar "YeaaaaAAAAH!!!!", and then they'd all sit down at the same time, on account of their arms being interlocked due to the aforementioned fatitude. They did this little act over and over for the first half of the game: leaping up, bellowing fiercely, then crashing down again as a unit.<br /><br />Finally I think just after half-time the Bears executed an especially good defensive play, and they all stood up and screamed wildly as usual. But this time when they crashed their asses down, they ripped the bleacher right out of the fucking concrete.<br /><br />There was this horrible thundering tearing noise like an earthquake, and the whole row of whales spilled forward onto their faces, with their twelve giant asses sticking up at the rest of us. It looked like a bomb had gone off. There were like twenty iron girders that had been ripped right out of the concrete. To say it was the most wonderful thing I'd ever witnessed would be a gross understatement. 
I think even now, ten years later, it might still be my favorite whale ass sticking up scene in my whole memory.<br /><br />To make it all even more sublimely beautiful, our beer splashed all over them in slow-motion because the cup-holders on the backs of their seats were thrown forward with the rest of the wreckage.<br /><br />Anyway, after a few seconds of general shock and hilarity, the ushers ran over to see if the herd were all OK. They had finally managed to extricate themselves and wipe most of the beer off, and they all stood up and started screaming: "This bleacher is DEFECTIVE!"<br /><br /><hr><br /><br />Once my family all went to this Chinese restaurant in Seattle's Chinatown. It was one of my favorites, and on this particular occasion I was there with my at-the-time girlfriend Stephanie. Stephanie just happened to be mainland Chinese, from Beijing. But unlike everyone else I've known from Beijing, who by and large seem like pretty laid back people, Stephanie was unusually sensitive to race and culture issues, and she kept a keen eye out for any perceivable slights or offenses. (Everyone else said she was just plain nuts, but I'm trying to be charitable here.)<br /><br />Steph asks my brother Mike what he's ordering. He ponders the menu a while and answers: "Fly lice." Stephanie instantly blows up at him: "What do you mean FLY LICE!!!1!!?? You're insulting our culture? We have 5000 years of Chinese culture and you Caucasians are always insulting us! I can't believe you made fun of it and called it fly lice! That's very rude! That's not how we pronounce it! I can't believe I'm so offended!" and so on in that vein. 
Everyone else in the family, including me, suddenly becomes tremendously preoccupied with trying to figure out how to do origami with our chopstick wrappers.<br /><br />Mike listens patiently, since let's just say this sort of outburst isn't wholly unfamiliar territory when Stephanie's around, and after she finally simmers down a little he says gently: "Uh... ok. Sorry."<br /><br />So the (Chinese) waitress comes around and is taking our orders, and when she gets to Mike, Stephanie starts hyperventilating a little in case she'll have sudden need of some extra screaming oxygen. Mike says in his blandest and most American possible voice: "I'd like to have the fried rice, please."<br /><br />The waitress nods happily and says "FLY LICE" really loud and writes it down on her pad. Mike, always an excellent poker player, manages to keep his face pretty straight, but I can tell his eyes are sparkling just a little. Meanwhile Stephanie's eyes have grown to the size of large saucers, and she hisses loudly: "She said FLY LICE!!!!!! Hee hee hee hee hee hee HEE HEE HEE HEE HEE!" Mike just shrugs, like, "hey man, I just want to not get yelled at anymore."<br /><br />We don't call it fly lice now, though, since you just never know.<br /><br /><hr><br /><br />When I was growing up the holidays were always crazy because I had over 20 aunts and uncles, and seventy or eighty first cousins, and of course we had to do a massive holiday family party at my grandparents' teeny house every year. Every family would bring a few friends, so there would be well in excess of 100 people running around in pure holiday chaos for a day.<br /><br />We used to do this secret-santa gift exchange where everyone would put their name in a hat a few weeks in advance, and then we'd all draw names, and that was the person you had to get a gift for. 
It wasn't split into adults and kids or anything like that.<br /><br />With an extended family that big, it's kinda hard to keep track of everyone, especially the new additions. It was also kinda hard to know what people actually wanted, and some of my uncles weren't too big on preparation. Uncle Harold was pretty much the worst at it. We loved him to pieces, but he just couldn't get the hang of the holiday party or the secret santa exchange.<br /><br />To give you the basic Uncle Harold flavor, one year he didn't show up because on the way to the party he decided to stop in a bar down the street, and he stayed there drinking bloody marys until midnight. Another year he was tasked with bringing the turkey, and he showed up at dinnertime with a frozen-solid turkey. And one year his secret santa target was Aunt Celie, who was infamously religious and pretty much nothing else, just religious, so Harold, totally at a loss, got her a picture of Jesus and taped fifty bucks to the back of it. All the adults were pissed off and all the kids thought it was pretty much the funniest goddamn thing in history. Either way, Uncle Harold just couldn't win at Christmas.<br /><br />But this one year he really outdid himself. I think I was about 11 or 12 years old, just old enough that I was no longer running around screaming like the next generation of cousins. So I got a chance to sit back and watch the party unfold. I'm sitting there watching Uncle Harold give his secret santa gift to one of my cousins, one of Aunt Diana's kids. For some reason Harold looks embarrassed.<br /><br />My dad had 11 brothers and sisters, all married with like 6 or 8 kids each, and there was no way any of us could really keep track. Usually we'd get in touch with an immediate family member and try to get some direction, but Harold, as usual, just winged it. 
I don't remember which cousin it was, but last time Harold had seen him, he was a baby, so Harold, thinking it through with his usual clarity, got him a jar of Gerber baby food.<br /><br />Unfortunately it turns out it's been like six years since Harold had last seen him. So my cousin opens up his present and I hear him saying to my aunt Diana: "Mom. Uncle Harold got me baby food." Diana was busy with two other crying kids, and she's not feeling particularly merciful, so she says: "Well go tell him thank you."<br /><br />So my cousin walks over to the tree where Harold is sitting, and he says: "Thank you for the baby food, Uncle Harold."<br /><br /><hr><br /><br />This story is more grim than funny, but it's a story. With sequels, no less!<br /><br />In college I lived in a big four-bedroom apartment with 3 roommates for about a year. It wasn't air-conditioned and during summer it was hotter than hell, and we all kept our windows open most of the time.<br /><br />I was just starting to drift off to sleep one night, laying on my back, when I heard this loud clicking noise. It sounded like snapping a ball-point pen out of its plastic cap, and the clicks were coming at roughly one per second in irregular bursts.<br /><br />The clicking noise was coming from between my eyes. Not, like, a foot away, or an inch away, but from directly between my eyes, near the bridge of my nose.<br /><br />I remember thinking: "That's odd. Whatever could be making that loud clicking noise coming from between my eyes? I've never heard anything like that before."<br /><br />Part of me was falling asleep, and was thinking "mufbmlflfsleeep go sleep go sleep mblfjust don't worry abouuuuuut it", or something like that.<br /><br />But there was this other little part of me, a tiny voice, that was busy thinking out loud to its tiny self: "It could be a spider."<br /><br />My falling-asleep part woke up a little at that point, and pondered it for a few dozen clicks. Then it replied: "No. 
No, it's not a spider, because that would just be TOO horrible to contemplate. So that's not what it is."<br /><br />And the tiny voice was like, "Well you come up with a non-spider explanation then. You can't just ignore loud clicking noises coming from BETWEEN YOUR EYES, dude."<br /><br />Falling-asleep voice: "Spiders don't make noise. So there."<br /><br />Tiny voice: "Small ones don't. Medium-sized ones don't. But we don't really know what BIG spiders do, now, do we? I bet a really big one could make a noise just like that."<br /><br />At this point, opinion-wise, I'd say I was coming down solidly on the side of falling-asleep voice, but neither the clicking nor the tiny voice would shut up, so I rolled over onto my side so I could think about it from a different angle.<br /><br />As I rolled over I felt something huge jump off my face. It landed on my arm and started dragging itself along with what felt like a dozen scrabbly legs, and of course I was like WAAAAAAAAAAAH!!!! and I jumped out of bed faster than any human being has ever done before, and turned on the light.<br /><br />Falling-asleep voice and tiny voice were both, like, "Oh, shit!" Sitting there next to my pillow was the biggest fucking wolf spider I've ever seen in my life. It had a good four inch legspan, and it was very specifically glaring at me. There was no mistaking its expression. It was really pissed off. At me.<br /><br />So I do the only thing I know how to do in Large Spider situations, which is to scream for help. "AAAH!!! A FUCKING SPIDER CRAWLED ON MY FACE!" My roommates' doors all started opening and they came out in their bath robes, all keenly interested to see the spider that had crawled on my face.<br /><br />Jacob is first to arrive. He takes one look at this massive glaring spider and says: "Holy shit dude, that thing's as big as a horse!" and leaves as fast as he came in.<br /><br />Dave shows up next. "Holy crap, that thing crawled on your face? 
You must be psychologically damaged!" Dave opts to stand and goggle at the giant spider, from a safe distance.<br /><br />Then Eric comes in and says: "Oh, it's just a hobo spider. I'll take care of it." He walks into the bathroom and grabs ONE square of toilet paper. Not a whole handful, no, just one teeny slice, like he's going to wipe the spider's butt with it or something, and he goes and grabs the spider with it. We can clearly see the spider's legs waving around outside the toilet paper, and Eric says: "I'll just put it outside your window."<br /><br />I of course was having none of that crap. "How the hell do you think it got on my face in the first place? Flush it! Now now now! Flush!" Eric, clearly disappointed, flushed the thing, and I'm certain that it now lives in the Seattle public sewer system, feeding on rats and stray dogs and plotting its revenge on me.<br /><br />My actions that day clearly angered the Spider God, because over the next few years I had several more spider-on-face incidents, at least one of which was way more dramatic than this one. It's a bit traumatic to tell both stories at once, though, so I'll wait a bit on that one.<br /><br /><hr><br /><br />When my brother Dave was around 14, our family lived in a house in Southern California. It was kinda rainy at the time, which is sort of unusual for where we lived. On this particular California winter day it was Dave's turn to take out the trash.<br /><br />Our city-issue trash bin was out in the carport, this sort of concrete alley next to the house where our big Dodge van was parked. The trash bin was on the other side of the van, next to a six-foot brick wall separating us from the neighbor.<br /><br />Dave grabs the trash bag from the kitchen and heads outside. He walks over to the trash bin, opens the lid and sees that it's completely full. There's a plastic garbage bag right at the top, and the lid just barely closes over it. 
Dang.<br /><br />It's starting to rain pretty hard, and Dave just wants to get his bag into the can and get back inside, so he figures he'll do the old Human Trash Compactor trick. He puts down his bag, climbs up on the brick wall, aims for the bag on top of the bin, and jumps.<br /><br />Usually this works pretty well, right? Your plummeting body weight smushes the bag down just far enough to make room for another bag, and you hop out unharmed.<br /><br />Unfortunately for Dave, it turns out that this time the bin was completely empty except for the bag on top. That one bag was full of trash, but it was so bulky that it hadn't fallen down into the bin.<br /><br />So Dave jumps off the wall, plunges all the way into the bin, and the lid slams shut on top of him. His momentum makes the bin tip over, and it wedges itself solidly between the van and the wall, trapping Dave inside with the garbage.<br /><br />Dave's no more claustrophobic than the next person, but the whole thing caught him a little off guard. In his mind's eye he'd been envisioning some sort of heroic plunge that would compress the garbage about a foot or so, followed by a dashing bounce onto his feet on the driveway. Instead, in just under one second he'd materialized sideways in a dark stinky fallen-over garbage can getting pounded by rain, and the lid wouldn't open.<br /><br />So he did the first thing that came naturally to him, which was to panic and kick and scream and thrash and flail and try to claw his way out. Pretty much what you or I would have done.<br /><br />However, he was really stuck in there pretty good, and nobody could hear him because of the rain, so it took about ten minutes of violent side-to-side heaving before he finally rolled the can out from between the van and the wall.<br /><br />So we're all sitting in our nice, comfy living room, watching TV. Dave has gone to take the garbage out. 
Ten or fifteen minutes later, after we've all totally forgotten about him, the front door bangs open and Dave barges in, wild-eyed and soaking wet and yelling at us at the top of his lungs. He's completely covered in garbage: coffee grounds, smears of leftover food, pieces of dirty paper, part of a banana peel on his shoe, brown slime all over his head and arms and legs and clothes, and he's screaming: "DIDN'T ANY OF YOU HEAR ME? I WAS STUCK IN THE GARBAGE CAN!"<br /><br /><hr><br /><br />Back when I started working at Amazon.com in 1998, the company was in this little building in downtown Seattle in kind of a bad part of town. I mean it wasn't terrible, but there were definitely some issues. There was a needle exchange across the street, which was cool and all, but a fair number of the drug users did their thing in the alley behind the building we were in. So you didn't really want to walk out the back door, or you'd run the risk of stepping on some dude with a syringe dangling out of his arm.<br /><br />The nearest place to eat was the Scaryaki joint across the street, next to the needle exchange. It was this teriyaki place that we all called the Scaryaki, since even though the food was pretty good almost nobody ever went there because it was scary. Usually we'd walk a couple blocks past it, which took us to Pike Place Market, which is also scary in its own way but is much less overtly threatening. Plus there's more culinary variety.<br /><br />One day my friend Jacob and I left the 'zon premises to eat at Matt's in the Market, which was this really awesome hole-in-the-wall joint that was Zagat rated and nobody knew about it and it was delicious.<br /><br />On the way back to work, we're crossing the street in front of the Scaryaki, and I can't help but notice Jacob's doing an unusual amount of looking at my ass and my legs. We're both straight, and as a consequence we usually try to avoid staring at each other's asses as a general rule, so I'm a little annoyed. 
But he's definitely checking me out with waaaay too much interest.<br /><br />I finally give him the Glare, and he says: "Where'd you get those pants from?"<br /><br />I shrug. "I don't know."<br /><br />Jacob is now really keenly eying my pants, which are these dark blue cotton slacks I've been wearing to work on and off for at least a few months now. He starts being more demanding. "No, really, where'd you get those pants from?"<br /><br />I'm like, "Dude, I don't know! Stop looking at me!"<br /><br />Jacob suddenly announces, in a really loud voice that everyone within a block or so can hear: "You're wearing my pants!" A couple of people in the vicinity, including some druggies and some coworkers, perk up with some interest as the argument unfolds.<br /><br />Me: "No I'm not! What are you talking about?" <em>&lt;walking faster&gt;</em><br /><br />Jacob: <em>&lt;running to catch up, pointing at my ass&gt;</em> "Those are my pants! Where'd you get them from? Those are mine!"<br /><br />Me: "I don't know where I got them! I just had them! They're mine, OK? Leave me alone!" I'm now running full bore for the doors, since I want to get back to work and out of this sudden surreal nightmare.<br /><br />And all the while I'm thinking to myself: "Where DID I get these pants? I never buy blue slacks. I found them in my closet one day, so they MUST be my pants. They fit pretty well, even though they're a little long. So they're mine! I must just not remember buying them!"<br /><br />We burst into the lobby, where everyone in the frigging company is on their way back from lunch, and Jacob is running after me yelling: "I recognize those pants you're wearing! Those are MY PANTS!" There was chaos for a while. It was pretty messed up.<br /><br />The funny thing is, it turned out they actually were his pants. He and our mutual friend Jeff and I had all gone out to El Gaucho earlier that year. 
It's the awesomest steakhouse in Seattle, arguably in the whole country, and we all dressed up after work one day to eat there. I drove, and they threw their work clothes into the trunk of my car, which was almost completely filled with random crap. When they got their clothes out later, Jacob didn't see his pants or something, and I must have thrown them in my laundry when I finally cleaned my car, months later. One day I started wearing the pants without really thinking too deeply about their origins, and the rest was history.<br /><br />He probably could have picked a better way to tell me, though.<br /><br /><hr><br /><br />I remember when I was 23 years old, my dad decided to have one of those Dad to Son talks. He'd clearly thought seriously about it, because he sat me down and gave me one of those This Is Important looks. He said: "You know how sometimes they lose your file?"<br /><br />"Uh... what?"<br /><br />"You know, like when you call up to make a dentist appointment, and then when you get there they have no record of the conversation? Or you set up an account with the cable company, or whatever, and they can't find it later?"<br /><br />"Er... yeah, sure. That happens to everyone sometimes."<br /><br />My dad is like, "Nope. It happens to us way more often. And when it happens, tell them to look under 'W'."<br /><br />"What? 'W'?"<br /><br />"Yep. Ask them to look under Wegge instead of Yegge. 9 times out of 10 they'll find it."<br /><br />"WTF?"<br /><br />"Yep. It's weird. But when you spell our name, Y-e-g-g-e, a lot of people write or type 'W' instead of 'Y'."<br /><br />I'm thinking, "You waited until I'm 23 years old to tell me this?" But I'm also thinking: "Damn, are people really that stupid? And if so, how the hell didn't I notice this myself?"<br /><br />"Uh, thanks Dad. I'll keep an eye out!"<br /><br />So I watch. And listen. And sure as shit, he's absolutely right. The percentage is pretty high, like maybe 10% to 20% of the time. 
Someone (in person or on the phone) will ask me to spell my name, and I'll say 'Y', and they'll enter 'W'. A lot of the time I'll actually be in a position to watch them as they do it -- I'll be looking over the rental-car counter or whatever, and when you're looking at a keyboard upside-down from the side, you can see the 'W' as they type it.<br /><br />I have no idea how many years it took my dad to figure this out, but he's a pretty perceptive dude, and he was 43 when he told me. So we're talking about half a lifetime of watching people fuck up, and eventually realizing there's a pattern to it. Bravo.<br /><br />Nowadays I'll say "Yellow Echo Golf Golf Echo" if I'm on the phone, since it slows them down just enough to think about what they're typing. In person, when I can see what they're typing, I still say "Y", because I'm always dying to know if they're one of Those People. If they get it wrong, I'm like, "No, 'Y', not 'W'", and they always say: "Gosh, I have no idea why I did that!"<br /><br />Me neither. But I think it's because when you pronounce the letter Y, it starts with a W-sound, as in "WHY"? Sometimes it even seems as if the slower and more deliberately I say 'Y', the more likely they are to get it wrong, because my lips started off with this big 'W' sound.<br /><br />Anyway, the best one ever was at a Jiffy Lube. The Lube Supervisor Dude was asking me for my personal information and writing it all down on a form on his clipboard. He apparently felt he was better qualified to write my personal data on the form than I was, and to be fair, he had pretty good handwriting.<br /><br />He asked me to spell my name, and I said: "Y". He wrote "W". So far, so good. I really didn't want these fuckers to have my personal information just because they gave me an oil change, anyway.<br /><br />I said "e", and he wrote "i". Wow, this was new.<br /><br />I said "g, g" and he wrote "jj". 
Cool!<br /><br />I started to get kind of excited to see if he'd get every letter wrong. I said very carefully: "e", and he wrote "i", completing my last name as "Wijji". It was Jiffily Lubriciously Awesome. I told him "THAT'S EXACTLY RIGHT!", like he'd just won on Who Wants to be a Millionaire, and he gave me this big shit-eating grin with several missing teeth. It was nice to make him feel happy.<br /><br />But check it out: I told this story to my friend Gayle, whose last name has a bunch of doubled letters in it, including ending in "nn". She said that she'd found empirically that in order to get people to write "nn" at the end of her name, she had to say as many as four or five N's. She's gotten into the habit of spelling it with "enn enn enn enn enn" at the end so they'll write two of them.<br /><br />Go figure.<br /><br /><hr><br /><br />OK, last story for today.<br /><br />I've been a snowboarder for almost 20 years, but waaaay back in the old days I actually used to ski. I learned in my late teens, and I spent 2 or 3 seasons skiing, mostly using Navy rental equipment. I have all sorts of Navy stories to tell, but they'll have to wait for another day. Suffice it to say the Navy provided the rental equipment for this story.<br /><br />This was back when I was at the US Navy S5G Nuclear Reactor Prototype training plant in the middle of Idaho. "S" was for "Submarine", 5 was the reactor model number, and "G" was for General Electric, who had made the reactor. Hmmm, maybe I should tell some of my reactor stories someday... they're definitely interesting. Anyway, the reactor was at a Department of Energy site in the desert in the middle of Idaho, with a 60-mile bus ride from the nearest city, Idaho Falls. 
I'll leave it to your imagination as to why they located the experimental and training reactor facilities out in the middle of a desert, but your first three guesses are probably all more or less correct.<br /><br />Anyway, the downside to the program was that we were in Idaho, where the fun thing for residents to do is follow potato trucks in their cars, trying to hit dislodged potatoes. That's pretty much the pinnacle of entertainment in that fugging dustbin wasteland. The upside was that there were some really nice ski resorts nearby, so during winter we got to do a lot of skiing.<br /><br />This one time we were either at Jackson Hole or Grand Targhee. Both of them are in Wyoming, but only about 2 hours from town, so we usually went to one of them. The way the Navy Nuke program works during the prototype training phase is that you work 7 days on, 1 day off, 7 days on, 2 days off, 7 days on, 4 days off. This is actually a pretty doable schedule, and the once-a-month 4-day weekend was always something you planned in advance, since that's a lot of days for screwing around.<br /><br />A lot of my friends were skiers, and snowboarding was still pretty new back then. There was one dude, I think his name was Lundberg (everyone goes by last names in the Navy), who was learning to snowboard, and he pretty much spent all his time falling on his ass. So I decided to ski. The Navy pretty much takes care of all your needs, including ski rentals, so I went to the local Navy ski rental place and got hooked up with some rental skis, boots and poles.<br /><br />Unfortunately this one time, which was probably no more than my seventh or eighth trip ever, the rental guy adjusted my bindings too loose. Bindings are the things on your skis that hold your boots in place, and you don't want them to be too tight or you could break your legs in a bad crash. You want them to be tight enough to hold during an aggressive turn, but loose enough to pop out in a crash. 
And mine were way too loose.<br /><br />We ski for a few hours, and I've warmed up a little, so I want to work on my turns instead of snowplowing down steep stuff like a sissy. So I follow my friends to a black diamond. When I was skiing I would actually follow my friends pretty often, which was completely idiotic because they've been skiing since they were in the womb, and they look like frigging Olympians, and they're always dragging me off to slopes I have no business being on. But for some reason I still did it.<br /><br />This slope is a pretty steep black -- nothing for me today on my snowboard, but back then it was way out of my league. But it doesn't hurt that much to fall, so I went ahead and tried a real turn. Of course as luck would have it my binding popped out and I went flying straight down the slope on my face. It was so steep that I actually started picking up speed, and I wasn't experienced enough to know how to stop myself, so I just started sliding faster and faster downhill, face-first on my stomach.<br /><br />I remember being vaguely aware of people stopping and looking at me and pointing as I accelerated by them, but after a hundred feet or so I had picked up enough speed that I couldn't really do anything except emit this gurgling screaming noise. The snow was bunching up in my face and going in my eyes and up my nose and down my throat, not to mention down my shirt and pants, but all this plowing wasn't really slowing me down, so I wound up sliding about three hundred yards on my face. I didn't have any really good ski gear at the time, though. I was actually wearing baggy jeans, if you can believe it. So when I finally stopped near the bottom, I was caked in snow and ice.<br /><br />Someone was nice enough to bring me my ski from way up the slope, and I hobbled into the lodge to grab a beer. I was sitting there and everyone was laughing and having a great time, but I was FREEZING. I went and sat by the big fire, but it wasn't helping. 
I started shivering badly. Finally I stood up and lifted up my shirt and about ten pounds of snow fell out, which sort of explained it.<br /><br />My brilliant friends told me that I should go back up for another run before we left for the day. They said you stay warm on the slopes, which is more or less correct if you're actually skiing and not being a human snowplow. I really wanted to warm up so I went back up for one more run.<br /><br />At the top I took a cat-track between slopes -- just a road, basically. The cat-track was running along the side of a steep roped-off drop-off with a bunch of trees. The cat track headed for the drop-off, then made a sharp turn to the left. When I got to the turn, I turned too hard, and my right binding snapped open and my ski shot out from under me, and of course I wiped out.<br /><br />I'm laying there on my back, thinking "man, I'm just not sure about this whole skiing thing", and an older guy skis up next to me and asks me if I'm OK. I'm still laying on my back, with my right leg bent awkwardly under me, trying to catch my wind. I tell him I lost my ski, and that I think it went over the edge. I ask him if he can see it.<br /><br />He skis over to the ledge. I lay there, sun shining on my face, snow on my back, freezing my balls off, just kind of trying to enjoy the moment as best I can while he assesses the situation. I look over and he's at the ledge, scanning the mountain. A bird flies by. Some time elapses -- probably no more than 30 seconds, but it seemed like eternity.<br /><br />I finally ask the guy, "Hey, can you see my ski?" He looks at me, then back down the mountain. Then he takes a deep breath as he decides how best to tell me. Finally he speaks.<br /><br />"Yep. 
She's still on the run."Steve Yeggehttp://www.blogger.com/profile/14812997485690838920noreply@blogger.com42tag:blogger.com,1999:blog-13674163.post-32286672113655409572008-12-27T03:32:00.000-08:002009-01-01T15:52:46.760-08:00A programmer's view of the Universe, part 2: Mario KartThis is the second installment of a little series of discussions. They're not much more than that, just discussions. And I hope I'm <em>inviting</em> discussion rather than quenching it. But I'll be honest — the goal of this series is to pound a stake through the heart of a certain way of thinking about the world that has become quite popular. If my series fails in that regard, I hope it may still provide some entertainment value.<br /><br />Part one, <a href="http://steve-yegge.blogspot.com/2008/10/programmers-view-of-universe-part-1.html">The fish</a>, was about a twisty line and a fish's examination of it. Today we move to a twisty plane.<br /><br /><b>Embedded Systems</b><br /><br />There are many kinds of computer programs, and many ways to categorize them. One of the broadest and most interesting program categories is <em>embedded systems</em>. These systems are the centerpiece of today's little chat.<br /><br />Embedded systems are a bit tricky to define because they come in so many shapes and sizes. Loosely speaking, an embedded system is a little world of its own: an ecosystem with its own rules and its own behavior. So an embedded system need not even be a computer program: a fish tank is also a kind of embedded system.<br /><br />We call them <em>embedded</em> systems because they exist within the context of a <em>host system</em>. The host system provides the machinery that allows the embedded system to exist, and to do whatever it is that the embedded system likes to do.<br /><br />For fish tanks, the host system is the tank itself, which you may purchase from a pet store. 
A tank has walls for holding the water in, filters and pumps for keeping the water clean, lights for keeping the fish and plants alive a little longer, and access holes for putting your hand through to clean the tank. There's not much to it, really. The embedded system is everything inside: the water, the plants, the rocks, the fish, and the little treasure chest with bubbles coming out.<br /><br />For computer games, another popular kind of embedded system, the host system is the computer that runs the game: a PC, a game console, a phone, anything that can make a game exist for a while so that you may play it.<br /><br />Programmers have been building embedded systems for many decades: a whole lifetime now. It is a well-studied, well-understood, well-engineered subject. Gamers understand many of the core principles intuitively; programmers, even more so. But in order to apply all that knowledge <em>outside</em> the domain of embedded systems, we will need some new names for the core ideas.<br /><br />The most important name is the One-Way Wall. I do not have a better name for it. It is the most important concept in embedded systems. In lieu of a good name, I will explain it to you, and then the name will stand for the thing you now understand. It's the best I could come up with.<br /><br />But first let's dive into an embedded system and see what this wall looks like from the inside.<br /><br /><b>Mario Kart</b><br /><br />I will assume you've played <a type="amzn" asin="B000XJNTNS">Mario Kart</a>, or you've at least watched someone else play it. Mario Kart is the ultimate racing game in terms of sacrificing realism for fun. It bears so little resemblance to reality that it's a wonder it tickles our senses at all. 
The Karts skid around with impossible coefficients of friction, righting themselves from any wrong position, and generally making a mockery of the last four centuries of advances in physics.<br /><br />It's really fun.<br /><br />Mario Kart, like all games, has a boundary around the edge of the playing area. In Mario Kart you bump into it more often than in most other games, which is part of the reason I chose it to be our Eponymous Hero. If you want to win a race, you will need to become quite good at accelerating around corners, which means you will spend a fair amount of time bumping up against an invisible wall.<br /><br />You know the wall I'm talking about, right? It's invisible, so you can see right past it to the terrain beyond. But the wall is there, and you are not permitted to venture beyond it.<br /><br />In slower-paced games, when you arrive at the invisible map boundary, you will sometimes be told by the game: "You can't go that way. Turn back!" And since that is your only option, you comply. These invisible boundaries are non-negotiable.<br /><br />In other games, you may stop on contact with the boundary, or perhaps bounce off. But the boundary is always there.<br /><br />Imagine Mario and Luigi chatting about the you-can't-go-that-way wall. Their conversation might go something like this:<br /><br /><b>Mario:</b> "Luigi, my brother!"<br /><br /><b>Luigi:</b> "Maaaarioooooo!"<br /><br /><b>Mario:</b> "Yes, Luigi. I am a here. Tell me my brother, why is it that every a time I a spin around the cactus in the third a bend of the Desert a Track, I get a stuck and have to start accelerating from nooooothing?"<br /><br /><b>Luigi:</b> "Whaaaaa?"<br /><br /><b>Mario:</b> "Brother, the Desert a Track! It's Number a Three! You know the big a bend, where you have to slow down? I am always a forgetting to slow, and I just a stop. Just a like that!"<br /><br /><b>Luigi:</b> "I don't know, Brother. 
That Bowser, he is always a squishing me right before I hit that turn, and I am a flat like a pan a cake for a looooong a time."<br /><br /><b>Mario:</b> "What about that a time two races ago, where Wario hit you and you a spun around and you a headed for the hill, and you got a stuck and wailed for me?"<br /><br /><b>Luigi:</b> "Ah, that Wario! I will get him next a time!"<br /><br /><b>Mario:</b> "Why did you not just turn around then?"<br /><br /><b>Luigi:</b> "My Kart, it did not a move, no matter how I wailed."<br /><br /><b>Mario:</b> "That is a what I am a speaking of, Brother. Your Kart moves in all other places, but if you head for the hills, it just a stops a suddenly!"<br /><br /><b>Luigi:</b> "I a <em>hate</em> a stopping a suddenly."<br /><br /><b>Mario:</b> "Yes, Brother. So do I. But why can we not traverse that a part of the hill? What is on the other a side?"<br /><br /><b>Luigi:</b> "I think it is Track 4, Brother. They do not want you to wind up in the lake."<br /><br /><b>Mario:</b> "Whaaaa? Who is this a 'they'? And why can we a not see the lake from a Track a 3? I think there is a nothing there, Brother. Just a more hills."<br /><br /><b>Luigi:</b> "No, my brother. I think it is a Track a 4. Or maybe Track a 2. There must be a something there."<br /><br /><b>Mario:</b> "Perhaps you are a right, my brother. We should get a back to a racing now. We can talk a more about a this after the next a race."<br /><br /><b>Luigi:</b> "Yes, brother. I will a get that Wario this a time!"<br /><br /><b>Well-formed nonsensical questions</b><br /><br />In our Highly Realistic Dialog, Mario is wondering about the Invisible Boundary at the edge of the track. He asks his brother the seemingly obvious question: "What is on the other side?"<br /><br />As a gamer, if you pause to consider the question at all, the answer seems to be: "Nothing I care about." The invisible (or sometimes visible) Boundary seems just like any other wall. 
It is designed to keep you in where you're at, or out of where you're not, depending on your point of view.<br /><br />But the gamer's view is that the boundary <em>does</em> have another side. You have no way to get there, but it exists. For maximum gameplay immersion a game universe needs to appear consistent. Thus, when you peer through the wall it appears that the other side is just more of the same.<br /><br />To an embedded systems programmer, Mario's question is complete nonsense, like a fish asking the temperature of the water on the other side of the glass. There <em>is</em> no water on the other side, and for that matter a fish can't ask questions, so the question itself is based on nonsense premises.<br /><br />From a perspective within the Mario Kart universe, the "other side" of the Invisible Boundary is... a kind of nothingness. The embedded world quite literally <em>ends</em> at the boundary. The level designers usually try to make it appear as if the current world continues on, but this is an illusion. They discontinue the model's polygons beyond the line of sight. Put another way, the water stops at the edge of the tank.<br /><br />The nothingness beyond the Invisible Boundary of an embedded system is much deeper and more significant than simply not having objects there. In that nothingness there is no programmatic substrate upon which to <em>place</em> objects. If you were to ask: "What lies between Mario's head and Luigi's head," the answer may well be "nothing", since no game objects may overlap the line between their heads at the particular time you ask the question. But that "nothing" is qualitatively different from the "nothing" on the other side of the Invisible Boundary. Between Mario and Luigi there is a <em>space</em> – a place in their universe where objects and forces and logic apply, even if they are Mario Kart's twisted versions of physics and logic. 
That universe ends abruptly at the surface of the boundary.<br /><br />The question "What's on the other side" is well-formed in a strictly mathematical sense. You could project a line from Mario to the nearest boundary, and ask the more precise question: "If Mario is at distance D from nearest boundary B, what overlaps the point P obtained by extending the Mario-boundary line L to a distance D beyond B?" ("Whaaaaa?")<br /><br />The new formulation of the question is better formed, but it is every bit as semantically nonsensical.<br /><br /><b>What's really on the other side</b><br /><br />In the context of the Mario Kart universe, the system boundary truly has only one side, and Mario and Luigi are on that side. From their perspective, we can't meaningfully ask the question "What's on the other side?" However, there <em>is</em> a semantically significant interpretation of "the other side" of that invisible boundary. To get to this better answer we must leave Mario's universe.<br /><br />From the perspective of an embedded systems programmer, the entire Mario Kart universe is data structures laid out in the memory space of some machine. There is a program — the game engine — that <em>interprets</em> the data in a way that makes it appear to be a universe that is similar to ours in various intuitively appealing ways, yet different from our universe in various fun ways.<br /><br />It is very unlikely that the polygons and other level data are laid out in strictly contiguous order in the host machine's memory space. 
It is more likely that they are spread out, with gaps, like dominoes spilled on a tile floor.<br /><br />The question "What's on the other side?", when viewed from the perspective of a systems programmer, might be phrased more precisely and meaningfully as follows: "What occupies the memory spaces adjacent to the memory used by the Track Three Desert Level?"<br /><br />This is the same as Mario's question, but we had to step outside the Mario Kart universe in order to ask the question in a way that made sense. The Mario Kart universe, like most game universes and in fact most embedded systems, is designed to appear <em>complete</em>. There is apparently "stuff" beyond the boundary; you just can't go that way.<br /><br />When we step up into the host system and ask the analogous question about the "other side", both the question and the potential answers become much more complex: many orders of magnitude more complex, in fact. Fortunately, because the Mario Kart system is so simple, even this increased complexity leaves us with a vaguely accessible problem.<br /><br />Let's try to cook up an answer to this new, more complex question regarding the contents of the program memory space on the "other side" of the invisible wall.<br /><br />In terms of atomic program units (say, bits or bytes), the amount of memory used by a game like Mario Kart is actually high enough to defy our sense of scale. A game with hundreds of megabytes or gigabytes of runtime data has billions of discrete elements, which is too many for us to keep track of individually. One of the key jobs of a programmer is to organize their code so that the elements can be managed at human-CPU scale: up to seven "things" at a time. But this organization can't mask the fact that billions of individual elements exist, each with its own role to play in supporting the embedded system.<br /><br />Hence, even for a game as "simple" as Mario Kart, we're stuck imagining how it works internally. 
Even the programmers who built the game have only a dim and incomplete mental model of it. It's like building a skyscraper, or a city: you build it a piece at a time, and it's rarely fruitful to try to picture everything inside of it simultaneously.<br /><br />Anything that goes wrong or is out of place in the program could take days to track down with state-of-the-art tools. That's just how it is. We're not able to comprehend large problems all at once; we can only attempt it in small increments.<br /><br />Bearing this necessarily incomplete understanding in mind, what exactly <em>would</em> we find in the machine memory between the elements of Mario and Luigi's track mini-universe?<br /><br />The answer is familiar to programmers and perhaps surprising to non-programmers. The answer is: <em>almost anything</em>. It could be elements of a different program, or random garbage not being interpreted by any program, or supplemental data for the Mario Kart universe, such as language translation strings. Or Luigi could be right: it could be Track 4, pre-cached for the next race. Perhaps not exactly the way he's imagining it, but... close.<br /><br /><b>Moving beyond the Wall</b><br /><br />Our little visualization of the host system's memory raises another natural question: what would happen if you "zapped" Luigi's Kart across the boundary?<br /><br />This question isn't entirely as nonsensical as "what's on the other side?" With the proper programming tools, you might be able to observe changes in the machine memory as Luigi's Kart moves, and these changes might follow an observable pattern that leads to a relevant prediction of sorts.<br /><br />As just one possibility, Kart motion might be represented as shifting a Kart memory structure along the sequential indexes of an in-memory array. This arrangement is unlikely for performance reasons, but it's certainly possible, and should serve to illustrate the point. 
If you were to find that moving Luigi's Kart ten meters (in the scale of Luigi's track universe) resulted in a structure motion of 1000 memory addresses, then you might make the reasonable prediction that in another ten meters his representation would move another 1000 addresses.<br /><br />This might well put him beyond the end of the physical memory array. In most real-world scenarios this would be a bug, and would result in some sort of nasty response from the machine, such as a game crash. Or in a more user-friendly environment, the game engine might simply prevent his motion in that direction and move him back into his universe. This might put his Kart in an uncomfortable position, but it will at least be a <em>real</em> position according to the logic of the Mario Kart universe. At least the Kart won't have disappeared.<br /><br />However, it is <em>infinitesimally</em> possible that Luigi's Kart could be moved into another physical machine process in the host system — say, another instance of Mario Kart running on a time-sharing operating system or virtual machine — in such a way that Luigi and his Kart physically disappear from the old universe (a process address space) and appear in the new universe (another process address space).<br /><br />Even if this infinitely remote possibility were to occur, chances are high that the sudden appearance of Luigi and his Kart would wreak local or even global havoc in the new universe. He might get lucky and crush Bowser into a pan-a cake, but more likely he would wind up stuck in a stone wall or a hill, unable to move. Or even more likely, his Kart's memory structure would be copied into a non-game area, such as the translation string section, and Luigi's sudden mysterious appearance might manifest as nothing more than garbled speech from one of the other characters.<br /><br />There are many possibilities, too many to imagine. 
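To make the array idea concrete, here is a minimal C sketch of that hypothetical layout. The struct, the array size, and the function names are all made up for illustration; no real Kart engine is claimed to work this way. A Kart's position is just an index into a fixed array, and a move past the last cell is exactly the moment the engine must either say "You can't go that way" or wander into undefined territory:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical structures; names and layout are illustrative only. */
#define TRACK_CELLS 1000          /* size of the track's position array */

struct kart {
    size_t cell;                  /* index into the track array below */
    const char *driver;
};

static struct kart *track[TRACK_CELLS];   /* one slot per position */

/* Try to move a kart forward by `cells` positions. Returns 0 on success,
 * -1 if the move would run past the end of the array -- the engine's
 * version of "You can't go that way. Turn back!" Without this check,
 * writing to track[cell] at or past TRACK_CELLS is undefined behavior:
 * the Kart quite literally leaves the universe. */
int move_kart(struct kart *k, size_t cells)
{
    if (k->cell + cells >= TRACK_CELLS)
        return -1;                /* blocked at the one-way wall */
    track[k->cell] = NULL;        /* vacate the old cell */
    k->cell += cells;
    track[k->cell] = k;           /* occupy the new one */
    return 0;
}
```

In a real engine the guard would live in collision or physics code rather than a raw bounds check, but the shape is the same: the boundary exists only as logic in the host program, never as an object Luigi could inspect from inside.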
The most important takeaway here is that no matter how <em>unlikely</em> Luigi's safe intra-universe migration may be, it is <em>possible</em>. With some external help from the host system, it would even be straightforward: perhaps no more than a few days' work from a skilled programmer.<br /><br />In real-world programs sharing adjacent address spaces, it's improbable (but possible!) that migrating data from one process into a randomly chosen spot in another would have semantically meaningful results in the destination address space.<br /><br />To put it in simpler terms: under just the right circumstances, Luigi could teleport to the other side, and wind up in a different world.<br /><br /><b>Ghosts in the machine</b><br /><br />An embedded system is a little environment. My Betta in the previous installment of this little series lived in a very simple embedded system. Its most obvious difference from my own environment is that the tank was filled with water, while my room was filled with air. The differences from the host system can yield different behavior and different rules. In the fish tank, the rules are only a little different — for example, most things are more buoyant inside the tank. In a virtual embedded system, the rules and behavior might be <em>completely</em> different from those of the host system. It all depends on how the embedded system was designed to work.<br /><br />Every embedded system has a containing surface, which might be called its frontier. The frontier is <em>one-sided</em> in the sense that the rules of the embedded system may simply stop at the boundary: there is no "other side". 
In some embedded systems (such as a Euclidean space embedded in a non-Euclidean space), even the supposedly intuitive notion of extending a line across the boundary only has meaning on one side, the "inside" of the frontier.<br /><br />If there is a formal mathematical term for this One-Sided Frontier idea, I've yet to come across it, and I've spent quite some time looking. If you have any suggestions, let me know. The closest I could find are the spaces that are undefined in discontinuous functions, such as the Tangent function at a value of 90 or 270 degrees. If you ask for <code>f(x)</code> at these values of <code>x</code>, the answer is something like "You can't go that way."<br /><br />So the other side of the frontier is <code>undefined</code>. This statement has a deep, almost philosophical meaning to programmers. It does <b>not</b> mean "nothing"; no, no, no! It means <em>anything</em>. It is a modern-day Wand of Wonder, a Bag of Tricks, a Horn of Plenty, a Box of Chocolates. You never know what you're going to get.<br /><br />If a computer program's code inadvertently reaches into <code>undefined</code>, then it has stepped across a mystical wall into another universe, and anything could happen. It's as if Gremlins have taken over the machine. Your PC speaker might beep erratically or continuously. Your video array might burst into random fireworks. Your printer might fire up and print out mysterious symbols. Your program may even continue to operate normally, but with the addition of unexplainable and unrepeatable phenomena.<br /><br />I have seen all these things happen. All C/C++/assembly programmers have seen bugs like this — program bugs with "crazy" behavior. The bugs are so spooky, and so hard to track down, that computers are now designed to be gatekeepers at the Wall, and when you try to step across they say STOP! 
It's much better to be blocked immediately than to let your program wander into <code>undefined</code>, where ghosts may take hold of your data in ways that may even cause your legal department to take notice. Stranger things have happened, as they say.<br /><br />This computerized gatekeeping is commonly called "protected mode". The computer checks every addressing instruction, and any time a program tries to access the memory area outside its own, things halt immediately. When you see the message "segmentation fault", or a blue screen of death, or some other sign that a fatal, unrecoverable program error has occurred, it is almost always because someone or something in the embedded system tried to escape.<br /><br /><b>Holes in the Wall</b><br /><br />From the perspective of a computer game, the system frontier is relatively uninteresting. It's not much different from any other obstacle.<br /><br />However, in other embedded systems these frontiers are almost mystical in nature. They provide endless fascination for computer scientists working in the domain of <em>programming languages</em>, which are yet another kind of embedded system.<br /><br />To see how it works, let's consider the situation in Mario Kart, which is simpler.<br /><br />In Mario Kart, most of the racers are computerized opponents, or AIs. A few of them, usually at least one, are controlled by people, using steering wheels or other physical controllers. Sometimes (e.g. in simulation mode) all the opponents may be computerized.<br /><br />In order for people to interact with the embedded system, there must be some way to send information back and forth across the system frontier.<br /><br />Coming from the Mario Kart side, we typically have video and sound signals. The embedded system generates these signals and sends them to your TV or device.<br /><br />Coming from the people side, we typically have motion input: which way to move the Karts. 
The signals begin as your physical motions: buttons you press, or in newer controllers, the direction you tilt the controller. They are sent from the host system to the embedded system and they generally produce observable effects in the embedded system. Like magic!<br /><br />Mario Kart is especially interesting because the external camera is physically present within the game. You can see it in the instant replays of your races: a little guy with a camera, floating behind you on a little cloud. The game designers have arranged things so that you can almost never see him while racing, because he's always flying behind your head, along your line of sight.<br /><br />But he's there. And in fact that camera is <em>always</em> there, in all video games. It's just that the Mario Kart designers have chosen to reify him as a cute little turtle guy with a camera, floating behind you on a cloud.<br /><br />The external camera is a one-way hole in our one-way wall: it sends data <em>out</em> of the embedded system. For all intents and purposes it also sends the sound data, since the sounds are usually scaled as a function of distance from the camera.<br /><br />During a race, there's a lot of stuff going on in the embedded system, and there's a lot of stuff going on in the host system. But they constrain their communication via mostly-invisible channels, and these channels are restricted in the kinds of communication they may convey. Your controller can send motion inputs, but (at least today) it can't send video data. And (at least today) the characters in the game can't control your arms and legs, the way you can control theirs.<br /><br />Hopefully now you should have a mental picture of this magical wall between an embedded system and the Great Undefined Beyond on the "other side" of its system frontier. The wall may have deliberate holes in it: channels, really. Information may flow across these channels in predefined ways. 
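This narrow-channel arrangement can be sketched in C as a toy universe whose internal state is reachable only through two functions. All the names here are hypothetical, chosen just to show the shape of the two holes in the wall:

```c
#include <assert.h>

/* A toy "embedded system" with exactly two holes in its frontier:
 * an input channel (controller state in) and an output channel
 * (a rendered frame out). Everything here is illustrative. */

struct controller_input {        /* host -> embedded channel */
    int steer;                   /* -1 left, 0 straight, +1 right */
    int accelerate;              /* 1 = pedal down */
};

struct frame {                   /* embedded -> host channel */
    int kart_x;                  /* what the floating "camera" reports */
    int kart_speed;
};

/* Internal state of the little universe. Being file-scope static,
 * it cannot be named from any other translation unit; the host is
 * supposed to use only the two channel functions below. */
static struct { int x; int speed; } world;

/* Input channel: the only way information gets in. */
void send_input(struct controller_input in)
{
    if (in.accelerate)
        world.speed += 1;
    world.x += in.steer;
}

/* Output channel: the only way information gets out. */
struct frame render_frame(void)
{
    struct frame f = { world.x, world.speed };
    return f;
}
```

The host can call <code>send_input</code> and <code>render_frame</code> all day long, but there is no third channel: the API gives it no way to touch <code>world</code> directly, and nothing inside the little universe can observe that the channels exist.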
And the channels are almost always invisible to occupants of the embedded system.<br /><br /><b>Reflections</b><br /><br />I've spent a lot of effort telling you about a rather strange, twisty kind of surface: the frontier of an embedded system. This frontier exists for all embedded systems. It may have "holes" (data channels) in it. The number and nature of these channels is entirely up to the designers of the embedded system. Programmers sometimes call these channels "reflection" or "metaprogramming".<br /><br />The holes in the frontier may or may not be detectable within the embedded system itself. They may only be detectable within the host system. This, again, is up to the designers. For most of the embedded systems I know of, the channels are "well defined", in the form of application programming interfaces offered to either the embedded system or the host system for communicating across the frontier.<br /><br />But you need a channel for this kind of communication. Setting it up is usually not cheap. And setting up meta-reflection (in other words, being able to "see" the channel from within the embedded system or host system) is even more work.<br /><br />So most of the time, channels through the embedded system frontier are completely invisible and undetectable from inside the embedded system. They're quite real, and information flows either one way or both ways, but they <em>cannot</em> be detected from within the embedded system, and their behavior is intrinsically non-repeatable via experimentation.<br /><br />When data comes across these invisible channels, stuff "just happens" in the embedded system, with no clear indicator nor explanation as to why.<br /><br />In our discussion so far, I have intentionally blurred the distinction between the host system (such as a fish tank or a game console device) and the host system's host system (such as your bedroom or living room). 
But you've probably noticed by now that <em>all</em> host systems are embedded in some larger system. This is just the way things work. The fish tank is in your bedroom, which is a system embedded in a house, which is a system embedded in a neighborhood, embedded in a county, a nation, a continent, a planet, a solar system, a galaxy, a universe.<br /><br />It's perhaps not as clear in the case of fish tanks, but host systems often overlap and even cooperate. A city is composed of many interleaved subsystems. So is your body. It's not always a simple containment relationship. Systems are made of, and communicate with, other systems.<br /><br />But one way or another, <em>all</em> systems are embedded systems.<br /><br />There is no reason to assume that our Universe works any differently. Our Universe is a system; that much should be self-evident. It has boundaries; some astronomers and astrophysicists have even suggested that its shape may be a dodecahedron.<br /><br />We are already painfully aware of the question "what happened before the Big Bang, if in fact the Big Bang occurred in the way all the evidence suggests", and its inherent nonsensicality. What happened before the beginning of Time? What lies beyond the end of the Universe?<br /><br />Programmers already know intuitively the answer to these questions. The answer is: <code>undefined</code> is there. <em>Undefined</em> is <b>not</b> the same as "nothing", no indeed. It's <em>anything</em>. And we can't go that way.<br /><br />This has ramifications for the way we think about things today.<br /><br />I believe I will have more to say about this soon. 
Right now I think I will go play Mario Kart: a game as fun as any I think I've ever played.<br /><br />Posted by <a href="http://www.blogger.com/profile/14812997485690838920">Steve Yegge</a> (71 comments)<br /><br /><b>Fable II: Arguably Better than Getting Your Head Crapped On</b><br />December 25, 2008<br /><br />I finished <a type="amzn" asin="B000UU3SVI">Fallout 3</a> maybe six or eight weeks ago, and it was hands-down one of the best games I've ever played. A game like that gets you in the mood for more gaming, so I thought to myself: "Hey, I should plop down $160 for <a type="amzn" asin="B000FRVAD4">Fable II</a>!"<br /><br />Actually that's not <em>exactly</em> what I thought, but it's what happened. I bought the game for $60, fired it up, got up to the part in the intro where a bird craps on your head (yes, this is how it starts), and it locked up hard. Reset the XBox, tried again, and this time got as far as some guy selling snake oil gadgets before it locked up again. Snake oil, indeed.<br /><br />I tried playing for about an hour, with the game crashing every 3 to 5 minutes, and I finally went online to read about how it kills XBoxes and it's the Game of Death and blah blah blah, all interesting but not especially helpful. Eventually I stumbled across discussions of the "install to hard drive" option. Nobody actually said <em>how</em> to do it, so it took another hour of digging to deduce that you need to purchase a $100 wireless network adapter (or 100 feet of network cable, I guess). So I shut it down for the night, waited for the stores to open, forked over the $100, and installed the game to the hard drive. To Lionhead's credit the game never crashed again, making it significantly more stable than Oblivion or Fallout 3.<br /><br />I tried hard to like Fable 2. I didn't even need to like it $160 worth. I would have settled for a $60 value. 
I vaguely remember liking Fable 1, although I can't remember anything about the game except for one neat scene where you had to escort two NPCs through a dark valley. One of the NPCs has been bitten by a Balverine (a werewolf), and the two argue the whole trip about whether he's going to turn. It's a funny conversation and the scene has a funny ending. Other than that, I just have vague recollections of shooting birds on the roof of some guild, and needing to get a 6-foot handlebar moustache for some side quest. The rest of it is basically a blank. But I had set some flag to the effect that "I liked it," and I wanted to like the sequel too.<br /><br />Unfortunately, with a few noteworthy exceptions that I'll call out in the "Highlights" below, the game is entirely forgettable. It's already fading from memory as we speak. It wasn't as bad as some people make it out to be. It's playable for a couple of days, and it has its fun moments. But it's not a very good game, and it's definitely not a very memorable game. This is sad, considering the amount of effort that went into its development.<br /><br />The no-spoiler synopsis of Fable 2 is that it's a bad Zelda clone. You can smell the desperation; there are dozens and dozens of direct rip-offs from the Zelda franchise. Heck, I wouldn't have minded a half-decent Zelda clone; they're some of the best games of all time. But Fable 2 misses the mark by a mile. The humor is juvenile bordering on imbecilic, the hints are hamfisted, the areas are small and cramped, the minigames are lackluster, the music is virtually nonexistent, and the story pacing is rushed and breathless. It's a cargo-cult copy of Zelda that winds up having no identifiable soul: forgettable across the board.<br /><br />I've given up every piece of Microsoft software and hardware I own except for the XBox, which I had been holding onto <em>just</em> for Fable 2. Now that it's come and gone like a bird crapping on my head, I'm giving up. 
No more XBox or PC games for me. Ever.<br /><br />Hence, Fable 2 cost me $160. I hope you got it for cheaper than that.<br /><br />Anyway, here's a quick rundown of the lowlights and highlights of the game, as I see them. Enjoy!<br /><br /><b>Lowlights</b><br /><br />1) <b>Humor.</b> Fable 2 tries hard to incorporate humor into the game — <em>too</em> hard. The writers use the trusty old "stopped clock" approach to humor, in which they inundate you with jokes, and 1 out of every 43,200 of them is funny. Amazingly, this perseverance leads to 3 or 4 genuinely funny ones, mostly near the entrance to the Crucible (arena). But by the time you get there, you've already tuned out all attempts at humor and have probably even tried killing yourself a few times. So they may fail to register.<br /><br />2) <b>Theresa</b>. The game features an old lady who watches everything you do and talks at you constantly. This starts in the very beginning of the game and lasts until the very end, with no option to turn her off. Your character can't so much as take a crap without Theresa piping in with helpful advice on which hand to use. "That is ancient paper. Be cautious." She uses some magical form of communication system that only breaks down in the fog — probably shortwave radio — and there's no way to turn the fugging thing off.<br /><br />I really hated Theresa.<br /><br />3) <b>Expressions</b>. You can't talk to people. Instead, the game gives you a series of increasingly repugnant forms of nonverbal communication. Initially you're limited to belching, farting, giving people the finger and making lewd pelvic thrusting motions, but as you rise in fame Theresa informs you that you've earned the right to use the "kiss my ass" expression. I am not making this up. I tried to avoid using expressions altogether, but the game forces you to do it once in a while. Made me want to take a shower.<br /><br />4) <b>Retinal blindness</b>. Fable 2 is nauseatingly saturated. 
They just don't know how to lay off the paint gun. There are a few OK-ish-ly tasteful areas, such as the big trees in Bower Lake, but most of the game is a frightfully garish mix of lime greens, oranges, purples, reds, blues, and general oil-spill iridescence. It makes you color-blind fast, even if you didn't start that way. Finding anything onscreen is like trying to spot where someone threw up on a Matisse.<br /><br />5) <b>Linearity</b>: the game is unrelentingly linear for the first hour or two (a <em>long</em> time), after which it settles into, well, linearity. The gameplay occasionally approaches the smashing-through-lines-of-baddies feel of Gauntlet Legends, which I liked, but mostly it makes you feel like a rat running a big maze, following a neverending golden trail of cheese.<br /><br />A major contributor to the linear feel, even after the game opens up, is the plethora of tiny little fences and obstacles that you can't hop. It makes it really hard to know where you can walk, and it feels like you're constantly bumping into things, because, well, you are. So the game is linear at all resolutions: high (the plotline), medium (most of the level designs) and low (the path designs). Linearity can cramp even the best of games — Kingdom Hearts comes to mind. It's just a bad way to design things. And Fable's linearity felt especially suffocating after just having finished Fallout 3, which is immense and wide-open.<br /><br />6) <b>Controls</b>. It's been a long time since I played a game whose controls were so accident-prone. Normally a game's controls take some getting used to, and then it's like driving a car. 
In Fable 2, even after days of play, I'd still be trying to hop a fence and wind up shooting the front door off a mansion, blowing boards everywhere and scaring the shit out of the villagers.<br /><br />Hell, even when I was trying to buy my final sword (this was a $50k sword I'd been looking for all day), I tried to bring up the "buy sword" menu for the blacksmith, and I accidentally wound up casting a massive Inferno spell, causing him to literally run screaming across town. It was weeks of in-game time before I saw him again. God dammit. They really should have had different controls in safe zones.<br /><br />7) <b>Elephantine mammary glands</b>. I don't know what planet these guys have been living on, but giant udders fell out of fashion at least two or three decades ago. Every single woman in Fable 2 had knockers significantly bigger than her head. It reminded me of my trip to Paris, where every statue of a woman is bare-breasted, presumably so that you can tell it's a woman — a practice which unfortunately suggests that there's really no other way to tell. Dumb French statue-making assholes.<br /><br />I mean, the people in Fable 1 were ugly — the main character worst of all. They all had this "I'm a programmer who never gets outdoors" look, and I expected (and got) no better from Fable 2. But I was really disappointed that every female in the game was a fugging dairy farm. I mean, someone with some taste and maturity should have a talk with these asshats, and explain to them what women actually look like. Or they should pick up a frigging Victoria's Secret catalog or watch a goddamn Target commercial or <em>something</em>. Jesus.<br /><br />The milk jug thing... it was really just too much. I have zero respect for those jerk-offs at Lionhead.<br /><br />8) <b>Ass-kissing</b>. This was probably the most serious problem with the game. It was a disease in Fable 1 that went malignant in Fable 2. 
Whoever designed these games was apparently neglected as a child, because the gameplay revolves around gaining "renown". Lionhead's hopelessly adolescent view of "renown" is that villagers should follow you around and say things like "yay!" and "hurrah!" It's even worse than I'm making it sound.<br /><br />They spent so much time coding this crap that they forgot to code <em>pushing</em> into the game: you can't push people out of your way. So as soon as you wander into a dead-end alley you're fucked: a bunch of people will crowd in after you asking for autographs and offering you gifts and all this sickening bullshit.<br /><br />To the game's credit, and I count this as a highlight, if you pull out your six-barreled rifle, take the safety off, and aim right at their heads, it clears everyone out pretty fast. You can imagine how desperate I was by the time I tried that approach. But they coded it correctly, bless 'em.<br /><br />9) <b>Too Easy</b>. The game just wasn't hard, period. There were no hard fights. I never died. I don't even know what happens when you die in Fable 2. I used a couple of Resurrection Phials, but only because I had become so lazy in combat that I didn't care anymore. This was a serious flaw in the game: it essentially removed the element of fear, which was the only emotion (other than disgust) that the game had a chance of evoking.<br /><br />10) <b>Demon Doors.</b> Oh man, oh man. These were probably the low point of the whole game. They made me want to puke. I would run past them as fast as I could so I didn't have to listen to their inane drivel. This was some of the worst game writing I've ever seen. I just don't want to talk about it.<br /><br />11) <b>Misguided innovation</b>. They really should have stuck with copying Zelda, and Kingdom Hearts, and Gauntlet: Legends, and all the other games they copied blindly, and badly. Because whenever they introduced something entirely new, it almost invariably sucked. Examples? OK. Sure. 
Since you asked, and all.<br /><br />How about the "innovation" that when you eat nearly any food (and it only takes a few bites), you bloat up to the size of Orson Welles, and the only way to get rid of it is to <em>eat celery</em>. No amount of exercise will make any difference, but eating a few bites of celery makes it go away. Innovative!<br /><br />Innovation: you can purchase almost any property in the game. Is this realistic? No. In reality, not everything would be for sale (and especially not posted on the front doors). Real-estate transactions wouldn't be instantaneous. You would need the owner present to buy something. Etc, etc. So given that this feature makes the game <em>less</em> realistic, what purpose does it serve? Is it fun? No. Buying real estate is about as fun as attending insurance seminars, so I don't know what the hell they were thinking. It <em>could</em> have been fun in the right setting, with suitable other human participants, in a Parker-Bros. Monopoly kind of way. Maybe. But slapping it on the side of an RPG and calling it innovation? It boggles the mind.<br /><br />And what about the busywork jobs (blacksmith, woodcutter, bartender) for making gold? Um, dudes -- busywork jobs only exist in MMORPGs to <em>limit per-player CPU usage</em>. They're <b>not fun</b>. "Innovatively" bringing them into a single-player game was just flat-out brain damaged.<br /><br />12) <b>Nonexistent target audience</b>. What age group is the target market for this game? 
If you enumerate the possibilities, you arrive at the inescapable conclusion that the game was either (a) created <em>by</em> imbeciles, or it was (b) created <em>for</em> imbeciles, or possibly (c) all of the above.<br /><br />It's presumably not intended for kids, or you wouldn't be finding condoms in treasure chests, soliciting and obtaining sex from male and female prostitutes of all shapes and sizes, performing pelvic thrusts to solve quests, and so on.<br /><br />It's not for adults either, or you wouldn't be bombarded with the constant barrage of scatological humor, beginning with the bird shitting on your head, continuing with warnings about "extending the fart command and messing it up", and going pretty much straight downhill from there.<br /><br />Is it intended for teenagers, then? Poooossibly, but (a) that ignores their primary demographic, which is 30-year-olds, and (b) I don't know any teenagers that are <em>that</em> stupid, nor so hard-up for attention that they need AI villagers to yell "hurray!" whenever you pass them, even if you're in a graveyard at midnight.<br /><br />Dipshits. This game was designed by dipshits. The coding was great, the artwork was great, the sound effects were great; the details were for the most part rock-solid. But the creative direction was just inexcusably bad.<br /><br /><b>Highlights</b><br /><br />OK, I've been pretty tough on the game so far. Fable 2 did actually have a few genuine highlights worth calling out. You could even argue that these highlights make the game almost worth playing, in spite of all the crap you have to suffer through in order to get to them.<br /><br />1) <b>Banshees.</b> Fable 2's banshees are, in a word, awesome. I've been racking my brain trying to think of a VG monster as cool as these banshees in any game I've ever played. I'm coming up with a few ties, but nothing that beats them. The YouTube videos don't come close to doing them justice. 
Fable II is worth playing just to get to Wraithmarsh.<br /><br />The only real problem with the banshees is that since none of the combat is challenging (see Lowlight #9), they're nowhere near as scary as they could have been. But they're amazingly stylish. I'd call them an innovation, but I'm inclined to believe Lionhead stole their basic design from some other game, given all the other copying they've done. (The Fable 2 Trolls, for instance, are about as Zelda-clone-esque as you can get without inviting a lawsuit.)<br /><br />2) <b>Lucien's speech</b>, where he addresses the recruits. Really great speech. Riveting and convincing. Amazing how Microsoft-run studios that are so consistently bad at humor are so good at creating convincingly evil speeches about taking over the world.<br /><br />Actually the whole centerpiece drama in the tower was very nicely done. I have to give them credit for that part of the game: it was exceptional by any standard. It basically saved the game from being a total loss.<br /><br />3) <b>Hammer</b>. She's cool. Great voice acting, surprisingly good scripting, neat character, lots of depth. One of the better-realized VG supporting characters I've seen in many years.<br /><br />4) <b>The dog.</b> Apparently there was a lot of hype about the dog. Or so I hear, after actually having played the game. Whatever the hype, the reality is that it's a very believable dog. I especially liked how it would run ahead of you -- I've seen pets that follow you, but the dog would often anticipate your direction and run ahead, kinda turned back towards you like "c'mon! let's go!" I encountered no glitches with the dog; the coding was rock solid. Overall it was, well, very... doggy. And what more could you ask for in a dog, really?<br /><br />As a tribute to the believability of the dog, I'll offer a minor spoiler. (Skip ahead if you don't want a spoiler!) At the end of the main storyline, you are granted one wish. 
Your choices are: (a) get all the people who died back, (b) get your dog back, or (c) get a bunch of money. What I really wanted was a sort of amalgam of the 3 choices: I wanted my money back for this dog of a game. But when push came to shove, I picked the dog. I kinda missed him.<br /><br />5) <b>Architecture.</b> Overall the architecture was really nice. The only somewhat dubious exception was Bowerstone, which looks almost exactly like Euro Disney. I kept expecting Tigger to come waltzing around, cursing in French under his breath, just like he did on my real-life trip to Euro Disney a few years back.<br /><br />Other than the Euro Disney influence, which I could take or leave, I thought the architecture was nice throughout the game. I liked the waterfront town of Bloodstone. I liked the manors in Oakfield. I liked the gypsy wagons. I liked the vendor carts. I loved pretty much every creepy structure in Wraithmarsh. The overall look of the game was beautiful, once you got past the color-saturation problem, and the architecture was a huge contributor.<br /><br />6) <b>Fight music</b>. Unlike in Fable 1, most of the music in Fable 2 is forgettable background/atmosphere music. They didn't get Danny Elfman this time around, and it shows. The theme for Bower Lake is nice as far as it goes, which is exactly 2 chords over and over and over. But it's still OK. The rest of the music didn't leave any sort of impression on me at all, except for the fight music, which <em>almost</em> made up for everything. It was very good. There were at least two fight themes and both of them were cool. If only the rest of the music had been... present. It was like it wasn't even there. It phoned in its performance.<br /><br />Folks at Dorkhead studios: Zelda's music is one of the top five reasons for its success as a franchise. Same goes for Mario and Kingdom Hearts and Final Fantasy. Their music is always great, and it's always in your face. 
The music isn't muttering or mumbling; it's shouting. And they can get away with it <em>because</em> it's always great. Even when it's bad or annoying, which is rare, the music still anchors each place and event in the game in your memory, in a way that <em>only</em> music can. You guys really screwed the pooch on this one.<br /><br />7) <b>Mixed-tactic fighting</b>. They did a great job of setting things up so that you could use melee, ranged weapons and magic effectively in combat. It was refreshing to be able to switch styles in mid-fight: you could use your sword to kill everything near you, then start blasting everything ten feet or more distant with your rifle. Or you could clear a little space and cast a time-slowing spell, and then just start zinging around whaling on bad guys. The combat was never <em>hard</em>, but it was on the whole fairly <em>satisfying</em>.<br /><br />The downside of ultra-convenient access to melee, ranged weapons and spells was that you could effortlessly use them all simultaneously while trying to buy vegetables from a produce stall in the main market. I really wish they'd made it just a teeny bit harder to cast spells in public areas.<br /><br />8) <b>Well, damn. I can't think of a Highlight #8.</b> I thought of some more lowlights, though: long area loads, unresponsive controls during "scenes", only a handful of available spells, months of coding/design effort wasted on useless features like "groin shots" and tattoos...<br /><br />Oh, and the lack of control over when quest scenes actually unfold — they trigger from proximity to the relevant NPC rather than from interacting with the NPC because, oh, that's right, <em>you can't interact with anyone</em> except to fart on them or give them the finger. Oops! 
So you're always accidentally wandering into a dungeon that triggers some quest, and there's no way out except to back <em>entirely</em> out of that phase of the quest, which may involve losing hours of your time, all because you walked through the wrong door. Damn that pissed me off.<br /><br />And how the hell do you sleep in an Inn? I never managed to figure it out. I'd wind up spending $10k for some hovel just to get a frigging bed to sleep in. It was amazingly bad UI design, if there even <em>is</em> a way to do it. If not, then their helpful tutorial message lied to me at least a dozen times.<br /><br />Argh. Well, this highlights section is going downhill in a hurry, so I think I'll end it here.<br /><br /><b>Better than a crap on the head?</b><br /><br />Maybe, maybe. But compared to Fallout 3, Fable 2 pretty much sucked. It had a couple of nice features, but they were drowned in an ocean of painfully adolescent design. Such a shame.<br /><br />I've tried to be fair here. I don't mean to discourage you from playing the game, since for all I know there's nothing better out there right now.<br /><br />If you do decide to play it, I hope I've set your expectations <em>very</em> low. That way, well, who knows? You might actually have some fun with it.<br /><br />But if you open even <em>one</em> of those Demon Doors I'll lose all respect for you.<br /><br />Posted by <a href="http://www.blogger.com/profile/14812997485690838920">Steve Yegge</a> (16 comments)<br /><br /><b>Ejacs: a JavaScript interpreter for Emacs</b><br />November 16, 2008<br /><br />So! I have all these cool things I want to write about, but I broke my thumbnail. Can you tell that's a long story?<br /><br />See, this summer I got excited about playing guitar again. I usually switch between all-guitar and all-piano every other year or so. This summer I dusted off the guitars and learned a bunch of pieces, and even composed one. 
I was prepping for — among other things — a multimedia blog entry. It was going to have a YouTube video, and a detailed discussion of a wacky yet powerful music programming language you've probably heard of but never used, and generally just be really cool.<br /><br />And then it all came crashing down when I busted my thumbnail off. And I mean <em>off</em> — it broke off at least a quarter inch below where the nail and skin meet. Ick. I just accidentally jabbed my steering wheel, and that was that.<br /><br />I remember reading an interview with some dude who said he had punched a shark in the nose. He said it was like punching a steering wheel. So now I know more or less what it's like to punch a shark in the nose, I guess. There's always an upside!<br /><br />Anyway, that was going to be my magnum opus (literally: Op. 1) for the year, but it fell through for now. I'll have to revisit the idea next year. My thumbnail's back, but it's been at least 2 months since I touched my guitar, so I'll have to start over again.<br /><br />Work has been extraordinarily busy, what with having to collect all these Nuka-Cola Quantum bottles and so on. I'm sure you can imagine. So I haven't had much time to blog lately.<br /><br />But I do like to publish at least once a month, whether or not anyone actually cares. It's been about a month, or it feels that way anyway, and all I have to show for it is this box of Blamco Mac and Cheese.<br /><br />So I'm cheating this month.<br /><br />You know how on Halloween you walk around in your costume holding your little bag and you say "trick or treat", and every once in a while some asshole does a trick instead of dumping half a pound of candy into your bag? 
And then he has to explain to all the dumbfounded and unhappy kids that "Trick or Treat" means that a trick is perfectly legal according to the semantics of logical-OR, and the kids remember that a-hole for the rest of their childhoods and avoid his house next year?<br /><br />Yeah.<br /><br />So I'm doing a trick this time. Hee. It's actually kind of fun when you're on the giving end.<br /><br />My trick is this: in lieu of saying anything meaningful or contemporarily relevant, I'm writing about something I did over a year ago. And there isn't much to say, so this really will be short.<br /><br /><b>Ejacs</b><br /><br />Around a year ago, I wrote a blog called <a href="http://steve-yegge.blogspot.com/2007/12/boring-stevey-status-update.html">Stevey's Boring Status Update</a>, mostly in response to wild rumors that I'd been fired from Google. Not so. Not yet, anyway.<br /><br />In that blog I mentioned I was working nights part-time (among other things) on a JavaScript interpreter for Emacs, written entirely in Emacs Lisp. I also said I didn't have a name for it. A commenter named Andrew Barry suggested that I should <b>not</b> call it Ejacs, and the name stuck.<br /><br />Ejacs is a port of <a href="http://mxr.mozilla.org/mozilla/source/js/narcissus/">Narcissus</a>. Narcissus is a JavaScript interpreter written in JavaScript, by Brendan Eich, who by pure unexpected coincidence is also the inventor of JavaScript. Narcissus is fairly small, so I thought it would be fun to port it to Emacs Lisp.<br /><br />It turns out Narcissus is fundamentally incomplete. It cheats. It's that trick guy on Halloween. Narcissus has a working parser and evaluator, but for its runtime it calls JavaScript. So it's kind of like saying you're building a car by starting from scratch, using absolutely nothing except for a working car.<br /><br />This meant I wound up having to write my own Ecma-262 runtime, so that the evaluator would have something to chew on. 
In particular, the Ecma-262 runtime consists of all the built-in properties, functions and objects: Object, Function, Array, String, Math, Date, RegExp, Boolean, Infinity, NaN, parseInt, encodeURIComponent, etc. A whole bunch of stuff.<br /><br />I basically did this by reading the <a href="http://www.ecma-international.org/publications/standards/Ecma-262.htm">ECMA-262 specification</a> and translating their algorithms into Emacs-Lisp. That spec is lousy for learning JavaScript, but it's absolutely indispensable if you're trying to <em>implement</em> JavaScript.<br /><br />I didn't know Emacs-Lisp all that well before I started, but boy howdy, I know it now.<br /><br />Emacs actually has a pretty huge runtime of its own — bigger than you would ever, ever expect given its mundane title of "text editor". Emacs has arbitrary-precision mathematics, deep Unicode support, rich Date and Calendar support, and an extensive, fairly complete operating system interface. So a lot of the porting time was just digging through Emacs documentation (also extensive) looking for the Emacs version of whatever it was I was porting. That was nice.<br /><br />I had big plans for Ejacs. I was going to make it a full-featured, high-performance JavaScript interpreter, with all the Emacs internals surfaced as JavaScript native host objects, so you could write Emacs customizations using object-oriented programming. It was totally going to be the "meow" part of the cat.<br /><br />And then I broke my thumbnail.<br /><br />Actually, what happened was js2-mode.<br /><br /><b>js2-mode</b><br /><br />After I got the interpreter working, I was at this crossroads. There were two big use cases: a JavaScript <em>editing</em> mode, or a JavaScript <em>Emacs development</em> mode. Both were going to be a lot of work.<br /><br />It turns out you really want the editing mode first, if possible, so that when you're doing all your JavaScript programming you have a decent environment for it. 
So I picked the editing mode.<br /><br />Unfortunately I found the Ejacs parser wasn't full-featured enough for my editing needs, since at the time I was working on my Rhino-based Rails clone and writing tons of <a href="https://developer.mozilla.org/en/New_in_JavaScript_1.7">JavaScript 1.7</a> code on the JVM.<br /><br />I spent a little time trying to beef up the parser, then realized it would be a lot faster to just rewrite it by porting Mozilla Rhino's parser, which is (only) about 2500 lines of Java code. Ejacs is something like 12,000 lines of Emacs-Lisp code, all told, so that didn't seem like a big deal.<br /><br />So I jumped in, only to find that while the parser is 2500 lines of code, the scanner is another 2000 lines of code, and there's another 500 or so lines of support code in other files. So I was really looking at porting 5000 lines of Java code.<br /><br />Moreover, the parse tree Rhino builds is basically completely incompatible with the Ejacs parse tree. It was richer and more complex, and needed more complicated structures to represent it.<br /><br />So after I'd ported the Rhino parse tree, what I really had was a different code base. I went ahead and finished up the editing mode, or at least enough to make it barely workable (another 5000 lines of code), and launched it. It was a surprisingly big effort.<br /><br />And it left poor Ejacs lying unused in the basement.<br /><br />So today, faced with nothing to write about, I figured I'd dust off Ejacs, launch it with lots of fanfare, and then you'd hardly notice that I cheated you. Right?<br /><br />You're not coming to my house next year. I can tell already.<br /><br />Anyway, here's the code: <a href="http://code.google.com/p/ejacs/">http://code.google.com/p/ejacs/</a><br /><br />There's a README and a Wiki and installation instructions and stuff. I can't remember how to put the code in SVN, and I'm having trouble finding it on the code.google.com site. 
As soon as I figure it out I'll also make it available via SVN.<br /><br /><b>Emacs Lisp vs. JavaScript</b><br /><br />In the interests of having <em>something</em> resembling original worthwhile content today, I'll do a little comparison of Emacs Lisp and JavaScript. I know a lot about both languages now, and a few folks mentioned that a comparison would be potentially interesting.<br /><br />Especially since I think JavaScript is a better language.<br /><br />So... the best way to compare programming languages is by analogy to cars. Lisp is a whole family of languages, and can be broken down approximately as follows:<br /><ul><li>Scheme is an exotic sports car. Fast. Manual transmission. No radio.</li><li>Emacs Lisp is a 1984 Subaru GL 4WD: "the car that's always in front of you." </li><li>Common Lisp is Howl's Moving Castle.</li></ul><br />This succinct yet completely accurate synopsis shows that all Lisps have their attractions, and yet each also has a niche. You can choose a Lisp for the busy person, a Lisp for someone without much time, or a Lisp for the dedicated hobbyist, and you'll find that no matter which one you choose, it's missing the library you need.<br /><br />Emacs Lisp can get the job done. No question. It's a car, and it moves. It's better than walking. But it pretty much combines the elegance of Common Lisp with the industrial strength of Scheme, without hitting either of them, if you catch my drift.<br /><br />Anyway, here's the comparison. Here's why I think JavaScript is a better language than Emacs Lisp.<br /><br /><b>Problem #1: Momentum</b><br /><br />A recurring theme is that Elisp and JavaScript will both exhibit a particular problem, and there are specific near-term plans to fix it in JavaScript, but no long-term plans to fix it in Elisp.<br /><br />It's easier to resign yourself to a workaround when you know it's temporary. 
If you know the language is going to be enhanced, you can even design your code to accommodate the enhancements more easily when they appear.<br /><br />People are working on improving JavaScript. It's not happening quite as fast as I'd hoped earlier this year, but it's still happening. As far as I know, Emacs Lisp is "finished" in the sense that no further evolution to the language is deemed necessary by the Emacs development team.<br /><br /><b>Problem #2: No encapsulation</b><br /><br />Every symbol in Emacs-Lisp is in the global namespace. There is rudimentary support for hand-rolled namespaces using obarrays, but there's no equivalent to Common Lisp's <code>in-package</code>, making obarrays effectively useless as a tool for code isolation.<br /><br />The only effective workaround for this problem is to prefix every symbol with the package name. This practice has become so entrenched in Emacs-Lisp programming that many packages (e.g. <code>apropos</code> and the <code>elp</code> elisp profiler) rely on the convention for proper operation.<br /><br />The main adverse consequence of this problem in practice is program verbosity; it makes Emacs-Lisp more difficult to read and write than Common Lisp or Scheme. It can also have a non-negligible impact on performance, especially of interpreted code, as the prefix characters can approach 5% to 10% of total program size in some cases.<br /><br />The problems run slightly deeper than simple verbosity. Without namespaces you have no real encapsulation facility: there is no convenient way to make a "package-private" variable or function. In practice there's little problem with program integrity; it's hard for an external package to change a "private" variable inadvertently in the presence of symbol prefixes. However, it makes it annoyingly difficult for users of the package to discern the "important" top-level configuration variables and program entry points from the unimportant ones. 
Elisp attempts a few conventions here, but it's a far cry from real encapsulation support.<br /><br />JavaScript also lacks namespaces. They're being added in ES/Harmony, but in the meantime, browser JavaScript code typically uses the same name-prefixing practice as Emacs-Lisp.<br /><br />However, JavaScript has lexical closures, which provide a mechanism for creating private names. One common encapsulation idiom in browser JavaScript is to wrap a code unit in an anonymous lambda, so that all the functions in the code unit become nested functions that close lexically over the top-level names in the anonymous lambda. This trick is nowhere near as effective in Emacs-Lisp, for several reasons:<br /><ul><li>elisp is not lexically scoped and has no closures</li><li>elisp nested defuns are still entered into the global namespace</li><li>CL's <code>`flet'</code> and <code>`labels'</code> are only weakly supported, via macros, and they frequently confuse the debugger, indenter, and other tools.</li></ul><br />Some elisp code (e.g. much of the code in cc-engine) prefers to work around the namespace problem by using enormous functions that can be thousands of lines long, since let-bound variables are slightly better encapsulated. Even this is broken by elisp's dynamic scope:<br /><pre>(defun foo ()<br />  (setq x 7))<br /><br />(defun bar ()<br />  (let ((x 6))<br />    (foo)<br />    x))  ; you would expect x to still be 6 here<br /><br />(bar)<br />7  ; d'oh!</pre><br />So let-bound variables in elisp can still be destroyed by your callee: a dangerous situation at best.<br /><br />Emacs is basically one big program soup. There's almost no encapsulation to speak of, and it hurts.<br /><br /><b>Problem #3: No delegation</b><br /><br />One of the big advantages to object-oriented programming is that there is both syntactic support and runtime support for automatic delegation to a "supertype". You can specialize a type and delegate some of the functionality to the base type. 
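For comparison, here's roughly what that free delegation looks like on the JavaScript side. This is a sketch with invented names, not code from Ejacs; note that <code>Object.create</code> is an ES5 addition, so 2008-era code would have written <code>Circle.prototype = new Shape()</code> instead:

```javascript
// A base "type" with shared behavior on its prototype.
function Shape(name) {
  this.name = name;
}
Shape.prototype.describe = function () {
  return this.name + " with area " + this.area();
};

// A specialized type that delegates everything it doesn't
// define to Shape via the prototype chain.
function Circle(radius) {
  Shape.call(this, "circle"); // delegate construction to the base
  this.radius = radius;
}
Circle.prototype = Object.create(Shape.prototype);
Circle.prototype.area = function () {
  return Math.PI * this.radius * this.radius;
};

var c = new Circle(2);
// describe() is found on Shape.prototype; it dispatches back
// to Circle.prototype.area() -- no hand-rolled vtables needed.
console.log(c.describe());
```

The point is that the runtime walks the prototype chain for you; this is the machinery that had to be reimplemented by hand in elisp.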
Call it virtual methods or prototype inheritance or whatever you like; most successful languages support some notion of automatic delegation.<br /><br />Emacs Lisp is a lot like ANSI C: it gives you arrays, structs and functions. You don't get pointers, but you do get garbage collection and good support for linked lists, so it's roughly a wash.<br /><br />For any sufficiently large program, you need delegation. In Ejacs I wound up having to implement my own virtual method tables, because JavaScript objects inherit from <code>Object</code> (and in some cases, <code>Function</code>, which inherits from <code>Object</code>).<br /><br />Writing your own virtual method dispatch is just not something you should have to do in 2008.<br /><br /><b>Problem #4: Properties</b><br /><br />I wrote about this at length in a recent blog post, <a href="http://steve-yegge.blogspot.com/2008/10/universal-design-pattern.html">The Universal Design Pattern</a>. JavaScript is fundamentally a properties-based language, and it's really nice to be able to just slap named properties on things when you need a place to store data.<br /><br />Emacs Lisp only offers properties in the form of simple plists – linked lists where the odd entries are names and the even entries are values. Symbols have plists, and symbols operate a little bit like very lightweight Java classes (in that they're in the global namespace), but that only gets you so far. If you want the full JavaScript implementation of the Properties Pattern, you'll have to write a lot of code.<br /><br />And so I did. Your implementation choice for object property lists has a huge impact on runtime performance. Emacs has hashtables, but they're heavyweight: if you try to instantiate thousands of them it slows Emacs to a crawl. So they're no good for the default <code>Object</code> property list. 
Emacs also has associative arrays (alists), but their performance is O(n), making them no good for objects with more than maybe 30 or 40 properties.<br /><br />I wound up writing a hybrid model, where the storage starts with lightweight alists, and as you add properties to an object instance, once it crosses a threshold (I set it to 50, which seemed to be about right from profiling), it copies the properties into a hashtable. This yielded a dramatic increase in performance, but it was a lot of work.<br /><br />I experimented with using a splay tree. I implemented Sleator and Tarjan's splay tree algorithm in elisp; Ejacs comes with a standalone <code>splay-tree.el</code> that you can use in your programs if you like. I was hoping that its LRU cache-like properties would help, but I never found a use case where it was faster than my alist/hashtable hybrid, so it's not currently used for anything.<br /><br />And then in the end, after I was done with my implementation, it was a <em>library</em> (at least from the Emacs-Lisp side of the house). It wasn't an object system for Lisp. It's only really usable inside the JavaScript interpreter, where it has syntactic support.<br /><br />You really want syntactic support. Sure, people have ported subsets of CLOS to Emacs Lisp, but I've always found them a bit clunky. And even in CLOS it's hard to implement the Properties Pattern. You don't get it by default. CLOS has lots of support for compile-time slots and virtual dispatch, but very little support for dynamic properties. It's not terribly hard to build in, but that's my point: for something that fundamental, you don't want to have to build it.<br /><br /><b>Problem #5: No polymorphic <code>toString</code></b><br /><br />One of the great strengths of JavaScript is the <code>toSource</code> extension. I don't know if they support it over in IE-land; I haven't been a tourist there in a very long time. 
But in real versions of JavaScript, every object can serialize itself to source, which can then be eval'ed back to construct the original object.<br /><br />This is even true for functions! A function in JavaScript can print its own source code. This is an amazingly powerful feature.<br /><br />In Emacs Lisp, some objects have first-class print representations. Lists and vectors do, for instance:<br /><br /><pre>(let ((my-list '()))<br /> (push 1 my-list)<br /> (push 2 my-list)<br /> (push 3 my-list)<br /> my-list)<br />(3 2 1)<br /><br />(let ((v (make-vector 3 nil)))<br /> (aset v 0 1)<br /> (aset v 1 2)<br /> (aset v 2 "three")<br /> v)<br />[1 2 "three"]</pre><br /><br />But in Emacs Lisp, many built-in types (notably hashtables and functions) do NOT have a way to serialize back as source code. This is a serious omission.<br /><br />Also, trying to print a sufficiently large tree made entirely of <code>defstruct</code>s will crash Emacs, which caused me a lot of grief until I migrated my parse tree to use a mixture of defstructs and lists. Note that simply typing the name of a defstruct, or passing over it ephemerally in the debugger, will cause Emacs to try to print it, and crash. Fun.<br /><br />The problem of polymorphic debug-printing (or text-serialization) is, I think, a byproduct of Emacs not being object-oriented. If you want a debug dump of a data structure, you write a function to do it. But Emacs provides a half-assed solution: it debug-prints lists very nicely, even detecting cycles and using the #-syntax for representing graph structures (as does SpiderMonkey/JavaScript). But it has no useful debugging representation for hashtables, functions, buffers or other built-in structures, and there's no way to install your own custom printer so that the debugger and other subsystems will use it.<br /><br />So it sucks. 
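For contrast, here's a sketch of the JavaScript side of this story (not from Ejacs; <code>toSource</code> was a Mozilla-only extension, and <code>JSON.parse</code>/<code>JSON.stringify</code> arrived with ES5 to cover the serialize-and-read-back role for plain data):

```javascript
// Every function can print its own source text:
// Function.prototype.toString is part of ECMA-262.
function add(a, b) { return a + b; }
console.log(add.toString());

// Plain data can round-trip through a printed representation,
// the role toSource/eval played in Mozilla-era JavaScript.
var obj = { name: "ejacs", lines: 5000 };
var copy = JSON.parse(JSON.stringify(obj));

// And you can install your own debug representation per object.
obj.toString = function () {
  return "#<ejacs " + this.lines + " lines>";
};
console.log(String(obj)); // prints "#<ejacs 5000 lines>"
```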
Printing data structures in Emacs just sucks.<br /><br />The situation in Ecma-262-compliant JavaScript really isn't that much better, although you can at least install your own <code>toString</code> on the built-ins. But any competent "server-side" JavaScript implementation (i.e. one designed for writing real apps, rather than securely scripting browser pages) has a way to define your own non-enumerable properties, so you can usually override the default behavior for things like <code>toString</code> and <code>toSource</code>.<br /><br />And all else being equal, at least JavaScript functions print themselves.<br /><br /><b>Emacs advantages: Macros and S-expressions</b><br /><br />Pound for pound, Emacs Lisp seems roughly as expressive as JavaScript or Java for writing everyday code. It shouldn't be that way. Emacs Lisp ought to be more succinct because it's Lisp, but it's incredibly verbose because of the namespace problem, and it's also verbose to the extent that you want to use the properties pattern without worrying about alist or hashtable performance.<br /><br />Elisp does have a few places where it shines, though. One of them is the <code>cl</code> (Common Lisp emulation) package, which provides a whole bunch of goodies that make Elisp actually usable for real work. Defstruct and the loop macro are especially noteworthy standouts.<br /><br />Some programmers are still operating under the (ancient? legacy?) assumption that the <code>cl</code> package is somehow deprecated or distasteful or something. They're just being silly; don't listen to them. Practicality should be the ONLY consideration.<br /><br />The <code>cl</code> package wouldn't have been possible without macros. 
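Macros earn their keep by abstracting over syntax itself, and higher-order functions only partially recover that power. A hypothetical JavaScript sketch (function names invented) of how far they get:

```javascript
// Many Lisp macros exist just to wrap a body in setup/teardown.
// A plain function taking a function argument covers that case.
function withTiming(label, body) {
  var start = Date.now();
  try {
    return body();
  } finally {
    console.log(label + " took " + (Date.now() - start) + "ms");
  }
}

var sum = withTiming("summing", function () {
  var total = 0;
  for (var i = 0; i < 1000; i++) total += i;
  return total;
});
// sum === 499500

// What a function can't do is introduce new binding forms or
// control-flow keywords at the call site -- that's the gap only
// something like defmacro fills.
```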
JavaScript has no macros, so even though it has better support for lambdas, closures, and even (in some versions) continuations, there are still copy/paste compression problems you can't solve in JavaScript.<br /><br />Emacs Lisp has <code>defmacro</code>, which makes up for a LOT of its deficiencies. However, it really only has one flavor. Ideally, at the <em>very</em> least, it should support reader macros. The Emacs documentation says they were left out because they felt it wasn't worth it. Who are they to make the call? It's the users who need them. Implementer convenience is a pretty lame metric for deciding whether to support a feature, especially after 20 years of people asking for it.<br /><br />Elisp is s-expression based, which is a mixed bag. It has some advantages, no question. However, it fares poorly in two very common domains: object property access, and algebraic expressions.<br /><br />JavaScript is NOT s-expression based (or it wouldn't be a successful language, many would argue), but it does offer some of the benefits of s-expressions. JSON is one such benefit. JavaScript's declarative object literals (or as a Lisp person would say, "syntax for hashes") and arrays provide a powerful mechanism for designing and walking your own declarative data structures.<br /><br />JavaScript also has all the usual (which is to say, expected) support for algebraic operators. And unlike Java, JavaScript even got the precedence right, so it's not full of redundant parentheses.<br /><br /><b>Overall Comparison</b><br /><br />In the end, it comes down to personal choice. I've now written at least 30,000 lines of serious code in both Emacs Lisp and JavaScript, which pales next to the 750,000 or so lines of Java I've crapped out, and doesn't even compare to the amount of C, Python, assembly language or other stuff I've written.<br /><br />But 30,000 lines is a pretty good hunk of code for getting to know a language. 
Especially if you're writing an interpreter for one language in another language: you wind up knowing both better than you ever wanted to know them.<br /><br />And I prefer JavaScript over Emacs Lisp.<br /><br />That said, I suspect I would <em>probably</em> prefer <a href="http://clojure.org">Clojure</a> over Rhino, if I ever get a chance to sit down with the durn thing and use it, so it's not so much "JavaScript vs. <em>Lisp</em>" as it is vs. Emacs Lisp.<br /><br />I would love to see Emacs Lisp get reader macros, closures, some namespace support, and the ability to install your own print functions. This reasonably small set of features would be a huge step in usability.<br /><br />However, for the nonce I'm focusing on JavaScript. I've found that JavaScript is a language that smart people like. It's weird, but I keep meeting really really smart people, folks who (unlike me) are actually intelligent, and they like JavaScript. They're always a little defensive about it, and almost a little embarrassed to admit it. But they think of it as an elegant, powerful, slightly flawed but quite enjoyable little language.<br /><br />I tell ya: if you're a programming language, it's a very good thing to have smart people liking you.<br /><br />It doesn't make me smart, but I kinda like it too. Even though there's (still) a lot of hype these days about Java, and people tootling on about how Java's going to be the next big Web language... I just don't see it happening. There are too many smart people out there who like JavaScript.<br /><br />So enjoy the interpreter. Ejacs is just a toy, but I think it also shows a kind of promise. Scripting Emacs using JavaScript (if anyone ever actually implements it) could be really interesting. It could open up the world's most powerful, advanced editing environment to millions of people. 
Neat.<br /><br />In the meantime, it doesn't actually do squat except interpret EcmaScript in a little isolated console, so don't get your hopes up.<br /><br />Reminder — here's the Ejacs URL: <a href="http://code.google.com/p/ejacs">http://code.google.com/p/ejacs</a> - enjoy!<br /><br />And with that, I'm off to find some Nuka-Cola Quantum. I just wish those bastards hadn't capped me at level 20.Steve Yeggehttp://www.blogger.com/profile/14812997485690838920noreply@blogger.com50tag:blogger.com,1999:blog-13674163.post-31677408350009115532008-10-28T21:52:00.000-07:002008-11-18T01:54:41.540-08:00A programmer's view of the Universe, part 1: The fishI write a column for computer programmers called "Stevey's Blog Rants." It's basically a magazine column — I publish to it about once a month. The average length of my articles is about 12 pages, although they can range anywhere from 4 to 40 pages, depending on how I'm feeling. But for precedent, don't think blogs: think of Reader's Digest. The blog format sets the wrong expectations.<br /><br />Hence, some people complain that my articles are too long. Others complain that I have not given my arguments sufficient exposition, and that my articles are in fact too short on detail to warrant any credibility. This is a lose-lose situation for me, but I keep at it nonetheless because I enjoy writing. Even if nobody were to read my blog, the act of writing things down helps me think more clearly, and it's engaging in the same way that solving a Sudoku puzzle is engaging.<br /><br />You should try it yourself. All it takes is a little practice.<br /><br />My blog topics vary widely, and sometimes I even venture outside the realm of programming. 
Programming is where I'm most comfortable, and it's also where people seem to ascribe to me some level of punditry: I'm not necessarily <em>right</em>, but even my greatest detractors grudgingly admit that I'm entitled to an opinion, by virtue of my having spent twenty years hacking day and night without any sign of wanting to give it up and turn into a pointy-haired manager.<br /><br />Even though I love both programming and to a lesser extent writing about it, there are also lots of non-programming topics I'd like to write about. Being a career programmer gives you an interesting perspective on issues not directly related to programming. You start to see parallels. So maybe I'll branch out some more and see how it goes.<br /><br /><b>The programmer's view</b><br /><br />The first thing you notice as a programmer is that it trains you — forces you, really — to think in a disciplined way about complex logic problems. It also gives you a big booster shot of confidence around problem-solving in general. Junior programmers tend to have very high opinions of themselves; I was no exception.<br /><br />In time, though, programming eventually humbles you, because it shows you the limits of your reasoning ability in ways that few other activities can match. Eventually every programmer becomes entangled in a system that is overwhelming in its complexity. As we grow in our abilities as programmers we learn to tackle increasingly complex systems. But every human programmer has limits, and some systems are just too hard to grapple with.<br /><br />When this happens, we usually don't blame ourselves, nor think any less of ourselves. Instead we claim that it's someone else's fault, and it just needs a rewrite to help manage the complexity. In many cases this is even true.<br /><br />Over time, our worldwide computer-programming community has discovered or invented better and better ways to organize programs and systems. 
We've managed to increase their functionality while keeping the complexity under control.<br /><br />But even with such controls in place, systems can occasionally get out of hand. And sometimes they even need to be abandoned altogether, like a dog that's gone rabid. No matter how much time and love you've put into such systems, there's no fixing them.<br /><br />Abandoning a system is a time of grieving for the folks who've worked on it. Such systems are like family.<br /><br />And there's a disturbing lesson at the tail end of such experiences. The scary thing is that it's very easy, as a programmer standing at the precipice of complexity, to envision systems that are orders of magnitude more complex, millions of times more complex, even unimaginably more complex.<br /><br />In the end, programming shows us how small we are.<br /><br /><b>The fish's view</b><br /><br />Long ago, I used to have a Siamese fighting fish, also known as a <em>Betta splendens</em>, or simply a "betta". You can buy these fish at almost any pet store. I kept my betta, who was a deep vibrant red, in a pretty little 15-gallon tank decorated with a resplendence of real freshwater plants. And for a while I think my betta was happy there.<br /><br />Like many Americans, I went through a phase in which I kept and ultimately killed many, many tropical fish. I didn't kill them intentionally; I wanted them to live and thrive. But keeping them alive for long is a challenge when you don't live in the tropics. So they might live for a few months or maybe a year, but they would always die prematurely. It was sad, and eventually I could no longer bear it, so I stopped keeping them.<br /><br />Of all my fish, my betta left the biggest impression on me. The betta is a remarkable fish in several ways. 
For one thing, bettas are physically beautiful, and when they are at full display, their fins expand, peacock-like, into a fluid rose shape that is undeniably dramatic.<br /><br />Bettas are also remarkable because they fight. They do not fight to establish a pecking order, as other fish do; they fight to kill. The males display their fins and then fight whenever they see another male betta, or even their own reflection, so they have to be kept alone and away from mirrors.<br /><br />But bettas, I think, are most remarkable for their intelligence. Of all of the hundreds of tropical fish I kept, only bettas displayed anything resembling intellectual curiosity.<br /><br />This really makes bettas some of the saddest stories in the tropical fish industry. Like other hobbyist fishes, they are stolen from their natural habitat and shipped overseas, or at best farmed in unsavory conditions. But unlike most other fish, bettas are also dyed to enhance their color. They are generally housed in tiny fist-sized bowls because of their ability to breathe air when necessary. And they are bred to express their fighting genes, and are often made to fight by their owners. Whereas other fish are kidnapped and sold, bettas are <em>abused</em>.<br /><br />But worst of all, I believe their high intelligence endows them with greater capacity for suffering than other fish species. They can suffer physically and emotionally, but as we will see shortly, they can also suffer intellectually.<br /><br />So bettas are a sad story.<br /><br /><b>My betta</b><br /><br />Here is the specific sad story of my betta, the fish that left such an impression on me.<br /><br />I had taken to lying on my bed and watching my betta for an hour or longer. The betta was the sole occupant of the tank in my bedroom. I had filled the tank with plants and copious natural light, so the effect was calming and serene. 
At times I almost envied the betta for the nice home I'd made for him.<br /><br />One day, after the betta had been in his new home for several weeks, I found him exploring. It was a most unusual exploration, and one that I will never forget.<br /><br />For the first few weeks, the betta explored the way you would expect any reasonably intelligent fish to go about the task. For the first few days he swam around to every nook and cranny of the tank, to make sure he had the lay of the land. Then for a few more days he experimented with staying put in different locations to see how he liked their feel.<br /><br />Just like people, most fish will soon find a spot or a path they like best, and they'll stay in that spot or on that path for the rest of their lives.<br /><br />But my betta was different. After his initial explorations he became restless. I'm no Fish Whisperer, but I could <em>tell</em> that he was restless. You would have thought so too. The betta started spending most of his time looking out of the tank, examining my bedroom. And he was clearly looking at specific things in the bedroom, not just "out there" in general. He would periodically swim around looking mildly agitated. He was acting like he wanted out.<br /><br />I did everything I could to placate him. I experimented with different fish foods. I changed the water weekly and monitored it carefully to keep its temperature and pH within acceptable ranges. I added more lights. I added more plants. I rearranged the plants. In desperation, I even added a little castle.<br /><br />Every time I tried something new, it would pique his interest for a little while. 
But in time, and faster each time than before, he would revert to his state of restlessness.<br /><br />I'd never seen quite this behavior in other fish, so already he was demonstrating what seemed to be above average intelligence.<br /><br />And then one day I found him engaged in an exploration that was altogether new.<br /><br />He wasn't exploring the tank. He'd already investigated its topology for weeks. This time, he was exploring the <em>nature</em> of the tank. That's what caught my attention, and not just for that day, but for the rest of my life.<br /><br />There was a twenty-inch vine in the tank that extended from the lower left back corner to the upper right front corner, along the diagonal of the main volume of the tank. The vine belonged to one of the many plants I'd put in there in the hopes of making it feel more like the Mekong river basin and less like a plexiglass tank in Seattle, Washington.<br /><br />The betta had his nose on the vine. He was floating just above it, twitching his fins slightly to stay in place, and he was keeping his eyes as close to the vine as possible while keeping it in focus: about half a centimeter to a centimeter. And he was traversing the vine.<br /><br />With the tiniest of motions, he was propelling himself along the vine towards the lower back right corner, keeping it under close scrutiny at all times. This excursion, from the halfway point to the end of the vine, took him perhaps three minutes. He was taking his time.<br /><br />When he got to the end of the vine, he remained rooted in place while he inspected the 3-inch-radius spherical volume at the end of the vine, which was truncated in three dimensions by the walls and floor of the tank. 
He spent about three or four minutes doing this inspection, evidently making sure the vine really did terminate in the corner, and did not escape the tank.<br /><br />After he had thoroughly scrutinized everything in the betta-sized vicinity of the vine's end, he turned back to the vine, nose pressed close, and began working his way along the vine in the other direction.<br /><br />At this point I sat down to watch, because if he was doing what I thought he might be doing, then... I didn't know what to think. I wanted to see it for myself.<br /><br />Over the next seven to ten minutes, he crept along the vine, never losing sight of it nor getting further than a centimeter from it, until he reached the upper-right front corner of the tank. He then proceeded to repeat his inspection of the volume at vine's end, ensuring himself that the vine terminated in the tank rather than protruding beyond the wall.<br /><br />But what if he had missed something?<br /><br />Sure enough, he turned and looked down the length of the vine for a time. And then he put his nose back on the vine and began again his long descent to the other end.<br /><br />He did this for five days.<br /><br />By the second day my amazement had turned to concern, and by the third day I felt utterly helpless. Here was an intelligent prisoner, my captive, exploring the mechanics of his prison with a thoroughness that only the imprisoned can afford, looking for an escape with deathly tenacity.<br /><br />But while purchasing my betta had been easy enough, returning him to his real home would be unthinkably difficult, and probably unsuccessful even if I'd tried. Returning him to the fish store seemed like a dead end; he could easily wind up worse off than he was now.<br /><br />So I concluded that there was nothing I could do. As he inspected the vine, I bit my nails, and time passed in silence.<br /><br />After the fifth day he gave up. 
And then he did something that I still don't understand, even though I've heard about this kind of thing before, and even though I personally saw him do it: he died of unhappiness.<br /><br />It only took him a few days. He refused his food, he stopped moving, and to all external appearances he had become ill. But I knew better.<br /><br /><b>The lesson</b><br /><br />Whenever I find myself struggling against the tide of massive system complexity, I think of my betta. He had a big heart, a small brain, and a small range of sensory input. I watched him use them all as methodically as any programmer to reason his way through to a soundness proof of the inescapability of his prison.<br /><br />We like to think of ourselves as being pretty smart. Admit it. We do. But in the grand scheme of things we're intellectually little better off than that fish. We can easily find problems so complex that reasoning about them can take days or weeks of microscopic scrutiny, like my fish swimming along his vine.<br /><br />And we can just as easily envision problems thousands or millions of times more complex: problems beyond the reasoning abilities of any person, any group of people, or even our entire species.<br /><br />This has ramifications for the way we think about things today.<br /><br />I believe I will have more to say about this soon. 
Right now I need to go mourn my fish, whose soul shone as brightly as that of anyone I've known.Steve Yeggehttp://www.blogger.com/profile/14812997485690838920noreply@blogger.com79tag:blogger.com,1999:blog-13674163.post-18840843968573804262008-10-20T03:06:00.000-07:002008-11-18T02:02:27.393-08:00The Universal Design Pattern<table border="0"><br><tr><td width="20%"> </td><br> <td><em>This idea that there is generality in the specific is of far-reaching importance.</em><br> — <em>Douglas Hofstadter, <a type="amzn" asin="0465026567">Gödel, Escher, Bach</a> </em></td></tr></table><br /><br /><em>Note:</em> Today's entry is a technical article: it isn't funny. At least not intentionally.<br /><br /><em>Update, Oct 20th 2008:</em> I've added an <a href="#updates">Updates</a> section, where I'll try to track significant responses, at least for a week or so. There are three entries so far.<br /><br /><h2>Contents</h2> <ul> <li><a href="#Intro">Introduction</a></li> <li><a href="#Three_Gr">Three Great Schools of Software Modeling</a> <ul> <li><a href="#Class_Mo">Class Modeling</a></li> <li><a href="#Relation">Relational Modeling</a></li> <li><a href="#XML_Mode">XML Modeling</a></li> <li><a href="#Other_Sc">Other schools</a></li> <li><a href="#Finding_">Finding the sweet spot</a></li> <li><a href="#Property">Property Modeling</a></li> </ul></li> <li><a href="#Brains_a">Brains and Thoughts</a></li> <li><a href="#Who_uses">Who uses the Properties Pattern?</a> <ul> <li><a href="#Eclipse">Eclipse</a></li> <li><a href="#JavaScript">JavaScript</a></li> <ul> <li><a href="#Pushing_">Pushing it even further</a></li> <li><a href="#The_Prop">The pattern takes shape...</a></li> </ul> <li><a href="#Wyvern">Wyvern</a></li> <li><a href="#Lisp">Lisp</a></li> <li><a href="#XML_and_">XML revisited</a></li> <li><a href="#Bigtable">Bigtable</a></li> </ul></li> <li><a href="#Overview">Properties Pattern high-level overview</a></li> <li><a href="#Represen">Representations</a> <ul> <li><a 
href="#Keys">Keys</a> <ul> <li><a href="#Quoting">Quoting</a></li> <li><a href="#Missing">Missing keys</a></li> </ul> </li> <li><a href="#Data_Str">Data structures</a></li> </ul></li> <li><a href="#Inherita">Inheritance</a> <ul> <li><a href="#deletion">The deletion problem</a></li> <li><a href="#readwrite">Read/write asymmetry</a></li> <li><a href="#readonly">Read-only plists</a></li> </ul></li> <li><a href="#perf">Performance</a> <ul> <li><a href="#interning">Interning strings</a></li> <li><a href="#perfect">Perfect hashing</a></li> <li><a href="#Copy-on-">Copy-on-read caching</a></li> <li><a href="#Refactor">Refactoring to fields</a></li> <li><a href="#fridge">Refrigerator</a></li> <li><a href="#redacted">REDACTED</a></li> <li><a href="#Rolling_">Rolling your own</a></li> </ul></li> <li><a href="#Transien">Transient properties</a> <ul> <li><a href="#delete-remix">The deletion problem (remix)</a></li> </ul></li> <li><a href="#persistence">Persistence</a> <ul> <li><a href="#Query_St">Query strategies</a></li> <li><a href="#Backfill">Backfills</a></li> </ul></li> <li><a href="#Type_Sys">Type systems</a></li> <li><a href="#Toolkits">Toolkits</a></li> <li><a href="#Problems">Problems</a></li> <li><a href="#further-reading">Further reading</a></li> <li><a href="#updates">New Updates</a></li><li><a href="#Final_Th">Final thoughts</a></li> </ul><br /><br /><h2><a name="Intro">Introduction</a></h2><br />Today I thought I'd talk about a neat design pattern that doesn't seem to get much love: the <em>Properties Pattern</em>. In its fullest form it's also sometimes called the <em>Prototype Pattern</em>.<br /><br />People use this pattern all over the place, and I'll give you a nice set of real-life examples in a little bit. 
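In miniature, though, the pattern is nothing more than a bag of name/value pairs plus optional delegation to a parent. A hypothetical JavaScript sketch:

```javascript
// An "object" is just a property map plus an optional parent
// to delegate lookups to -- the Prototype Pattern in a nutshell.
function getProp(obj, name) {
  if (name in obj.props) return obj.props[name];
  if (obj.parent) return getProp(obj.parent, name);
  return undefined;
}

var animal = { props: { legs: 4, sound: "(generic)" }, parent: null };
var dog = { props: { sound: "woof" }, parent: animal };

getProp(dog, "sound"); // "woof" -- found on dog itself
getProp(dog, "legs");  // 4 -- delegated up to animal
```

Everything else we'll cover (deletion, performance, persistence) is elaboration on these few lines.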
It's a design pattern that's useful in every programming language, and as we'll see shortly, it's also pretty darn useful as a general-purpose persistence strategy.<br /><br />But even though this pattern is near-universal, people don't talk about it very often. I think this is because while it's remarkably flexible and adaptable, the Properties Pattern has a reputation for not being "real" design or "real" modeling. In fact it's often viewed as something of a shameful cheat, particularly by overly-zealous proponents of object-oriented design (in language domains) or relational design (in database domains). These well-meaning folks tend to write off the Properties Pattern as "just name/value pairs" – a quick hack that will just get you into trouble down the road.<br /><br />I hope to offer a different and richer perspective here. With luck, this article might even help begin the process of making the Properties Pattern somewhat fashionable again. Time will tell.<br /><br /><h2><a name="Three_Gr">Three Great Schools of Software Modeling</a></h2><br />Before I tell you anything else about the Properties Pattern, let's review some of the most popular techniques we programmers have for modeling problems.<br /><br />I should point out that none of these techniques is tied to "static typing" or "dynamic typing" per se. Each of these modeling techniques can be used with or without static checking. The modeling problem is <em>orthogonal</em> to static typing, so regardless of your feelings about static checking, you should recognize the intrinsic value in each of these techniques.<br /><br /><h3><a name="Class_Mo">Class Modeling</a></h3><br />You know all about this one. Class-based OO design is the 800-pound gorilla of domain modeling these days. Its appeal is that it's a natural match for the way we already model things in everyday life. 
It can take a little practice at first, but for most people class modeling quickly becomes second nature.<br /><br />Although the <em>industry</em> loves OO design, it's not especially well liked as an academic topic. This is because OO design has no real mathematical foundation to support it — at least, not until someone comes along and creates a formal model for side effects. The concepts of OOP stem not from mathematics but from fuzzy intuition.<br /><br />This in some sense explains its popularity, and it also explains why OOP has so many subtly different flavors in practice: whether (and how) to support multiple inheritance, static members, method overloading vs. rich signatures, and so on. Industry folks can never quite agree on what OOP is, but we love it all the same.<br /><br /><h3><a name="Relation">Relational Modeling</a></h3><br />Relational database modeling is a bit harder and takes more practice, because its strength stems from its mathematical foundation. Relational modeling <em>can</em> be intuitive, depending on the problem domain, but most people would agree that it is not <em>necessarily</em> so: it takes some real skill to learn how to model arbitrary problems as relational schemas.<br /><br />Object modeling and relational modeling produce very different designs, each with its strengths and weaknesses, and one of the trickiest problems we face in our industry has always been the object-relational mapping (ORM) problem. It's a big deal. Some people may have led you to believe that it's simple, or that it's automatically handled by frameworks such as Rails or Hibernate. Those who know better know just how hard ORM is in real-world production schemas and systems.<br /><br /><h3><a name="XML_Mode">XML Modeling</a></h3><br />XML provides yet another technique for modeling problems. Usually XML is used to model <em>data</em>, but it can also be used to model code. 
For instance, XML-based frameworks such as Apache Ant and XSLT offer computational facilities: loops or recursion, conditional expressions and setting variables.<br /><br />In many domains, programmers will decide on an XML representation before they've thought much about the class model, because for those domains XML actually offers the most convenient way of thinking about the problem.<br /><br />The kinds of data that work well with XML modeling tend to be poorly suited for relational modeling, and vice-versa, with the practical result that XML/relational mapping is almost as infamously thorny as O/R mapping.<br /><br />And as for XML/OO mapping, most of us tend to treat it as a more or less solved problem. However, in practice there are several competing ways of doing XML/OO mapping. The W3C DOM and SAX enjoy the broadest use, but they are both sufficiently cumbersome that alternatives such as JDom and REXML (among others) have gained significant followings.<br /><br />I mention this not to start a fight, but only to illustrate that XML is a third modeling technique in its own right. It has natural resonances and surfaces of friction with both relational design and OO design, as one might expect.<br /><br /><h3><a name="Other_Sc">Other schools</a></h3><br />I'm not claiming that these three modeling schools are the only schools out there – far from it! Two other obvious candidates are Functional modeling (in the sense of Functional Programming, with roots in the lambda calculus) and Prolog-style logical modeling. Both are mature problem-modeling strategies, each with its pros and cons, and each having varying degrees of overlap with other strategies. And there are still other schools, perhaps dozens of them.<br /><br />The important takeaway is that none of these modeling schools is "better" than its peers. 
<b>Each one can model essentially any problem.</b><br /><br />There are tradeoffs involved with each school, by definition — otherwise all but one would have disappeared by now.<br /><br /><h3><a name="Finding_">Finding the sweet spot</a></h3><br />Sometimes it makes sense to use multiple modeling techniques in the same problem space. You might do a mixed XML/relational data design, or a class-based OO design with Functional aspects, or embed a rules engine in a larger system.<br /><br />Choosing the right technique comes down to <b>convenience</b>. For any given real-world problem, one or two modeling schools are likely to be the most convenient approaches. Exactly which one or two depends entirely on the particulars of the problem.<br /><br />By <em>convenient</em>, I mean something different from what you might be thinking. To me, a convenient design is one that is convenient for the <em>users</em> of the design. And it should also be convenient to <em>express</em>, in the sense of minimalism: all else being equal, a smaller design beats a big one. One way of looking at this is that the design should be convenient for itself!<br /><br />Unfortunately, most programmers (myself included) tend to use exactly the wrong definition of convenience: they choose a modeling technique that is convenient for themselves. If they only have experience in one or two schools, guess which techniques they'll jump to for <em>every</em> problem they face?<br /><br />This problem rears its head throughout computing. There's always a "best" tool for any job, but if programmers don't know how to use it, they'll choose an inferior tool because they think their schedule doesn't permit a learning curve. 
In the long run they're hurting their schedules, but it's hard to see that when you're down in the trenches.<br /><br />Modeling schools are just like programming languages, web frameworks, editing environments and many other tools: you won't know how to pick the right one unless you have a reasonably good understanding of all of them, and preferably some practice with each.<br /><br />The important thing to remember is that <em>all</em> modeling schools are "first class" in the sense of being able to represent any problem, and <em>no</em> modeling school is ideal for every situation. Just because you are most comfortable solving a problem using a particular strategy does not mean that it is the ideal solution to the problem. The best programmers aim to master all available techniques, giving them a better chance at making the right choices.<br /><br /><h3><a name="Property">Property Modeling</a></h3><br />With this context in mind, I claim that the Properties Pattern is yet another kind of domain modeling, with its own unique strengths and tradeoffs, distinct from all the other modeling schools I've mentioned. It is their first-class peer, inasmuch as it is capable of modeling the same broad set of problem domains.<br /><br />After we've finished talking about the Properties Pattern in exhausting detail, I think I'll have convinced you of the pattern's status as a major school of modeling. Hopefully you'll also start to have a feel for the kinds of problems it's well-suited to solve – sometimes more so than other schools, even your current favorite.<br /><br />But before we dive into technical details, let's take a brief peek at a fascinating comparison of Property-based modeling to class-based OO design. It's a non-technical argument that I think has some real force behind it.<br /><br /><h2><a name="Brains_a">Brains and Thoughts</a></h2><br />Douglas Hofstadter has spent a lifetime thinking about <em>the way we think</em>. 
He's written about it perhaps more than anyone else in the past century. Even if someone out there has beaten him in sheer quantity of words on the subject, nobody has come close to rivaling his style or his impact on programmers everywhere.<br /><br />All of his books are wonderfully imaginative and are loads of fun to read, but if you're a programmer and you haven't yet read <a type="amzn" asin="0465026567">Gödel, Escher, Bach: An Eternal Golden Braid</a> (usually known as "GEB"), then I envy you: you're in for a real treat. Get yourself a copy and settle in for one of the most interesting, maddening, awe-inspiring and just plain <em>fun</em> books ever written. The Pulitzer Prize it won doesn't nearly do it justice. It's one of the greatest and most unique works of imagination of all time.<br /><br />Hofstadter made a compelling argument in GEB (thirty years ago!) that property-based modeling is <em>fundamental to the way our brains work</em>. In Chapter XI ("Brains and Thoughts"), there are three little sections titled <u>Classes and Instances</u>, <u>The Prototype Principle</u>, and <u>The Splitting-off of Instances from Classes</u> that together form the conceptual underpinnings of the Properties Pattern. In these little discussions Hofstadter explains how the Prototype Principle relates to classic class-based modeling.<br /><br />I wish I could reproduce his discussion in full here — it's only three pages — but I'll have to just encourage you to go read it instead. His thesis is this:<br /><blockquote><b><em>The most specific event can serve as a general example of a class of events.</em></b></blockquote><br />Hofstadter offers several supporting examples for this thesis, but I'll paraphrase one of my all-time favorites. It goes more or less as follows.<br /><br />Imagine you're listening to announcers commenting on an NFL (American football) game. They're talking about a new rookie player that you don't know anything about. 
At this point, the rookie – let's say his name is L.T. – is just an instance of the class "football player" with no differentiation.<br /><br />The announcers mention that L.T. is a running back: a bit like Emmitt Smith in that he has great speed and balance, and he's great at finding holes in the defense.<br /><br />At this point, L.T. is basically an "instance" of (or a clone of) Emmitt Smith: he just inherited all of Emmitt's properties, at least the ones that you're familiar with.<br /><br />Then the announcers add that L.T. is also great at catching the ball, so he's sometimes used as a wide receiver. Oh, and he wears a visor. And he runs like Walter Payton. And so on.<br /><br />As the announcers add distinguishing attributes, L.T. the Rookie gradually takes shape as a particular entity that relies less and less on the parent class of "football player". He's become a very, very specific football player.<br /><br />But here's the rub: even though he's a specific instance, you can now use him as a class! If Joe the Rookie comes along next season, the announcers might say: "Joe's a lot like L.T.", and just like that, Joe has inherited all of L.T.'s properties, each of which can be overridden to turn Joe into his own specific, unique instance of a football player.<br /><br />This is called <em>prototype-based modeling</em>: Emmitt Smith was a prototype for L.T., and L.T. became a prototype for Joe, who in turn can serve as the prototype for someone else. Hofstadter says of The Prototype Principle:<br /><br /><blockquote><b>"This idea that there is generality in the specific is of far-reaching importance."</b></blockquote><br /><br />Again, it's a three-page discussion that I've just skimmed here. You should go read it for yourself. 
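Hofstadter's football example maps almost directly onto prototype-based code. Here's a sketch in JavaScript using <code>Object.create</code>, the most direct way to spell prototype inheritance; all the player attributes are invented for illustration:<br /><br />

```javascript
// Emmitt Smith: a very specific player who will serve as a prototype.
var emmitt = {
  position: 'running back',
  speed: 'great',
  balance: 'great'
};

// L.T. starts life as "an instance of Emmitt Smith"...
var lt = Object.create(emmitt);
// ...and then accumulates his own distinguishing properties.
lt.catching = 'great';
lt.wearsVisor = true;

// The specific instance now serves as a class: "Joe's a lot like L.T."
var joe = Object.create(lt);
joe.wearsVisor = false;  // any inherited property can be overridden

console.log(joe.position);    // "running back" (inherited from Emmitt)
console.log(joe.catching);    // "great" (inherited from L.T.)
console.log(joe.wearsVisor);  // false (Joe's own)
```

Nothing here is a class in the traditional sense; each object is just a bag of properties with a parent link, and any specific object can serve as the "general example" for the next one.<br /><br />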
Heck, you should read the whole book: it's one of the greatest books ever written, period, and every programmer ought to be familiar with it.<br /><br />Hopefully my little recap of Hofstadter's argument has convinced you that my calling it the "Universal Design Pattern" might just <em>possibly</em> be more than a marketing trick to get you to read my blog, and that the rest of this article is worth a look.<br /><br />The Properties Pattern is unfortunately big enough to deserve a whole book. Calling an entire school of modeling a "design pattern" is actually selling it short by a large margin.<br /><br />Hence, if this article seems excruciatingly long, it's because I've tried to cram a whole book into a blog, as I often do. But look on the bright side! I've saved you a bunch of time this way. This will go much faster than reading a whole book.<br /><br />Even so, don't feel bad if it takes a few sittings to get through it all. It's still a lot of information. I considered splitting it into 3 articles, but instead I just cut about half of the material out. (Jeff and Joel: seriously. I cut 50%.)<br /><br /><h2><a name="Who_uses">Who uses the Properties Pattern?</a></h2><br />I assume I've already convinced you that this pattern is worth learning about, or you'd have left by now. I'm showing you these use cases not to draw you in, but to show you some of the very different ways the pattern can be used.<br /><br />We could probably find hundreds of examples, but I'll focus on just a handful of real-world uses that I hope will illustrate just how widespread and flexible it is.<br /><br />Before we start, there are two things to keep in mind. The first is that people have different names for this pattern, because even though it's quite commonplace, there hasn't been much literature on it. One paper calls it Do-It-Yourself Reflection. Another article calls it Adaptive Object Modeling. 
See <a href="#further-reading">Further reading</a> for the few links I could dig up.<br /><br />Whatever name you use, once you know how to look for it, you'll start seeing this pattern everywhere, wearing various disguises. Now that we have a common name for it, it should be easier to spot in the wild.<br /><br />The second thing to keep in mind is that the Properties Pattern <em>scales up</em>: you can choose how much to use it in your system. It can be anything from simple property lists attached to a few of your classes to make them user-annotatable, up through a full-fledged prototype-based framework that serves as the foundation for modeling everything in your system.<br /><br />So our examples will range from small ones to very big ones.<br /><br /><h3><a name="Eclipse">Eclipse</a></h3><br />One nice small-scale example of the pattern is the Eclipse Java Development Tools (JDT): a set of classes that model the Java programming language itself, including the abstract syntax tree, the symbol graph and other metadata. This is used by the Eclipse backend to do all the neat magic it does with your Java code, by treating Java source code as a set of data structures.<br /><br />You can view the javadoc for this class hierarchy at <a href="http://help.eclipse.org">help.eclipse.org</a>. Click on JDT Plug-in Developer Guide, then Programmer's Guide, then JDT Core, then <code>org.eclipse.jdt.core.dom</code>. This package defines strongly-typed classes and interfaces that Eclipse uses for modeling the Java programming language itself.<br /><br />If you click through any class inheriting from <code>ASTNode</code> you'll see that it has a property list. <code>ASTNode</code>'s javadoc comment says:<br /><blockquote>"Each AST node is capable of carrying an open-ended collection of client-defined properties. Newly created nodes have none. 
<code>getProperty</code> and <code>setProperty</code> are used to access these properties."</blockquote><br />I like this example for several reasons. First, it's a very simple use of the Properties pattern. It doesn't muck around with prototypes, serialization, metaprogramming or many of the other things I'll talk about in a little bit. So it's a good introduction.<br /><br />Second, it's placed smack in the middle of a very, very strongly-typed system, showing that the Properties pattern and conventional statically-typed class-based modeling are by no means mutually exclusive, and can complement one another nicely.<br /><br />And third, their property system <em>itself</em> is fairly strongly typed: they define a set of support classes such as <code>StructuralPropertyDescriptor</code>, <code>SimplePropertyDescriptor</code>, and <code>ChildListPropertyDescriptor</code> to help place some constraints on client property values. I'm not a huge fan of this approach myself, since I feel it makes their API fairly heavyweight. But it's a perfectly valid stylistic choice, and it's useful for you to know that you can implement the pattern this way if you so choose.<br /><br /><h3><a name="JavaScript">JavaScript</a></h3><br />At the other end of the "how far to go with it" spectrum we have the JavaScript programming language, which places the Prototype Principle and Properties Pattern at the very core of the language.<br /><br />People love to lump dynamic languages together, and they'll often write off JavaScript as some sort of inferior version of Perl, Python or Ruby. I was guilty of this myself for over a decade.<br /><br />But JavaScript is substantively different from most other dynamic languages (even Lisp), because it has made the Properties Pattern its central modeling mechanism. 
It borrowed this heritage largely from a language called Self, and some other modern languages (notably Io and one other language that I'll talk about below) have also chosen prototypes and properties over traditional classes.<br /><br />In JavaScript, every user-interactible object in the system inherits from <code>Object</code>, which has a built-in property list. Prototype inheritance (think back to our example of the Emmitt Smith instance having been the prototype for the L.T. instance) is a first-class language mechanism, and JavaScript offers several kinds of syntactic support for accessing properties and declaring property lists (as "object literals").<br /><br />JavaScript is often accurately described as the world's most misunderstood programming language. Armed with our newfound knowledge, we can start to see JavaScript in a new light. To use JavaScript effectively, you need to gain experience with a whole new School of Modeling. If you simply try to use JavaScript as a substitute for (say) Java or Python, you'll encounter tremendous friction.<br /><br />Since most of us have precious little actual experience with property-based modeling, this is exactly what happens, and it's no wonder JavaScript gets a bad rap.<br /><br /><h4><a name="Pushing_">Pushing it even further</a></h4><br />In addition to how centrally you want to use the Properties pattern in your system, you can also decide how <em>recursive</em> to make it: do your properties have explicit meta-properties? Do you have metaprogramming hooks? How much built-in reflection do you offer?<br /><br />JavaScript offers a few metaprogramming hooks. One such hook is the recently-introduced <code>__noSuchMethod__</code>, which lets you intercept a failed attempt to invoke a nonexistent function-valued property on an object.<br /><br />Unfortunately JavaScript does not offer as many hooks as I'd like. 
For instance, there is no corresponding <code>__noSuchField__</code> hook, which limits the overall flexibility somewhat. And there are no standard mechanisms for property-change event notification, nor any reasonable way to provide such a mechanism. So JavaScript gets it mostly right, but it stops short, possibly for <a href="#perf">performance reasons</a>, of offering a fully-extensible metaprogramming system such as those offered by Smalltalk and, to some extent, Ruby.<br /><br /><h4><a name="The_Prop">The pattern takes shape...</a></h4><br />Before we move on to other uses of the Property Pattern, let's put JavaScript (and its central use of the pattern) into perspective, by comparing it to another successful language.<br /><br />First: JavaScript is not my favorite language. I've done a <em>lot</em> of JavaScript programming over the past 2 years or so, both client-side and server-side, so I'm as familiar with it as I am with any other language I've used.<br /><br />JavaScript in its current incarnation is not the best tool for many tasks. For instance, it's not great for building APIs, and it's not great for Unix scripting the way Perl and Ruby are. It has no library or package system, no namespaces, and is missing many other modern conveniences. If you're looking for a general-purpose language, JavaScript leaves you wanting.<br /><br />But JavaScript <em>is</em> the best tool for many other tasks. As just one example, JavaScript is an <em>outstanding</em> language for writing unit tests — both for itself, and also for testing code in other languages. Being able to use the Properties Pattern to treat every object (and class) as a bag of properties makes the creation of mock objects a dream come true. The syntactic support for object literals makes it even better. 
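To make the mock-object point concrete, here's a sketch; the "user service" shape and every name in it are invented purely for illustration:<br /><br />

```javascript
// Code under test, which expects some collaborator with a lookup method.
function greetUser(userService, id) {
  var user = userService.lookup(id);
  return user ? 'Hello, ' + user.name : 'Who?';
}

// In the test, the entire mock collaborator is one object literal:
// just a property list whose "lookup" property happens to be a function.
var mockService = {
  lookup: function (id) {
    return id === 42 ? { name: 'L.T.' } : null;
  }
};

console.log(greetUser(mockService, 42));  // "Hello, L.T."
console.log(greetUser(mockService, 7));   // "Who?"
```
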
You don't need any of the silly frameworks you see coming from Java, C++ or even Python.<br /><br />And JavaScript is one of the two best <em>scripting languages</em> on the planet, in the most correct sense of the term "scripting language": namely, languages that were designed specifically to be embedded in larger host systems and then used to manipulate or "script" objects in the host system. This is what JavaScript was designed to do. It's reasonably small with some optional extensions, it has a reasonably tight informal specification, and it has a carefully crafted interface for surfacing host-system objects transparently in JavaScript.<br /><br />In contrast, Perl, Python and Ruby are huge sprawls, all trying (like C++ and Java) to be the best language for every task. The only other mainstream language out there that competes with JavaScript for scripting arbitrary host systems is <a type="amzn" asin="8590379825">Lua</a>, famous for being the scripting language of choice for the game industry.<br /><br />And wouldn't you know it, Lua is <em>also</em> a language that uses the Properties Pattern as its central design. Its central <code>Table</code> structure is remarkably similar to JavaScript's built-in <code>Object</code>, and Lua also uses prototypes rather than classes.<br /><br />So the world's two most successful scripting languages are prototype-based systems. Is this just a cosmic coincidence? Or is it possible that a suitably designed class-based language could have been just as successful?<br /><br />It's hard to say. I've used Jython as an embedded scripting language for a long time, and it's worked pretty well. But I've personally come to believe that the Properties Pattern is actually better suited for <em>extensibility</em> than class-based modeling, and that prototype-based languages make better extension languages than class-based languages. 
That's effectively what's happening with embedded scripting: the end-users are <em>growing</em> and <em>extending</em> the host system.<br /><br />In fact I was convinced of it before I even knew JavaScript. Let's take a look at another interesting "Who uses it?" example: Wyvern.<br /><br /><h3><a name="Wyvern">Wyvern</a></h3><br />My multiplayer game <a href="http://www.cabochon.com">Wyvern</a> takes the Properties Pattern quite far as well, although in some different directions than what we've discussed so far. I designed Wyvern long before I'd heard of Self or Lua, and before I'd learned anything about JavaScript. In retrospect it's amazing how similar my design was to theirs.<br /><br />Wyvern is implemented in Java, but the root <code>GameObject</code> class has a property list, much like JavaScript's <code>Object</code> base class. Wyvern has prototype inheritance, but since I'd never heard of prototypes before, I called them <em>archetypes</em>. In Wyvern, any game object can be the archetype for any other game object, and property lookup and inheritance work more or less identically to the way they work in JavaScript.<br /><br />I arrived at this design after scratching my head for <em>months</em> (in late 1996) over how to build the ultimate extensible game. I wanted <em>all</em> the game content to be created by players, and I came up with dozens upon dozens of detailed use cases, in all of which I wanted players to be able to extend the game functionality in surprising new ways. In the end I arrived at a set of interleaved design patterns, including a rich command system, a rich hooks/advice system, and several other subsystems I'd love to document someday.<br /><br />But the core data model was the Properties Pattern.<br /><br />In some ways, Wyvern's implementation is more full-featured than JavaScript's. 
Wyvern offers more metaprogramming facilities, such as vetoable property change notifications, which give in-game objects tremendous flexibility in responding to their environment. Wyvern also supports both transient and persistent properties, a scheme I'll discuss below.<br /><br />In other ways, Wyvern just made different decisions. One big one is that Wyvern's property values are statically typed. The property <em>names</em> are always strings, just like in JavaScript, but the values can be various leaf types (ints, longs, booleans, strings, etc.), or functions (a trick that wasn't easy in Java), or even archetypes.<br /><br />But despite the differences, Wyvern's core property-list infrastructure is a lot like that of JavaScript, Self and Lua. And it's been a design I've been fundamentally happy with for over ten years. It's met or exceeded all my original expectations for enabling end-user extensibility, particularly in its ability to let people extend the in-game behavior on the fly, without needing to reboot. This has proven extraordinarily powerful (and popular with the players).<br /><br />Where Wyvern clearly got the pattern wrong was in its lack of support for <em>syntax</em>. As soon as I decided to use the Properties pattern centrally in my game, I should have decided to use a programming language better suited for implementing the pattern: ideally, one that supports it from the ground up.<br /><br />I eventually wound up using Python (actually, <a type="amzn" asin="0735711119">Jython</a>) for a ton of my code, and it was far more succinct and flexible than anything I wrote in Java. But I was foolishly worried about performance, and as a result I wound up writing at least half the high-level game logic in Java and piling on hundreds of thousands of lines of <code>getProperty</code> and <code>setProperty</code> code. 
And now the system is hard to optimize; it would have been much easier if I'd had a cleaner separation of game-engine infrastructure from "scripty" game code.<br /><br />Even if I'd done the whole game in Python, I'd still have had to implement a prototype inheritance framework to enable any object to serve as the prototype for any other object.<br /><br />I realize I haven't really explained <em>why</em> prototype inheritance works so well, except for my brief mention of mock objects for unit testing. But to keep this article tractable, I had to delete several pages of detailed examples, such as "Chieftain Monsters" that could be programmatically constructed by adding a few new properties to any existing monster.<br /><br />When I told you this pattern was big enough for a book, I meant a <em>big</em> book. Without the examples handy, all I can do is say that using JavaScript/Rhino (or Lua, once it became available on the JVM) might have made my life easier. Or heck, writing my own language might have been the best choice for a system that large and ambitious.<br /><br />In any case, live and learn. It's a lot of code, but Wyvern is still a properties-based, prototype-based system, and it has amazing open-ended flexibility as a result.<br /><br />We've been through the two big examples now (Wyvern and JavaScript). I'll close this "Who Uses It" section with just a few more key examples.<br /><br /><h3><a name="Lisp">Lisp</a></h3><br />Lisp features a small-scale example of the Properties Pattern: it has property lists for symbols. Symbols are first-class entities in Lisp. They're effectively the names in your current namespace, like Java's fully-qualified class names.<br /><br />If Java classes all had property lists, it would still be a small-scale instantiation of the Properties pattern, but it would open up an awful lot of new design possibilities to Java programmers. 
Similarly, Lisp stops short of making <em>everything</em> have a property list, but to the extent it offers property lists they're exceptionally useful design tools.<br /><br />Emacs Lisp actually makes heavy use of the Properties Pattern, inasmuch as essentially every one of its thousands of configuration settings is a property in the global namespace. It supports the notion of transient vs. persistent properties, and it offers a limited form of property inheritance via its buffer-local variables.<br /><br />Unfortunately Emacs doesn't support any notion of prototypes, and in fact it doesn't have any object-orientation facilities at all. Sometimes I want to model things in Emacs using objects with flexible property lists, and at such times I find myself wishing I were using JavaScript. But even without prototypes, Emacs gains significant extensibility from its use of properties for data representation.<br /><br />Keep in mind that there are, of course, big tradeoffs to make when you're deciding how much to use the Properties pattern; I'll discuss them in a bit. I'm not criticizing <em>any</em> of the systems or languages here for the choices they've made; all of them have been improved by their use of this pattern, regardless of how far they decided to take it.<br /><br /><h3><a name="XML_and_">XML revisited</a></h3><br />Earlier I described XML as a first-class modeling school. Now that we have more context, it's possible to view XML as being an instantiation of the Properties Pattern, inasmuch as it uses the pattern as part of its fundamental structure.<br /><br />XML's view of the pattern is two-dimensional: properties can take the form of either attributes or elements, each kind having different syntactic and semantic restrictions. 
Many folks have criticized XML for the unnecessary complexity of this redundant pair of property subsystems, but in the end it doesn't really matter much, since two ways to model properties is still better than zero ways.<br /><br />So far we've seen various policies around static checking: Eclipse (strong/mandatory), JavaScript/Lua (very little), and Wyvern (moderate).<br /><br />XML offers what I think is the ideal policy, which is that it lets you decide for yourself. During your initial domain modeling (the "prototyping" phase — a term now loaded with Even More Delicious Meaning), you can go with very weak typing, opting for nothing more than simple well-formedness checks. And for many problems, this is as much static checking as you'll ever need, so it's nice that you have this option.<br /><br />As some of your models become more complex, you can choose to use a DTD for extra validation. And if you need a really heavy-duty constraint system, you can migrate up to a full-fledged XML Schema or Relax NG schema, depending on your needs.<br /><br />XML has proven to be a very popular modeling tool for Java programmers in particular — more so than for the dynamic language communities. This is because Java offers essentially zero support for the Properties Pattern. When Java programmers need access to the pattern, the easiest approach is currently to use XML.<br /><br />The <a type="amzn" asin="0596001975">Java/XML combination</a> has proven reasonably powerful, despite the lack of syntactic integration and numerous other impedance mismatches. Using XML is still often preferable to modeling things with Java classes. Even Eclipse's AST property lists might have been better modeled using XML and a DOM: it would have been less work, and the interface would have been more familiar. 
And as for Apache Ant: JSON-style JavaScript objects for build files would have been exactly what they needed, but by the time they'd realized they needed a plug-in system, the damage was done.<br /><br />As Mozilla Rhino becomes better documented, and as more Java programmers begin to appreciate the usefulness of JSON as a lightweight alternative to XML, JavaScript may begin to close the gap. Rhino provides Java with the Properties Pattern much more seamlessly than any XML solution. I've already mentioned that it's superior (even to XML) for unit tests and representing mock test data.<br /><br />But it goes deeper than unit testing. Every sufficiently large Java program, anything beyond medium-sized, needs a scripting engine, whether the authors realize it or not. Programs often have to grow to the size of browsers or spreadsheets or word processors before the authors finally realize they need to offer scripting facilities, but in practice, even small programs can immediately benefit from scripting. And XML doesn't fit the bill. It's yet another example of programmers choosing a School of Modeling because they know it, rather than learning how to use the right tool for the job.<br /><br />So it goes.<br /><br /><h3><a name="Bigtable">Bigtable</a></h3><br />Last example: Google's <a href="http://labs.google.com/papers/bigtable.html">Bigtable</a>, which provides a massively scalable, high performance data store for many Google applications (some of which are described in the paper – click the link if you're curious.) This particular instantiation of the Properties Pattern is a multidimensional table structure, where the keys are simple strings, and the leaf values are opaque blobs.<br /><br />Hardcore relational data modelers will sometimes claim that large systems will completely degenerate in the absence of strong schema constraints, and that such systems will also fail to perform adequately. 
Bigtable provides a nice counterexample to these claims.<br /><br />That said, explicit schemas <em>are</em> useful for many domains, and I'll talk more about how they relate to the Properties Pattern in a bit.<br /><br />This would probably be a good time to mention Amazon's <a href="http://aws.amazon.com/s3/">Simple Storage Service</a>, but I don't know anything about it. I've heard they use name-value pairs.<br /><br />In any case, I hope these examples (Eclipse AST classes, JavaScript, Wyvern game objects, Lisp symbols, XML and HTML, and Bigtable) have convinced you that the Properties pattern is ubiquitous, powerful, and multifaceted, and that it should be part of any programmer or designer's lineup.<br /><br />Let's look in more depth at how it's implemented, its trade-offs, and other aspects of this flexible design strategy.<br /><br /><h2><a name="Overview">Properties Pattern high-level overview</a></h2><br />At a high level, every implementation of the Properties Pattern has the same core API. It's the core API for <em>any</em> collection that maps names to values:<br /><ul> <li><b>get(name)</b></li> <li><b>put(name, value)</b></li> <li><b>has(name)</b></li> <li><b>remove(name)</b></li></ul> There are typically also ways to iterate over the properties, optionally with a filter of some sort.<br /><br />So the simplest implementation of the Properties Pattern is a Map of some sort. The objects in your system are Maps, and their elements are Properties.<br /><br />The next step in expressive power is to reserve a special property name to represent the (optional) parent link. You can call it "parent", or "class", or "prototype", or "mommy", or anything you like. If present, it points to another Map.<br /><br />Now that you have a parent link, you can enhance the semantics of <b>get</b>, <b>put</b>, <b>has</b> and <b>remove</b> to follow the parent pointer if the specified property isn't in the object's list. 
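To make this concrete, here's one minimal way to sketch the core API plus a parent link in JavaScript. All the names here (makeObject, getProperty and friends) are made up for illustration; this isn't any particular system's API:

```javascript
// A minimal Properties Pattern object: a map of string keys to values,
// plus an optional parent link for prototype inheritance.
function makeObject(parent) {
  return { properties: {}, parent: parent || null };
}

function putProperty(obj, name, value) {
  // Writes always go to the local list, never to an ancestor.
  obj.properties[name] = value;
}

function hasProperty(obj, name) {
  // Walk the prototype chain iteratively rather than recursively.
  for (var o = obj; o !== null; o = o.parent) {
    if (Object.prototype.hasOwnProperty.call(o.properties, name)) return true;
  }
  return false;
}

function getProperty(obj, name) {
  for (var o = obj; o !== null; o = o.parent) {
    if (Object.prototype.hasOwnProperty.call(o.properties, name)) {
      return o.properties[name];
    }
  }
  return null;
}

function removeProperty(obj, name) {
  // Only removes from the local list; inherited copies are untouched.
  delete obj.properties[name];
}

// Usage: a prototype and an instance that inherits from it.
var cat = makeObject(null);
putProperty(cat, "sound", "meow");

var morris = makeObject(cat);
getProperty(morris, "sound"); // inherited from cat: "meow"
```

Note that removeProperty, as sketched, only deletes from the local list; what that means for inherited properties is a genuine design problem, discussed under "The deletion problem" below.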
This is largely straightforward, with a few catches that we'll discuss below. But you should be able to envision how you'd do it without too much thought.<br /><br />At this point you have a full-fledged Prototype Pattern implementation. All it took was a parent link!<br /><br />From here the pattern can expand in many directions, and we'll cover a few of the interesting ones in the remainder of this article.<br /><br /><h2><a name="Represen">Representations</a></h2><br />There are two main low-level implementation considerations: how to represent property keys, and what data structure to use for storing the key/value pairs.<br /><br /><h3><a name="Keys">Keys</a></h3><br />The Properties pattern almost always uses String keys. It's possible to use arbitrary objects, but the pattern becomes more useful with string keys because it trivially enables prototype inheritance. (It's tricky to "inherit" a property whose key is some opaque blob - we usually think of inheritance as including a set of named fields from the parent.)<br /><br />JavaScript permits you to use arbitrary objects as keys, but what's really happening under the covers is that they're being cast to strings, and they lose their unique identity. This means JavaScript <code>Object</code> property lists cannot be used as a general-purpose hashtable with arbitrary unique objects for keys.<br /><br />Some systems permit both strings and numbers as keys. If your keys are positive integers, then your Map starts looking an awful lot like an Array. If you think about it, Arrays and Maps share the same underlying formalism (a <a href="http://en.wikipedia.org/wiki/Surjective_function">surjection</a>, not to put too fine a point on it), and in some languages, notably PHP, there isn't a user-visible difference between them.<br /><br />JavaScript permits numeric keys, and allows you to specify them as either strings or numbers.
If your object is of type Array, you can access the numeric keys via array-indexing syntax.<br /><br /><h4><a name="Quoting">Quoting</a></h4><br />JavaScript syntax is especially nice (compared to Ruby and Python) because it allows you to use <em>unquoted</em> keys. For instance, you can say <pre>var person = {<br /> name: "Bob",<br /> age: 20,<br /> favorite_days: ['thursday', 'sunday']<br />}</pre> and the symbols <em>name</em>, <em>age</em> and <em>favorite_days</em> are NOT treated as identifiers and resolved via the symbol table. They're treated exactly as if you'd written: <pre>var person = {<br /> "name": "Bob",<br /> "age": 20,<br /> "favorite_days": ['thursday', 'sunday']<br />}</pre> You also have to decide whether to require quoting <em>values</em>. It can go either way. For instance, XML requires attribute values to be quoted, but HTML does not (assuming the value has no whitespace in it).<br /><br /><h4><a name="Missing">Missing keys</a></h4><br />You will need to decide how to represent "property not present". In the simplest case, if the key isn't in the list, the property is not there (but see <a href="#Inherita">Inheritance</a> further on).<br /><br />If a property is frequently removed and re-added, it may make sense to leave the key in the list with a null value. In some systems, you may need <code>null</code> to be a valid property value, in which case you'd need to use some other distinguished (and reserved) value for this micro-optimization to work.<br /><br /><h3><a name="Data_Str">Data structures</a></h3><br />The simplest property-list implementation is a linked list. 
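For example, a Lisp-style plist (just a flat array alternating keys and values) takes only a few lines in JavaScript. The function names here are hypothetical:

```javascript
// A property list as a flat array of alternating keys and values,
// in the style of Lisp plists: ["name", "Bob", "age", 20].
function plistGet(plist, key) {
  for (var i = 0; i < plist.length; i += 2) {
    if (plist[i] === key) return plist[i + 1];
  }
  return null; // "not present"
}

function plistPut(plist, key, value) {
  for (var i = 0; i < plist.length; i += 2) {
    if (plist[i] === key) { plist[i + 1] = value; return; }
  }
  plist.push(key, value); // O(N) search, O(1) append
}

var bob = ["name", "Bob", "age", 20];
plistGet(bob, "age"); // 20
```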
You can either have the alternating elements be the keys and values (Lisp does this), or you can have each element be a struct containing pointers to the key and value.<br /><br />The linked list implementation is appropriate when:<br /><ul><li>you're just using the pattern to allow user annotations on object instances</li> <li>you don't expect many such annotations on any given instance</li> <li>you're not incorporating inheritance, serialization or meta-properties into your use of the pattern</li></ul> Logically a property list is an unordered set, not a sequential list, but when the set size is small enough a linked list can yield the best performance. The performance of a linked list is O(N), so for long property lists the performance can deteriorate rapidly.<br /><br />The next most common implementation choice is a hashtable, which yields amortized constant-time find/insert/remove for a given list, albeit at the cost of more memory overhead and a higher fixed per-access cost (the cost of the hash function.)<br /><br />In most systems, a hashtable imposes too much overhead when objects are expected to have only a handful of properties, up to perhaps two or three dozen. A common solution is to use a hybrid model, in which the property list begins life as a simple array or linked list, and when it crosses some predefined threshold (perhaps 40 to 50 items), the properties are moved into a hashtable.<br /><br />Note that we'll often refer to property sets as "property lists" (or "plists" for short), because they're so often implemented as lists. But it's fairly unusual for the order to matter. 
In the rare cases when it matters, there are usually two possibilities: the names need to be kept in insertion order, or they need to be sorted.<br /><br />If you need constant-time access and want to maintain the insertion order, you can't do better than a <a href="http://java.sun.com/javase/6/docs/api/java/util/LinkedHashMap.html">LinkedHashMap</a>, a truly wonderful data structure. The only way it could possibly be more wonderful is if there were a concurrent version. But alas.<br /><br />If you need to impose a sort order on property names, you'll want to use an ordered-map implementation, typically an ordered binary tree such as a splay tree or red/black tree. A splay tree can be a good choice because of the low fixed overhead for insertion, lookup and deletion, but with the tradeoff that its theoretical worst-case performance is that of a linked list. A splay tree can be especially useful when properties are not always accessed uniformly: if a small subset M of an object's N properties are accessed most often, the amortized performance becomes O(log M), making it a bit like an LRU cache.<br /><br />Note that you can get a poor-man's splay tree (at least, the LRU trick of bubbling recent entries to the front of the list) using a linked list by simply moving any queried element to the front of the list, a constant-time operation. It's surprising that more implementations don't take this simple step: an essentially free speedup over the lifetime of most property lists.<br /><br /><h2><a name="Inherita">Inheritance</a></h2><br />With prototype inheritance, each property list can have a parent list. 
When you look for a property on an object, first you check the object's "local" property list, and then you look in its parent list, on up the chain.<br /><br />As I described in the <a href="#Overview">Overview</a>, the simplest approach for implementing inheritance is to set aside a name for the property pointing to the parent property list: "prototype", "parent", "class" and "archetype" are all common choices.<br /><br />It's unusual (but possible) to have a multiple-inheritance strategy in the Properties pattern. In this case the parent link is a list rather than a single value, and it's up to you to decide the rules for traversing the parent chains during lookups.<br /><br />The algorithm for inherited property lookup is simple: <em>look in my list, and if the property isn't there, look in my parent's list. If I have no parent, return <code>null</code>.</em> This can be accomplished recursively with less code, but it's usually wiser to do it iteratively, unless your language supports tail-recursion elimination. Property lookups can be the most expensive bottleneck in a Properties Pattern system, so thinking about their performance is (for once) almost never premature.<br /><br /><h3><a name="deletion">The deletion problem</a></h3><br />If you delete a property from an object, you usually want subsequent checks for the property to return "not found". In non-inheritance versions of the pattern, to delete a property you simply remove its key and value from the data structure.<br /><br />In the presence of inheritance the problem gets trickier, because a missing key does <em>not</em> mean "not found" – it means "look in my parent to see if I've inherited this property."<br /><br />To make the problem clearer, assume you have a prototype list called Cat with a property named "friendly-to-dogs", whose value defaults to the boolean <code>true</code>.
Let's say you have a specific cat instance named Morris, whose prototype is Cat: <pre>var Cat = {<br /> friendly_to_dogs: true<br />}<br /><br />var Morris = {<br /> prototype: Cat<br />}</pre> Let's say Morris has a nasty run-in with a dog, and now he hates all dogs, so we want to make a runtime update to his friendly-to-dogs property. Our first idea might be to delete the property, since a missing key or a null value is often interpreted as <code>false</code> in a boolean context. (This is true even in class-based languages like C++ or Java, in which a <code>hasFooBar</code> function will return <code>true</code> if the internal <code>fooBar</code> field is non-<code>null</code>.)<br /><br />However, Morris does not have a copy of "friendly-to-dogs" in his local list: he inherits it from Cat. So if your <code>deleteProperty</code> method does nothing but delete the property from the local list, he will continue to inherit "friendly-to-dogs", which will irk him (and you) endlessly until you figure out where the bug is.<br /><br />You can't delete "friendly-to-dogs" from the Cat property list, or all of your cats will suddenly become dog-haters, and you'll have outright war on your hands. (Note that in some settings this is <em>exactly</em> what you want to do, illustrating the inherent universal trade-off between flexibility and safety.)<br /><br />The solution for Morris is to have a special "NOT_PRESENT" property value that <code>deleteProperty</code> sets when you delete a property that would otherwise be inherited. This object should be a flyweight value so that you can check it with a pointer comparison.<br /><br />So to account for deletion of inherited properties, we have to modify our property-lookup algorithm to look in the local list for (a) a missing key, (b) a null value, or (c) the NOT_PRESENT tag. If any of these apply, the property is considered not present on the object. [Note: the semantics of null values are up to the system designer.
You don't have to make <code>null</code> values mean "not there." Either way is fine.]<br /><br /><h3><a name="readwrite">Read/write asymmetry</a></h3><br />One logical consequence of prototype inheritance as we've defined it is that reads and writes work differently. In particular, if you read an inherited property, it gets the value from an ancestor in the prototype chain. But if you <em>write</em> an inherited property, it sets the value in the object's local list, not in the ancestor.<br /><br />To illustrate, let's add a "night-vision" property to our Cat prototype. Its value is expected to be an integer representing how well the cat can see in the dark. Let's say that the default value is 5, but our hero Morris has been eating his carrots, so we want to set his "night-vision" property value to 7.<br /><br />The <code>setProperty</code> code does not need to check the parent chain: it simply adds the key/value pair {"night-vision", 7} to Morris's local property list. If we set the property on Cat, then all cats would have Morris's super-vision, which isn't what we want.<br /><br />This asymmetry is normal. Back in our L.T. / Emmitt Smith example, when we were adding properties to L.T., we didn't want to modify Emmitt! It's just how the pattern works: you override inherited values by adding local properties, even when the override is a deletion.<br /><br /><h3><a name="readonly">Read-only plists</a></h3><br />Many implementations of the pattern offer "frozen" property lists. Sometimes (e.g. for debugging) it's useful to flag an entire property list as read-only. Ruby supports this via the "freeze" method on the built-in root <code>Object</code> class. In any sufficiently large, robust implementation of the Properties pattern, you should include the option to freeze your property lists.<br /><br />If you offer a "freeze" function, you should think about whether you want to offer a "thaw" as well. 
The decision depends on whether you want to offer programmers additional protection, or you just want to lock them up and throw away the key.<br /><br />My personal view is that Java programmers tend to overuse the "freeze" function when they start with Ruby. For that matter, they tend to overuse "final" in Java. I mentioned before the trade-off between <em>flexibility</em> and <em>safety</em>. When you use the Properties Pattern, you're consciously choosing flexibility over safety, and in many domains this is the right choice. In fact, safety can be viewed as a kind of optimization: something that should ideally be layered on, functioning behind the scenes rather than being interleaved with the user APIs and flexible data model.<br /><br />A nice (and simple to implement) compromise on safety and flexibility is to offer a <code>ReadOnly</code> property attribute, as JavaScript does. There are certain properties (such as the parent pointer) that are less likely to need to change as the system evolves, so it's probably OK to lock them down early on. Doing this on a property-by-property basis is much less draconian. Even better, you should consider making the <code>ReadOnly</code> property attribute non-inheritable, so that subtypes can choose their own policies without compromising the integrity of the supertypes.<br /><br />We're more or less done with inheritance: it's not very complicated. There are a few other inheritance-related design issues that I'll cover in upcoming sections.<br /><br /><h2><a name="perf">Performance</a></h2><br />Performance is one of the biggest trade-offs of using the Properties Pattern. Many engineers are so concerned with performance (and its attendant paradoxes and fallacies) that they refuse to consider using the Properties pattern, regardless of the situation.<br /><br />As it happens, the pattern's performance can be improved, and its costs mitigated, in several clever ways.
I won't cover all of them here, but I'll touch on some of the classics and one or two new approaches.<br /><br /><h3><a name="interning">Interning strings</a></h3><br />Make sure your string keys are interned. Most languages provide some facility for interning strings, since it's such a huge performance win. Interning means replacing strings with a canonical copy of the string: a single, immutable shared instance. Then the lookup algorithm can use pointer equality rather than string contents comparison to check keys, so the fixed overhead is much lower.<br /><br />The only downside of interning is that it doesn't help much when you're constructing a property name on the fly, since you still need to hash the string to intern it.<br /><br />That's not much of a downside, so as a rule, you should always intern your keys. A large percentage of property names in any system are accessed as string literals from the source code (or are read from a configuration file and can be interned all at once when the file is read), and interning works in these common cases.<br /><br /><b>Corollary</b>: don't use case-insensitive keys. It's performance suicide. Case-insensitive string comparison is really slow, especially in a Unicode environment.<br /><br /><h3><a name="perfect">Perfect hashing</a></h3><br />If you know all the properties in a given plist at compile-time (or at runtime early on in the life of the process), then you might consider using a "perfect hash function generator" to create an ideal hash function just for that list. It's almost certainly more work than it's worth unless your profiler shows that the list is eating a significant percentage of your cycles. But such generators (e.g. <a href="http://www.gnu.org/software/gperf/">gperf</a>) do exist, and are tailor-made for this situation.<br /><br />Perfect hashing doesn't conflict with the extensible-system nature of the Properties pattern. 
You may have a particular set of prototype objects (such as your built-in monsters, weapons, armor and so on) that are well-defined and that do not typically change during the course of a system session. Using a perfect hash function generator on them can speed up lookups, and then if any of them is modified at runtime, you just fall back to your normal hashing scheme for that property list.<br /><br /><h3><a name="Copy-on-">Copy-on-read caching</a></h3><br />If you have lots of memory, and your leaf objects are inheriting from prototype objects that are unlikely to change at runtime, you might try copy-on-read caching. In its simplest form, whenever you read a property from the parent prototype chain, you copy its value down to the object's local list.<br /><br />The main downside to this approach is that if the prototype object from which you copied the property ever changes, your leaf objects will have the now-incorrect old value for the property.<br /><br />Let's call copy-on-read caching "plundering" for this discussion, for brevity. If Morris caches his prototype Cat's copy of the "favorite-food" property (value: "9Lives"), then Morris is the "plunderer" and Cat is the plundered object.<br /><br />The most common workaround to the stale-cache problem is to keep a separate data structure mapping plundered objects to their plunderers. It should use weak references so as not to impede garbage collection. (If you're writing this in C++, then may God have mercy on your soul.) Whenever a plundered object changes, you need to go through the plunderers and remove their cached copy of the property, assuming it hasn't since then changed from the original inherited value.<br /><br />That's a lot of stuff to keep track of, so plundering is a strategy best used only in the direst of desperation. 
But if performance is your key issue, and nothing else works, then plundering may help.<br /><br /><h3><a name="Refactor">Refactoring to fields</a></h3><br />Another performance speedup technique is to take your N most commonly used properties and turn them into instance variables. Note that this technique is only available in languages that differentiate between instance variables and properties, so it would work in Java but not in JavaScript.<br /><br />This optimization may sound amazingly attractive, especially during your first round of performance optimizations. Be warned: this approach is fraught with pitfalls, so (as with nearly all performance optimizations) you should only use it if your profiler proves that the benefits will outweigh the costs.<br /><br />The first cost is API incompatibility: suddenly instead of accessing all properties uniformly through a single <code>getProperty/setProperty</code> interface, you now have specific fields that have their own getters and setters, which could potentially have a massive impact on your system (since, after all, these are the most commonly-accessed properties). And unless you're using Emacs, your refactoring editor probably isn't smart enough to do rename-method constrained on argument value.<br /><br />You can mitigate the API problem by continuing to go through the <code>get/setProperty</code> interface, and have them check the argument to see if it's one of the hardwired fields. This will result in an increasingly large switch-statement (or equivalent), so you're trading API code maintenance for API simplicity. It also slows down the field access considerably, which offsets the performance gain from using a field.<br /><br />The next cost is system complexity: you have twice as many code paths through <em>every</em> part of your system that deals with the Properties pattern. Does inheritance still work the same? What about serialization? Transient properties? Metaprogramming?
What about your user interface for accessing and modifying property lists? You face a huge code-bloat problem when you split property access into two classes: plist properties and instance variables.<br /><br />The next cost is accidentally getting it wrong: how do you know what the most-accessed properties are? You may do some runtime profiling and see that it's one set, but over time the characteristics of your system might change in such a way that it's an entirely different set. Realistically you will have to instrument your system to keep track, on a regular basis, of which properties are accessed at a rate beyond your tolerance threshold, so you can convert them to fields as well. This isn't a one-off optimization.<br /><br />But all these costs pale in comparison to the big one, which is extensibility: instance variables are set in stone. It will be a vast amount of work to try to give them parity with your plist properties: change-list notifications, the ability to override them or remove them, and so on. It's likely that you will wind up sacrificing at least some flexibility for these fields.<br /><br />So use this optimization with extreme caution.<br /><br /><h3><a name="fridge">Refrigerator</a></h3><br />The last performance optimization I'll mention is more about conserving memory than CPU. If you're worried about the overhead of a per-object property-list field, even if it's usually <code>null</code>, then you can implement property lists using a separate, external data structure.<br /><br />I don't know what it's normally called, so I'm calling it the Refrigerator, since you're basically putting yellow stickies all over it. The idea is that you don't need to pay for the overhead of property lists when very few of the objects in your system will ever have one. 
Instead of using a field in each class with a property list, you maintain a global hashtable whose keys are object instances, and whose values are property lists.<br /><br />To fetch the property list of an object, you go look to see if it's on the Refrigerator. The property list itself can follow any of the implementation schemes I discussed in <a href="#Represen">Representations</a> above.<br /><br />I first heard this idea from Damian Conway in roughly 2001 at a talk he gave. He said he was considering using it for Perl 6, and I thought it was pretty clever. I don't remember what he called it, and I don't know if he wound up using it, but consider this idea to be his gift to you. Thanks, Damian!<br /><br /><h3><a name="redacted">REDACTED</a></h3><br />Brendan Eich came up with an astoundingly clever performance optimization for the Properties Pattern, which he told me about back in January. I was ready to publish this article, but I told him I'd hold off until he blogged about his optimization. Every once in a while he'd ping me and tell me "any day now."<br /><br />Brendan, it's <em>October</em>, dammit!<br /><br /><h3><a name="Rolling_">Rolling your own</a></h3><br />Needless to say, I've only scratched the surface on performance optimization of the Properties pattern. You can get arbitrarily fancy. The point I'm trying to get across is that you shouldn't despair when you discover your system is unacceptably slow after designing it to use the Properties pattern. If this happens, don't panic and throw out your flexibility – go optimize!
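As a parting sketch for the roll-your-own crowd, here's the Refrigerator idea from above in JavaScript, where a WeakMap holds its keys weakly and so gives you the external, garbage-collection-friendly table for free. The function names are hypothetical:

```javascript
// The Refrigerator: property lists stored in one external, weak-keyed
// table instead of a field on every object. Objects with no properties
// cost nothing extra, and entries vanish when their object is collected.
var fridge = new WeakMap();

function getPlist(obj) {
  var plist = fridge.get(obj);
  if (!plist) {
    plist = {};
    fridge.set(obj, plist);
  }
  return plist;
}

function setProperty(obj, name, value) {
  getPlist(obj)[name] = value;
}

function getProperty(obj, name) {
  var plist = fridge.get(obj);
  return (plist && name in plist) ? plist[name] : null;
}

// Any object can now carry properties without declaring a field for them.
var door = {};
setProperty(door, "locked", true);
getProperty(door, "locked"); // true
```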
The game of optimization can be fun and rewarding in its own right.<br /><br />Just don't do it before you need it!<br /><br /><h2><a name="Transien">Transient properties</a></h2><br />While implementing <a href="http://www.cabochon.com">Wyvern</a>, I discovered that making changes to a persistent property list is a wonderful recipe for creating catastrophes.<br /><br />Let's say some player casts a Resist Magic spell, which boosts her "resist-magic" integer property value by, oh, 30 (thirty percent). Then, while the spell is active, the auto-saver kicks in (writing her enhanced "resist-magic" property value out to the data store along with the rest of her properties), and then the game crashes.<br /><br />Voilà – the player now has permanent 30% magic resistance!<br /><br />It doesn't have to be a game crash, either. Any random bug or exception condition (a database hiccup, a network glitch, cosmic rays) can induce permanence in what was intended to be a transient change to the plist. And when you're writing a game designed to be modified at runtime by dozens of programmers simultaneously, you learn quickly to expect random bugs and exception conditions.<br /><br />The solution I came up with was transient properties. Each object has (logically speaking) <em>two</em> property lists: one for persistent properties and one for transients. The only difference is that transient properties aren't written out when serializing/saving the player (or monster, or what-have-you.)<br /><br />Wyvern's property-list system has typed values. 
I haven't talked about Properties Pattern type systems yet, but in a nutshell my property values can be ints, longs, doubles, strings, booleans, archetypes (which is basically any other game object), or "bean" (JavaBean) properties.<br /><br />My early experimentation yielded the interesting rule that non-numeric transient properties <em>override</em> the persistent value, but numeric properties <em>combine</em> with (add to) the persistent value.<br /><br />A simple example should suffice. If you have a persistent boolean property "hates-trolls" (and who doesn't, really?), and you accidentally ingest a Potion of Troll Love, then the potion should set a transient value of {"hates-trolls", <code>false</code>} on your character. It overrides the persistent value. There's no combining going on; it just replaces the original.<br /><br />However, for our "resist-magic" int property, if you put on a ring of 30% magic resistance, it should (by default) <em>add</em> to your current value, which may be a combination of innate resistance and resistances conferred from other magic items and spells.<br /><br />This numbers-are-additive principle applied pretty uniformly across my entire code base and property corpus, so it's built into the lookup rules for Wyvern's property lists. <code>getIntProperty("foo")</code> must get both the transient and persistent (possibly inherited) values for "foo" and add them before returning the result.<br /><br />I experimented with different approaches for representing transient properties. Originally I used a kind of Hungarian notation, prefixing transient property names with an @-character ("@foo") and keeping them in the same hashtable as the persistent properties. One advantage of "@" was that it was an invalid character in XML attribute names, so it was originally <em>impossible</em> for me to accidentally serialize a transient property.<br /><br />Eventually I migrated to keeping them in separate (lazily created) tables.
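With separate tables, the lookup rules above (numbers combine, everything else overrides) come out to just a few lines. This is a simplified sketch with hypothetical names, not Wyvern's actual code:

```javascript
// Each object has a persistent table and a lazily created transient table.
// Numeric transients ADD to the persistent value; other types OVERRIDE it.
function getIntProperty(obj, name) {
  var persistent = obj.persistent[name] || 0;
  var transient = (obj.transients && name in obj.transients)
      ? obj.transients[name] : 0;
  return persistent + transient;
}

function getProperty(obj, name) {
  // Non-numeric lookup: a transient value simply shadows the persistent one.
  if (obj.transients && name in obj.transients) return obj.transients[name];
  return (name in obj.persistent) ? obj.persistent[name] : null;
}

var player = {
  persistent: { "resist-magic": 10, "hates-trolls": true },
  transients: null
};

// Put on the ring of 30% magic resistance: a transient, additive bonus.
player.transients = {};
player.transients["resist-magic"] = 30;
getIntProperty(player, "resist-magic"); // 10 + 30 = 40

// Drink the Potion of Troll Love: a transient, overriding boolean.
player.transients["hates-trolls"] = false;
getProperty(player, "hates-trolls"); // false, replacing the persistent true
```

Only the persistent table would be consulted by the serializer, so crashing mid-spell can never bake the bonus into the saved character.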
This made it easier to deal with interning names (not having to prepend "@" all the time) and generally simplified the bookkeeping. I don't remember all the trade-offs involved with the decision anymore (it was about 7 years ago), so you'll have to retread that road yourself if you decide to offer transient properties in your system.<br /><br /><h3><a name="delete-remix">The deletion problem (remix)</a></h3><br />Transient properties introduce their own version of the deletion problem. You can't just remove the property from the transient list, since the lookup algorithm will just look for it in the persistent list. And you don't want to remove it from the persistent list; that defeats the purpose of using transients.<br /><br />The solution is similar to what I did for deleting inherited properties: you insert a placeholder into the transient list saying "NOT_PRESENT", and as long as that placeholder is in the list, it's as if the object doesn't have the property.<br /><br />Note that this implies the existence of two similar API calls: <code>removeTransientProperty</code> for deleting a transient property from the transient list, and <code>transientlyRemoveProperty</code> for temporarily hiding a property from the persistent list.<br /><br /><h2><a name="persistence">Persistence</a></h2><br />Persisting property lists is a huge topic; I'll just touch on the basics.<br /><br />For starters, XML and JSON (and for that matter, s-expressions) are all perfectly valid choices for serialization format. You can probably imagine how this works, so I won't beat it to death.<br /><br />Text-based formats have big wins in readability, maintainability, and bootstrapping (you don't need to create special tools to read and write them).<br /><br />For performance – both for network overhead and disk storage – you might want to consider designing a compressed binary format.
One easy way to test whether this will be a win for you is to take the intermediate approach of gzipping your data to see how well it compresses, and whether it produces a discernible blip in performance.<br /><br />Wyvern initially used a filesystem trie-like structure for storing its data, but as the number of distinct objects grew to several hundred thousand, I had to switch to a database.<br /><br />You can use an RDBMS, but you're in for a world of hurt if you try to map the Properties pattern onto a relational schema. It's a tricky problem, and probably isn't one that you want to solve yourself.<br /><br />I wound up using an RDBMS and just shoving the XML-serialized property list into a text/clob column, and denormalizing the twenty or thirty fields I needed for queries into their own columns. This gets me by, but isn't a happy solution.<br /><br />What you really want is a hierarchical data store optimized for loose tree structures: in a word, an XML database. At the time I was designing Wyvern's persistence strategy (1998-ish), XML databases were pure vaporware, and even after a few years they were still fairly exploratory and unstable.<br /><br />Today things are different, and there are many interesting options for XML databases, ranging from 100% free (e.g. Berkeley DBs) through 100% expensive (e.g. Oracle XML).<br /><br />You might also look into Object databases, but I've never heard of anyone coming through that experience with anything but battle scars to show for it.<br /><br /><h3><a name="Query_St">Query strategies</a></h3><br />Querying goes hand-in-hand with persistence: once you have a bunch of objects in a data store, you'll want to ask questions about them. Producing a High Score List is a good example: you want to compute a function of some set of properties across all the players in your database.<br /><br />If you're just using the filesystem, you're stuck with grep or its moral equivalent on Windows, which is likely to be painfully slow. 
So don't do that.<br /><br />If you're using an RDBMS, and you've serialized your property lists into a single row-per-object clob or text column, then you can use (My)SQL's LIKE and RLIKE operators, or their equivalents, to do free-text searches.<br /><br />However, your property lists are likely to be hierarchical (e.g. player inventory is a bag, which has its own properties <em>and</em> collection of objects it's holding), and free-text search doesn't understand hierarchy. So this approach is really just a faster version of grep.<br /><br />Querying is the biggest reason for using an XML database, since it gives you XPath and XQuery as expressive languages that work on XML data about as well (give or take) as SQL works on relational data.<br /><br />Because you have the advantage of working in "these days" (2008+) as opposed to "those days" (1998), you now have the interesting option of using JavaScript/JSON and <a href="http://jquery.com/">jQuery</a>. I don't know much about it, but what little I do know seems promising.<br /><br />One final approach, which may not scale very well unless you can find a way to parallelize it, is to simply load all the objects into an instance of your server, and use programmatic access to walk the objects and construct your query results manually. Although it requires some infrastructure to make it work (and to make it not crash your system, once you have enough objects), it has the major benefit of giving you a full programming language, which can be useful if you're doing a complex query and your XPath/XQuery skills aren't up to par.<br /><br /><h3><a name="Backfill">Backfills</a></h3><br />Data integrity, a.k.a. Safety, is one of the two biggest trade-offs (the other being performance) you make when you choose to use the Properties pattern. In the absence of a schema, your system is open to property-list corruption through bugs and user error (e.g. 
typos).<br /><br />Interestingly, the big companies I've worked at that have strong schema constraints <em>still</em> always seem to run into data-integrity problems, so it's not clear how much the schema is really helping here. A schema can certainly help with navigability and performance, but no schema can completely avert data corruption problems.<br /><br />As soon as you notice you've got bad data, you need to do what many people in the industry term a "backfill": you have to run through all the existing data and fix the problem. Sometimes this is as simple as running a SQL update on a single column. And sometimes it involves writing a complex program that painstakingly computes the inverse of whatever bogus operation created the bad data in the first place.<br /><br />And sometimes backfills require just winging it, since the lost data may be irrecoverable and you need to use heuristics to minimize the damage. No matter how you store your data and how careful you are about replicating it and backing it up, this kind of thing can happen at pretty much any scale.<br /><br />The Properties pattern doesn't really introduce anything new to the backfill landscape; all the usual options apply. You'll just need to be mindful that user error (especially mis-typed property names) can make backfills a bit more common, so you should plan to spend a fair amount of time developing a convenient backfill infrastructure for your system.<br /><br />I should mention, embarrassing as it is, one other option, which I call "lazy backfill", and I've used it extensively. Sometimes I'll notice a data-corruption issue that needs fixing but doesn't really justify a day of my time to fix all at once. 
So I have a small subsystem in place for player logins and map loads: I iterate through the property lists looking for properties that I've flagged (hardwired in the code) as "bad data", and I call helper backfill functions on the fly to fix just that property list.<br /><br />This is obviously a hack, and it also imposes some minor performance overhead (probably not detectable via profiling) on logins and map loads, but I'll be honest: it's served me well, and I've fixed at least 20% of my data-corruption problems this way.<br /><br /><h2><a name="Type_Sys">Type systems</a></h2><br />I've already touched on this a little here and there. Eclipse's AST property lists use an interesting type system that provides a reasonable amount of metadata for each property, although (I think) it stops short of allowing properties to have their own property lists.<br /><br />JavaScript properties have a small, fixed amount of metadata. Each property has a set of flags. The flags include <code>ReadOnly</code> (can't modify the value), <code>Permanent</code> (can modify the value but can't delete the key), <code>DontEnum</code> (key doesn't show up in iterators but can be read directly), and others depending on the implementation.<br /><br />Wyvern has its own Java-like flavor of typed properties, largely because I implemented the system in Java long before the advent of auto-boxing, and I needed a convenient way of dealing with primitive types vs. object types. If I were to do it all over again, I probably wouldn't go that route. I <em>would</em> want some sort of scheme for metaproperties (aka "property attributes") — perhaps in a separate per-object metaproperty-list. 
But I'd simplify the interface and get rid of all the primitive-typed versions of all my has/get/set inherited/persistent/transient property calls.<br /><br />I won't go into any further detail about type systems, except to say that (a) you can use them to any degree you desire; there's nothing intrinsic to the Properties Pattern that precludes them, and (b) Duck Typing becomes fairly crucial to systems that are designed fully around the Properties Pattern, so if your language has any structural-typing support it'll help.<br /><br /><h2><a name="Toolkits">Toolkits</a></h2><br />Wyvern has a Map Editor that allows you to create and edit objects. Since all the game objects are property lists that use the prototype inheritance pattern, the conventional JavaBean approach doesn't work, since the JavaBeans API (which is more or less designed for this problem, except with instance fields) uses Java reflection, and my properties don't have individual getters and setters.<br /><br />Wyvern wound up with something very similar to a JavaBeans property editor, except it knows how to read, write and display my GameObject property lists.<br /><br />It wasn't a huge amount of work, but it's something you should keep in mind as you decide whether to use the Properties pattern in your system. If you need a GUI for object edits, you'll probably need to do some custom UI work.<br /><br /><h2><a name="Problems">Problems</a></h2><br />I've talked about the main problems imposed by the Properties pattern: performance, data integrity, and navigability/queryability. They're all trade-offs; you're sacrificing in these areas in order to achieve big wins in flexibility and open-ended future extensibility for users you may never meet. 
(You also get huge wins in unit-testability and prototyping speed, but I assume these benefits are obvious enough that I don't need to dwell on them.)<br /><br />One other problem is reversibility: it's hard to back out of the Properties pattern once you commit to it. Once people start adding properties, especially if they're using programmatic construction of key names, you'll have trouble on your hands if you want to try to refactor the whole system to use instance fields and getters/setters. So before you use this pattern, you should put your system through a prototyping phase (ironically enough) to determine whether it will work out as it scales.<br /><br /><h2><a name="further-reading">Further Reading</a></h2><br />I wasn't able to find much, but here are some interesting articles and papers on the subject.<br /><br /><a href="http://martinfowler.com/apsupp/properties.pdf">Dealing with Properties</a> [Fowler 1997]<br /><br /><a href="http://en.wikipedia.org/wiki/Prototype-based_programming">Prototype-based programming</a> (Wikipedia)<br /><br /><a href="http://hillside.net/europe/EuroPatterns/files/DIY_Reflection.pdf">Do-it-yourself Reflection</a> [Peter Sommerlad, Marcel Rüedi] (PDF)<br /><br /><a href="http://en.wikipedia.org/wiki/Prototype_pattern">Prototype pattern</a> (Wikipedia)<br /><br /><a href="http://en.wikipedia.org/wiki/Self_%28programming_language%29">Self programming language</a> (Wikipedia)<br /><br /><a href="http://www.codeproject.com/KB/architecture/AOM_Property.aspx">Refactoring to Adaptive Object Modeling: Property Pattern</a> [Christopher G. Lasater]<br /><br /><h2><a name="updates">New Updates</a></h2><br /><br /><em>Oct 20th 2008:</em> The comments on the article are outstanding. People have pointed out lots of further reading, as well as well-known systems (Atom, RDF) that make extensive use of the pattern at their core. Thanks, folks! 
This is great stuff.<br /><br /><em>Oct 20th 2008:</em> Martin Fowler sent me a link to a 1997 paper he wrote on the topic. It's in the Further Reading section. Well worth a read. He mentions a few important considerations that I left out: <ul><li><b>The (static) dependencies problem</b> — your compiler can't generally help you with finding dependencies on properties, or even tell you what property names are used in your system. He suggests using a registry of permissible property names as one solution. I strongly considered that approach for Wyvern, but wound up relying on a mix of dynamic tracing and static analysis to get me the dependency graph, and it's been "accurate enough" for my needs. In particular, synthesizing property names on the fly happens about as (in)frequently as Reflection happens in Java. So for the most part the dependencies issue is as tractable as it is in Java: "tractable enough".</li><br /><li><b>Substituting actions</b> — Fowler suggests that it's difficult to replace an existing property access with an action. This is only true in the Java implementation (hence, true for my game). In languages like Python, Ruby and JavaScript 1.5 that support property-access syntax for getters and setters this is a non-issue.</li></ul> Overall, Martin's take on the pattern is "avoid it when possible", which is sound (if conservative) advice. My take is that everyone's doing it anyway, so we should formalize it. I've used it as the central data model for my 500,000-line multiplayer game for 10 years, and I assert that the benefits vastly outweigh the problems. 
I also witnessed the pattern's use in <a href="http://steve.yegge.googlepages.com/is-weak-typing-strong-enough">Amazon's Customer Service Tools database</a> for some 5 years, and again, the benefits vastly outweighed the downsides.<br /><br />You just have to know what you're getting into before you dive in, which is sort of the point of my article.<br /><br /><em>Oct 20th 2008:</em> Fellow Googler <a href="http://www.eightypercent.net/">Joe Beda</a> mentioned that IE4 originally supported arbitrary attributes on HTML elements, which dramatically extended the flexibility for web developers. Today, no browsers support it, though <a href="http://ejohn.org/blog/html-5-data-attributes/">John Resig claims HTML 5 will fix this</a>. In the meantime, developers use fake css classes and hidden elements; it's a mess. I actually deleted a pretty large rant about this problem from the article. But yeah. It's a problem. When you don't provide the Properties Pattern to people, they find horrible workarounds, which is much worse than anything that can go wrong if you simply support it directly. (Joe mentioned that it posed serious technical problems with the cache, though, so I wouldn't assume it's trivial to add the support back in to browsers today.)<br /><br /><h2><a name="Final_Th">Final thoughts</a></h2><br />I haven't covered the whole landscape for this pattern. There are concurrency issues, access-control issues (e.g. in Wyvern, some properties, such as email address, can only be read by very high-level Wizards), documentation issues, and a host of other considerations.<br /><br />Let me summarize what I think are the key takeaways.<br /><br />First: this is a critically important pattern. I call it the "Universal" design pattern because it is (by far) the best known solution to the problem of designing open-ended systems, which in turn translates to long-lived systems.<br /><br />You might not think you're building that kind of system. 
But if you want your system to grow, and have lots of users, and spread like wildfire, then you <em>are</em> building exactly that kind of system. You just haven't realized it yet.<br /><br />Second: even though people rarely talk much about this pattern, it's astoundingly widespread. It appears in strongly-typed systems like Eclipse, in programming and data-declarative languages, in end-user applications, in operating systems, and even in strongly typed network protocols, although I didn't talk about that use case today. (Nutshell: a team I know using CORBA got fed up and added an XML parameter to every CORBA API call, defeating the type system but permitting them to upgrade their interface without horking every existing client. Bravo!)<br /><br />Third: it can perform well! Or at least, "well enough". The playing field for potential optimizations is nearly unbounded, and with enough effort you can reduce just about everything to constant time.<br /><br />Finally, it's surprisingly versatile. You can use it on a very small scale to augment one teeny component of an existing system, or you can go the whole hog and use it for everything, or just about anything in between. You can start small and grow into it as you become more comfortable with the pattern.<br /><br />The Properties Pattern is <em>not</em> "just" name/value pairs, although the name/value pair certainly lives at the heart of the pattern.<br /><br />If you believe Hofstadter, the Properties Pattern (using the Prototype Principle) is an approach to modeling that complements class-based modeling: both are fundamental to the way our brains process symbolic information.<br /><br />I suspect that if you've read this far, you'll start seeing the Properties Pattern everywhere you look. It's embedded in many other popular patterns and systems, from Dependency Injection to LDAP to DNS. 
As legacy systems struggle to evolve to be more user configurable and programmable, you'll see more and more of your favorite software systems (and, I hope, languages) incorporating this pattern to varying extents.<br /><br />Again: I wasn't able to find much literature about this pattern. If you happen to know of books, papers or articles that expound on it in more detail, please link to them in the comments!<br /><br />I hope you found this interesting and potentially useful. It was definitely more work than my typical posts, so if it doesn't go over well, I'm happy to go back to the comedy routine. We'll see.<br /><br /><h2>The Bellic School of Management Training</h2><em>Steve Yegge, September 28, 2008</em><br /><br />I haven't been blogging much this summer. Mostly it's because all my free time has been spent engaged in an important research project called "What Would Niko Bellic Do?" I've been enrolled in a high-quality Management Scenario Simulator with the unconventional name "Grand Theft Auto IV", probably some sort of inside joke, and I've been going through all its Developer Management training courses.<br /><br />You know how these corporate training videos go, right? They set up some contrived scenario with actors you're supposed to identify with, and the actors have inane discussions about sexual harassment or bribing government officials or stealing company equipment, and then you're asked to answer questions about whether it was OK for Bob to grab Sue's ass in that particular edge-case scenario.<br /><br />Seriously, I just took one of these courses at work. You'd think they're a joke, but no, they're considered Important Employee Training.<br /><br />Well, this Niko Bellic course isn't much different, just more fun. I finished the final management training session a couple weeks ago. 
And by truly amazing coincidence, right after I finished the final training mission, my blog made it into the <a href="http://www.noop.nl/2008/09/top-100-blogs-for-development-managers-q3-2008.html">Top 100 Blogs for Development Managers (Q3 2008)</a>.<br /><br />I personally thought that was a little weird, seeing as I've never written explicitly about software development management. Unless of course you count that <a href="http://steve-yegge.blogspot.com/2006/05/not-managing-software-developers.html">one time</a> where I wrote about how you don't need them.<br /><br />BUT, now that I've finished all the Niko Bellic Advanced Management Training courses, I'm officially a Certified Expert Dev Manager, capable of not only handling unforeseen tricky management situations, but also teaching others about them! So now I think my blog being in that list is totally justified.<br /><br />All the training in the simulator was top-notch. A lot of it was ground I'd covered before in my days as an actual dev manager. I already knew how to react, for example, when my mafia boss told me he'd been betrayed by my previous manager, and I had to take him out. I'd already gone through stuff like that at Amazon dozens of times, so I breezed through those missions.<br /><br />But even though a lot of the training was pretty obvious stuff, which you come to expect from these corporate training courses, I was pretty impressed by the final Management Training Mission, in which I had to make an Important Management Decision about an employee in my care. I'm sure many of you have also completed this course.<br /><br />In the missions leading up to the finale mission, the "Grand Theft Auto" training software gradually unfolds a background story in which you and this other employee used to work together but had a disagreement that never fully reached closure. You have some trouble arranging a meeting with the employee to discuss it. 
But as luck would have it, after doing some especially fine work for one of your internal customers, they manage to convince the employee to meet with you, by gagging and handcuffing him and stuffing him on a plane in St. Petersburg and dropping him off at the docks in Liberty City. I had to admit, the simulator really nailed the value of building strong relationships with internal customers.<br /><br />You can imagine how hard it is to write an online corporate training seminar — I mean, you have to get all this material across but keep it entertaining so people actually remember the lessons. The fine folks at the consulting firm "Rockstar" have done an admirable job of presenting the content. They cover all of the HR-sensitive topics an aspiring manager needs to know about: conflicts of interest, employee drug abuse, bribery, sexual harassment, stealing company property, I mean the list goes on and on. It's amazingly thorough. These guys clearly have real-world software industry experience.<br /><br />And they managed to make it fun! They did this by using certain metaphorical devices, purely for dramatic effect. For instance, in the final mission, you're given a choice: you can either let the employee walk away, allowing bygones to be bygones, or you can execute him by shooting him in the head as he begs for his life. This playful enactment is obviously a metaphor for deciding whether to take the "next step" in dealing with a problem employee: whether or not to put him on a Performance Improvement Plan, or if that's already failed, whether or not to terminate employment. By generalizing it to a back-alley execution scenario, they've given you a way to apply the lesson across many similar situations. It's actually quite brilliant, and I'm sure other firms offering management training courses will take note.<br /><br />As you know, in many management situations there's no "right answer": it's a personal decision, and different managers will make different choices. 
Hence, the final mission, unlike all the other missions, has no "success" outcome: you just pick a course of action and you have to live with it, just like in real life. Regardless of whether you choose to forgive him for displaying a moment of weakness, showing that you understand he's human like the rest of us, or you pop a cap in his motherfucking partner-betraying ass, you goddamn mother fucker *blam* *blam* *blam* *blam* *BLAM* *click* *click* *click*, the other actors will empathize with you and feel your pain, since no tough decision comes without a few regrets.<br /><br />As it happens, the simulation software is remarkably open-ended. When I did the mission, I decided to let the employee walk, but then I accidentally ran him over with an 18-wheeler as I was driving out of the parking lot, killing him instantly. Oopsie! I confess I may not have been paying strict attention to some of the simulator's more obscure traffic rules by the end of the training sessions. Fortunately they weren't grading me on little driving slip-ups.<br /><br />The game chose to interpret my accidental crushing of the employee beneath the wheels of my semi as intentional termination. I like to think of it as modeling the "passive-aggressive" approach, in which you don't fire the employee personally; you have someone else do it, such as HR, or perhaps your boss. Some managers prefer the passive-aggressive approach.<br /><br />To illustrate why it's popular, I'll use an analogy from the restaurant industry. Have you ever noticed that at restaurants, your waiter doesn't bring your food? Other waiters always bring out your food, during which time your waiter is nowhere to be seen. 
This is so that if you become infuriated because you specifically ordered tartar sauce on the side, and after a 45-minute wait the chef seems to have emptied the entire bottle of tartar sauce on your fish sandwich in some sort of twisted artistico-culinary attempt to make it look like he threw up on it, then you <em>don't blame your waiter</em>. Instead, you unwittingly direct your anger at the person who brought your food, who makes sympathetic noises ("Gosh, I'm so sorry - I can't believe they messed that up!") and runs away, never to be seen again. After it's eventually resolved (by still other people bringing replacements out), your waiter finally rematerializes and apologizes for the kitchen screwup.<br /><br />Studies have shown that this yields higher tipping on average. That's why your waiter doesn't bring your food. Now you know.<br /><br />The passive-aggressive employee-discipline approach is a bit unusual, but some managers feel it's better to have the employee mad at "the company", since then the manager will then get better manager reviews from the employee. I'll write more about this technique in my upcoming "How to Be an Evil Manager" handbook. Should be quite popular! Maybe it'll even bump me up the Top 100 list.<br /><br /><b>Obliviating</b><br /><br />Now that I've finished those training courses, I have all these half-written blogs I need to finish up. Tons of cool stuff to talk about. Unfortunately they're all stalled in various stages because I spent the summer playing video games and doing a lot of bike rides. Sometimes you've just gotta do that, you know.<br /><br />GTA IV was a really good game, so after I finished it (which required some time before I got tired of experimenting with 5- and 6-star warrant levels), I started looking around for an equally good game. GTA IV was basically the best game I'd played since <a href="http://steve-yegge.blogspot.com/2006/05/oblivion.html">Oblivion</a>, which was two years ago. 
I don't play games that often (just a few each year), so I figured there <em>must</em> be something good out there.<br /><br />I looked and looked, and by golly, the best game out there was <em>still</em> Oblivion. Not only that, but in the intervening 2 years it's sprouted all sorts of mods and expansions like the "Knights of the Nine" and the "Shivering Isles". Neat.<br /><br />So after lots of searching (and not finding) a good follow-up to GTA IV, I stumbled on a used copy of the <a href="http://www.elderscrolls.com/games/obgoty_overview.html">Game of the Year</a> edition. Since nothing else out there today looked anywhere near as good, I started playing Oblivion again.<br /><br />Remember how a few months ago I finally turned off my last Windows box? (I switched 100% to using Mac clients and Linux servers: at home, at work, and on the road.) Well, technically to restart Oblivion where I left off, I was going to have to turn my XBox 360 back on, which felt like cheating. But the used Game of the Year edition I found was for the PS/3.<br /><br />Decisions, decisions...<br /><br />Actually it wasn't much of a decision at all. I'm on a low-Microsoft Diet, meaning I avoid Windows whenever I possibly can. Given that I'd gone for 3 or 4 months or so without interacting with a single Windows installation, I really didn't want to turn on my XBox. Plus it's summer and I don't need a space heater yet.<br /><br />So I went for it. Even though my previous character had finished the main quest and was waiting for her imperial dragon armor, I decided to start from scratch and play the whole thing over again on the PS/3, but this time continue on to the expansions after finishing the main quest.<br /><br />I tell ya: if Oblivion had come out today rather than 2+ years ago, it would <em>still</em> win the Game of the Year. It's just the best game, period.<br /><br />To be honest, I kind of miss Morrowind (the previous game in the series). 
I find myself missing it even when I'm playing Oblivion. I think it may have actually had more personality than Oblivion. But I'm into the Shivering Isles expansion now, and it's <em>way</em> better than Oblivion on personality; I'm enjoying it even more than the main game.<br /><br />I still find it weird that they don't re-make classic games. Don't you? I mean, they re-make classic movies, so why not games? Sure, we've got game franchises like Zelda and Mario and Donkey Kong — Nintendo is awesome at building character brands. But even Nintendo doesn't re-make great old games; they just carry forward characters and ideas and themes. But that just makes me miss the old games!<br /><br />For instance, Zelda 64: Ocarina of Time — my God, that was a fantastic game. One of the best ever. Ditto for Final Fantasy X. You could argue that was the best game ever, and I wouldn't try to pop a cap in your motherffffffffffff — sorry, sorry, management training getting the better of me. Let's just say I wouldn't try to argue with you. It was a uniquely great game.<br /><br />But both of those games, great as they were, could stand a make-over: updated graphics, updated game physics and mechanics, updated for newer platforms, etc. They'd be playable and (presumably?) popular today, and it would surely be a lot easier than trying to design a new game from scratch, so studios could release these remakes in "dry spells" between flagship game releases for extra revenue.<br /><br />It seems like it'd be better than releasing crap games that they <em>know</em> suck. Which is most of them.<br /><br />I don't know. Maybe people just prefer new games, and that's that. Maybe I'm different. But nearly 40 years as a gamer has convinced me that truly great games only come along every few years, and the rest are all just filler. 
I'd rather go back and re-play a favorite game, and get that rush of nostalgia, than fill the void by playing crap.<br /><br />Ultima IV — now <em>there</em> was a game. Once or twice a decade a truly extraordinary, groundbreaking game comes along, and I remember when Ultima IV was that game. I would <em>love</em> to play it again, except this time without having to swap the City Disk with the Wilderness Disk and listen to my Commodore 64 crunch away for minutes on end.<br /><br />One of the really key elements to the overall atmosphere of a game is the music, and most great games were great — in large part, I think — due to their music. Zelda 64, Final Fantasy X, Kingdom Hearts, Ultima IV, they all had great music. Sometimes the composer was limited by the sound system - in Ultima IV's case, <em>very</em> limited. But I have all of Ultima IV's music on my iPhone, converted from MIDI files. Dead serious. I <em>love</em> those tunes. They remind me of the old Quest for the Avatar days, back in the Navy when my 3 roommates and I manned the console 24 hours a day in shifts, trying to find mandrake root.<br /><br />Someone please make an updated Ultima IV! I want IV back, dammit. The sequels never even came close.<br /><br />Not likely, though. Maybe in a generation or two. It's usually at least a few decades before they remake a classic movie, so maybe I'm just not waiting long enough.<br /><br />If they remake a classic game, I think it's really important to have the same music. This is where remakes (if you can call 'em that — they're usually either sequels or unrelated new games with similar elements) tend to screw things up. The music should be updated by a competent composer, but it should be the <em>original</em> themes. The Mario franchise does a pretty good job of this, actually. Zelda, not so much. 
I haven't heard the "Gerudo Valley" theme in any Zelda game since Z64, which came out ten years ago, and along with the title theme it's one of the two best Zelda themes ever.<br /><br />Remember Metroid? It had really cool music all around. I didn't play any of the Metroid franchise after that, until last year when I played Metroid: Prime on the Wii. The music <em>sucked</em>! Actually the game kinda sucked too. The controls were phenomenal, and made me want to play <em>all</em> games like that, but the game was boring. Sigh.<br /><br />It's also kind of important that remakes Not Suck. I guess franchise sequels aren't really remakes, though, so apparently it's OK if they suck. That seems to be the standard. There are exceptions. Mario Galaxy's music was Top 5 of all time. But if you can't afford that Japanese guy, it's OK to reuse old music! I'm serious! I give you my explicit permission!<br /><br />Anyway, I'll probably keep playing Oblivion until New Castle Wolfenstein comes out. I just don't see anything else all that compelling on the horizon. Just counting the weeks until Bethesda's next Elder Scrolls release, I guess. Morrowind to Oblivion was a 5-year wait, so I'm gonna have to find <em>something</em> to do for the next 3 years.<br /><br />Excuses aside, I should really blog more often. I was pretty amazed at how much email my last entry received. To be perfectly on the up-and-up, I haven't read the comments. I'm sorry, but I've been too busy trying to find a Daedric Katana to enchant. Priorities first!<br /><br />But I checked my GMail account on Friday and it was completely swamped with dozens and dozens of new emails, many more than usual. People seemed to feel it made us buddies, which meant they could email me just to say "hi", and ask if I wanted to "hook up." I don't know if that's good or not, but I tried to answer all their mail.<br /><br />Oh wait, that was Niko's account in the Corporate Email Training simulator. 
My bad.<br /><br />Anyway, today's blog entry was just procrastination. I'm supposed to be getting some work done, so I'll get back to it. Maybe this bandit will have a katana.Steve Yeggehttp://www.blogger.com/profile/14812997485690838920noreply@blogger.com34tag:blogger.com,1999:blog-13674163.post-87559726936493696032008-09-10T16:59:00.000-07:002008-09-10T18:22:20.896-07:00Programming's Dirtiest Little Secret<table border="0"><tr><td width="20%"> </td><td>"And as for this non-college bullshit I got two words for that: learn to fuckin' type"<br />— <a href="http://www.imdb.com/title/tt0105236/quotes">Mr. Pink</a></td></tr></table><br /><br />This is another one I've wanted to write forever. Man, I've tried a bunch of times. No ruck. Not Rucky. Once again I'm stuck feeling so strongly about something that I'm tripping over myself trying to get my point across.<br /><br />So! Only one thing left to try: bust open a bottle of wine and see if that gets the ol' creative juices flowing all over my keyboard. Rather than top-down, which is boring, let's go bottoms-up.<br /><br /><b>Once upon a time...</b><br /><br />...in, uh, let's see... it was about 1982. Yeah. A looooong time ago. This is practically a fairy tale.<br /><br />Once upon a time in '82, there was this completely hypothetical fictitious made-up dorky 12-year-old kid named Yeev Staigey, who was enduring his sophomore year at Paradise High School in Paradise, California. Yeev had skipped 3rd, 7th and 8th grades and entered high school at age 11, in a heroic and largely successful effort to become socially inept for the rest of his life.<br /><br />Boy, I could tell you all sorts of stories about little Yeev at that age. He was even lamer and more pathetic than you're probably already imagining.<br /><br />However, our story today concerns Yeev's need to take a, um, an... elective of some sort. 
I'm not sure what they called it, but at Yeev's school, you couldn't just take math and science and languages and history and all that boring stuff. No! Yeev was being educated in the United States of America, so he had to take "electives", which were loosely defined as "Classes Taught by the Football Coach because Some Law Said that Football Coaches Had to Teach A Course Other Than Football."<br /><br />These "electives" (which you could "elect" not to take, in which case they would "elect" not to graduate you) were the kinds of courses that put the "Red" in Red-Blooded American. These were courses like Wood Shop, Metal Shop, Auto Shop, and of course that perennial favorite, Just Chop Your Hand Off For Five Credits Shop.<br /><br />At the time our story begins, our pathetic hero Yeev is peering through his giant scratched bifocal goggles at the electives list, trying to find one that doesn't involve grease and sparks and teachers screaming for a medic, can anyone here make a tourniquet, help help help oh god my pension, and all that manly American stuff you find in Shop class.<br /><br />Yeev noticed that one of the electives, surely placed there by mistake, was Typing. Like, on a typewriter. Yeev thought this seemed, in the grand scheme of things, relatively harmless. The worst that could happen was maybe getting your fingers jammed in an electric typewriter just as lightning hit the building, causing you to jerk wildly in such a way that your pants accidentally fall down around your ankles and everyone laughs loudly at the Mervyn's white briefs your mom bought you. That would be mildly embarrassing, yes, but in a few years almost nobody would remember, except when they saw you.<br /><br />Despite this potential pitfall, Typing definitely sounded more appealing than Tourniquet Shop.<br /><br />Yeev checked, and sure enough, the school's football coach was actually teaching the class. For real. 
Seeing as this was going to be the closest Yeev would ever get to a football field during his educational career, Yeev decided to go for it.<br /><br />Yeev didn't know it at the time, but they say coaches make the best teachers. You know. "Them." "They" say it. It's got some truth to it, actually. Coaches have to get a bunch of complicated information through to people with the attention spans of hungry billy goats. That takes mad skilz, as "they" also say.<br /><br />Have you ever noticed how on NFL Prime Time, the ex-coach commentators and coached ex-player commentators always have big, beefy hands, and they wave them at you as they talk, riveting your attention on the speaker? It's because your reptilian brain is thinking "that dude is about to hit me." Coaches know how to get your attention. They know how to <em>teach.</em><br /><br />So Yeev was pretty fortunate in getting a coach. It wasn't <em>all</em> roses, mind you. He was unfortunate in the sense that he was living in 1982, he had little to no experience with computers, and the school was so backwards that by 2008 they still wouldn't have a fugging website, apparently. And back in 1982 they could only afford budget for half the typewriters to be electric; the rest were the old, manual, horse-drawn kind of typewriter.<br /><br />It would have been better if Yeev had been learning to type today. Today they have fast keyboards, and smart programs that can show you your exact progress, and so on.<br /><br />I feel almost jealous of people who need to learn to type today. Don't you?<br /><br />But in 1982, little bifocaled Yeev had no software training programs, so he had to learn from a football coach.<br /><br />And all things considered, this was pretty rucky.<br /><br />Let me tell you how it went down...<br /><br /><b>Learning Licks</b><br /><br />Have you ever watched a professional concert musician practicing?
I'm talking about those world-class ones, the kinds of musicians that were trained in academies in China and Russia and have all the technique of Japanese robots combined with all the musical soul of, well, Japanese robots.<br /><br />Well, they practice like this: Fast, Slow, Medium. Fast, Slow, Medium. Over and over.<br /><br />That's how they practice.<br /><br />It's kinda like Goldilocks. You remember her, don't you? That little girl in the fairy tale that got eaten by bears? Too hot, too cold, just right. That's how musicians practice.<br /><br />In classical music, they call difficult hunks of music "passages". In electric guitar music, they call 'em "licks". But it's pretty much the same thing. You want to train your fingers to swipe through those notes like a Cheshire Cat licking its big smile. Llllllick!<br /><br />Here's how you train your fingers.<br /><br />You start with a passage. Anything at all. At first it'll just be a single note. Later it'll become a few notes, a phrase, a measure, a couple of measures. Anything you're having trouble with and you want to master.<br /><br />First you play the lick <b>as fast as possible</b>. You don't care about making mistakes! The goal of this phase is to loosen your fingers up. You want them to know what raw speed feels like. The wind should be rushing through their fingery hair! Yuck!<br /><br />Next you play it <b>as slow as necessary</b>. In this pass you should use proper technique. That basically means "as proper as you can", because state-of-the-art technique is (a) constantly evolving and (b) always somewhat personal. You pick any discipline, they've got schools of thought around technique. There's no right answer, because our bodies all work a little differently. You just have to pick the technique that you like best and try to do it right.<br /><br />Eventually you can <em>invent</em> your own technique. Sometimes you're forced to: I'll tell you about that in a little bit.
But at first you should learn someone else's tried-and-true technique, and after you've mastered it, <em>then</em> decide if you want to change it. Before you go charging off in your own crazy directions, you need to master your form.<br /><br />Form is liberating. Believe it. It's what they say, and they say it for sound reasons.<br /><br />Whatever technique you choose, in the slow pass you don't care about speed AT ALL. You care about accuracy. Perfect practice makes perfect, and all that. You want your fingers to know what it feels like to be <em>correct</em>. Doesn't matter if it takes you 30 seconds per note. Just get it right. If you make a mistake, start over from the beginning, <em>slower</em>.<br /><br />Finally, you play it "at speed". If you're practicing a musical instrument, you play it at the target tempo. You want your fingers to feel <em>musical</em>. Musicians generally agree that you don't want to make mistakes in this phase, or you're just practicing your mistakes. But realistically, most musicians are probably willing to make a few minor sacrifices here in the third pass, as long as the music shines through beautifully.<br /><br />Let's call it 5 sacrifices a minute. That's what you're targeting.<br /><br />Fast, Slow, Medium. Over and over. That's what they do. And it works!<br /><br /><b>Learning to Type</b><br /><br />Yeev's football coach was a very wise man. I don't know if he played a musical instrument, but he sure as heck used classical practice ideas.<br /><br />Yeev dutifully attended class once a day. First he had to learn the basics of typing. There's not much to it, really. You hold your hands in a certain position on "home row". You keep your wrists off the keyboard. There's a diagram showing which fingers type which keys. You memorize it. You try each key a few times.<br /><br />Think back to kindergarten, when they had you writing the alphabet. You'd fill a line with "A"s, and then a line with "B"s. 
Just like that.<br /><br />Within a day or two, you have the keyboard layout memorized, and given enough time, you can type anything without looking at the keys. Just a day or two, and you're already touch-typing!<br /><br />After the basics, unsurprisingly, Yeev's class played a lot of Typing Football. This was a game the coach had invented to help make learning to type fun. It wasn't too hard, since the coach astutely realized that not everyone in the class would have the NFL rule book and playbook memorized. Typing football pretty much involved dividing the class in half and moving the ball down the field by typing better than the other half.<br /><br />The drills Yeev did in 1982 could be done a lot better today using software. Heck, today they have software that lets you practice typing by <a href="http://en.wikipedia.org/wiki/The_Typing_of_the_Dead">shooting zombies</a>. How cool is that?<br /><br />If there's any secret to learning to type, it's persistence. Yeev's class kept at it. Every day, five days a week for 12 weeks, they typed. They didn't have homework, since it wasn't expected that they'd have typewriters. They just came in, played typing football, and did the fast/slow/medium drills.<br /><br />Sure, sure, there were nuances. Sometimes they'd practice common letter groupings in their language of choice, which in Yeev's case was English. Groups like "tion", "the" and "ing" had to be practiced until they rolled out with an effortless flick. Sometimes they'd practice stuff with lots of punctuation, or numbers, or odd spacing.<br /><br />That kind of detail is beyond the scope of our story. It's all handled by typing software these days. You'll see.<br /><br />So how did it turn out? Well, by the end of the semester, Yeev was typing a healthy 60 words per minute. And he wasn't even the best in the class.
It was approximately a 45-day investment of 50 minutes a day.<br /><br />And it was fun!<br /><br />Realistically, these days, with better software and better keyboards, learning to type is probably more like a 30-day investment of 30 minutes a day.<br /><br />Today Yeev types about 120 wpm. He entered college still typing around 60-65 wpm, but he decided to practice up after IM'ing with a fellow student named Kelly who typed 120 wpm, using an old Unix program called "talk". Yeev could <em>feel</em> her impatience as they IM'ed. He mentioned it, and she said: "You should see me on a Dvorak keyboard."<br /><br />Yowza. Yeev was just socially ept enough by then to bite his tongue really hard, and not type anything at all.<br /><br />But enough about Yeev. The guy's made-up anyway.<br /><br /><b>Do you need to learn to type?</b><br /><br />Well, uh... yeah. You <em>know</em> you do. That's the thing. Even as you make excuses, you know deep down that you need to learn. Typing is how we interact with the whole world today. It doesn't make sense to handicap yourself.<br /><br />Perhaps you're one of those people who declares: "I'm not rate-limited! I spend all my time in design and almost none of it entering code!" I hear that all the time.<br /><br />You're wrong, though. Programmers type all day long, even when they're designing. <em>Especially</em> when they're designing, in fact, because they need to have conversations with remote participants.<br /><br />Here's the industry's dirty secret:<br /><br /><br /><center><font color="firebrick">Programmers who don't touch-type fit a profile</font>.</center><br /><br />If you're a touch-typist, you know the profile I'm talking about. It's dirty. People don't talk about dirty secrets in polite company. Illtyperacy is the bastard incest child hiding in the industry's basement. I swear, people get really uncomfortable talking about it. 
We programmers act all enlightened on Reddit, but we can't face our own biggest socio-cultural dirty secret.<br /><br />Well, see, here's how it is: I'm gonna air out the laundry, whether you like the smell or not.<br /><br />What's the profile? The profile is this: <b>non-touch-typists have to make sacrifices in order to sustain their productivity.</b><br /><br />It's just simple arithmetic. If you spend more time hammering out code, then in order to keep up, you need to spend <em>less</em> time doing something else.<br /><br />But when it comes to programming, there are only so many things you can sacrifice! You can cut down on your documentation. You can cut down on commenting your code. You can cut down on email conversations and participation in online discussions, preferring group discussions and hallway conversations.<br /><br />And... well, that's about it.<br /><br />So guess what non-touch-typists sacrifice? All of it, man. They sacrifice all of it.<br /><br />Touch typists can spot an illtyperate programmer from a mile away. They don't even have to be in the same room.<br /><br />For starters, non-typists are <em>almost</em> invisible. They don't leave a footprint in our online community.<br /><br />When you talk to them 1-on-1, sure, they seem smart. They usually <em>are</em> smart. But non-typists only ever contribute a sentence or two to any online design discussion, or style-guide thread, or outright flamewar, so their online presence is limited.<br /><br />Heck, it almost seems like they're standoffish, not interested in helping develop the engineering culture. Too good for everyone else!<br /><br />That's the first part of the profile. They're distant. And that's where their claim that "most of their time is spent in design" completely falls apart, because design involves communicating with other people, and design involves a <em>persistent record</em> of the decision tree. 
If you're not typing as part of your design, then you're not doing design right.<br /><br />Next dead-giveaway: non-typist code is... minimalist. They don't go the extra mile to comment things. If their typing skills are really bad, they may opt to comment the code in a second, optional pass. And the ones who essentially type with their elbows? They even sacrifice <em>formatting</em>, which is truly the most horrible sin a programmer can commit. Well, actually, no, scratch that. It's the second worst. The worst is misspelling an identifier, and then not fixing it because it's too much typing. But shotgun formatting is Right Up There.<br /><br />You know. Shotgun formatting? Where you shove all your letters into a shotgun, point it at the screen, and BLAM! You've Got Code? I knew a dude who coded like that. It was horrible. It was even more horrifying to watch him, because he stared directly downward at his keyboard while he typed, and he'd type with exactly two fingers, whether he needed them both or not, and about once a minute he'd look up at the screen.<br /><br />During these brief inspections of his output, one of two things would happen. The first possibility was that he'd reach for his mouse, because he had been typing into the wrong window for the past 60 seconds, often with comical results. If he didn't reach for his mouse, he'd reach for the backspace key, which he would press, on average, the same number of times he had pressed other keys.<br /><br />That dude just may have been CPU bound rather than I/O-bound, though, so I guess I'll cut him some slack.<br /><br /><b>B-b-b-but REFACTORING *fap* *fap* *fap*</b><br /><br />Yeah, yeah. Refaptoring tools make you feel studly. I hear ya. I've heard it <em>plenty</em> of times. The existence of refactoring tools makes typing practically obsolete! A thing of the past! You just press menu buttons all day and collect a paycheck!<br /><br />I got it. Really. 
I hear ya.<br /><br />Here's the deal: everyone is laughing at you. Or if they're your close friend, they're just pitying you. Because you suck. If you <em>really</em> think refactoring tools are a substitute for typing, it's like you're telling us that it's OK for you to saw your legs off because you have a car. We're not fucking buying it.<br /><br />If you are a programmer, or an IT professional working with computers in <em>any</em> capacity, <b>you need to learn to type!</b> I don't know how to put it any more clearly than that. If you refuse to take the time, then you're... you're... an adjective interjection adjective noun, exclamation.<br /><br />Yeah, I went ahead and redacted that last sentence a bit. It's better this way. I want us to remain friends. You just go ahead and madlib that sucker.<br /><br /><b>The Good News</b><br /><br />Here's the good news, though. Seriously, there's good news. Like, now that you're finally gonna learn to type, I've got good news for you.<br /><br />And I <em>know</em> you're gonna learn. You know how I know? I'll tell you: it's because you've read this far!<br /><br />Seriously. The fact that you can actually read sets you apart.<br /><br />You'd be absolutely astonishedly flabbergasted at how many programmers don't know how to READ. I'm dead serious. You can learn to speed read even faster than you can learn to type, and yet there are tons of programmers out there who are unable to even <em>skim</em> this blog. They try, but unlike speed readers, these folks don't actually pick up the content.<br /><br />It's the industry's other dirty little secret.<br /><br />So! Given that you've read this far, you now realize that it's high time you learned to type. You know you can do it. You even know it's not going to be that hard. 
You know you're just going to have to give up a couple dozen games of Wii Golf, and suddenly you'll be, like, twice as productive without having had to learn so much as a new data structure.<br /><br />You <em>know</em>. That's why <em>I know</em> you're going to learn.<br /><br />So I'll tell you the good news: it's frigging easy! Fast, slow, medium! Get some typing software and just <em>learn</em>. We're not talking about dieting here, or quitting smoking. It doesn't matter how old you are, how set in your ways you are: it's <em>still</em> easy. You just need to put in a few dozen hours.<br /><br />Hell, if you're having trouble, just email me, and I'll give you a personalized pep talk. I can afford it. I type pretty fast. Plus your email will be really short.<br /><br />In fact, here's a mini pep talk for ya: I didn't know how to touch-type numbers until I'd been out of college for 3 or 4 years. I finally got fed up with the fact that every time I had to type a number, I had to sit up, look down at the keys, and fumble through with a couple of random fingers.<br /><br />So I finally spent 15 minutes a day for, like, 2 weeks. That's it. You don't have to type numbers very often, as it happens, so after a week or so, every time I needed to type a number I just <em>slowed down and typed it right</em>. After about 2 weeks, I was typing numbers.<br /><br />That was 15 years ago! 15! 15! 15! I love being able to type that without looking! It's <em>empowering</em>, being able to type almost as fast as you can think. Why would you want it any other way?<br /><br />C'mon. It's time to bite the bullet and <em>learn</em>.<br /><br /><b>Where to start?</b><br /><br />Well, if it were me, I'd go online and look for free typing software. I'd search for, oh, an hour or two at most, spread over a week or so. I'd try everything out there. If nothing free seemed to be doing it for me, I'd get Mavis Beacon. It's, like, the brand name for typing software. 
I have no idea if it's good, but I imagine it's a hell of a lot better than a football coach teaching you on an electric typewriter.<br /><br />I dunno, man. I just can't... I can't <em>understand</em> why professional programmers out there allow themselves to have a career without teaching themselves to type. It doesn't make any sense. It's like being, I dunno, an actor without knowing how to put your clothes on. It's showing up to the game unprepared. It's coming to a meeting without your slides. Going to class without your homework. Swimming in the Olympics wearing a pair of Eddie Bauer Adventurer Shorts.<br /><br />Let's face it: it's <em>lazy</em>.<br /><br />There's just no excuse for it. There are no excuses. I have a friend, John, who can only use one of his hands. He types 70 wpm. He invented his own technique for it. He's not making excuses; he's typing circles around people who <em>are</em> making excuses.<br /><br />Shame on them!<br /><br />If you have two hands, then 70 wpm, error-free, is easily within your reach. Or faster. It ain't as hard as you think. It's a LOT more useful and liberating than you think.<br /><br />And since you're Rucky enough to be learning today, you might as well learn on a Dvorak layout. It's a free speed boost. Might as well give yourself a head start!<br /><br />That's it. That's my article. <em>Please</em> — learn to type. This should be a non-issue, NOT one of the industry's dirty secrets that nobody talks about. Tell your boss you're going to take the time. Get your employer to pay for the software. Have them send you off to a course, if necessary, so you can't weasel out of it.<br /><br />Do whatever it takes.<br /><br />Then let me know how it goes. Believe it or not, I want to hear your success stories. Send me mail. 
It'll make my day!Steve Yeggehttp://www.blogger.com/profile/14812997485690838920noreply@blogger.com204tag:blogger.com,1999:blog-13674163.post-24814119356127606662008-08-12T00:57:00.000-07:002008-08-12T03:24:51.056-07:00Business Requirements are BullshitSome CEO emailed me the other day. I don't remember who it was; people mail me all the time about their blah blah yawn product service thingy, and on the rare occasions I bother to read mail from strangers, I don't usually remember anything about the email, even if I respond to it. I can remember broad categories of questions I get, but everything else is just a blur. That's senility for ya.<br /><br />But <em>this</em> dude, or possibly dudette, asked me a really good question. One that made an impression. And by "really good", I mean it was one of the flat-out wrong-est business questions you could possibly ask. It was like asking: "How can I get the most out of my overseas child laborer pool?"<br /><br />There's no right answer to a question that's just <em>wrong</em>.<br /><br />But his question, well, people ask it a lot. In fact there are whole books that give answers to this inherently-wrong question.<br /><br />His question was: <blockquote><em>"Hello, blah blah, I'm the CEO or C-something-O or whatever of a company that blah blah BLAH, and I read your <a href="http://steve-yegge.blogspot.com/2006/09/good-agile-bad-agile_27.html">blog about Agile</a> from almost 2 years ago, which kinda resonated with me in a scary way, inasmuch as I realized perhaps belatedly that I was being duped, BUT I was sort of wondering: <p><b>How do you go about gathering business requirements, so you know what to deliver to the customer?</b></p> <p>Signed, blah blah blaaaaaaaaaah."</p></em></blockquote> <em>(Note: not a verbatim transcript. But close enough.)</em><br /><br />And my answer was: <b>Business requirements are bullshit!</b><br /><br />Well... 
actually it was: "<em>gathering</em> business requirements is bullshit", plus a bunch of accompanying explanation. But the shorter version sure is catchy, isn't it?<br /><br />The rest of this little diatribe expands a bit on my reply to this guy. (I actually sent him an email with much of this material in it. When I <em>do</em> reply to strangers, I still try to do a thorough job of it.)<br /><br />Before we dig into the meat and potatoes of today's meal, let's talk about my credentials, and how it is that I am qualified to offer an opinion on this topic.<br /><br /><b>My Credentials</b><br /><br />If you're a business person stumbling on my blog for the first time, welcome! Allow me to introduce myself: <em>I don't matter.</em> Not one teeny bit. You shouldn't care about my credentials! All that matters is the message, which is hopefully so self-evident that if you hadn't already realized it, you'll be kicking yourself.<br /><br />It's OK, though. Kick away! In fact, give yourself an extra kick for me. Kicking yourself is a great way to remember a new lesson down in your bones. When you get actual bones involved, the painful throbbing for the next few days provides just enough of a reminder that you'll never forget again.<br /><br />In any case, I make no claim to any credentials <em>whatsoever</em>, and this advice is all straight from my butt. Do what you like with it. The advice, that is.<br /><br />With my credentials out of the way, let's see about this guy's question.<br /><br />By the way, I'm specifically addressing this rant towards you <em>only</em> if you (or your team, or company) are building your own product or service offering, or at least defining it. If you're asking consultants to define the product for you, well, you're on your own. Good luck. And if you're a consultant, well, you don't get much choice about what to build, so you're also (mostly) on your own.
Although I'd advise you to try to find work that fits the strategy I outline here, as you'll have more fun with it and do a better job overall.<br /><br /><b>Gathering Requirements 101</b><br /><br />It's the first thing everyone wants to do! It's the first thing they teach you in Project Management Kindergarten: the <em>very first thing you should do</em> on a new project is look both ways before crossing the street to your building. Assuming you parked across the street. And the next thing is: start gathering business requirements!<br /><br />Ah, the good old days of Project Management 101. Life seemed so easy back then.<br /><br />The requirements-gathering process they teach you typically involves finding some "potential customers" for your product, and interviewing them in a nonscientific way to try to figure out what they want out of your proposed product. Or service. Or whatever. It doesn't really matter.<br /><br />Potential customers can be hard to come by, especially since you're building something that nobody will ever, EVER want. Well, that's getting ahead of myself. Let's just agree that when you're setting out to Gather Business Requirements, your potential customers usually aren't already sitting in the room with you.<br /><br />Sometimes it's a lot of effort to go find real customers and bring them in and get them to agree to be grilled for hours. Hence, requirements-gathering sometimes takes the form of "role play", where you get some poor saps in Accounts Receivable to pretend they're Potential Customers for your product, and you interview them instead.<br /><br />You might be laughing about those poor Role Playing schmucks, but in reality it doesn't much matter <em>who</em> you interview, because by the time you get to this phase in the project, you've already screwed it up beyond all hope of repair.<br /><br />That process of looking around for customers? It's a plot device that novelists like to call <em>foreshadowing</em>. 
If you don't have any readily available customers now, then they sure as hell won't be readily available when your product launches.<br /><br />Allow me to illustrate with a Case Study.<br /><br /><b>An Illustrative Case Study</b><br /><br />See? Credentials or no, I sure can talk the lingo, eh? I also like my incisive use of the popular business term "schmuck" in the previous section. It's a term that can apply to people in all phases of project catastrophes, not just Requirements Gathering.<br /><br />For our Case Study, I will not do any research and the product will be entirely fictional. This is the approach used by most real companies.<br /><br />Let's say that our Ideas Department has decided that we should get into Personal Nose Hair Grooming devices, because there's clearly a huge unmet need for nose hair grooming, as evidenced by Karl, our accountant, who has a thatch of nose hair that's <em>almost</em> long and thick enough to be a mustache, if only he would groom the thing a little. And we've seen lots of people like Karl. Clearly a nose-hair grooming kit would be the ideal addition to any man's personal grooming lineup, which typically consists of spitting into the sink and thinking he should get the mirror fixed someday.<br /><br />So we need to gather some Business Requirements. Unfortunately nobody wants to talk directly to Karl, because, well, nose hair can sometimes be a mildly sensitive issue in some cultures. Plus nobody wants to make eye contact with him. So instead we hire some people to go out and do focus-group studies on the subset of people who are comfortable talking publicly about their nose-hair grooming problem, notwithstanding the fact that everyone in this tiny category is obviously too crazy to trust with important business questions.<br /><br />The focus groups find a few nut-jobs who say: "Yeah, I'd <em>love</em> a nose-hair grooming appliance! 
Plucking your nose hair is painful, and trimming it involves jamming a small whirly Osterizer thing all the way up to your brain. Maybe if I just combed it into a mustache, nobody would notice!" And behold, we have the makings of some Business Requirements.<br /><br />The product is a complete failure, of course. The project postmortem reveals that the user studies and focus groups failed to realize that only certain men, namely "men with significant others", ever even <em>notice</em> their nose hair, even when said hair becomes long enough to interfere with tying their shoelaces. The significant other must forcibly remove the nose hairs with garden shears at least twice before the man gets the hint and starts attending to the problem himself.<br /><br />Not to mention the fact that a nose-hair mustache would be even more obvious and comical than a comb-over. Well, almost.<br /><br />In the end, it doesn't matter how great a Nose Hair Groomer we build, because we failed at the business-requirements gathering phase: nobody actually wants this product. The people who <em>might</em> want it don't know they have a nose-hair issue, and the ones who do know just trim it.<br /><br />The thing that might surprise you is that this fictitious (and yet eerily familiar) Case Study isn't just an illustration of how gathering business requirements can go awry. We're not talking about a <em>failure mode</em> for Gathering Business Requirements (GBR).
We're talking about something more fundamental: the GBR phase of a project is a leading indicator that the project will fail.<br /><br />Put another way: GBR is a virtual guarantee that a project is building the wrong thing, so no amount of brilliant execution can save it.<br /><br /><b>From Business Requirements to Bad Idea</b><br /><br />I learned a lot about the Fine Art of Building Shit that Nobody Wants back at Geoworks, when in 1993-1994 I was the Geoworks-side Engineering Project Lead for the <a href="http://www.thocp.net/hardware/hp_omnigo100.htm">HP OmniGo 100 and 110</a> palmtop organizer products. Both of them launched successfully, and nobody wanted either of them.<br /><br />People spend a lot of time looking at why startups fail, and why projects fail. Requirements gathering is a different beast, though: it's a <em>product</em> failure. It happens during the project lifecycle, usually pretty early on, and it's the first step towards product failure, even if the <em>project</em> is a complete success.<br /><br />Self-professed experts will tell you that requirements gathering is the most critical part of the project, because if you get it wrong, then all the rest of your work goes towards building the wrong thing. This is sooooort of true, in a skewed way, but it's not the complete picture.<br /><br />The problem with this view is that requirements gathering basically never works. How many times have you seen a focus group gather requirements from customers, then the product team builds the product, and you show it to your customers and they sing: "Joy! This is exactly what we wanted! You understood me perfectly! I'll buy 500 of them immediately!" And the sun shines and the grass greens and birds chirp and end-credit music plays.<br /><br />That <em>never</em> happens. 
What really happens is this: the focus group asks a bunch of questions; the customers have no frigging clue what they want, and they say contradictory things and change the subject all the time, and the focus group argues a lot about what the customers really meant. Then the product team says "we can't build this, not on our budget", and a negotiation process happens during which the product mutates in various unpleasant ways. Then, assuming the project doesn't fail, they show a demo to the original customers, who say: "This is utterly lame. Yuck!" Heck, even if you build exactly what the customer asked for, they'll say: "uh, yeah, I asked for that, but now that I see it, I clearly wanted something else."<br /><br />So the experts give you all sorts of ways to try to get at the Right Thing, the thing a real customer would actually buy, without actually building it. You do mockups and prototypes and all sorts of bluffery that the potential customer <em>can't actually use</em>, and they have to imagine whether they'd like it. It's easy enough to measure <em>usability</em> this way, but virtually impossible to measure <em>quality</em>, since there are all sorts of intangibles that can't be expressed in the prototype. I mean, you can try — we sure tried on the OmniGo products — but in this phase nobody "imagines" that the thing will weigh too much, or be too slow, or will go through batteries like machine-gun rounds, or that its boot time will be 2 minutes, or any number of other things that ultimately affect its sales.<br /><br />And even if you do a world-class job of prototyping and getting at the "real" desired feature set, it still goes through a series of iterations with the engineers and product team, who aren't the actual customers and have little interest in building what the customer <em>really</em> wants (because they're not being measured or evaluated on that — they're evaluated on delivering what everyone ultimately <em>agrees</em> to deliver). 
During each iteration they push back on things the customers asked for. As the proposal degrades, the customers like it less and less, but they want to be helpful, so eventually they say, "Yeah, I guess I could use that." (Which means: I wouldn't take one of these even if they were giving it away.)<br /><br />Don't get me wrong: I'm not saying that usability teams can't do a good job. It's just that when the project implementation team and the target customer aren't exactly the same group of people, then there are inevitably <em>negotiations</em> and <em>compromises</em> that water an idea down about two levels of quality: great becomes mediocre, and ideas that start as "pretty good" come out "just plain bad."<br /><br />So I'm tellin' ya: gathering requirements is the Wrong Way To Do It. At <em>best</em>, it results in mediocre offerings. To be wildly successful you need a completely different approach.<br /><br /><b>The investing analogy</b><br /><br />Warren Buffett and Peter Lynch, both famous and successful investors, say pretty much the same thing about investing. Peter Lynch's mantra sums it up: "invest in what you know."<br /><br />If you actually take the time to read Lynch's books (which I have), you'll see that this pithy mantra is a placeholder for something a bit more subtle: you should invest in what you know <em>and like</em>. You should invest in companies that make products or services that you are <em>personally</em> excited about buying or using <em>right now</em>.<br /><br />When you invest with this strategy, you're taking advantage of your local knowledge, which tends to be more accurate than complicated quantitative packets put together by analysts. And your local knowledge is <em>definitely</em> more accurate than the reports produced by the companies, who want to paint themselves in the nicest light.<br /><br />Warren took a lot of heat in the 1990s for not investing in the tech sector. 
But hey, he didn't feel comfortable with tech, so he didn't invest in it. One way to look at this is: "ha ha, what a dinosaur, he sure missed out, and now he's, uh, only the richest person in the world by a small margin." But another, more accurate way to look at it is this: <em>he's the richest person in the world, you asshole.</em> When he gives you investment advice, <b>take it!</b><br /><br />And Warren's advice is: don't invest in stuff you don't understand! <em>Even if it seems like a sure thing.</em><br /><br />That's the hard part. Sometimes it <em>looks</em> like a surefire winner for some large group of people that doesn't actually include you personally at this particular moment. But it's a really large group!<br /><br />Let's say, for instance, that you hear that Subway (the sandwich franchise) is going to do an IPO. They've been privately held all these years and now they're going public. Should you invest? Well, let's see... the decision now isn't quite as cut-and-dried as it was in their rapid-expansion phase, so, um, let me see, with current economic conditions, I expect their sales to, uh... let me see if I can find their earnings statement and maybe some analyst reports...<br /><br /><b>No!</b> No, No, NO!!! Bad investor! That's the kind of thinking that loses your money. The <em>only</em> question you should be asking yourself is: how many Subway sandwiches have I eaten in the past six months? If the number is nontrivial — say, at least six of them — and the rate of sandwiches per month that you eat is either constant or increasing, <em>then</em> you can think about looking at their financials. If you don't eat their sandwiches, then you'd better have a LOT of friends who eat them every day, or you're breaking the cardinal rule of Buffett and Lynch.<br /><br />The investing analogy is an important one, because if you're a company or team planning to build something, then you're planning an investment. 
It's not exactly the same as buying stock, but it amounts to the same thing: you're betting your time, resources and money — all of which boil down to money (or equivalently time, depending on which one is in shorter supply.)<br /><br />So when translated into project selection, Buffett's and Lynch's advice becomes: <em>only build what you know.</em> The longer, more accurate version of the investing rule — only invest in what you know <em>and</em> are excited about using yourself right now — has a simpler formulation for products and businesses. That formulation goes like this:<br /><br /><center>ONLY BUILD STUFF FOR YOURSELF</center><br />That's the Golden Rule of Building Stuff. If you're planning to build something for someone else, <em>let someone else build it.</em><br /><br /><b>Building stuff for yourself</b><br /><br />You can look at <em>any</em> phenomenally successful company, and it's pretty obvious that their success was founded on building something <em>they</em> personally wanted. The extent to which any company begins to deviate from this course is the extent to which their ship starts taking on water.<br /><br />And the key leading indicator that they're getting ready to head off course? You guessed it: it's when they start talking about gathering business requirements.<br /><br />Because, dude, face it: if it's something <em>you</em> want, then you already know what the requirements are. You don't need to "gather" them. You think about it all the time. You can list the requirements from memory. And usually it's pretty simple.<br /><br />For instance, a few years ago I announced to some friends: "I sure wish someone would make a product you can spray on dog poop to make it, you know, just dissolve away." My friends laughed loudly and informed me that this was (apparently) the premise of some Adam Sandler movie I hadn't seen.<br /><br />Well, OK, sure, but... I mean, they kinda missed the point. I still want the product. 
Its requirements list is pretty simple. Here's the business requirements list:<br /><br /> a) It should dissolve dog poop.<br /><br />Gosh. Can that really be the entire list? You bet! Sure, there are lots of implicit requirements: it shouldn't cost a fortune, it should be environmentally friendly, it shouldn't kill kittens, etc. But those kinds of requirements are true for <em>all</em> products and services.<br /><br />If I knew how to make this product, then Adam Sandler movie or no, I'd probably try to make it. The target market is larger than just pet owners; anyone living near a park would probably own a bottle or two. I would use the stuff like it's going out of style. I'd attach it to my shoes so that every time I took a step it would spray the area in front of me, like a walking garden hose.<br /><br />Building a product for yourself is intrinsically easier, since you don't have to gather requirements; you already know what you want. And you also know, for any given compromise anyone suggests, <em>whether it will ruin the product</em>. If someone says, "I have a product that dissolves dog poop, but it takes 18 hours", then you'll know you've entered into "prolly not worth it" territory. You don't have to go ask the focus group. You just know.<br /><br /><b>The Mistake of Imagination</b><br /><br />Despite its obvious advantages, following the rule of building stuff for yourself is actually really hard to carry out in practice. Why? Oddly enough, it's ego.<br /><br />For one thing, people like to think they're unique and special, and that their tastes aren't necessarily widely shared by others. This is what drives fashion: the need to differentiate yourself from "the crowd", by identifying with some smaller, cooler crowd.<br /><br />The reality is that for any given dimension of your personality, there are oodles of people just like you. If you want something, other people want it too. 
You define a market: a bunch of them, in fact.<br /><br />You just have to be smart about <em>which</em> of your needs you want to fulfill, since if it's building yachts, well, it's not exactly going to be mass-market, unless you find a way to build a mass-market yacht. Which would be pretty cool, incidentally. I'll buy one if you make it.<br /><br />It's also really easy to fool yourself into thinking that this is a product or service you <em>would</em> use, because, hey, you have a great imagination. When you lose your car keys, you can picture them in all sorts of places: the kitchen drawer, the coffee table, on top of the fridge, and when you picture them there, it's just as vivid as a memory. So you wind up looking for them everywhere! Your powerful imagination is pretty much your worst enemy when it comes to deciding whether you'd like something enough to use it yourself.<br /><br />We all made the Mistake of Imagination on the OmniGo projects. "I could see myself using this product," we'd tell ourselves, "if, that is, I were the kind of person who used this product, which I could sort of envision." You'll tell yourself almost anything to justify the work you're doing. Giving up in mid-project is a big loss of face for an individual, harder for whole teams, virtually impossible for most companies. The OmniGo had four companies involved, making it the hardest possible project to back out of, even though by the halfway point virtually everyone involved knew the product would kind of suck.<br /><br />What we <em>really</em> wanted, while we were building the OmniGo, was summed up by our brilliant product manager Jeff Peterson. We were having beers one day, and he said, "You know, this thing is just getting way too complicated! It doesn't need a 12C calculator emulator! It doesn't need a spreadsheet! It doesn't need a database application! I mean, come on! 
All it needs is a notepad, a simple calendar, an alarm clock, and maybe a pocket calculator, and it should fit in your front shirt pocket, and it should be a <em>phone</em>."<br /><br />It was 1994 and he was describing the iPhone. And you know what? He was right! That <em>was</em> what we wanted. But HP was driving the spec, and they weren't building it for themselves. They were building the product <em>specifically</em> for this imaginary group of high-power on-the-go consumer accountants who use 12C calculators and want a whole desktop suite crammed onto their 1994-era mobile device. And that's just who bought the thing: pretty much nobody. (They sold a few thousand units, which in mass-market terms is "pretty much nobody.")<br /><br /><b>Trimming the Requirements</b><br /><br />Who was it who said that you're done writing not when there's nothing left to add, but when there's nothing left to take away? Was it Saint-Exupéry? I promised not to do any research for this rant, but I think it might have been him.<br /><br />Ideally the product you're building for yourself should be <em>simple to describe</em>, so that other people can quickly evaluate whether they, too, want this thing. It's often called the "elevator pitch", because you should be able to describe the product in the time between when the cable snaps and the elevator hits the ground. "Dissolves dog poooooop!!! &lt;crash&gt;" It used to just be the time for an elevator ride, but those investors keep raising the bar.<br /><br />You can almost always make a product better by trimming the requirements list. 
We're talking brutal triage: throwing out stuff that's really painful to lose, such as the ability to change the battery.<br /><br />If you're lucky, you should be building a product that so excites everyone involved that <em>everyone</em> has an opinion, and you wind up spending most of your time in triage.<br /><br />When you're <em>trimming</em> the business requirements, then you're exhibiting healthy project behavior. This contrasts directly with <em>gathering</em> requirements, which has both the connotation that you're clueless about the product <em>and</em> the connotation that you're inflating the requirements list in direct conflict with schedule, usability and fashion. Trimming: good. Gathering: bad.<br /><br />Trimming the requirements list is a leading indicator that you're a smart company who's about to launch something major. An ideal requirements list looks something like this:<br /><br /> a) It should dissolve dog poop.<br /><br />As a great real-life example, consider the Flip camcorder, which kinda came out of nowhere and "stole" 13% of the camcorder market (although I'd bet good money that it actually created new market share). Does it dissolve dog poop? Well, no, but it's still pretty cool:<br /><br /> 1) it costs $150 or less. (A lot less, actually.)<br /> 2) it has no cables or wires. Just one flip-up USB connector.<br /> 3) it has one big red button: RECORD, plus a teeny one for playback.<br /> 4) it doesn't take cartridges or cassettes or discs or cards or anything.<br /> 5) it doesn't have any controls or settings or <em>anything</em>.<br /> 6) it stores one hour of video and has roughly one hour of battery life.<br /> 7) it's about the size of a cell phone.<br /> 8) it records videos that work well with YouTube.<br /> 9) it comes in pretty colors.<br /><br />I mean, DAMN, those guys knew what they were doing. We always used to joke about a product so simple that it only had one button, which we pressed for you before it left the factory. 
That's how simple a product needs to be in order to make the mass market. One button, pretty colors. They <em>nailed</em> it. Talk about a missed startup opportunity. (Flip guys: if your equity plan is still reasonable and you want someone to make your desktop software not suck, and yes, it really sucks, please give me a call.)<br /><br />You don't need an <em>original</em> idea to be successful. You really don't. You just need to make something that people want. Even if someone else appears to be making something popular, it's usually possible to improve on the idea and grab market share. And it's painfully counterintuitive at times, but the best improvements often come from simplifying.<br /><br />The easiest way to build a product that kicks ass is to start with someone else's great idea (camcorders, for instance), and <em>take stuff away</em>.<br /><br />In any event, originality is overrated. Coming up with something completely original isn't just hard to do: it's also hard to <em>sell</em>, because investors (and possibly customers) will need to be educated on what this new thing is and why people would want it. And when it comes to buying stuff, nobody likes to be educated. If the product isn't immediately obvious, investors and customers will pass it up.<br /><br />It's easy to come up with new product ideas if you start with the understanding that everything sucks. There are no completely solved problems. Just because someone appears to be dominating a market with an "ideal" offering doesn't mean you can't take market share from them by building a better one. <em>Everything</em> can stand improvement. Just think about what you'd change if you were doing it for yourself, and everything should start falling into place.<br /><br />If nothing else, building things for yourself is more fun, so you're successful regardless of what happens. 
But it also has great product-survival characteristics, because people can't bluff you into making something lame.<br /><br /><b>Sometimes you just can't win</b><br /><br />By way of a don't-sue-me disclaimer, I should point out that building something for yourself doesn't <em>guarantee</em> success. Even if you build a product that most of your target market really really wants, and you hit the right price point and release date and everything, your product can still fail catastrophically.<br /><br />The example that leaps to mind is that company in the 1990s (can anyone remind me of the name? I've forgotten) that built a mountain-bike seat extender. I ride mountain bikes, yes, on actual mountains, so this product made <em>immediate</em> sense to me. I <em>really</em> wanted one. Sounds like a winner, right?<br /><br />The basic physics problem this company was solving is that a lower seat position gives you better balance, and a higher seat position gives you more power. It's a trade-off. You generally want more power when you're grinding uphill, and you want more balance when you're speeding downhill. But during a race you would have to give up precious seconds to adjust your seat between every uphill and downhill transition; you'd get your butt kicked. Even as a casual rider, adjusting your seat height all the time is annoying enough that most people just don't do it, resulting in some sacrifice of balance, power, or both.<br /><br />So this brilliant, innovative company came out with a well-made product that lets you adjust your seat height "in flight", as it were, without slowing down, and without adding much (if any) weight to the bike. I don't remember exactly how it worked, but it was a reasonable implementation.<br /><br />Interestingly, <em>it didn't matter how it worked</em>. 
It could have been actual magic: a product that read your mind and raised or lowered your seat by exactly the right amount, at exactly the right speed (you don't want it to rabbit-punch you in the nuts, for example — remind me sometime to tell you about how I found that out), as frequently or infrequently as necessary to strike the <em>perfect</em> trade-off of balance and power for any slope, at any time.<br /><br />And it <em>still</em> would have failed, even if 90% of the mountain biker population, like me, secretly coveted the product. It could have cost fifty cents, been available everywhere, and been installable by a four-year-old with one hand in under two seconds. It could have come pre-installed on all bike models, via a brilliant channel deal with all the main manufacturers (or retailers), and bikers would have ripped the thing off the bike faster than their kickstand (which is usually the first thing to go.)<br /><br />So, uh, what happened, exactly? Wasn't I just ranting that building a product <em>for yourself</em>, one that you <em>know</em> is the Right Thing for some well-defined audience consisting of people just like you in some dimension — wasn't I ranting that this would always work?<br /><br />Well, no. It's just the best way to pick projects. But they can still fail for surprising reasons.<br /><br />In this case, the fundamental marketing force that the company failed to take into account was <em>fashion</em>. People don't often think of mountain biking (or programming, for that matter) as a fashion industry, but failing to understand the role of fashion is a recipe for random disasters.<br /><br />What happened was this: while they were hyping the product — by demoing it at trade shows, letting bike magazines check it out, and generally working their way towards getting retailer shelf space — the pro bikers took one look at the thing and said: "Hey, looks pretty cool. Maybe I'll get one for my girlfriend. Or my fugging <em>grandma</em>. 
How much is it?"<br /><br />At which point, everyone (me included) who had been ranting about standing in line to buy one when they came out, we, ah, we were very quick to point out that we were also excited to buy them <em>for our grandmothers</em>, whom we loved just as much as the pros love their grandmothers, thank you very much. In fact, our grandmothers are too macho to use this thing. Maybe we'll put one on our kid's stroller! So there!<br /><br />And that's how a great product, one that probably would have changed biking nearly as fundamentally as the derailleur, was doomed from birth because the trendsetters of Mountain Biking Fashion pronounced it a Product for Sissies, and that was that.<br /><br />Heck, even derailleurs are <a href="http://en.wikipedia.org/wiki/Fixed-gear_bicycle">falling out of fashion</a> in some circles. Go figure. Someone took the time-tested "solved problem" of bicycles, removed something, and wound up creating a new market.<br /><br />Fashion is generally hard to predict, but it usually means "sacrificing comfort or convenience for the sake of style". Take another look at the iPod: it has almost no features. It doesn't have an "off" button. Heck, you can't even change the battery. Not exactly convenient in many ways. But it sure has style!<br /><br />Fashion's not the only way your otherwise perfect product can fail, but it's definitely one to keep an eye on.<br /><br /><b>Less is more... more or less</b><br /><br />OK, fine, I haven't exactly been following my own advice about minimalism. But I <em>have</em> been writing my blog the way <em>I</em> like it, haven't I? So even if nobody reads it, I'm still having fun. If you're going to create something that has a nonzero chance of failure — and believe you me, it's nonzero — then you might as well have fun doing it, right?<br /><br />Anyway, there you have it: the slightly expanded version of the email I sent that CEO guy. Gathering business requirements is hokum. Hooey. 
Horseshit. Call it what you want, but it's a sign of organizational (or individual) cluelessness. If you don't already know <em>exactly</em> what to build, then you're in the wrong business. At the very least, you should hire someone who does know. Don't gather business requirements: hire domain experts.<br /><br />If you can't think of <em>anything</em> in your company's "space" that you personally would use, then you should think seriously about (a) changing your company's direction, or (b) finding another company. This is true no matter what level you're at. You should be working on something you love, or failing that, at least working on something that you know really well.<br /><br />"But... but..." I hear you saying. I hear you! You sort of get what I'm saying, but you have all these reservations and objections and questions and stuff.<br /><br />Well, that's what the comments section is for. I'm sure you can think of some <em>other</em> explanation for why Warren Buffett is the richest person in the world. Let's hear it!Steve Yeggehttp://www.blogger.com/profile/14812997485690838920noreply@blogger.com94tag:blogger.com,1999:blog-13674163.post-64035344811552830362008-06-16T18:44:00.000-07:002008-11-18T03:30:16.299-08:00Done, and Gets Things Smart<em>Disclaimer: I do not speak for Google! These are my own views and opinions, and are not endorsed in any way by my employer, nor anyone else, for that matter.</em><br /><br />Everyone knows and quotes Joel's old chestnut, "<b>Smart</b>, and <b>Gets Things Done</b>." It was a <a href="http://www.joelonsoftware.com/articles/fog0000000073.html">blog</a>, then a <a type="amzn" asin="1590598385">book</a>, and now it's an aphorism.<br /><br />People quote Joel's Proverb all the time because it gives us all such a nice snuggly feeling. Why? Because to be in this industry at all, even a bottom-feeder, you <em>have</em> to be smart. You were probably the top kid in your elementary school class. People probably picked on you. 
You were the geek back when geeks weren't popular. But now "smart" is fashionable.<br /><br />That's right! All those loser kneebiter jocks in high school who played varsity and got all the girls and sported their big, winning, shark-white smiles as they barely jee-parlor-fran-saysed their way through the classes you coasted through: where are they now? A bunch of sorry-ass bank managers and failed insurance salesmen and suit-wearing stiffs at big law firms working for billion-dollar conglomerates where their daddies got them VP jobs where they just have to show up and putt against the other VPs on little astroturf greens in the hallways!<br /><br />That's right, los... er, well, some of them are doing OK, I guess. "But they're not as rich as Bill Gates!" That's the other big tautology-cum-aphorism we geeks like to pull out when we're feeling insecure. Notwithstanding that Bill had a rich daddy and made his money through exactly the kind of corporate shenanigans that Big Meat is using on us today, with those selfsame jocks. We like to think the more important point is that Bill, Jobs, Bezos and the other tech billionaires (all fierce, business-savvy people) were geeks, and are thus evidence of the revenge of the nerds. And hey, tech did make a lot of money in the 80s and 90s. So "smart" is sort of in fashion now.<br /><br />Unfortunately, "smart" is a generic enough concept that pretty much everyone in the world thinks they're smart. This is due to the Nobel-prize-winning <a href="http://en.wikipedia.org/wiki/Dunning-Kruger_effect">Dunning-Kruger Effect</a>, which says, in effect, that people don't know when they're not smart. <font color="purple"><em>(Note: people have pointed out that it was an Ig-Nobel. Me == Dumbass. Fortunately, "me == dumbass" was more or less the point of the article, so I'll let it stand.)</em></font><br /><br />So looking for <b>Smart</b> is a bit problematic, since we aren't smart enough to distinguish it from B.S. 
The best we can do is find people who we <em>think</em> are smart because they're a bit like us.<br /><br />Er, what about <b>Gets Stuff Done</b>, then? Well, the other qualification you need to be in this industry is that you have to have produced <em>something</em>. Anything at all. As soon as you write your first working program, you've gotten something done. And at that point you probably think of yourself as being a pretty hot market commodity, again because of the Dunning-Kruger Effect.<br /><br />So, like, what kind of people is this <b>Smart, and Gets Things Done</b> adage actually hiring?<br /><br />I've said it before: I thought I was a top-5% (or so) programmer for the first fifteen years I was coding. (1987-2002). Every time I learned something new, I thought "Gosh, I was really dumb for not knowing that, but <em>now</em> I'm definitely a superstar." It took me fifteen frigging years before I realized that there might in fact still be "one or two things" left to learn, at which point I started looking outward and finally discovered how absolutely bambi-esquely thumperly incompetently clueless I really am. Fifteen years!<br /><br />But hell, let's be honest here: I still think I'm smart. We all do. Sure, I've managed to figure out, at least partly, just how un-smart I am, but I've got the whole "I'm smart" thing so hardwired from when I was a kid surrounded by "dolts" who couldn't memorize things as fast as I could, or predict the teacher's punch line way in advance, or whatever questionable heuristic I was using for measuring my own smartness, that it's hard not to think that way. And of course the "dolts" were way better than me at just about everything else. And they think they're pretty smart too! Definitely above average, anyway.<br /><br /><b>Squaaaaawk</b><br /><br />So we all think we're smart for different reasons. Mine was memorization. Smart, eh? In reality I was just a giant, uncomprehending parrot. 
I got my first big nasty surprise when I was in the Navy Nuclear Power School program in Orlando, Florida, and I was setting historical records for the highest scores on their exams. The courses and exams had been carefully designed over some 30 years to maximize and then test "literal retention" of the material. They gave you all the material in outline form, and made you write it in your notebook, and your test answers were graded on edit-distance from the original notes. (I'm not making this up or exaggerating in the slightest.) They had set up the ultimate parrot game, and I happily accepted. I memorized the entire notebooks word-for-word, and aced their tests.<br /><br />They treated me like some sort of movie star — that is, until the Radar final lab exam in electronics school, in which we had to troubleshoot an actual working (well, technically, not-working) radar system. I failed spectacularly: I'd arguably set another historical record, because I had <em>no idea</em> what to do. I just stood there hemming and hawing and pooing myself for three hours. I hadn't understood a single thing I'd memorized. Hey man, I was just playing their game! But I lost. I mean, I still made it through just fine, but I lost the celebrity privileges in a big way.<br /><br />Having a good memory is a serious impediment to understanding. It lets you cheat your way through life. I've never learned to read sheet music to anywhere near the level I can play (for both guitar and piano.) I have large-ish repertoires and, at least for guitar, good technique from lots of lessons, but since I could memorize the sheet music in one sitting, I never learned how to read it faster than about a measure a minute. (It's not a photographic memory - I have to work a little to commit it to memory. But it was a lot less work than learning to read the music.) 
And as a result, my repertoire is only a thousandth of what it could be if I knew how to read.<br /><br />My memory (and, you know, overall laziness) has made me musically illiterate.<br /><br />But when you combine the Dunning-Kruger effect (which affects me just as much as it does you) with having one or two things I've been good at in the past, it's all too easy to fall into the trap of thinking of myself as "smart", even if I know better now. All you have to do, to be "smart", is have a little competency at something, anything at all, just enough to be dangerous, and then the Dunning-Kruger Effect makes you think you're God's gift to that field, discipline, or what have you.<br /><br />This is why everyone loves <b>Smart, and Gets Things Done</b>, which Joel always writes in boldface. We love it because right from the start it has the yummy baked-in assumption that <em>you</em> are smart, and that <em>you</em> get things done. And it also tacitly assumes that you know how to identify other people with the same qualities!<br /><br />But... you don't.<br /><br />It sucks, but the Dunning-Kruger Effect is frighteningly universal. It's a devil's pitchfork with two horrible, boldfaced <b>prongs</b>. First:<br /><br /><b>Incompetent people grossly overestimate their own competence</b><br /><br />We already talked about that, right? You're a good programmer! Heck, you're a great programmer! You're "smart", so anything you don't know you can go look up if you need it! Right?<br /><br />Welcome to incompetence.<br /><br />The second prong is a bit longer, and has a barbed poison tip:<br /><br /><b>Incompetent people fail to recognize genuine skill in others</b><br /><br />Both prongs are intrinsically funny when you're watching them in action in <em>someone else</em>, and they're incomprehensible when anyone tries to poke you with them. 
Not necessarily offensive, mind you: you <em>might</em> get offended if someone tries to imply that you're not as competent as you feel you are, but it's more likely (per the D-K effect) that you'll simply not believe them. Not even close. You're smart! They're just wrong! Gosh!<br /><br />The second prong, the failure to recognize true competence, has major ramifications when we conduct interviews. That's what Joel was writing about in <b>Smart and Gets Things Done</b>, you know: conducting technical interviews.<br /><br />How do you hire someone who's smarter than you? How do you <em>tell</em> if someone's smarter than you?<br /><br />This is a problem I've thought about over nearly twenty years of interviewing, and it appears that the answer is: you can't. You just have to get lucky.<br /><br />It's easy for a candidate to <em>sound</em> smart. Anyone can use big jargon-y words and talk the talk like nobody's business, but I'm living, breathing proof that articulacy doesn't imply any other form of intelligence. Heck, the Markov-chain synopses of my blogs that people post in quasi-jest tend to look like I wrote them.<br /><br />All too often I find myself on interview loops where the candidate knows a seemingly astounding amount about coding, computer science, software engineering, and general industry topics, but we find out at the last minute that they can't code Hello, World in any language.<br /><br />This is, of course, one of the failure-patterns that Joel's <b>Gets Things Done</b> clause is designed to catch. But interviews are conducted under pretty artificial conditions, and as a result they wind up being most effective at hiring people who are good at interviewing. This is a special breed of parrot, in a way. Interviewing isn't a particularly good predictor of performance, any more than your rank in a coding competition is a predictor of real-world performance. 
In fact, somewhat depressingly, there's almost no correlation whatsoever.<br /><br /><b>If interviews suck, then why do we still do them this way?</b><br /><br />Lots of reasons. One is just history: everyone else does it that way. Companies tend to hire pretty similar HR departments, and HR tends to guide companies towards doing it the way everyone else does it. Same goes for technical management, which is all too often HR-driven as the company grows.<br /><br />Another is that interviewing is already a fairly time-intensive function for the interviewers, and it tends to be miserable for the interviewees. Trying to make the process somehow more rigorous or accurate just exacerbates these side effects.<br /><br />Another is the "rite of passage" phenomenon, wherein engineers feel that if they had to go through the gauntlet, then everyone else should, too.<br /><br />So for the most part, everyone does the same non-working variety of interviews, and hopes for the best.<br /><br />As far as identifying good people goes, the best solution I've ever seen was at Geoworks, where you were required to do a six-month internship before you could get hired full-time. This seems to be the norm in non-tech departments at most tech companies. They often substitute "contractor" for "intern", but it works out roughly the same. Geoworks is the only company I've seen stick to their guns and make it mandatory for engineers.<br /><br />However, I'm convinced that it only worked because Geoworks seeded the engineering staff with great people. The founding engineers set up a truly beautiful software engineering environment, with lots of focus on tools, mentoring, continuing education, "anarchy projects" to let off steam and encourage innovation, and a host of other goodnesses.<br /><br />I've been a contractor at companies that had no good engineers at all, literally none whatsoever. 
A mandatory six-month internship at such companies would only serve to <em>lower</em> their average bar, since anybody competent would leave after the six months was up. This doesn't contradict the D-K Effect. It's easy to spot lackluster, soulless engineering organizations, and doing so doesn't imply that you're especially smart.<br /><br /><b>The "Extended Interview"</b><br /><br />Anyway, I've often wondered where the Geoworks founders found such great engineers. The short answer is: "Berkeley", but I'm really looking for something deeper than that; namely: how did they <em>know</em>?<br /><br />Along similar lines, I've long felt that Amazon's success was due in no small part to Bezos having seeded his tech staff with great engineers. World-class great. I don't know where or how he found them, since, again, how do you hire someone who's smarter than you? He's a brilliant guy, but his original choices (ex-<a href="http://en.wikipedia.org/wiki/Lucid_Inc.">Lucid</a> folks, by and large) seem like a stroke of blind luck that's hard to attribute to mere genius. I'll probably never know how it happened. Wish I did!<br /><br />They weren't too big on software engineering, though, or more likely they all felt that time-to-market trumped everything else (and were correct, at least in their case), so Amazon is successful but lacks a high-quality software engineering culture. It's gotten much better over the years, of course, but it's a far cry from Geoworks. It's largely up to individual teams at Amazon to decide how much engineering to sprinkle into their coding. There's no central group (or distributed peer group) who can tell any team how to build their software. This is effectively mandated from the top, in fact.<br /><br />But time-to-market is a pretty powerful business force. Maybe that's the missing link? Lucid was founded by Dick Gabriel, the "worse is better" guy, and Amazon took the "worse is better" idea (internally) to untold new extremes.<br /><br />Dunno! 
But it sure worked for them.<br /><br />Google has a process similar to the Geoworks 6-month internship idea. Geoworks's internship was a form of "extended interview", since it was obvious even in the 1980s that the classic interview format isn't a very good predictor of performance. At Google, you don't have to do an internship. However, unlike at most other companies, you're not "slotted" into a real position on the tech ladder until you've been on the job at least six months. During that time you need to prove that you can function at the level you were hired for, and if it's wrong in either direction your level is adjusted at slotting time.<br /><br />The "extended interview" (in any form) is the <em>only</em> solution I've ever seen to the horrible dilemma, <b>How do you hire someone smarter than you?</b> Or even the simpler problem, How do you identify someone who's <em>actually</em> <b>Smart, and Gets Things Done</b>? Interviews alone just don't cut it.<br /><br />Let me say it more directly, for those of you who haven't taken this personally yet: you can't do what Joel is asking you to do. You're not qualified. The <b>Smart and Gets Things Done</b> approach to interviewing will only get you copies of yourself, and the work of Dunning and Kruger implies that if you hire someone better than you are, then it's entirely accidental.<br /><br />Don't get me wrong: you should still try! Don't throw the bath and baby away. 
<b>Smart and Gets Things Done</b> is a good weeder function to filter out some of the common "epic fail" types.<br /><br />But realize that this approach has a cost: it will also filter out some people who are just as good as you, if not better, or even <em>way</em> better, along dimensions that are entirely invisible to you due to Dunning-Kruger forces mostly beyond your control.<br /><br />So there's this related interviewing/hiring heuristic that I think may better approximate the kinds of people you really want to hire: <b>Done, and Gets Things Smart</b>.<br /><br />I'll take Joel's cue and write it in <b>bold</b> so you know it's important. It's not <b>condescending</b> or anything. Really. Let's all recite it together, to make it catchy and stuff. Maybe we should have a little ditty for it, so it sticks in our heads annoyingly, forever. I think the Happy Birthday song will do nicely:<br /><br /><b>Hap-py Birth-day To Yooooooou,</b><br /><b>Done-and Geh-ets Things Smaaaart</b><br /><b>Hmmm hmm HMMMMM Hmmmm Hmmmm whoeeeeeever</b><br /><b>Done and geh-eeeeeets thiiiiiings Smaaaaaaaart</b> *clap* *clap*<br /><br />There! You'll remember it every time anyone has a birthday.<br /><br /><b>Done, and Gets Things Smart</b><br /><br />All gentle faux-condescension aside (or as the classroom jocks would have read, "fox condesomething"), Joel's <b>Smart, and Gets Things Done</b> heuristic seems really obvious to everyone. It has this magical "we're all smart in this together" appeal. But sadly, for the reasons I've outlined, almost everyone interprets it to mean "<b>Carbon Copy, of Myself</b>". Great guidance gone astray!<br /><br />The <b>Done, and Gets Things Smart</b> approach is also a way of finding great people, but it recognizes that the Dunning-Kruger Effect requires some countermeasures. It's modeled on the early successes I've witnessed at Geoworks, Amazon, and Google, all of which had one thing in common: they hired <b>brilliant seed engineers</b>. 
This boldface is really addictive when you get started on it!<br /><br />They all managed to continue hiring great people afterwards, but the seed engineers were the most important. I'm hoping that this is intuitively obvious in much the same way that wanting smart people who get things done is intuitively obvious.<br /><br />I think you really, <em>really</em> want great seed engineers, and that this is a different class of engineer from the "pretty darn good" engineer we typically hire using Joel's oft-misinterpreted advice.<br /><br />Seed engineers. It's key. You can apply the "I want ideal seed engineers" rule recursively to an organization, a department, a project, a team, and even your officemates. We're not just talking about startups.<br /><br />Let me ask you a brutally honest question: since you began interviewing, how many of the engineers you've voted thumbs-up on (i.e. "hire!") are engineers you'd personally hire to work with you in your first startup company? Let's say this is a hypothetical company you're going to found someday when you have just a little more financial freedom and a great idea.<br /><br />I posit that most of you, whether you're willing to admit it or not, have a lower bar for your current company than you would for your own personal startup company.<br /><br />The people you'd want to be in your startup are <em>not</em> of the <b>Smart and Gets Things Done</b> variety.<br /><br />For your startup (or, applying the recursion, for your new project at your current company), you don't want someone who's "smart". You're not looking for "eager to learn", "picks things up quickly", "proven track record of ramping up fast".<br /><br />No! Screw that. You want someone who's superhumanly godlike. Someone who can teach <em>you</em> a bunch of stuff. 
Someone you admire and wish you could emulate, not someone who you think will admire and emulate you.<br /><br />You want someone who, when you give them a project to research, will come in on Monday and say: "I'm <b>Done</b>, and by the way I improved the existing infrastructure while I was at it."<br /><br />Someone who always seems to be finishing stuff so fast it makes your head spin. That's what my <b>Done</b> clause means. It means they're frigging done all the time.<br /><br />I met my first <b>Done, and Gets Things Smart</b> engineers back at Geoworks. This was looong before I had any sort of a clue that I suck as an engineer, but these folks (Andrew Wilson and Chris Thomas, if you really must know) were <em>weird</em>. They never seemed to be working that hard, but they were not only 10x as productive as their peers, they also managed technical feats that were quite frankly too scary for anyone else. They could (as just one trait) dive in and learn new languages and make fixes to tools that the rest of us assumed were, I dunno, stuff you'd normally pay a vendor to fix. They dove into the hairiest depths of every code base they encountered and didn't just add features and make fixes; they waved some sort of magic wand and improved the system while they were in there: they would <b>Get Things Smart</b>. Make the systems smarter, that is. Sort of like getting your act together, but <em>they</em>'d do it for <em>your</em> code.<br /><br />I've met many more such engineers along the way. They're out there. They're better than you. They were better twenty years ago than I am today or ever will be. Maybe it's natural ability. Maybe it's luck in education or upbringing. Maybe they have a secret recipe for improving rapidly and learning utter fearlessness. I don't know. But I've met 'em, and they aren't "smart". They're abso-flutely fugging brilliant.<br /><br />You can't interview these people. 
For starters, they're not interested; these are the people that companies hold on to as long as humanly or companyly possible. The kinds of people that companies file lawsuits over when they're recruited away.<br /><br />You can only find <b>Done, and Gets Things Smart</b> people in two ways, and one of them I still don't understand very well.<br /><br />The first way is to get real lucky and have one as a coworker or classmate. You work with them for a few years and come to realize they're just cut from a finer cloth than you and your other unwashed cohorts. You may be friends with some of them, which helps with the recruiting a little, but not necessarily. The important thing is that you <em>recognize</em> them, even if you don't know what makes them tick.<br /><br />This is the one great hope we programmers have for fighting the Dunning-Kruger Effect, the one hope we have for getting something better than the average "just like me" Solid Plugger Joe Nobody you pick up with the <b>Smart, and Gets Things Done</b> approach. Our only "out" is that working side-by-side with someone will show us clearly when they vastly outclass us.<br /><br />Your devious little mind will come up with all sorts of rationalizations for why they're so damn good, so you can continue to think of yourself as <b>Smart, and Gets Things Done</b> material. You may conclude that they're just a genetic anomaly, and it's no fair even trying to compare yourself to someone who obviously has an unfair gift from the heavens. Or you may tell yourself that they're just a domain expert in various domains that you don't "need" right now. Or you may simply choose not to think about it too much. Good Old Compartmentalization to the rescue!<br /><br />But working with them directly <em>will</em> show you when they're better. It's the only way. 
You'll gradually realize that your math deficiencies aren't just something that you might need to beef up on if you ever "need to"; you'll see that virtually every problem space has a mathematical modeling component that you were blissfully unaware of until <b>Done, and Gets Things Smart</b> gal points it out to you and says, "There's an infinitely smarter approach, which by the way I implemented over the weekend." You stare slack-jawed for a little while, and then she says: "Here's a ball. Why don't you go bounce it?"<br /><br />These people aren't just pure gold; they're golden-egg-laying geese. They are the ones you want to bring with you to your own startup. Not the <b>Smart, and Gets Things Done, Just As Soon As I Read Up On The Subject, On The Company's Dime</b> riff-raff like you and me. No. They're your seed engineers: the ones who will make or break your company with both their initial technical output and the engineering-culture decisions they put into place — decisions that will largely determine how the company works for the next twenty years.<br /><br />I've been debating whether to say this, since it'll smack vaguely of obsequiousness, but I've realized that one of the Google seed engineers (exactly one) is almost singlehandedly responsible for the amazing quality of Google's engineering culture. And I mean both in the sense of having established it, and also in the sense of keeping the wheel spinning. I won't name the person, and for the record he almost certainly loathes me, for reasons that are my own damn fault. But I'd hire him in a heartbeat: more evidence, I think, that the <b>Done, and Gets Things Smart</b> folks aren't necessarily your friends. 
They're just people you're lucky enough to have worked with.<br /><br />At first it's entirely non-obvious who's responsible for Google's culture of engineering discipline: the design docs, audited code reviews, early design reviews, readability reviews, resisting introduction of new languages, unit testing and code coverage, profiling and performance testing, etc. You know. The whole gamut of processes and tools that quality engineering organizations use to ensure that code is open, readable, documented, and generally non-shoddy work.<br /><br />But if you keep an eye on the emails that go out to Google's engineering staff, over time a pattern emerges: there's one superheroic dude who's keeping us all in line.<br /><br />How do you interview for such a person? You can't! Everyone will tell you they're God's Gift to engineering quality. Everyone knows how to give it impressive lip service. Heck, there are lots of people who take it way too far and try to gridlock the organization in their over-enthusiasm, when what you really want is a balanced and pragmatic approach. I'd argue that it's virtually impossible to detect these "soft skills" in a classic interview setting, except to the extent that you're hiring your own clone, which, according to our thesis here, is NOT what you want. You want <b>Done, and Gets Things Smart</b>: done faster than you, and made <em>your</em> system smarter!<br /><br />I'm guessing that Google's founders worked with this person in school, enough to recognize his valuable talents. Hence they used Identification Approach #1: get lucky in who you work with.<br /><br />Incidentally, they hired plenty of other brilliant seed engineers who were equally responsible for Google's great technical infrastructure. I'm just using this one guy as an illustrative example. But you really want the <b>Done, and Gets Things Smart</b> people on every team. 
If you could mix in one <b>Done, and Gets Things Smart</b> person with every five to ten <b>Smart, and Gets Things Done</b> people, then you'd be in good shape, since the latter, being "smart", can hopefully learn a lot from the former.<br /><br />But when you're starting a company, or an organization, or a big project, the need for <b>Done, and Gets Things Smart</b> seed engineers is <em>desperate</em>. It's dire, in the sense that if you don't get the right seed people in place, you're dooming your organization to mediocrity, if you manage to succeed at all.<br /><br />And it's direst when you're in a startup, because you can't pillage people from elsewhere in your organization who you know are good. And because <b>Done, and Gets Things Smart</b> people are worth their weight in refined plutonium, they're probably reasonably happy in their current position.<br /><br />So let's assume you're looking at the vast ocean of programmers, all of whom are self-professed superstars who've gotten lots of "stuff" done, and you want to identify not the superstars, but the super-<em>heroes</em>.<br /><br />How do you do it? Well, Brian Dougherty of Geoworks did it somehow. Jeff Bezos did it somehow. Larry and Sergey did it somehow. I'm willing to bet good money that <em>every</em> successful tech company out there had some freakishly good seed engineers. But a lot of company heads who make these decisions aren't necessarily industry programmers, and they still manage to find some world-class people in all the noise. So there must be a second way to identify <b>Done, and Gets Things Smart</b>.<br /><br />I think Identification Approach #2, and this is the one I don't understand very well, is that you "ask around". You know. 
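If you like, you can sketch the "ask around" search as a dinky graph traversal. Everything in this sketch is made up (the names, the referral table, the <code>ask_around</code> function); it's just to show the shape of the thing: ask your roots, follow the referrals until no new names turn up, and see who keeps getting named:

```python
# Hypothetical sketch of the "ask around" referral search. None of this is a
# real API: the names and the referral table are invented, and best_engineer()
# stands in for actually asking a person to name the best engineers they know.

REFERRALS = {
    "you":   ["alice", "bob"],
    "alice": ["carol"],
    "bob":   ["carol", "dave"],
    "carol": ["carol"],   # the names everyone converges on...
    "dave":  ["carol"],   # ...are your seed-engineer candidates
}

def best_engineer(person):
    # Stand-in for the question "who's the best engineer you know?"
    return REFERRALS.get(person, [])

def ask_around(roots):
    """Breadth-first traversal of the referral graph until it converges."""
    seen = set(roots)
    frontier = list(roots)
    while frontier:
        nxt = []
        for person in frontier:
            for name in best_engineer(person):
                if name not in seen:
                    seen.add(name)
                    nxt.append(name)
        frontier = nxt
    # Tally who keeps getting named; the big vote-getter is your candidate.
    votes = {}
    for person in seen:
        for name in best_engineer(person):
            votes[name] = votes.get(name, 0) + 1
    return max(votes, key=votes.get)

print(ask_around(["you"]))  # prints 'carol'
```

The more roots you start the traversal with, the less likely you are to get trapped in one lonely social circle; with a single root you mostly just learn who that one person hangs out with.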
You manually perform the graph build-and-traversal done by the Facebook "Smartest Friend" plug-in, where you ask everyone to name the best engineer they know, and continue doing that until it converges.<br /><br />I think this might work, assuming you have lots of connections initially (lots of roots for your graph), so you don't get stuck in some slummy local minimum.<br /><br />I've seen companies go to university professors and ask them who their brightest students are; that's a variant of this approach, although it usually only turns up <em>future</em> <b>Done, and Gets Things Smart</b> engineers. Every once in a very rare while you'll get a recent college grad in this category, but I think more often they tend to be experienced enough to make Gandalf feel young again.<br /><br />Because technical brilliance, seemingly superhuman productivity, and near-militaristic adherence to software discipline aren't enough. They also need leadership skills. They don't have to be <em>great</em> leaders; in fact in a pinch, just being bossy might work for a while, as long as they're bossing people in the right directions. But they need to have the ability to guide the organization (or new team, or whatever) in uniformly excellent directions, which requires <em>some</em> leadership, even if it's bad or amateurish leadership.<br /><br />As much as I suspect Approach #2 may work, I think Approach #1 is probably more reliable. Take a closer look at your coworkers who are doing things that you "could learn if you ever need it". Read up on the old Dunning-Kruger Effect. I recommend it with irony dialed to 11, since personally I have yet to read more than the Wikipedia article and a few other articles here and there. I'll read it if I "need to". Psh.<br /><br /><b>Done, and Gets Things Smart</b><br /><br />Not superstars: superheroes! People who are freakishly good at what they do. People who finish things so fast that they seem to have paranormal assistance. 
People who can take in any new system or design for all intents and purposes instantaneously, with no "ramp-up", and who can immediately bring insights to bear that are quite simply beyond your rustic abilities.<br /><br />Those are the folks you want. I'm not going to tell you: "Don't settle for less." Far from it. You still want to hire the <b>Smart, and Gets Things Done</b> folks. But those folks have a long way to grow, and they probably have absolutely no idea just how far it is. So you want some <b>Done, and Gets Things Smart</b> people to guide them.<br /><br />And now, to play us out...<br /><br />Dooooone and Ge-ets Things Smart,<br />Done and Ge-ets Things Smart,<br />Done and Geeee-eeeets Thiiings Smaaaa-aaaart,<br />Done and Ge-ets Things Smart!Steve Yeggehttp://www.blogger.com/profile/14812997485690838920noreply@blogger.com96tag:blogger.com,1999:blog-13674163.post-28161948212394519562008-06-14T14:11:00.000-07:002008-12-08T21:38:52.135-08:00Rhinos and TigersI will once again plagiarize myself by transcribing a talk I gave.<br /><br />First: be warned! I offer this gesture of respect to you — yes, you! — when I say that this is at least 20 minutes of reading. This is long even for me. If you're surfing reddit, gobbling up little information snacks, then it's best to think of this entry as being more like a big downer cow. Unless you're <em>really</em> hungry, you should wait for it to be sliced into little bite-sized prion patties before consuming it.<br /><br />If you do read it, you'll see the CJD analogy is surprisingly apt. I ramble even more than usual, and lose my train of thought, and the slides might as well be scenes from a David Lynch movie for all the relation they have to my actual talk.<br /><br />But once again I find myself astonished at how much I agree with myself, by and large. Funny how that works. And I made a few decent jokes here and there. 
So I'm transcribing it.<br /><br />If you're impatient, and I wouldn't blame you a bit, the best part is probably "Static Typing's Paper Tigers". That might be worth reading. As for the rest... *shrug* If you're really starved for content, you might find some of it entertaining.<br /><br /><b>The Setting</b><br /><br />I gave this talk at the <a href="http://code.google.com/events/io/">Google I/O Conference</a> in San Francisco a few weeks ago. My talk was boringly titled "Server-Side JavaScript on the Java Virtual Machine", and there were initially only about 40 or 50 people in the room (out of a 2500-person conference) when I started the talk.<br /><br />Even though I personally thought the talk was pretty boring, people kept trickling in, and I estimate there were about 400 people stuffed in the room by the end. It was standing-room only, and people were spilling out into the hall. The conclusion? The other talks must have been <em>really</em> boring.<br /><br />After my talk it became pretty clear to me that it should have been titled "Rhinos and Tigers", so that's its new name. I've tried to make it flow well by splitting it into arbitrary sub-sections, whose titles aren't really part of the talk. But otherwise it's pretty much a word-for-word transcription, except with most of the umms and aaaahs elided. I've included the subset of the slides that seemed relevant; you can find the rest at the <a href="http://sites.google.com/site/io/server-side-javascript-on-the-java-virtual-machine">Google I/O site</a>.<br /><br />So enjoy! Or not. I've given you plenty of warnings. 
You'll see!<br /><br /><b>Rhinos and Tigers</b><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhyKoBuBQzjEV-oOjCM_oeI_CNKCpsvT-1_K9OGMzviv9lFLLY2RVHE8AKMRpiSvg-a4tKMPWo6OjLxyKpZ-_8q1zW7_5VMx-i8az3rAA4NDIYoQGGr6cL6KQhtfTS-rWB7K9yPIg/s1600-h/rhino.002.jpg"><img style="cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhyKoBuBQzjEV-oOjCM_oeI_CNKCpsvT-1_K9OGMzviv9lFLLY2RVHE8AKMRpiSvg-a4tKMPWo6OjLxyKpZ-_8q1zW7_5VMx-i8az3rAA4NDIYoQGGr6cL6KQhtfTS-rWB7K9yPIg/s320/rhino.002.jpg" alt="" id="BLOGGER_PHOTO_ID_5211879248409597394" border="1" /></a><br /><br />Hello! I'm Steve Yegge. I work at Google. I've been there about three years, and it's real nice.<br /><br />I'm going to be talking about server-side scripting in general, and talking a lot about <a href="http://www.mozilla.org/rhino">Mozilla Rhino</a> and the technology behind it. I'm going to try to get through it in maybe 20-25 minutes, maybe 30 if I start ranting, at which point I'll open it up for questions. I kind of want you guys to help drive how this goes.<br /><br />Make sense? <em>(Ed: Well, it made sense at the time. Sigh.)</em><br /><br />All right, cool. Let's get started.<br /><br />Sooo... 
I'm going to be talking about Server-Side JavaScript on the Java Virtual Machine.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh7WC_IbWoC11rhSa0uYYCHoggLxWIROMCAXrc6_NiuxmUfb3wb0KZCmWfuSRLNBwB8nmxhmz5zxrSj8zmM-cIJvx6oecNvO3i_pCAy0T2fyN1dKcMUaWtLH7-X4r7v9L0ugAC_ig/s1600-h/rhino.003.jpg"><img style="cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh7WC_IbWoC11rhSa0uYYCHoggLxWIROMCAXrc6_NiuxmUfb3wb0KZCmWfuSRLNBwB8nmxhmz5zxrSj8zmM-cIJvx6oecNvO3i_pCAy0T2fyN1dKcMUaWtLH7-X4r7v9L0ugAC_ig/s320/rhino.003.jpg" alt="" id="BLOGGER_PHOTO_ID_5211879247414730114" border="1" /></a><br /><br />Yes. We've got this big animal. Rhino.<br /><br />So let's see... who here has used a JVM language before? Oooh! My god, lots of you, almost all of you. Great!<br /><br />Well I'm going to be talking about Rhino in particular. I'll be making reference to other JVM languages. I want to kind of help you see how this fits into the landscape, why you might use it, why you might not use it.<br /><br />But for those of you who haven't used a JVM language, the Java Virtual Machine is sort of like .NET: you can run multiple languages on it. You can write an interpreter in Java, or you can compile your language down to Java bytecode. Or you can compile it down to your own bytecode; there are different ways to do it.<br /><br />But typically these languages are sort of drop-in replacements for Java. Which means you can implement classes, you can implement interfaces, you can subclass things. It gives you an alternate syntax and semantic layer on top of the libraries, and on top of the virtual machine.<br /><br />I'll assume that this makes sense... 
well, actually, I won't!<br /><br /><b>FOO Chaos</b><br /><br />There's this dude named <a href="http://en.wikipedia.org/wiki/Walter_Bright">Walter Bright</a>, who wrote the <a href="http://www.digitalmars.com/d">D programming language</a>, among many other things. <em>(Raise hand)</em> Has anyone heard of Walter? He's this really smart dude. He wrote Symantec Cafe, and the game Empire [and Zortech C++].<br /><br />He told me the other day, [talking about] one of my blog rants, that he didn't agree with the point that I'd made that virtual machines are "obvious". You know? I mean, of course you use a virtual machine!<br /><br />But he's a compiler dude, and he says they're a sham, they're a farce, "I don't get it!" And so I explained it [my viewpoint] to him, and he went: Ohhhhhhh.<br /><br />Virtual machines are great for language interoperability. If everybody in the world used his language, then yeah, you probably wouldn't need a virtual machine. You'd probably still want one eventually, because of the just-in-time compilers, and all the runtime information they can get.<br /><br />But by and large, we don't all use D. In fact, we probably don't all use the same five languages in this room. And so the VM, whether it's the CLR, or the Java VM, or Parrot, or whatever... it provides a way for us to interoperate.<br /><br />Now I'll tell ya — I was at Foo Camp last summer. I've been wanting to tell this story... I'm telling you guys; it's the coolest story. And it's relevant here. Heh. Very relevant.<br /><br />So I was in this tent... you know what <a href="http://en.wikipedia.org/wiki/Foo_Camp">Foo Camp</a> is? It's O'Reilly's, you know, <b>F</b>riends <b>O</b>f <b>O</b>'Reilly invite thing that they do each summer. It's coming up in a couple of weeks. 
And people give presentations; people show up and just wander into your [presentation] tent, and wander back out if they don't like it.<br /><br />So I was in this discussion at the very end of the last day, where the Apple <a href="http://llvm.org/">LLVM</a> guy Chris [Lattner] was hosting a talk on dynamic languages running on different VMs. And there was the Smalltalk <a href="http://squeak.org/">Squeak</a> guy there, and there was <a href="http://headius.blogspot.com/">Charles Nutter</a> for <a href="http://jruby.codehaus.org/">JRuby</a> and representing the JVM. <a href="http://www.iunknown.com">John Lam</a> was there for <a href="http://www.ironruby.net/">IronRuby</a> and CLR, and there were the <a href="http://www.parrotcode.org/">Parrot</a> people. I can't even remember them all, but the whole room was <em>packed</em> with the VM implementors of the VMs today, and people who are implementing languages on top of them.<br /><br />This was a <em>smart</em> group of people, and well-informed. And you know, I was like a fly on the wall, thinking man, look at all [these brains].<br /><br />And Chris, well, he let everybody go around the room and talk about why their VM was the best. And they were all right! That's the weird thing: every single one of them was right. Their VM was the best for what they wanted their VM to do.<br /><br />Like, Smalltalk [Squeak]'s VM was the best in the sense of being the purest, and it was the cleanest. Whereas the Java one was the best because, you know, it has Java! Everybody's was the best. Parrot's was the best because it was vaporware. Ha! Ha, ha ha. Sorry guys.<br /><br />So! He [Chris] asked this really innocent question. He goes, "You know, I don't really know much about this stuff..."<br /><br />Which is bad, you know. When somebody says that to you at Foo Camp, it means they're settin' you up.<br /><br />He says, "So how do these languages talk to each other?"<br /><br />And the room just <em>erupted</em>! 
It was chaos. All these people are like, "Oh, it's easy!" And the rest of them are like "No, it's hard!" And they're arguing, and arguing, and arguing. They argued for an <em>hour</em>.<br /><br />And then they stood up, still arguing, and they kept talking about it, heading into the dinner tent. And they sat down, going at it for like three hours.<br /><br />It was <em>chaos.</em><br /><br />Because some people were saying, "Well, you know, if Ruby's gonna call Python, well, uh, you just call, right? You just share the same stack, right?"<br /><br />And the others are like, "Well, what about different calling conventions? What if they support optional vs. non-optional arguments? What if they support default arguments? What about the threading model? What about the semantics of, you know, the <code>this</code> pointer? What about all this <em>stuff?</em>"<br /><br />And they're like <em>(waving hands)</em> "Ooooh, we'll gloss over it, gloss over it, smooth it over." And the reply is: "You <em>can't</em>. This is fundamental. These languages work differently!"<br /><br />And oh my god, it was really interesting. And it was also very clear that it's ten years of research and implementation practice before they get this right. Before you'll be able to have a stack of calls, where you're calling from library to function, library to function in different languages.<br /><br />So today, VMs are good for interoperability, but you've gotta use a bridge. Whether it's JRuby, or Jython, or Rhino, they provide a set of APIs — you know about <a href="http://java.sun.com/javase/6/docs/api/javax/script/package-summary.html">javax.script</a>, right? It's this new thing introduced to the JDK, where they try to formalize just a couple of APIs around how you call the scripting language from your language... you know, it's a sort of "reverse foreign-function interface". And then how they call each other, maybe.<br /><br />But it's all done through... kind of like serialization. 
You marshal up your parameters as an array of arguments, and it's a heavyweight call that goes over [to the script runtime] and comes back, and it's like... it's a pain! You don't want that. But today, that's kind of what we're stuck with.<br /><br />At least we have that, though, right? I mean, try having Ruby call Python today, and they have different FFIs. You can do it, but you definitely want the VM's help.<br /><br />So, Walter, that's why you need VMs.<br /><br /><b>Power to your users</b><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgPDBV0HdFJnQlvUMnS4J97aF6ve6HLKvFOe-egYr9Y6cDX3rrCY6wkAWWNcLrZv5YHLpgfIQF9oX5hbWYIIYf-T4XwunZz_0RkTOMF-qLCZpqXth99XI-C1eLRaJGqZyAffV72tQ/s1600-h/rhino.004.jpg"><img style="cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgPDBV0HdFJnQlvUMnS4J97aF6ve6HLKvFOe-egYr9Y6cDX3rrCY6wkAWWNcLrZv5YHLpgfIQF9oX5hbWYIIYf-T4XwunZz_0RkTOMF-qLCZpqXth99XI-C1eLRaJGqZyAffV72tQ/s320/rhino.004.jpg" alt="" id="BLOGGER_PHOTO_ID_5211879258733925266" border="1" /></a><br /><br />So! Yeah, there's a lot of stuff I could talk about. I gave a practice run of this talk to Mike Loukides, an editor at O'Reilly, and it completely changed what I wanted to talk about.<br /><br />I do want to talk about Rhino's technology; I want you to come away understanding it. But more importantly, I want you guys to understand where this fits in this Google conference. And where it fits in <em>your</em> plans, going forward.<br /><br />See, it's really interesting. We all know, or at least most of us I think agree, that server-side computing is finally starting to make inroads onto the desktop. Fat clients aren't so much the norm anymore. 
You've got applications like Google Maps, GMail, Google Docs, those kinds of apps, that are doing "desktop quality" applications in the browser, with the server handling your storage.<br /><br />That's kind of one of the messages of this conference. Everybody's doing it, right? It's not just Google. And it makes a certain amount of sense, so I'm not going to go into the reasons why you'd do that. I'm assuming it's sort of a given. <span style="color: rgb(106, 90, 205);"><em>(Editor's Note: you'd be amazed at how many emails I get from people who maintain it's a fad of some sort, one that's going away, which is why I bother to make this disclaimer.)</em></span><br /><br />The interesting thing is this: all applications... who was it who said "All apps will eventually grow to the point where they can read mail, and if they don't, they'll be replaced by ones that can"? <em>(audience: "JWZ")</em> JWZ? <a href="http://en.wikipedia.org/wiki/Jamie_Zawinski">Jamie Zawinski</a>. Yeah. It's a variant of somebody else's quote [Greg Kuperberg's], but...<br /><br />So it's true, right? Apps grow. If you like an app, you're gonna want to start doing more and more stuff in it. If you like it a <em>lot</em>, like I like Emacs, heh... you know, you like your editor. Everybody here is a programmer, right? You all use development environments? Do you ever find it kind of annoying that you have to switch from your IDE to your browser? Why isn't the IDE the browser too? Why aren't these unified?<br /><br />I mean, let's face it: I only run two apps. 
Unless I need to run, like, <a href="http://www.omnigroup.com/applications/OmniGraffle/">OmniGraffle</a> or the <a type="amzn" asin="0735709246">Gimp</a>, or something to do a document, or <a href="http://www.apple.com/iwork/keynote/">Keynote</a> here to do the presentation — I just switched to Macs, so I'm learning all these names, but, this PowerPoint thing — most of the time, when I'm developing, I'm running shells, and I'm running Emacs, and I'm running a browser. That's it! So you kind of wish they'd be the same.<br /><br />Well, once they get big enough, your IDE and Emacs and the browser have this thing in common, which is that they are <em>scriptable</em>!<br /><br />That's the magic point at which your application becomes sort of <a href="http://steve-yegge.blogspot.com/2007/01/pinocchio-problem.html">alive</a>. Right? Because people can change it, if it doesn't work the way they like it.<br /><br /><a href="http://en.wikipedia.org/wiki/Greasemonkey">GreaseMonkey</a>! Perfect example. You don't like our web page that we give you? Write a GreaseMonkey script and change it all around, right? That's cool! Scripting is really important.<br /><br />I mean, Emacs, it stands for "Editor Macros", and it started off as a really thin engine, and the whole editor was written in scripts. And now it's huge. It has a million lines or so of Emacs-Lisp code floating around.<br /><br />So it's weird... you go through this transformation, where your scripting languages are originally for, well, scripting. And it eventually grows into application level/scale development. OK?<br /><br />Now we all see this happening in clients. Excel, for instance, is scriptable. 
And the reason that Excel is so powerful, I mean the reason that you can go to the bookstore and get a book that's <a type="amzn" asin="0321262506">this thick</a> on Excel, and scientific computing people use it, whatever, is that it has a very very powerful scripting engine.<br /><br />In fact, all of Microsoft Office has it. Right? You can fire up Ruby or Python or Perl, and you can actually control, through the COM interface, you can actually <a href="http://steve.yegge.googlepages.com/scripting-windows-apps">tell IE to open a document</a> and scroll to a certain point in it. Or you can open up Microsoft Word and actually... I mean, if you want to do the work, you could actually get to where you're typing into your Perl console and it's showing up over in Word.<br /><br />Server-side computing has to get there. It's <em>gonna</em> get there.<br /><br />But how many server-side apps are user scriptable today? Precious few. Google has a couple, you know, like our <a href="http://en.wikipedia.org/wiki/JotSpot">JotSpot acquisition</a>, which is [scriptable] in Rhino...<br /><br />So we're talking about something that's kind of new. I mean, we can all see it coming! But it's still kind of new, the idea, right? Why?<br /><br />Because this is scary hard, from a security perspective. Heh. You're going to run code on <em>my</em> servers? Uh... OK...<br /><br />I mean, Yahoo! Store, you know, Paul Graham's <a href="http://en.wikipedia.org/wiki/Viaweb">Viaweb</a> that <a href="http://www.paulgraham.com/avg.html">went on to become Yahoo! Store.</a> People have done it, right?<br /><br />I wrote a <a href="http://www.cabochon.com/">game</a> that was really cool. <a href="http://www.cabochon.com/wiz/code_examples">Scriptable</a>! 
I mean, high school kids were teaching themselves to program so they could write areas and monsters and spells and quests for this game that I wrote, which was written in <a href="http://www.cabochon.com/api/index.html">Java</a> and scriptable in Jython.<br /><br />It's a big deal! I mean, people want to be able to write these apps.<br /><br />However, I had to live with the fact that I didn't personally have enough bandwidth to come up with a decent security model to keep them from... it's a trust-based security model in my game. They write a script, they could erase my hard disk, right? So I've got to be very careful, and recognize that I can only let certain people that I trust do it. And that I've got to be prepared for really big disasters.<br /><br />Because also there's denial-of-service. It's inadvertent: oh, their script is taking up all the bandwidth [or CPU or memory] on my server, and everybody else in the game is paralyzed. Right? I mean, how do you deal with it?<br /><br />You've got to deal with user [i.e., programmer] throttling: memory usage, the database or datastore usage, like Amazon's computing cloud, you know, they have a lot of this stuff in place. But usually it's pretty coarse-grained when it gets started, right? You get a box, and a certain amount of disk storage, and you get the whole CPU, because how are you gonna allocate CPUs out to people when the languages themselves that are being used for scripting don't support that? <span style="color:#6a5acd;"><em>(Editor's Note: obviously you can just use process scheduling, but I'm talking more about multithreaded processes like my game, or Second Life, where many users may be scripting within the same processes. It makes things harder.)</em></span><br /><br />We're getting there; it's happening. But it's new. And it's hard. Because you don't want people to be able to go and get access to your company's proprietary code or resources and wreak havoc. 
You just want to host their computing.<br /><br />So when you decide you're going to take your server-side application, with its beautiful Ajax app talking to the server, and now you want to open it up: to add extensibility for your end users — they're not just scripting the client; there's scripting happening on the server that's theirs — you have to make a decision!<br /><br />Namely, what language are you going to give them?<br /><br />We have... see, unfortunately it's hard for me to talk about Google products, because all I know are their internal code names, and not their launch names. I can never remember them. But we have... something like that. Heh. Called... Prometheus, I think? Uh, wha... what is it?<br /><br /><em>(audience member: Google App Engine)</em> Ahem, the <a href="http://code.google.com/appengine/">Google App Engine</a>, of course! Yes. The Google App Engine. Ahem. Yes. <em>(me: embarrassed)</em><br /><br />And I think it's... Python. Right now. But of course they want to open it up, because, it's like, you don't really want to force people to switch editors, unless you want a real fight on your hands. You kinda don't really want to force people to switch languages either, right? People want to use whatever they're comfortable with.<br /><br />So again, you wind up with this hosted environment, where you're supporting multiple languages; which one do you pick [first]? They picked Python. You can pick [anything], but you've got these problems to deal with. And I'm going to argue today that Rhino is actually a really good choice for this particular problem space.<br /><br />OK, we've got people pooling up in the back here. Is it time to invite them in? Come on in, sit down, there's space! All right, cool. Yeah. Welcome!<br /><br />So yeah. That's what I'm talking about today. Do you guys understand the perspective now, the context? 
I'm talking about server-side scripting, that either you do yourself inside your company, because you feel like you've got some logic that needs to be kind of "glue" logic — "scripting" — or, more importantly, you're opening it up to your users. Which means you need to sandbox it, and you need to meter it and throttle it.<br /><br /><b>Advantages of scripting on the JVM</b><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg7UpVfr49yfnHd2-KGzxO66xtxQpnvNOGDo2VimtZhwfpY6G6SClfkNhf5jFwwl68FFqgKfgwhOHzojwlTw-y182IY8p6BgPzXTHfed0ejXoIT_OW8a_6LPicXlOujMxyFScn-_w/s1600-h/rhino.005.jpg"><img style="cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg7UpVfr49yfnHd2-KGzxO66xtxQpnvNOGDo2VimtZhwfpY6G6SClfkNhf5jFwwl68FFqgKfgwhOHzojwlTw-y182IY8p6BgPzXTHfed0ejXoIT_OW8a_6LPicXlOujMxyFScn-_w/s320/rhino.005.jpg" alt="" id="BLOGGER_PHOTO_ID_5211879266043344930" border="1" /></a><br /><br />All right. Yeah. So this is a JVM language. A JVM language can share the Java libraries, and the Java Virtual Machine. It's really cool, right? And really powerful.<br /><br />Right off the bat, these JVM implementations of other languages, like JRuby vs. Ruby, Jython vs. Python, right? They get all these free benefits, that may not necessarily exist in the C runtimes for these languages.<br /><br />Example? Java has a really good garbage collector these days. A <a href="http://en.wikipedia.org/wiki/Garbage_collection_%28computer_science%29#Generational_GC_.28aka_Ephemeral_GC.29">generational garbage collector</a> that's becoming an incremental [and/or concurrent] generational garbage collector... I mean, it's good! Whereas for a lot of these [C-based] languages, they use mark-and-sweep, reference-counting...<br /><br />Another one is native threads. 
It's veeery nice to have native threads, and also have well-defined semantics for the threads and how they interact with the memory model. I mean, a lot of these [non-JVM] languages are like, "Well, we have threads, but you probably... don't want to use them." Because you're kind of on your own.<br /><br />So what happens is people use process-switching; it's the share-nothing model. And that's great too, for certain situations. Provided you've got good engineering library support around it, like the <a href="http://java.sun.com/j2se/1.5.0/docs/api/java/util/concurrent/package-summary.html">java.util</a> concurrency libraries. They can help you design something without having to do a formal proof around it to get it to work.<br /><br />That helps a lot in multicore. It helps! JavaScript has no [language-level] threads, because Brendan Eich says "<a href="http://weblogs.mozillazine.org/roadmap/archives/2007/02/threads_suck.html">over his dead body</a>". I totally understand where he's coming from on this, right? There's certainly the "promise" of better concurrency out there. Erlang, you know, and maybe <a href="http://en.wikipedia.org/wiki/Software_transactional_memory">STM</a>...<br /><br />But hey man, today? I mean, right now? You want to write something with high throughput, and you've got a lot of I/O to do, and it's parallelizable? And you want to get a lot of throughput on one box, because it's got multiple cores?<br /><br /><em>(shrugging)</em> Well, threads get the job done. So if you've got it in your so-called "scripting language", it's a big win.<br /><br />We've got garbage collection, threads... and asynchronous I/O, right? When Java first came out there was the whole "one thread per socket" model [actually, two], which meant that you couldn't write a webserver that could handle ten thousand concurrent requests. It didn't matter how much memory or CPU your box had. Anyone here ever tried to fire up 10,000 threads on one box?<br /><br />Yeah... yeah. 
What happens is, the scheduler and task-switching resources for managing the threads swamp your machine. So eventually Java wrote a <a type="amzn" asin="0596002882">wrapper</a> around the Unix or Windows or whatever native interfaces so you could get super-high throughput.<br /><br />So all of the sudden, by sticking something on the JVM... Sure, you initially get a bit of a hit in performance. When these people first port a language to the Java Virtual Machine, it's usually about twice as slow, right? BUT, it's got async I/O, and it's got [native] threads, and it's got better non-pausing (by and large) garbage collection. And from there, they can make it smarter.<br /><br />But they've also got the JIT. I don't know, I mean, did anybody here... I gave a <a href="http://steve-yegge.blogspot.com/2008/05/dynamic-languages-strike-back.html">talk on dynamic languages</a> recently at Stanford, but I don't want to rehash that if you guys already know about that.<br /><br />Basically I argued in that talk — successfully at Stanford, so I think that was... something — that for <a href="http://en.wikipedia.org/wiki/Just-in-time_compilation">just-in-time compilers</a>, it's becoming pretty clear, they have a lot better access, a lot better data at runtime about how you're actually using your program right now than the compiler ever had.<br /><br />So the JITs can do all kinds of inlining and optimizations that you just can't do in a compiler. And what that means is that everybody running on this VM gets to benefit from the JIT.<br /><br />So there are lots of advantages to these JVM languages. Or .NET, if you happen to be using Microsoft stuff. 
It's a good model.<br /><br /><b>But why Rhino?</b><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjRKgTUz1T2ecahq7PMZpXk19sS98vocKkw8dJc2sTR9KVIcjXI_aowFO7nLj6pfm8ekFvAH61EidS_0758Q7dOjA8V3vYEgtHvKiR8muZcRoHWc5kGLueX4cNzKkt5yNSJinQKgQ/s1600-h/rhino.006.jpg"><img style="cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjRKgTUz1T2ecahq7PMZpXk19sS98vocKkw8dJc2sTR9KVIcjXI_aowFO7nLj6pfm8ekFvAH61EidS_0758Q7dOjA8V3vYEgtHvKiR8muZcRoHWc5kGLueX4cNzKkt5yNSJinQKgQ/s320/rhino.006.jpg" alt="" id="BLOGGER_PHOTO_ID_5211879274237736898" border="1" /></a><br /><br />So why Rhiiiiino? Why JavaScript?<br /><br /><em>(loudly)</em> Who here thinks JavaScript is kind of icky? Come on, come on, be honest. Yeah, there we go, a couple of people. Yeah.<br /><br />Yeahhhh... and you know what? It is! Right? Because, well, because of vendor implementation issues. That's one [reason]. Also, Brendan was kind of forced to rush it out the door. You guys know... back at Netscape, when they built JavaScript, it was called, um, LiveScript?<br /><br />And Brendan was building <a href="http://en.wikipedia.org/wiki/Scheme_%28programming_language%29">Scheme</a> for the browser. Scheme!<br /><br />Everyone in here who knows Scheme, raise your hand. <em>(Many people, at least fifty, raise their hands.)</em><br /><br />Holy... smokes! A lot more than I would've guessed. Wow.<br /><br />OK, well, as it happens, you guys are not "representative". <em>(laughter)</em><br /><br />And so, Netscape kinda looked at it, and said: "Yeah, well, we did say Scheme, but, uh, there's this Java thing, this giant marketing thing, and we want to be involved with it." And some back-door deals happened, and then they came to Brendan and said: "Make it look like Java."<br /><br />So now it's Scheme with Java syntax, right? 
So he had to pull a lot of all-nighters for a couple of weeks, and he came up with JavaScript.<br /><br />So, you know, it's got some flaws. Some of which make you want to go scrape your teeth on the sidewalk because it's less painful. So it's true, but what language doesn't have some flaws?<br /><br />The interesting thing about Rhino, which is an implementation of JavaScript in Java, is that there's only one language. You don't have to worry about vendor-implementation or cross-platform problems because... it's just Rhino. So right out of the starting gate, that's a win.<br /><br />Plus, Rhino gives you the ability to work around some of the problems. A classic one is the problem in JavaScript where you can't define non-enumerable properties. Right? You know how you can go <code>for (i in foo) ...</code>, and it'll enumerate the keys of your object as if it were a hashtable.<br /><br />Nice feature, right? And you can add properties to objects; you can go to <code>Object.prototype</code>, which is the root object of the whole system, and add your own function(s) there. But what happens is, you've added a function that's now enumerable in everybody's arrays, and everybody's data structures, so their <code>for..in</code> loops break.<br /><br />Which means that fundamentally you can't install a library that's completely seamless and emulates Ruby or Python or some other really expressive set of library functions. Like, you want your <code>map</code> and <code>collect</code>, and your <code>String.reverse</code>, and... you know what I mean?<br /><br />You can't do it in browser JavaScript, so people wind up going different routes. They either do what <a href="http://prototypejs.org/">Prototype</a> does, and just install it, and you're screwed if you use <code>for..in</code>, but you don't use <code>for..in</code>, right?<br /><br />Or they use functions. They don't use object-oriented programming. 
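<br /><br />(A minimal, stand-alone sketch of that <code>for..in</code> breakage, runnable in any modern engine, not just Rhino; the <code>scores</code> object and the <code>reverse</code> helper are just illustrations, not anybody's real library:)<br /><br />
```javascript
// A plain object used as a hashtable.
var scores = { alice: 3, bob: 5 };

// for..in over a clean object yields only its own keys.
var cleanKeys = [];
for (var k in scores) cleanKeys.push(k);    // ["alice", "bob"]

// Now "install a library" the naive way: assign straight onto Object.prototype.
Object.prototype.reverse = function () { /* library code would go here */ };

// The same loop now picks up the library function too, because plain
// assignment creates an *enumerable* property that for..in walks right into.
var pollutedKeys = [];
for (var k in scores) pollutedKeys.push(k); // ["alice", "bob", "reverse"]

// Clean up so the rest of the program isn't poisoned.
delete Object.prototype.reverse;
```
<br /><br />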
And you know, functional programming is great and everything, but OOP, as we've discovered in programming in the large, is a nice organizational tool. Putting the function or method together with the thing that it's operating on is a nice way of organizing things.<br /><br />So it's kind of unfortunate when you have to use functions, because if you have to say, you know, <code>HTMLElement.getChildren.whatever</code>, it gets inverted with functions: <code>whatever(getChildren(HTMLElement))</code>. You have to call from the innermost one to the outermost... it's "backwards", right?<br /><br />Rhino winds up getting around that problem completely. We did, anyway, internally. Because it's written in Java. So you can call the <a href="http://www.mozilla.org/rhino/apidocs/">Rhino interface</a>. You can call Parser, or the interpreter, or the runtime; you can do whatever you want.<br /><br />So I wrote this little <code>defineProperty</code> function, that's like five lines of code. It calls into the script runtime <a href="http://www.mozilla.org/rhino/apidocs/org/mozilla/javascript/ScriptableObject.html">Java class that implements JavaScript objects</a>, which has a <a href="http://www.mozilla.org/rhino/apidocs/org/mozilla/javascript/ScriptableObject.html#DONTENUM">non-enumerable</a> <code><a href="http://www.mozilla.org/rhino/apidocs/org/mozilla/javascript/ScriptableObject.html#defineProperty%28java.lang.String,%20java.lang.Object,%20int%29">defineProperty</a></code>.<br /><br />JavaScript has non-enumerable properties; it just doesn't let you add your own. It's just a language flaw, right?<br /><br />That [<code>defineBuiltin</code> function] enabled us, in the project I'm going to be talking about a little bit later here, to implement all of Ruby and Python's runtime — all the functions we liked — in [server-side] JavaScript, in a non-intrusive way. 
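<br /><br />(Here's a sketch of that fix. In Rhino you do it from the Java side, via <code>ScriptableObject.defineProperty</code> with the <code>DONTENUM</code> flag; ES5 later standardized the same capability as <code>Object.defineProperty</code>, which is what's shown here so the example actually runs anywhere. The <code>reversed</code> builtin is a hypothetical Ruby-flavored helper, not our real internal library:)<br /><br />
```javascript
// Install a Ruby-style helper on String.prototype, non-enumerably,
// so nobody's for..in loops ever see it.
Object.defineProperty(String.prototype, 'reversed', {
  value: function () { return this.split('').reverse().join(''); },
  enumerable: false,    // the crucial bit: invisible to for..in
  writable: true,
  configurable: true
});

var backwards = 'rhino'.reversed();   // "onihr"

// Enumeration stays clean: only the string's own index keys show up.
var seen = [];
for (var k in 'abc') seen.push(k);    // ["0", "1", "2"]
```
<br /><br />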
We were also able to implement a class system, and all this other stuff.<br /><br />So Rhino is <em>not</em> browser JavaScript.<br /><br />Man, we've got more people pooling up at the entrances. You guys are welcome to come in, squeeze in and sit down... come on in... welcome. There's still space. Especially up here kinda in the front, in the middle, where nobody wants to sit. But trust me, it's better there.<br /><br />So yeah. Rhino's history: it's like ten years old. Or more? More than ten years, maybe. It started inside Netscape, side by side with <a href="http://www.mozilla.org/js/spidermonkey/">SpiderMonkey</a>. A lot of people have been hacking on it. Rhino's pretty robust.<br /><br /><b>Rhino at the shootout</b><br /><br />I have a question for ya. I did this "JVM shootout" like three and a half years ago. I was kind of tired of using Java for scripting, and I wanted to look at all the JVM languages. So I did this game. You know about the game <a href="http://en.wikipedia.org/wiki/Sokoban">Sokoban</a>? I would have done Sudoku if the craze had hit then. It's a little dude who pushes these blocks around these mazes?<br /><br />Well, I reimplemented this thing, which is about, you know, six or seven hundred lines of Java code. It had a [user] interface, and a little search algorithm that had him chase your mouse. It was just big enough of an application that I could reimplement it in like 10 different languages, and actually compare how it was speed-wise, how to use them [the languages], how well they interoperated with Java... it was an actual apples-to-apples comparison.<br /><br />Most of them really, really, REALLY stank. It was baaaad. I mean, there are like <a href="http://www.is-research.de/info/vmlanguages/">250 JVM languages</a> out there, but most of them are just complete toys. But there were ten or so that were actually pretty good. 
You could do anything in them, and they had decent performance, and they were good, right?<br /><br />And it [the shootout] kind of petered out, because it started looking like Rhino-JavaScript was going to win. I had this sort of <a href="http://en.wikipedia.org/wiki/Decision-matrix_method">solution selection matrix</a> of criteria where... it was kind of a heuristic function where I weighted all these terms, right? Just to kind of get a feel for which one [was best].<br /><br />And I wanted JRuby to win. You should never go into these comparisons wanting one of them to win, because, you know, you're either going to bias it or you're gonna be disappointed. JRuby at the time was really slow. It's much faster now, and everything, but at the time, it was so new.<br /><br />Jython was good, but it wasn't fast enough, and the author of Jython had gone off to greener pastures.<br /><br />Rhino! Rhino had good tools, and it had good performance, and it was... JavaScript! Eeeeww!<br /><br />So I never even really... I published it, but I didn't leave it up. I'm actually going to bring it back up again soon; I'm going to update it and do a couple of new languages. Because I find this an eternally fascinating question, which is: what is the <a href="http://steve-yegge.blogspot.com/2007/02/next-big-language.html">next big language</a>, especially on the JVM, going to be?<br /><br /><b>Domain-specific languages</b><br /><br />Java will always have a place. But I think there are domains, like Java Swing, you know? The Java GUI stuff? Java's really not very good for that. We've kind of figured out that Object-Oriented Programming doesn't work that well for UIs. You want it to be declarative. HTML showed that you want a dialog, with a title bar, and a body, and it <em>nests</em>, to match the [UI] tree.<br /><br />That works really well. It's succinct. Even in HTML it's succinct, right? 
Whereas with a [Java-style] object-oriented language, you've got to say, you know, <code>createContainer()</code>, <code>addChild()</code>, <code>addChild()</code>, <code>addChild()</code>, 'til the cows come home. And it doesn't <em>look</em> anything like... you can't pattern-match it and say "ah yes! this looks just like my UI!"<br /><br />So people write these wrappers around Swing. Like there's <a href="http://commons.apache.org/jelly/">Apache Jelly</a>, which wound up with this XML framework to do Swing programming, that was 30% less verbose than Java.<br /><br />What are the odds that XML's going to wind up being less verbose than <em>anything?</em> <em>(loud laughter)</em> Really! I mean, I was shocked. 30% less verbose. And I looked at it, too. They weren't cheating. I mean, they did the best Swing implementation in Java that they could, but Jelly was better.<br /><br />So there are domains for which Java is just not appropriate. But you still maybe want to use a VM for all the reasons that I outlined earlier.<br /><br />So yeah! There's room for these other languages. But which one? All of them? Are they going to solve the problem I brought up from Foo Camp earlier? To where it doesn't matter which language you're using; they can call each other, and [mix however you like?]<br /><br />I mean, how's your editor going to feel about that? How's your team member going to feel about it? A lot of people don't like learning new languages.<br /><br />Who here doesn't like learning new languages? Come on, be honest... <em>(A few people raise hands)</em> Yeah! New languages. No fun!<br /><br />It's actually kind of... you should try it. <em>(laughter)</em> You know? It is. It's a good idea.<br /><br />There's this dude — has everyone heard of <a type="amzn" asin="0262560992">The Little Schemer</a>? The Little Schemer, <a type="amzn" asin="026256100X">The Seasoned Schemer</a>, <a type="amzn" asin="0262562146">The Reasoned Schemer</a>? Cool books, right? 
Teach you functional programming in this really bizarre format that hooks you in.<br /><br /><a href="http://en.wikipedia.org/wiki/Daniel_P._Friedman">Dan Friedman</a>, the guy who [was] one of the collaborators on those books — I was reading an <a href="http://www.cs.indiana.edu/hyplan/dfried/mex.pdf">article he wrote</a>. Early in his career he realized that languages are fundamental to computer science and to engineering; they're really important. And he wanted to be able to learn a new language every quarter.<br /><br />And after he did that for a while, he said, you know what? I want to learn a new language every <em>week</em>. OK? And you can actually get to the point where you can do this. Now it probably takes 2-3 months before you're actually as comfortable with the new language as you were with your favorite old one. This happened to me with JavaScript; I was freaking out for the first couple of months, thinking "this is <a href="http://steve-yegge.blogspot.com/2007/06/rhino-on-rails.html"><em>never</em> gonna work</a>".<br /><br />But eventually you get over the hump, and you're like <em>(relieved sigh)</em> "aaaah, yes." Right? You learn how to work around the problems. It's just like learning Java or whatever your first language happened to be. You've got to learn your way around all these problems, and you've gotta learn how things work, and how the libraries are laid out. And all of the sudden it becomes like breathing.<br /><br />So Dan Friedman, after he said he was learning a language a week, I thought, "wow, that's pretty macho." But he said, nah, that wasn't good enough: he wanted to be able to <em>implement</em> a language a week. And he got pretty serious about it. Spent years researching how to do this effectively. <em>(Well, now I'm just speculating – Ed.)</em><br /><br />This is where I'd love to see engineers today. Knowing languages will make you a better programmer. It will! It will even if you're not using them. 
Just write a little application in it, and it opens your mind. [Each new one] opens your mind. And now suddenly you know the superset of all the languages out there. So it's not scary to learn new ones.<br /><br />And you can also recognize situations where a <em>language</em> is actually the right tool for the job. Not a library, not a framework, not some new object or some new interface. A language!<br /><br />Case in point? <a type="amzn" asin="0596528124">Regular expressions</a>. <em>(raise hand)</em> Who likes to write their own giant deterministic finite automata to do string matching? Heh. It's weird — nobody raised their hand.<br /><br />Who likes to do lots and lots of manual DOM manipulations, as opposed to, say, <a type="amzn" asin="0596002912">XPath</a>? XPath is a language, right? DOM manipulations, you know... it depends, but usually, no: not if you can get away with using a language for it.<br /><br />I could talk for hours about that, so I'm not going to. But, you know... it's good to learn new languages. So I'm gonna teach you JavaScript today. I'm gonna dive in. So let's go!<br /><br /><b>The right way to do unit testing</b><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhnJq3ROQsXobZpEII8fordT747GJMxTtQhUFP5jEmzAsHNcX9s07JziBWgcIdlmDjXovFFle5jvdDdhM_12XcP9s0hY-F4sqyoPRGpfi_eNTjAyLmDnKCZcS4UyTaT8mrezfnNgQ/s1600-h/rhino.007.jpg"><img style="cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhnJq3ROQsXobZpEII8fordT747GJMxTtQhUFP5jEmzAsHNcX9s07JziBWgcIdlmDjXovFFle5jvdDdhM_12XcP9s0hY-F4sqyoPRGpfi_eNTjAyLmDnKCZcS4UyTaT8mrezfnNgQ/s320/rhino.007.jpg" alt="" id="BLOGGER_PHOTO_ID_5211879478557130754" border="1" /></a><br /><br />Oh yeah. So unit testing. 
I mean, like, all the other stuff on this slide is like "blah blah blah", but then Unit Testing [in Rhino] — this was a real surprise to me.<br /><br />I write a lot of Java code day to day, [out of the] probably five different languages I code in regularly. And unit testing was always a chore. Unit testing is a chore.<br /><br />I mean, come ON. Unit testing's a chore, right? <em>(raise hand)</em> Who here thinks unit tests are just a poor man's static type system? Eh? <em>(some laughter)</em> Yeah!<br /><br />Well, not really, since you have to write unit tests for them [the static languages] too. <em>(more laughter)</em><br /><br />You need to write unit tests, and unfortunately in Java it's <b>very painful</b>. I'm speaking into the mic now, so that everybody can hear. Unit testing in Java is painful!<br /><br />It's <em>so</em> painful that people, the Java... community, the Java world, has evolved <em>around</em> it. OK? They've said: "Don't use constructors. Constructors are baaaaad."<br /><br /><em>(pointed pause)</em> I mean... what!? <em>(laughter)</em><br /><br />I mean, like, if you program in Ruby, say, you know that you can actually change the way the metaclass produces objects. So if you want it to be a singleton, or you want it to meet certain... you want it to be a Mock object, you just replace the <code>new</code> function on the fly, and you've got yourself a Mock constructor, right?<br /><br />But Java's set in stone! So they use factories, which leads to this proliferation of classes. And you have to use a "Dependency Injection Framework" to decouple things, right?<br /><br />And it's just, like, <em>(panting)</em>... We were doing business logic early on in Java. When Java came out, it was like: "Ooooh, it's a lot easier to write C code", basically, or C++. 
Rather than focusing on your memory allocation strategy, or which of your company's six conflicting <code>string</code> classes you're gonna use, you get to focus on "business logic".<br /><br />But unfortunately that only took you so far. Now you're focusing on Mock object frameworks. It [Java] only took you a little farther.<br /><br />Now I <em>swear</em>, man, doing Mock objects in Rhino is so easy! It's easier, even, than in JRuby or in Jython, because JavaScript has <a href="http://www.json.org/">JSON</a>. I didn't even know, like, the name of it when I started doing this, right? But JSON... I've gotta show you guys this. This is so cool. <em>(flipping through slides)</em><br /><br />Yeah, tools, blah blah blah. We'll come back to it. Oh, it's way down in here. Urrggh. Come on... here's one!<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhqVWGZxe_CSNtjN_1g9aWFMD8ouDGVLw3k3IgIf_h3V-QMf_CeD1qdMKX2OH75IAQAcpwDF6tnOUFQVKbmdMjfG60Wta7PPMokIWVCPGM7_UBylS65kDenvYjZXJL7UbQwQ6-51A/s1600-h/rhino.011.jpg"><img style="cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhqVWGZxe_CSNtjN_1g9aWFMD8ouDGVLw3k3IgIf_h3V-QMf_CeD1qdMKX2OH75IAQAcpwDF6tnOUFQVKbmdMjfG60Wta7PPMokIWVCPGM7_UBylS65kDenvYjZXJL7UbQwQ6-51A/s320/rhino.011.jpg" alt="" id="BLOGGER_PHOTO_ID_5211879525016416130" border="1" /></a><br /><br />OK. Down on the bottom we've got some code here. Actually on the top, too. So I do a <code>new Thread</code> with a <code>new Runnable</code>, and, uh... it sure looks a lot like Java code, huh? This is one advantage of JavaScript, actually. Java...Script, right? 
Ten years later it's finally becoming the scripting language for Java?<br /><br />So that syntax <em>(with an obj literal following <code>new Runnable()</code>)</em> is a little weird, but there's another one here that says:<br /><br /><pre>js> obj = {run: function() { print('hi') }}</pre><br /><br />So I've declared an object literal, using "JSON style". Now JSON doesn't let you do — does JSON let you do functions? Probably not, right? But I mean, fundamentally you're doing this declarative property-value list, right?<br /><br />And so what I've got is this anonymous thing that has a named "run" property whose value is a function! That prints "hi". And now I can create a new <code>Thread</code>, with a new <code>Runnable</code> that wraps it, and what effectively I've done is I've used that thing as the <code>Runnable</code> interface [implementation], which expects a function called "run" that takes no arguments and does whatever the thread's supposed to do.<br /><br />This is how you do mock objects!<br /><br />I have this huuuge legacy system, right? With hundreds of static methods. Static methods are also bad these days, right? Noooo static methods. 'Cuz they're not mockable. Right? Java has changed Java. Because Java's not unit-testable. So now you can't just go to the store and [buy a book and] learn Java. You have to learn all these... fashions. You have to learn what's in vogue.<br /><br />Subclassing! <i>Not</i> in vogue right now. You talk about subclassing, and people are like "NNnnnnnooooo, you should use manual delegation even though it's really hard and cumbersome and awkward."<br /><br />And you're like, "but I just want to change this one method, and plus it's built into the language, and there's syntax for it, and it's kind of well-understood..." And they just say "NO!"<br /><br />It's out of favor. For similar reasons. 
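<em>(Editor's note: here is the mock-object trick above as a tiny runnable sketch. The names — <code>runTwice</code>, <code>mock</code> — are invented for illustration; only the idea, an object literal standing in for an interface, is from the talk.)</em>

```javascript
// Code under test: it accepts anything with a run() method -- the same
// "shape" contract the Runnable example relies on.
// (runTwice is a hypothetical function, made up for this sketch.)
function runTwice(runnable) {
  runnable.run();
  runnable.run();
}

// In Java you'd need an interface, a mock class, maybe a factory and a
// dependency-injection framework. In JavaScript the mock is a literal:
var calls = 0;
var mock = { run: function () { calls += 1; } };

runTwice(mock);
// calls is now 2
```

One literal, zero factories: that's the whole pattern.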
Oh my god...<br /><br />And I'm telling ya: the reason unit testing is easy, is, fundamentally, the way you develop in a dynamic language is <em>different</em> from the way you develop in a static language: C++, Java... OCaml, Scala, whatever. Your favorite static language.<br /><br />To a large extent, especially in C++ and Java, the way you develop is:<br /><ol><li>you write the code</li><li>you compile the code</li><li>you load the code</li><li>you run the code</li><li>it doesn't work</li><li>you back up</li><li>you change it again</li></ol><br />So it's this batch cycle, right? 1950s. Submit your punch cards, please.<br /><br />In a dynamic language — and this is clearest when you're writing in Emacs Lisp [because of the <code>*scratch*</code> buffer] — but it's somewhat clear when you're developing in a console, in Python or Ruby, Perl, Lua, whatever, you write an expression, and you give it some mock data.<br /><br />You're writing a function, you're building it on the fly. And when it works [for that mock data], you're like, "Oh yeah, it works!" You don't run it through a compiler. You copy it and paste it into your unit test suite. That's one unit test, right? And you copy it into your code, ok, this is your function.<br /><br />So you're actually proving to yourself that the thing works by construction. Proooof by construction.<br /><br />Obviously you still need your unit tests <em>(er, I meant integration tests – Ed.)</em>, because there's going to be higher-order semantics, you know, calling conventions between classes and methods...<br /><br />Whew! This room is really filling up. Um, is there anything we can do to help here, guys in the back? <em>(Tech guy says something inaudible in the video)</em> Yeah, please! There're more seats here. I just want to... I don't want to get to where people can't even make it into the room.<br /><br />Yeah, so unit testing. I know you guys all hate unit testing. So did I. 
Or you say, "I looooove unit testing," but then, you know, your test coverage is still sitting at like 80%.<br /><br />I'm telling you, man, this is a huge, huge thing. It changes the way you do your development.<br /><br /><b>Rhino's not Ruby</b><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEijy3-I4PDQmk5TpG9dAOXaJferkup1UHxTZ-KSAFuj5fQQkmZ7hZud6vLrn1Ia8wnuBAEnvgM_8OCsg35-_sVc1gYD8fXIHZbmzw8TR0KPo9WsGUBrN5pSDEaaW6g1mMoEc_wQfg/s1600-h/rhino.018.jpg"><img style="cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEijy3-I4PDQmk5TpG9dAOXaJferkup1UHxTZ-KSAFuj5fQQkmZ7hZud6vLrn1Ia8wnuBAEnvgM_8OCsg35-_sVc1gYD8fXIHZbmzw8TR0KPo9WsGUBrN5pSDEaaW6g1mMoEc_wQfg/s320/rhino.018.jpg" alt="" id="BLOGGER_PHOTO_ID_5211880018737921026" border="1" /></a><br /><br />And oh, yeah... I'm going to be talking shortly here about <a href="http://steve-yegge.blogspot.com/2007/06/rhino-on-rails.html">Rhino on Rails</a>, which is this thing that I did... it's not Rhino on Rails, actually. It's actually, <em>I</em> called it "Rhino's not Ruby". Because I got kinda burned at Google for using Ruby. Yeah. Uh, for good reasons, good reasons. But they were like: "No."<br /><br />And so of <em>course</em> I called it "Rhino's not Ruby": RnR. Because people know JavaScript; they're kinda comfortable with JavaScript, so they were OK with it. So I had to port Rails; it was kind of a lot of work, but, you know, well it works! We're using it here internally; it's nice. I mean, I actually know it's nice, because six months went by and I didn't look at it for those six months. And for this recent project, I picked it up, and I was, like, is this gonna be gross?<br /><br />But actually, it's really pretty nice. Because you've got all the Java libraries, and all the integration with Google stuff. It's cool. 
I'll try to open-source it this year, if I forget to say that later on.<br /><br />Anyway, I was writing unit tests for this thing, and... uh... <em>(I completely blow my stack. Who am I? Where am I?)</em><br /><br />Have I lost where I am on the slides? <em>(Duh.)</em><br /><br />I've diverged from the slides. I'll come back to RnR shortly. Basically, I got unit-testing religion. That's the end of that sort of stack.<br /><br />If you can do it easily, and you don't have to rewrite your application to be unit-testable? Man. That's a big difference.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjGSNtEllJPbE1lyFmElzxiERR8VzvvZ3moO3J25fSLpesOxrLOohVRl-TtkagpVjkHkqu1dJKZ2wgianiAju1rM4dmmol5kZg1F328ygrHp9hiuEWT1khR6kSjbcB0Xna8GVdgmw/s1600-h/rhino.009.jpg"><img style="cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjGSNtEllJPbE1lyFmElzxiERR8VzvvZ3moO3J25fSLpesOxrLOohVRl-TtkagpVjkHkqu1dJKZ2wgianiAju1rM4dmmol5kZg1F328ygrHp9hiuEWT1khR6kSjbcB0Xna8GVdgmw/s320/rhino.009.jpg" alt="" id="BLOGGER_PHOTO_ID_5211879504697614514" border="1" /></a><br /><br />So why would you <em>not</em> use Rhino for server-side scripting?<br /><br />Well, it's not super-duper fast right now. It's on the order of about twice as slow as Java, depending on what you're doing. So if it really has to be super, super fast — use C++! Right? Naaaah, you can use Java. Like I was saying the other day [at Stanford], it's widely admitted inside of Google — there's this whole discussion, is Java as fast as C++, right? And Google Java developers definitely admit that it's as fast as C++. The C++ people, well... yeah. <em>(sigh)</em><br /><br />Let's see... if you're writing a library, then Rhino's actually not so good right now. There is no standard library interface for scripting languages. We haven't got there yet. 
It's all, it's all related to what I told you about before, you know the calling interop. A lot of these [languages] have their own package systems: their own <code>import</code>, their own <code>require</code>, right? So if you're gonna write a library, you should probably still write it in Java. Maybe.<br /><br />If you're doing a framework, where you're defining how things are called: whether we're calling you, or you're calling us, then it's OK.<br /><br />And if you really <em>hate</em> JavaScript, then that's, you know, that's fine... But keep in mind, again, that you may be providing something for your end-users. If you go out to a high school and you survey people, and you ask, "So what language are you learning? How are you teaching yourself programming?" It's a sure bet that a lot of them are doing Ajax stuff. Right? A lot of them are doing JavaScript.<br /><br />If you want to make your end-users happy, and you want to immediately capture a very big user base, then no matter how you detest JavaScript (and again, Rhino-JavaScript's really not as bad as browser JavaScript, it's much better), your users probably will prefer it. It's something to keep in mind.<br /><br />All right.<br /><br /><b>Static Typing's Paper Tigers</b><br /><br />And then we've got <a href="http://www.scala-lang.org/">Scala</a>. I've gotta mention Scala. Who here knows... you've heard of Scala? Yeah? <em>(a few hands go up)</em> Mmmmm, yeah, getting there... looks like some people, OK.<br /><br />Scala is a very strongly typed language for the JVM. It's from researchers in Switzerland; they're professors. It's from sort of the same school of thought that static typing has evolved with over the last fifteen years in academia: Haskell, SML, Caml, these sorts of <a href="http://en.wikipedia.org/wiki/Hindley-Milner">H-M</a> functional languages.<br /><br />And Scala's interesting because it actually takes a functional static type system and it layers... 
it merges it with Java's object-oriented type system, to produce.... Frankenstein's Monster.<br /><br />I've got the <a href="http://www.scala-lang.org/docu/files/ScalaReference.pdf">language spec</a> here in my backpack. Oh, my god... I mean, like, because it's getting a little bit of momentum, right? So I figure I've got to speak from a position of sort of knowledge, not ignorance, when I'm dissing it. (Heh heh.)<br /><br />And so <em>before</em>, I was like: "Oh yeah, Scala! Strongly typed. Could be very cool, very expressive!"<br /><br />The... the the the... the language spec... oh, my god. I've gotta blog about this. It's, like, ninety percent [about the type system]. It's the biggest type system you've <em>ever</em> seen in your life, by 5x. Not by an order of magnitude, but man! There are type types, and type type types; there's complexity...<br /><br />They have this concept called <code>complexity complexity&lt;T&gt;</code>. Meaning it's not just complexity; it's not just complexity-complexity: it's <em>parameterized</em> complexity-complexity. <em>(mild laughter)</em> OK? Whoo! I mean, this thing has types on its types on its types. It's <em>gnarly</em>.<br /><br />I've got this Ph.D. languages intern who's a big Haskell fan, and [surprisingly] a big Scheme fan, and an ML fan. [But especially Haskell.] He knows functional programming, he knows type systems. I mean, he's an expert.<br /><br />He looked at Scala yesterday, and he told me: "I'm finding this rather intimidating."<br /><br />I'm like, "THAT sounds like it's gonna take off!" <em>(loud laughter)</em> Oh yeah!<br /><br />But the funny thing about Scala, the really interesting thing — you guys are the first to hear my amazing insight on this, OK? — is: it's put the Java people in a dilemma. There's a reeeeeeeal problem.<br /><br />The problem is, the Java people say, "Well, dynamic languages, you know, suck, because they don't have static types." Which is kind of circular, right? 
But what they mean, is they say: No good tools, no good performance. But even if you say, look, the tools and performance can get as good, they say, "Well, static types can help you write safer code!"<br /><br />It's... you guys know about those talismans? The ones, where, "What's it for?" "To keep tigers away"? <em>(some chuckling)</em> Yeah? And you know, people are like, "How do you know it keeps tigers away?" And your reply is: <em>(sneering)</em> "Do you see any tigers around here!?" <em>(minor laughter)</em><br /><br />So this is what... OK, so for a long time, for many years... and you know, I've written more Java code than most Java programmers <em>ever</em> will. <em>(Editor's note: nearly 1M lines in production. Ouch.)</em> So trust me. I tried. OK? I'm not just coming in and saying "I don't want to learn Java." No. I know Java as well as the next person.<br /><br />But I come to them and say, let's do proof by – say, argument by example! You know, an existence proof. <a href="http://www.imdb.com/">IMDB</a> is written in Perl, right? Yahoo! – many of their properties are written in PHP. A lot of Microsoft stuff's written in VB, right? ASP .NET? Amazon.com's portal site is Perl/Mason.<br /><br />A lot of companies out there are building big, scalable systems – and I mean scalable in the sense of throughput and transactions, stuff like that, but also scalable in terms of human engineering — in these dynamic languages with no static types. [Using nothing more than good engineering principles.]<br /><br />So... isn't that a demonstration that you don't need the static types to keep those tigers away?<br /><br />And they're like: "Well! But! What if... what if a tiger came?" <em>(laughter)</em> Right? "People need shotguns in their house in case a bear comes through the door, right?" The Simpsons made fun of that. <em>(laughing continues)</em><br /><br />Yeah. So, you know, for a long time, I was like: "Yeah, yeah, yeah. OK. So tigers could come. 
Fine."<br /><br />Scala, now, is the tiger that's going to kill Java. Because their [type-talisman] argument now has become a paradox, similar to the Paul Graham Blub Paradox thing, right? Because they're like, "Well, we need static typing in order to engineer good systems. It's simply not possible otherwise."<br /><br />The Scala people come in and they go: "Your type system <em>suuuuuucks</em>. It's not sound. It's not safe. It's not complete. You're casting your way around it. It doesn't actually prevent this large class of bugs. How many times have you written <code>catch (NullPointerException x) ...</code> in Java? Our type system doesn't allow [things like] that."<br /><br />Our type system does what <em>you</em> said <em>your</em> type system was doing.<br /><br />So, therefore, you should be using it! ∴<br /><br />And the Java people look at it and go: "Wehellll... <em>(cough cough)</em>... I mean, yeah, I mean... <em>(*ahem*)</em>" <em>(running finger under collar, as if sweating profusely)</em> They say, "Welllll... you know... it's awfully... cummmmmbersome... I..."<br /><br />"We can actually get around the problems in practice that you guys say your type system is solving through Good Engineering Practices."<br /><br /><em>(laughter begins to grow)</em><br /><br />HA!!! <em>(I point accusingly at the audience, and there's more laughter)</em><br /><br />Yeah.<br /><br />So Scala is creating a real problem for [Java's] static typing camp now. Because their last little bastion of why they're using it, the whole tigers argument, they're like, "Ah, well... we... we keep shotguns in our house." [This is what they've been reduced to.]<br /><br />OK? Yeeeeahhhh...<br /><br />So back to dynamic languages!<br /><br />But my point was — from a previous slide actually — it's very interesting. See, I wrote this Rails port, and it wasn't... 
I never got it to where it was quite as succinct as Rails, because JavaScript has curly braces and a little bit of extra syntactic cruft. But it was close!<br /><br />And then we used this framework to build this web app internally. It was for external consumption. It's kind of a long story that I won't go into. But we had like 20 engineers on this thing, for close to a year. And we had a huge application. I mean in terms of user functionality: Ajax-enabled pages, server-side persistence stuff... it was a big app.<br /><br />And it was, like 40,000 lines of code, including the templates and the client-side JavaScript. The whole thing! OK? I mean, you add in unit tests, you know, you add in everything, including build files and stuff, and this thing was up to like, maybe 55,000 lines of code.<br /><br /><em>Thousand</em>.<br /><br />I mean, Java programmers would be saying, "We haven't hit 55 million yet. <em>(Looking at feet)</em> But, well... we're gonna." <em>(laughter)</em><br /><br />And it's like, I tell 'em that <em>(shaking head)</em>, I tell 'em that, and they're like: "Well." <em>(avoiding eye contact)</em><br /><br />That's what they say. "Well."<br /><br />And that's, you know, that's pretty much it. <em>(laughter)</em><br /><br /><b>Behind the Rhino</b><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjEfR95cY_Dn7WymhrJg2D0PF7lJZAr4Teb7ogQ-XIvWoFDqJCHOGZkJfsFL98ktShnkvvo61MgzdMmHBjVEmetZe4106JphFCLyDvuCpz1qBInVnQ3yAqAiBERvGZqpMfhHSx9ZA/s1600-h/rhino.010.jpg"><img style="cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjEfR95cY_Dn7WymhrJg2D0PF7lJZAr4Teb7ogQ-XIvWoFDqJCHOGZkJfsFL98ktShnkvvo61MgzdMmHBjVEmetZe4106JphFCLyDvuCpz1qBInVnQ3yAqAiBERvGZqpMfhHSx9ZA/s320/rhino.010.jpg" alt="" id="BLOGGER_PHOTO_ID_5211879514625212994" border="1" /></a><br /><br />So unfortunately we have thirteen minutes left. I'm sorry. 
So let's really quickly go through some of the really cool things about Rhino, the technology here.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiNFVZJWXDjQMGkfarsOop8gDOO6iHl1k8kPAWM9gHY6rcWl7Vvqb9Ods8CVfIfhNlD77JaYWlVSxEb9DbzpwVgSR5yRGmkraQ_u8lvyTxQm_FLNc7z-mzsP4e2WC-pksQO0YBG7A/s1600-h/rhino.012.jpg"><img style="cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiNFVZJWXDjQMGkfarsOop8gDOO6iHl1k8kPAWM9gHY6rcWl7Vvqb9Ods8CVfIfhNlD77JaYWlVSxEb9DbzpwVgSR5yRGmkraQ_u8lvyTxQm_FLNc7z-mzsP4e2WC-pksQO0YBG7A/s320/rhino.012.jpg" alt="" id="BLOGGER_PHOTO_ID_5211879697459442178" border="1" /></a><br /><br />You can call JavaScript from Java, and Java from JavaScript. Guess which one's easier?<br /><br />Obviously calling Java from JavaScript is easier, because Java's really cumbersome. It doesn't have anything to help you, so you have to do basically what I was talking about with Swing earlier. <code>JavaScriptObject j = new JavaScriptObject()</code> You know. <code>JavaScriptObject, Context.enter!</code> You've got all this <em>stuff</em> on the Java side. 'cuz it's Java.<br /><br />But, uh... but it works! And you can do both directions. Here's an example of a Java program to bootstrap... actually I believe this is completely standalone; it works out of the box. 
It's a Rhino Demo:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgdgHKXjwBMYKlWtkRMDs8m3RAIE3t5TGcj8QMjtTbVHk_Xl-7RGiQi7hkKpT0Rrs_Gksr16KG41ub878MofhTJgWRzbb4emguSmM75SowddET2qF4uxZ7P8UenMbhx9H46XUVOGw/s1600-h/rhino.013.jpg"><img style="cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgdgHKXjwBMYKlWtkRMDs8m3RAIE3t5TGcj8QMjtTbVHk_Xl-7RGiQi7hkKpT0Rrs_Gksr16KG41ub878MofhTJgWRzbb4emguSmM75SowddET2qF4uxZ7P8UenMbhx9H46XUVOGw/s320/rhino.013.jpg" alt="" id="BLOGGER_PHOTO_ID_5211879699636378930" border="1" /></a><br /><br />This is what you need to do to create a JavaScript object called <code>foo</code> that has a function called <code>a</code>. A property called "<code>a</code>", sorry, whose value is "hello".<br /><br />So what you do is you call <code>Context.initStandardObjects()</code>, which sets up the JavaScript runtime. You only have to do it once. And then you call <code>newObject</code> to create a new JavaScript object. And then you call <code>evaluateString</code> to evaluate it in the context of this object.<br /><br />It's one example of how you do it, but it's not too hard. You can call back and forth.<br /><br />So that means that anything that was written in JavaScript that you feel, oh Gosh this really needs to be componentized, you need to stick it down in a Java library for whatever reason: you can do it! You can migrate code back and forth between the JavaScript layer and the Java layer. 
This is true for all JVM languages, I think.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjjALxlU3klHlU4PhG_oeKhoCsixnoDQYVnpPXa7ZPwvJzAJfwpklUhyphenhyphenkKTTd0bIj2l4BpYowgHqXZ9LHbS5PE0F_i-xf8RPwsuby8TRq3DzcMzbJ_4fRTjkMNX965AshU4ea8QLA/s1600-h/rhino.014.jpg"><img style="cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjjALxlU3klHlU4PhG_oeKhoCsixnoDQYVnpPXa7ZPwvJzAJfwpklUhyphenhyphenkKTTd0bIj2l4BpYowgHqXZ9LHbS5PE0F_i-xf8RPwsuby8TRq3DzcMzbJ_4fRTjkMNX965AshU4ea8QLA/s320/rhino.014.jpg" alt="" id="BLOGGER_PHOTO_ID_5211879704972668914" border="1" /></a><br /><br />Uh... this is the actual code that I was referring to earlier, where you can define non-enumerable properties. I called it <code>defineBuiltin</code>. There's some closure stuff going on here... I don't want to bore you guys. <em>(Editor's note: <code>Function.bind()</code> based on Douglas Crockford's original)</em> <br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhhvZW4TmiHApB7EcgBBJ2XCom5cvQcLKZ0hTvKEtb7aLSCsOO9waukWDgmaDZkvz2272w23rkxPcD5H_1m7tAGE0cpx1ttDyaMBnVCQeY0WSbzmRvjbdPd6Th_ATrULH7s_6AbhA/s1600-h/rhino.015.jpg"><img style="cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhhvZW4TmiHApB7EcgBBJ2XCom5cvQcLKZ0hTvKEtb7aLSCsOO9waukWDgmaDZkvz2272w23rkxPcD5H_1m7tAGE0cpx1ttDyaMBnVCQeY0WSbzmRvjbdPd6Th_ATrULH7s_6AbhA/s320/rhino.015.jpg" alt="" id="BLOGGER_PHOTO_ID_5211879711262561458" border="1" /></a><br /><br />Runtime delegation: this is one of the reasons unit testing is really easy. You guys know about Smalltalk, uh, method-missing? [<code>doesNotUnderstand</code> actually] It's <code>method_missing</code> in Ruby. It's the... 
if you call an object — I think Objective C <a href="http://en.wikipedia.org/wiki/Objective-c#Forwarding">probably has something like this too</a>.<br /><br />You call an object, and you say: "I'm calling <code>foo</code>", and it says: "I don't have a <code>foo</code>". Right?<br /><br />Normally, what happens when you do this? In Java it goes *BARF*. As it should, probably... <em>unless</em> what you really wanted to do was delegate to some other object. Right? "Design Patterns". Say you want to write a Bridge that says: "Oh! You're calling <code>foo</code>, but you don't want to call it on me. You want to go to the game server, call it there, marshal it, send it back. We'll pretend it's a remote method call."<br /><br />Right? There's a lot of stuff you've got to go through in Java to do stuff like this. In JavaScript — as you all know, if you're using dynamic languages...<br /><br />Man, we've got a huge pool of people in the back. It's getting pretty rough. But we're almost out of time! Fortunately. Heh.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhC4orJf38M1jbDlbacB1KODbhIjaCLUFmmbe4CaYFjYZMOs8vz_nhmMtKMZEzsz61xTSZL-2_4NjndEdkMTKIuv91Sis1ezCo2ST-6c1KvezdDsY_sYzNflxGICSLf2jTMhHIjCw/s1600-h/rhino.016.jpg"><img style="cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhC4orJf38M1jbDlbacB1KODbhIjaCLUFmmbe4CaYFjYZMOs8vz_nhmMtKMZEzsz61xTSZL-2_4NjndEdkMTKIuv91Sis1ezCo2ST-6c1KvezdDsY_sYzNflxGICSLf2jTMhHIjCw/s320/rhino.016.jpg" alt="" id="BLOGGER_PHOTO_ID_5211879712956430354" border="1" /></a><br /><br />OK, so let me tell you a little bit about embedded XML. It's kind of interesting, kinda neat. This is supported in Firefox, in some browsers. It's a spec that Adobe and some other people, <a href="http://en.wikipedia.org/wiki/E4X">BEA, put together</a>.<br /><br />And it's cool! 
Because you can say stuff like<br /><br /> <code>var company = &lt;big xml thing&gt;</code><br /><br />Now of course there's this weird, big religious debate going on, between JSON advocates and XML advocates. It's weird! They're, like, locking horns.<br /><br />When I was a kid — when I was a kid, jeez... When I was <em>twenty</em>, it feels like when I was a kid — I used to have tropical fish. And my brothers and I noticed two things about tropical fish.<br /><br />One is that they die, because we're not in the tropics. <em>(some laughter)</em> Sad.<br /><br />And the other is that if you put a bunch of different species of tropical fish in a tank together, they ignore each other... except for the ones that are the same [or nearly the same] species. They bite each other. That's what they do. Fish bite each other. They have a pecking order, right?<br /><br />JSON and XML are muscling in on each other's space, and there are bristles, OK, and it's so silly! It's silly. The whole thing, right? I mean, XML is better if you have more text and fewer tags. And JSON is better if you have more tags and less text. Argh! I mean, come on, it's that easy. But you know, there's a big debate about it.<br /><br />Nevertheless, sometimes XML is appropriate [in JavaScript], especially if you're loading it from a file or whatever. These literals are interesting. And so it provides new operators and new syntax for actually going in and... it's kind of like XPath. Except it's JavaScript-y.<br /><br />And I tell you: it is the worst documented thing on the planet! It's horrible, man, working with E4X initially. But... eventually you can figure it out. And I have a document that hopefully I'll be ready to release pretty soon, that actually covers it pretty well. And Adobe has some good documentation for it.<br /><br />And then eventually it clicks, like learning any other language. This is a minilanguage. And you go: "Ha, I get it! I get it. It's not as crazy and dumb as I thought. 
It actually works."<br /><br />It's kind of a neat feature. You guys know other languages that embed XML? <a href="http://www.ibm.com/developerworks/library/x-scalaxml/">Scala does</a>. I don't see all of you using that, but C# 3, I think, does XML? Coming [soon]? <em>(Editor's note: it's apparently been deprioritized by the C# team, although VB 9 has them.)</em><br /><br />Anyway, it's kind of an interesting approach.<br /><br /><b>Inside the Rhino</b><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiRSW4dRkATC6OsNqEc6za0w7TfrjUeNip8VPcNHZvN9NR6HfjpJtyH2yRJCGmS45tk4AZnUPxijPlSitkXvIawV41S6kdYK_bC5tJAhvTxXtP9wp-wuKIwZgPk_MXlV19WqYj0ng/s1600-h/rhino.017.jpg"><img style="cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiRSW4dRkATC6OsNqEc6za0w7TfrjUeNip8VPcNHZvN9NR6HfjpJtyH2yRJCGmS45tk4AZnUPxijPlSitkXvIawV41S6kdYK_bC5tJAhvTxXtP9wp-wuKIwZgPk_MXlV19WqYj0ng/s320/rhino.017.jpg" alt="" id="BLOGGER_PHOTO_ID_5211880015740624434" border="1" /></a><br /><br />All right. So this is Rhino. Now you know. After I explain this diagram, you'll know what you need to know about Rhino to talk to other people about it.<br /><br />You start with some JavaScript code. It goes to a parser. That turns it into a tree. A syntax tree.<br /><br />Rhino's parser today, currently, immediately begins the next step of rewriting it as code generation. Right away, as it's parsing. Now this is a problem. Because if it takes an if-statement or a switch-statement or a for-loop, and it generates sort of assembly-language like jumps to targets? And generates labels, you know, converts it into sort of three-address code, that's eventually going to actually become three-address code: assembly or bytecode.<br /><br />Then it kind of sucks if you're trying to use the parse tree for your IDE. To, like, syntax highlight, or pretty-print, or show errors and warnings, or whatever. 
Unfortunately a lot of languages — most languages — do this because they're written by compiler guys and compiler gals. And they don't see the value. But unfortunately we're all doing more and more processing of code. Language tools, right? Frontend stuff.<br /><br />So I rewrote Rhino's parser recently, and I'm currently fixing the code generator. And I'm gonna get it out into the Rhino mainstream in a couple of weeks here. Because my project at Google is doing a lot of code processing. And it's a faithful representation. So if your big beef about Rhino is that there's no Visitor over the AST: I'm fixing that.<br /><br />And then there are two paths here: you see one on the left that goes code generator to bytecode. And there's the bytecode, or pseudo-bytecode, for the JavaScript code up there. And then it goes to an interpreter. The interpreter is a big loop. Bytecode is this [roughly] postorder traversal of the tree, that flattens it in a way that allows you to push onto a stack to evaluate the operands.<br /><br />It's all actually very important; you should go <a href="http://en.wikipedia.org/wiki/Interpreter_%28computing%29">read up on how it works</a> if you're not familiar with it, or if you've forgotten since you first learned it.<br /><br />And the interpreter is actually pretty fast, because it's a loop. There's not a lot of calling out. I mean, there are some calls out into the runtime, but mostly it's this loop: push, pop, push, pop. So the JIT picks it up and can optimize it pretty well.<br /><br />The reason that there's two code paths here, the reason that they wrote the interpreter — they originally had just a classfile compiler — was that compiling to a classfile is this batch/static operation, where you want it [the resulting bytecode] to be fast. You want to do the standard, classic compiler optimizations. 
You want to generate the <a href="http://en.wikipedia.org/wiki/Control_flow_graph">control-flow graph</a>, you want to eliminate dead code, you want to do good register allocation.<br /><br />In JavaScript's case, it's often possible not to generate a closure. You can actually use Java instance variables and Java local variables instead of these heavier-weight JavaScript [activation objects]. Because at the logical level, JavaScript doesn't really even have a stack. It has object allocations on the heap; those are your Function invocations. Sloooow. Right? Because [in comparison] the Java stack translates to the C stack.<br /><br />So the fact that the compiler can go through and optimize away a lot of this JavaScript dynamic stuff that's provable you're not gonna need, well, that's nice! But it takes time. The interpreter is a path that allows you to dynamically develop: load code in and see how it's gonna work right now, at the unfortunate expense of the Rhino people having to maintain these two code paths. But, you know, there's a lot shared between them.<br /><br />And that's it! The script runtime implements all the JavaScript, you know, <a href="http://www.ecma-international.org/publications/standards/Ecma-262.htm">Ecma spec stuff</a>: <code>Array</code>, <code>String</code>, <code>Boolean</code>, <code>Date</code>, the <code>Math</code> functions. And a lot of it just delegates down to Java where possible.<br /><br />Pretty clean! Pretty standard. It's a pretty standard compiler, interpreter and runtime. You're gonna see this if you dig into your favorite JVM language. You'll probably see something similar to this. This is actually more mature than a lot of them. A lot of them start off by interpreting the parse tree, and it's slow from the method calls.<br /><br />So this is why Rhino's fast. Now it could be a lot faster, and we're working on it. 
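<br /><br />That push/pop dispatch loop can be sketched in a few lines of JavaScript (a toy of my own; the opcodes and encoding here are made up, not Rhino's actual bytecode format):<br /><br />

```javascript
// A toy stack-machine interpreter: the whole thing is one dispatch loop,
// which is part of why a JIT can pick it up and optimize it well.
// Bytecode here is just an array of [op, arg?] pairs.
function run(bytecode) {
  const stack = [];
  for (const [op, arg] of bytecode) {
    switch (op) {
      case "push": stack.push(arg); break;
      case "add":  stack.push(stack.pop() + stack.pop()); break;
      case "mul":  stack.push(stack.pop() * stack.pop()); break;
      default: throw new Error("unknown op: " + op);
    }
  }
  return stack.pop();
}

// (1 + 2) * 4, flattened by a postorder walk of the expression tree:
const program = [["push", 1], ["push", 2], ["add"], ["push", 4], ["mul"]];

// run(program) => 12
```

Note how the postorder flattening means each operator finds its operands already sitting on the stack.<br /><br />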
Hopefully, you know, these things can be as fast as Java, in the same way that Java made the claim that it can be "as fast as C++". And for long-running applications, that's usually true. Especially with parallelism, right? Threads. And especially if the JIT has a chance to look at the actual code paths being used and compile them down into machine code <em>specific</em> for that code path, as a fall-through.<br /><br />Obviously for benchmarks, where they fire something up and run a loop a thousand times, or whatever? C++ is faster because the JIT hasn't had any time to kick in and evaluate what's going on. But for long-running services — which is what we're all writing, yeah? At this Ajax conference — the JIT will kick in. And your Rhino code now will get very close to where Java's performance is. <em>(Provided you're not doing number-crunching - Ed.)</em> So don't worry about that so much.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEijy3-I4PDQmk5TpG9dAOXaJferkup1UHxTZ-KSAFuj5fQQkmZ7hZud6vLrn1Ia8wnuBAEnvgM_8OCsg35-_sVc1gYD8fXIHZbmzw8TR0KPo9WsGUBrN5pSDEaaW6g1mMoEc_wQfg/s1600-h/rhino.018.jpg"><img style="cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEijy3-I4PDQmk5TpG9dAOXaJferkup1UHxTZ-KSAFuj5fQQkmZ7hZud6vLrn1Ia8wnuBAEnvgM_8OCsg35-_sVc1gYD8fXIHZbmzw8TR0KPo9WsGUBrN5pSDEaaW6g1mMoEc_wQfg/s320/rhino.018.jpg" alt="" id="BLOGGER_PHOTO_ID_5211880018737921026" border="1" /></a><br /><br />So RnR, I already talked about it. It doesn't have a database backend yet, because we're using Google's internal store, like Bigtables. Which is why I haven't open-sourced the thing yet.<br /><br />It's weird: somebody told me the other day, they sent me mail and said: "I think you're today's Paul Graham". And he meant this in the most negative connotation possible. "You're today's Paul Graham, and RnR is the next Arc."<br /><br />I was like, "What!?" 
And he said, well, he's a server-side JavaScript guy. I mean, there aren't that many, right? Most of us are thinking client-side. But he's a server-side JavaScript guy. And he goes to people and says, why aren't you using server-side JavaScript? And they say: "We're waiting for Steve Yegge to release RnR."<br /><br />And I'm... this is news to me! I'm working on... stuff, you know. Work. And this [open-sourcing RnR] is part-time and everything.<br /><br />This year, now that we know people are interested in it, we will release it. <em>(At least we'll try for this year - Ed.)</em><br /><br />It's just a little weird, right? Because Sun hired the JRuby guys, and they're doing <a type="amzn" asin="1590598814">JRuby on Rails</a>, and it's eventually going to be part of the Java Development Kit. It's gonna be, you know: it's Sun's lightweight answer to EJB and all those giant frameworks. You want to build something quickly and use the Rails model, well, run Rails on the JVM!<br /><br />So I thought: if JRuby on Rails had been (a) ready when we started using it [i.e. writing RnR], and (b) Google would let me use Ruby, then I would have used that! So RnR was like a transitional thing.<br /><br />But... again, you know, I think that there are other people in situations where you really prefer to use JavaScript. So yeah, I guess I'll... open-source it. 
We're working on it.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiuXb6HdyceCiEMZ8KVxUm_StpjpN7vbmpukqu_SD1UoCFu_YVKeakGaMUw1DV3xTOj2-IZFOevymDKFr_ZLiYEpCORVNIi48rbFgsffTdnmMJ52Hqqs1eibQG6okKtA_-3pqbCXw/s1600-h/rhino.019.jpg"><img style="cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiuXb6HdyceCiEMZ8KVxUm_StpjpN7vbmpukqu_SD1UoCFu_YVKeakGaMUw1DV3xTOj2-IZFOevymDKFr_ZLiYEpCORVNIi48rbFgsffTdnmMJ52Hqqs1eibQG6okKtA_-3pqbCXw/s320/rhino.019.jpg" alt="" id="BLOGGER_PHOTO_ID_5211880025296089602" border="1" /></a><br /><br />This is the last slide, by the way; I know you guys are tired. We're doing a lot of work on it. I'm working on it personally, I mean working on Rhino. Because I think it's all right. I'm used to JavaScript now, and I think it's a good implementation. It's a good compiler, so it's good practice for me, to learn how compilers work.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh-OxCeoFc-eBq79JrFXgV2EdN3lKf1USM4Ormu6VNkcghK1Z8QJ_g3iFOL3g3E2TtNRZvZslxXHy2G9k55oVu_wXIGVhMEG5WZtqAKknLF1emdgpx0VokC2ZnYeCgEq3BO0SCL0A/s1600-h/rhino.008.jpg"><img style="cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh-OxCeoFc-eBq79JrFXgV2EdN3lKf1USM4Ormu6VNkcghK1Z8QJ_g3iFOL3g3E2TtNRZvZslxXHy2G9k55oVu_wXIGVhMEG5WZtqAKknLF1emdgpx0VokC2ZnYeCgEq3BO0SCL0A/s320/rhino.008.jpg" alt="" id="BLOGGER_PHOTO_ID_5211879494303248066" border="1" /></a><br /><br />We've got a debugger, but we're making the debugger better. We've got a sandboxing model, but that could definitely be wrapped up and made available to you folks.<br /><br />We'd like to open-source our JSCompiler: the thing that compresses JavaScript for Google Maps and GMail and stuff. I know there are some open-source ones out there. 
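<br /><br />The most basic layer of that kind of compression is easy to sketch (a deliberately naive toy of mine: it doesn't even tokenize, so it would mangle a string literal containing <code>//</code>, and real compressors also rename locals, inline, and strip dead code):<br /><br />

```javascript
// A deliberately naive JavaScript "compressor": strips line comments
// and collapses whitespace. A real tool must tokenize properly so it
// doesn't break string literals or regex literals.
function minify(source) {
  return source
    .split("\n")
    .map(line => line.replace(/\/\/.*$/, ""))  // drop // comments
    .map(line => line.trim())
    .filter(line => line.length > 0)           // drop now-empty lines
    .join(" ")
    .replace(/\s+/g, " ");                     // collapse whitespace runs
}

const original = [
  "// compute a sum",
  "function add(a, b) {",
  "  return a + b;   // the easy part",
  "}",
].join("\n");

// minify(original) => "function add(a, b) { return a + b; }"
```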
We don't think that it's competitively in our best interest to keep the thing internal. It'd be better to get it out there so you all benefit from it, and so you can all hack on it, right? We're working on open-sourcing our JSCompiler and other stuff.<br /><br />So that's it! I wanted to cover Rhino, but I also wanted to leave time for questions. And I've left you <em>(looking at big LED clock in the back)</em> one minute and sixteen seconds for questions. Sorry about that.<br /><br />So really quickly, if there are any burning questions, I'll repeat the question and try to answer it. Otherwise feel free to come up afterwards and chat.<br /><br /><b>Q&A</b><br /><br /><b>Q: Why won't Google let you use Ruby? </b><br /><br />Yeah, that's a good question. Um... uh... I kinda <a href="http://steve-yegge.blogspot.com/2007/06/rhino-on-rails.html">wrote that up in a blog</a>. Isn't that stupid? "Read my blog!"<br /><br />The short answer is: it imposes a tax on the systems people, which means that it's not completely self-contained within the team that's using it. And for that reason, primarily, it's really not a good idea right now.<br /><br />Any other burning questions?<br /><br /><b>Q: Do threads suck?</b><br /><br />Well, you know... they're... you know... Yeah. But I mean, what other options do you have? I mean, you have multiprocessing/share-nothing, which is heavyweight and it requires more work.<br /><br />So I use threads. I'd prefer something better, but they're what we've got today.<br /><br /><b>Q: There are some guys that I work with, and one of their comments on JavaScript lately, since I've been wanting to use Rhino because I love JavaScript... what they brought up is that JavaScript is becoming a lot like Python, and that may or may not be such a great thing. I wanted to know what you have to say about that.</b><br /><br />Ah. OK. 
Well, yeah, it already has borrowed some stuff from Python in <a href="http://developer.mozilla.org/en/docs/New_in_JavaScript_1.7">JavaScript 1.7</a>: <code>yield</code>, Array comprehensions, destructuring assignment. And these are good features. They're good. They're not going to change it to be syntactically whitespace sensitive, right?<br /><br />I don't know. The guys working on it have really taken off in completely different directions from Python. They're looking at an optional static type system, so in that sense it's maybe more like Groovy. And they're looking at maybe fixing some of the bugs.<br /><br />But I don't know how that's going to evolve yet. Because there's obviously a lot of people who have skin in the game, a lot of people interested in affecting the way the spec evolves. So it's all kind of up in the air right now.<br /><br />All right, so we're really out of time. I'd like to thank you for coming. And please come up afterwards. Thanks!<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjCFqLHUMTt2HHxfZzKVFEZBHuvYUcH5RSEzDvdZ5r683c-HIwft_0v5sPWtoP_hPEukigkoMbrDS84utLytlSri_hxFP6MQpa3uyc2yJevsvm1ZaPvVL6fRPh8tf54esZaA3yZvw/s1600-h/rhino.020.jpg"><img style="cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjCFqLHUMTt2HHxfZzKVFEZBHuvYUcH5RSEzDvdZ5r683c-HIwft_0v5sPWtoP_hPEukigkoMbrDS84utLytlSri_hxFP6MQpa3uyc2yJevsvm1ZaPvVL6fRPh8tf54esZaA3yZvw/s320/rhino.020.jpg" alt="" id="BLOGGER_PHOTO_ID_5211880022125479698" border="1" /></a>Steve Yeggehttp://www.blogger.com/profile/14812997485690838920noreply@blogger.com94tag:blogger.com,1999:blog-13674163.post-8921052203825260612008-05-11T21:11:00.000-07:002008-12-08T21:38:56.945-08:00Dynamic Languages Strike BackSome guys at Stanford invited me to <a href="http://www.stanford.edu/class/ee380/">speak at their EE Computer Systems Colloquium</a> last week. Pretty cool, eh? 
It was quite an honor. I wound up giving a talk on dynamic languages: the tools, the performance, the history, the religion, everything. It was a lot of fun, and it went over surprisingly well, all things considered.<br /><br />They've uploaded the video of my talk, but since it's a full hour, I figured I'd transcribe it for those of you who want to just skim it.<br /><br />This is the first time I've transcribed a talk. It's tricky to decide how faithful to be to my spoken wording and phrasing. I've opted to try to make it very faithful, with only minor smoothing.<br /><br />Unfortunately I wound up using continuation-passing style for many of my arguments: I'd occasionally get started on some train of thought, get sidetracked, and return to it two or three times in the talk before I finally completed it. However, I've left my rambling as-is, modulo a few editor's notes, additions and corrections in [brackets].<br /><br />I didn't transcribe Andy's introduction, as it seems immodest to do so. It was funny, though.<br /><br />Technical corrections are welcome. I'm sure I misspoke, oversimplified, over-generalized and even got a few things flat-out wrong. I think the overall message will survive any technical errors on my part.<br /><br /><b>The talk...</b><br /><br />Thank you everybody! 
So the sound guys told me that because of a sound glitch in the recording, my normally deep and manly voice, that you can all hear, is going to come through the recording as this sort of whiny, high-pitched geek, but I assure you that's not what I actually sound like.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjqXspq8ayYB-lyCud0shZpDLNoOhe4PE3BvJqgLD9Y1GOHx5Btlirx42KSc0ErtU6zYSOvFAdXo0O2MU7RyTAT-B5AdsbcS88OSUik0n2WaRMd8WtFdiwRsoANXhVFMajlrTJi5Q/s1600-h/thumbnail000.jpg"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjqXspq8ayYB-lyCud0shZpDLNoOhe4PE3BvJqgLD9Y1GOHx5Btlirx42KSc0ErtU6zYSOvFAdXo0O2MU7RyTAT-B5AdsbcS88OSUik0n2WaRMd8WtFdiwRsoANXhVFMajlrTJi5Q/s320/thumbnail000.jpg" alt="" id="BLOGGER_PHOTO_ID_5199365666677324066" border="0" /></a><br /><br />So I'm going to be talking about dynamic languages. I assume that you're all dynamic language interest... that you've got an interest, because there's a dude down the hall talking about Scala, which is you know, this very strongly typed JVM language (a bunch of you get up and walk over there – exactly.) So you know, presumably all the people who are like really fanatical about strong typing, who would potentially maybe get a little offended about some of the sort of colorful comments I might inadvertently make during this talk — which, by the way, are my own opinions and not Google's — well, we'll assume they're all over there.<br /><br />All right. I assume you all looked through the slides already, so I don't need to spend a whole lot of time with them. I'll go into major rant-mode here at the end. My goal is... for you guys to come away with, sort of a couple of new pictures in your mind, thinking about how languages have evolved over the last 20 years, where they're going, what we can do to fix them, that kind of thing.<br /><br />Does anyone here know how to use a Mac? 
It's showing me this weird, uh... thing... OK. All right. Here goes.<br /><br />So!<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiZoc0uP0SnLsLP6zJhlc1-IUV2-MfRZkCQSZrI9L1GIhn0u_9RUNkyq1iulH6W1ZzELGqcst-2lDyBzuFrcgtpoX6iHOsAeALkTouQP2gbEvxsqKX99MPwH11lApKY3fufRaQRFw/s1600-h/thumbnail001.jpg"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiZoc0uP0SnLsLP6zJhlc1-IUV2-MfRZkCQSZrI9L1GIhn0u_9RUNkyq1iulH6W1ZzELGqcst-2lDyBzuFrcgtpoX6iHOsAeALkTouQP2gbEvxsqKX99MPwH11lApKY3fufRaQRFw/s320/thumbnail001.jpg" alt="" id="BLOGGER_PHOTO_ID_5199365670972291378" border="0" /></a><br /><br />Popular opinion of dynamic languages: slooooow! They're always talking about how Python is really slow, right? Python is, what, like 10x to 100x slower? And they have bad tools.<br /><br />And also there's this sort of, kind of difficult-to-refute one, that says at millions of lines of code, they're maintenance nightmares, right? Because they don't have static types. That one, uh, unfortunately we're not going to be able to talk much about, because not many people have millions-of-lines code bases for us to look at — because dynamic languages wind up with small code bases. 
But I'll talk a little bit about it.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhumBbmGYO03_81kypmHkt6BWLFKP1Z3mgHNplqnu7Ne9b7cFE_0J2fdxPSlcNbscLeN4NInpAEPGrLBPI81-wLU52nkpQ531-e5nKZwTNXFtEG8h0fJyBzt-IW75xjeWiQMmd6NQ/s1600-h/thumbnail002.jpg"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhumBbmGYO03_81kypmHkt6BWLFKP1Z3mgHNplqnu7Ne9b7cFE_0J2fdxPSlcNbscLeN4NInpAEPGrLBPI81-wLU52nkpQ531-e5nKZwTNXFtEG8h0fJyBzt-IW75xjeWiQMmd6NQ/s320/thumbnail002.jpg" alt="" id="BLOGGER_PHOTO_ID_5199365670972291394" border="0" /></a><br /><br />So first of all, one of my compatriots here, who's an actual smart person, like probably everybody in this room, you're all waaay smarter than me — I got invited here for the booger jokes, all right? – he's a languages guy, and he said: "You know, you can't talk about dynamic languages without precisely defining what you mean."<br /><br />So I'm going to <em>precisely</em> define it. Dynamic languages are, by definition... <a type="amzn" asin="0596000278">Perl</a>, <a type="amzn" asin="0596007973">Python</a>, <a type="amzn" asin="0596516177">Ruby</a>, <a type="amzn" asin="0596101996">JavaScript</a>, <a type="amzn" asin="8590379825">Lua</a>, <a type="amzn" asin="020163337X">Tcl</a>... all right? [<em>(laughter)</em>] It's the working set of languages that people dismiss today as "dynamic languages." 
I'll also include <a type="amzn" asin="0201136880">Smalltalk</a>, <a type="amzn" asin="0133708756">Lisp</a>, <a href="http://en.wikipedia.org/wiki/Self_%28programming_language%29">Self</a>, <a type="amzn" asin="0262193388">Prolog</a>, some of our stars, you know, from the 70s and 80s that, uh, well they'll come up here today too.<br /><br />I'm deliberately not going down the path of "well, some static languages have dynamic features, and some dynamic languages have static types", because first of all it's this neverending pit of, you know, argument, and second of all, as you're going to see, it's completely irrelevant to my talk. The two... sort of qualities that people associate with "dynamic": one would be sort of... runtime features, starting with eval, and the other would be the lack of type tags, the lack of required type tags, or even just escapes in your type system. These things work together to produce the tools problems and the performance problems, ok? And I'll talk about them, and how they're going to be fixed.<br /><br />All right!<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjciDVXfZEz8NYhfNMGKSCh3iUzxIZP8IiHpRs05mzbhyphenhyphenLNX48S_TU6HmS2b7TMuYrcWeqeVlVL8Hn7bosHXG5ff3Vv6R036AFS4fHMwgIqMM_pIcelX8cGpC9g_gmJZSka69Phcw/s1600-h/thumbnail003.jpg"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjciDVXfZEz8NYhfNMGKSCh3iUzxIZP8IiHpRs05mzbhyphenhyphenLNX48S_TU6HmS2b7TMuYrcWeqeVlVL8Hn7bosHXG5ff3Vv6R036AFS4fHMwgIqMM_pIcelX8cGpC9g_gmJZSka69Phcw/s320/thumbnail003.jpg" alt="" id="BLOGGER_PHOTO_ID_5199365675267258706" border="0" /></a><br /><br />I just talked about that [slide].<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" 
href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhvcuVFfttQbX9zp3rvb1O3cKTWaBlnG-bXVg3aXQi-aYZJTcViFW5MY6ra_h6x8AcbRngNw_L-jQ0bXUI0n_j22bKX8cKavSF2Rq2Oa0yLS9Ve7sCDP3SympWUZxfSqE7dF2Wv3g/s1600-h/thumbnail004.jpg"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhvcuVFfttQbX9zp3rvb1O3cKTWaBlnG-bXVg3aXQi-aYZJTcViFW5MY6ra_h6x8AcbRngNw_L-jQ0bXUI0n_j22bKX8cKavSF2Rq2Oa0yLS9Ve7sCDP3SympWUZxfSqE7dF2Wv3g/s320/thumbnail004.jpg" alt="" id="BLOGGER_PHOTO_ID_5199365679562226018" border="0" /></a><br /><br />So! Uh... yeah, that's right, I'm at Stanford! Forgot about that. So I've been interviewing for about 20 years, at a whole bunch of companies, and yeah, Stan– every school has this sort of profile, right? You know, the candidates come out with these ideals that their profs have instilled in them. And Stanford has a really interesting one, by and large: that their undergrads and their grad students come out, and they believe that C and C++ are the fabric with which God wove the Universe. OK? And they truly [think]: what is it with all these other languages?<br /><br />Whereas like MIT and Berkeley, they come out, and they're like "languages, languages, languages!" and you're like, uh, dude, you actually have to <em>use</em> C and C++, and they're like "oh." So it's funny, the kinds of profiles that come out. But this one [first slide bullet point], I mean, it's kind of a funny thing to say, because the guy's a Ph.D., and he's just discovered Turing's thesis. Of <em>course</em> all you need is C or C++. All you need is a Turing machine, right? You know?<br /><br />What we're talking about here is fundamentally a very personal, a very political, sort of a, it's almost a fashion statement about who you are, what kind of language you pick. So, you know... unfortunately we could talk, I mean I've got 2 hours of ranting in me about this topic, but I'm gonna have to, like, kinda like narrow it down to... 
we're gonna talk about dynamic languages because people are out there today using them. They're getting stuff done, and it works. All right? And they really do have performance and tools issues.<br /><br />But they're getting resolved in really interesting ways. And I'm hoping that those of you who are either going out into the industry to start making big things happen, OR, you're researchers, who are going to be publishing the next decade's worth of papers on programming language design, will take some interesting directional lessons out of this talk. We'll see.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi51C2rptG3uMUtcXKovt0jqILMfjj6Pw6kaHWn4XdFUpwnm6u2s92Ptv1ndQbYOyp83quq08-slwKmthcANf8jtN3QYgxnngTkdhwjW1luDJlEHt-fv-P2BR-yTrjhDYBs-xOVaA/s1600-h/thumbnail005.jpg"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi51C2rptG3uMUtcXKovt0jqILMfjj6Pw6kaHWn4XdFUpwnm6u2s92Ptv1ndQbYOyp83quq08-slwKmthcANf8jtN3QYgxnngTkdhwjW1luDJlEHt-fv-P2BR-yTrjhDYBs-xOVaA/s320/thumbnail005.jpg" alt="" id="BLOGGER_PHOTO_ID_5199366199253268850" border="0" /></a><br /><br />All right. So why are dynamic languages slow? Uh, we all know they're slow because... they're dynamic! Because, ah, the dynamic features defeat the compiler. Compilers are this really well understood, you know, really really thoroughly researched... everybody knows THIS [<em>brandish the Dragon Book</em>], right?<br /><br />Compilers! The <a type="amzn" asin="0321486811">Dragon Book</a>! From your school! OK? It's a great book. Although interestingly, heh, it's funny: if you implement everything in this book, what you wind up with is a really naïve compiler. It's really advanced a long way since... 
[the book was written] and they know that.<br /><br />Dynamic languages are slow because all the tricks that compilers can do to try to guess how to generate efficient machine code get completely thrown out the window. Here's one example. <a type="amzn" asin="0131103628">C</a> is really fast, because among other things, the compiler can inline function calls. It's gonna use some heuristics, so it doesn't get too much code bloat, but if it sees a function call, it can inline it: it patches it right in, ok, because it knows the address at link time.<br /><br /><a type="amzn" asin="0321334876">C++</a> — you've got your virtual method dispatch, which is what C++ you know, sort of evangelists, that's the first thing they go after, like in an interview, "tell me how a virtual method table works!" Right? Out of all the features in C++, they care a lot about that one, because it's the one they have to pay for at run time, and it drives them nuts! It drives them nuts because the compiler doesn't know, at run time, the receiver's type.<br /><br />If you call <code>foo.bar()</code>, <code>foo</code> could be some class that C++ knows about, or it could be some class that got loaded in afterwards. And so it winds up — this polymorphism winds up meaning the compiler can compile both the caller and the callee, but it can't compile them together. So you get all the overhead of a function call. Plus, you know, the method lookup. Which is more than just the instructions involved. You're also blowing your instruction cache, and you're messing with all these, potentially, code optimizations that could be happening if it were one basic-block fall-through.<br /><br />All right. Please – feel free to stop me or ask questions if I say something that's unclear. I know, just looking around the room, that most of you probably know this stuff better than I do.<br /><br />So! The last [bullet point] is really interesting. 
Because nobody has tried, for this latest crop of languages, to optimize them. They're <em>scripting languages</em>, right? They were actually designed to either script some host environment like a browser, or to script Unix. I mean the goal was to perform these sort of I/O-bound computations; there was no point in making them fast. Except when people started trying to build larger and larger systems with them: that's when speed really started becoming an issue.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgKlbodZxiTXpSMq-cyo_vU-_-v1b22mr6Di2kXqLoDrbXFr_d3srF37HrMGZccqLWf34ot22w4UQYCQKUs2jhbQ68yya3mayo2bbhDLzHgZCWL8AkghS2475puX-cu5Xz33hd-tg/s1600-h/thumbnail006.jpg"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgKlbodZxiTXpSMq-cyo_vU-_-v1b22mr6Di2kXqLoDrbXFr_d3srF37HrMGZccqLWf34ot22w4UQYCQKUs2jhbQ68yya3mayo2bbhDLzHgZCWL8AkghS2475puX-cu5Xz33hd-tg/s320/thumbnail006.jpg" alt="" id="BLOGGER_PHOTO_ID_5199366203548236162" border="0" /></a><br /><br />OK. So obviously there's a bunch of ways you can speed up a dynamic language. The number one thing you can do, is you can write a better program. The algorithm, you know, is gonna trump any of the stuff you're doing at the VM – you can optimize the hell out of Bubble Sort, but...<br /><br />Native threads would be really nice. Perl, Python, Ruby, JavaScript, Lua... <em>none</em> of them has a usable concurrency option right now. None of them. I mean, they kinda have them, but they're like, Buyer Beware! Don't ever use this on a machine with more than one processor. Or more than one thread. And then you're OK. It's just, you know...<br /><br />So actually, this is funny, because, all right, show of hands here. We've all heard this for fifteen years now – is it true? Is Java as fast as C++? Who says yes? All right... we've got a small number of hands... 
so I assume the rest of you are like, don't know, or it doesn't matter, or "No."<br /><br />[<em>Audience member: "We read your slides."</em>] You read my slides. OK. I don't know... I can't remember what I put in my slides.<br /><br />But it's interesting because C++ is obviously faster for, you know, the short-running [programs], but Java cheated very recently. With multicore! This is actually becoming a huge thorn in the side of all the C++ programmers, including my colleagues at Google, who've written vast amounts of C++ code that doesn't take advantage of multicore. And so the extent to which the cores, you know, the processors become parallel, C++ is gonna fall behind.<br /><br />Now obviously threads don't scale that well either, right? So the Java people have got a leg up for a while, because you can use ten threads or a hundred threads, but you're not going to use a million threads! It's not going to be <a type="amzn" asin="193435600X">Erlang</a> on you all of the sudden. So obviously a better concurrency option – and that's a huge rat's nest that I'm not going to go into right now – but it's gonna be the right way to go.<br /><br />But for now, Java programs are getting amazing throughput because they can parallelize and they can take advantage of it. They cheated! Right? But threads aside, the JVM has gotten really really fast, and at Google it's now widely admitted on the Java side that Java's just as fast as C++. [<em>(laughter)</em>]<br /><br />So! It's interesting, because every once in a while, a C++ programmer, you know, they <em>flip</em>: they go over to the Dark Side. I've seen it happen to some of the most brilliant C++ hackers, I mean they're computer scientists, but they're also C++ to the core. And all of a sudden they're stuck with some, you know, lame JavaScript they had to do as an adjunct to this beautiful backend system they wrote. 
And they futz around with it for a while, and then all of a sudden this sort of light bulb goes off, and they're like "Hey, what's up with this? This is way more productive, you know, and it doesn't seem to be as slow as I'd sort of envisioned it to be."<br /><br />And then they maybe do some build scripting in Python, and then all of a sudden they come over to my desk and they ask: "Hey! Can any of these be <em>fast</em>?" Ha, ha, ha! I mean, these are the same people that, you know, a year ago I'd talk to them and I'd say "Why not use... <em>anything</em> but C++? Why not use <a href="http://www.digitalmars.com/d/">D</a>? Why not use <a type="amzn" asin="0596003013">Objective-C</a>? Why not use <em>anything</em> but C++?" Right?<br /><br />Because we all know that C++ has some very serious problems, that organizations, you know, put hundreds of staff years into fixing. Portability across compiler upgrades, across platforms, I mean the list goes on and on and on. C++ is like an evolutionary sort of dead-end. But, you know, it's fast, right?<br /><br />And so you ask them, why not use, like, D? Or Objective-C. And they say, "Well, what if there's a garbage collection pause?"<br /><br />Oooh! [<em>I mock shudder</em>] You know, garbage collection – first of all, generational garbage collectors don't have pauses anymore, but second of all, they're kind of missing the point that they're still running on an operating system that has to do things like process scheduling and memory management. There <em>are</em> pauses. It's not as if you're running DOS! I hope. OK?<br /><br />And so, you know, their whole argument is based on these fallacious, you know, sort of almost pseudo-religious... and often it's the case that they're actually based on things that used to be true, but they're not really true anymore, and we're gonna get to some of the interesting ones here.<br /><br />But mostly what we're going to be talking about today is the compilers themselves. 
Because they're getting really, really smart.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEikTdp3cGLGz7-j3MJGfhYiMzKeVd4ebiocRnmSjVfwBnjqniVZn1xBt93c7Qpw0a2Qc42ZKEAxCzVrS8ytjZddG4cTpYRCLAiv99T_dG0rWR_xquMYggNTSv8Jy8hQ17z3_GKr7w/s1600-h/thumbnail007.jpg"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEikTdp3cGLGz7-j3MJGfhYiMzKeVd4ebiocRnmSjVfwBnjqniVZn1xBt93c7Qpw0a2Qc42ZKEAxCzVrS8ytjZddG4cTpYRCLAiv99T_dG0rWR_xquMYggNTSv8Jy8hQ17z3_GKr7w/s320/thumbnail007.jpg" alt="" id="BLOGGER_PHOTO_ID_5199366207843203474" border="0" /></a><br /><br />All right, so first of all I've gotta give a nod to these languages... which nobody uses. OK? <a type="amzn" asin="1590592395">Common Lisp</a> has a bunch of really high-quality compilers. And when they say they achieve, you know, "C-like speed", you've gotta understand, you know, I mean, there's more to it than just "does this benchmark match this benchmark?"<br /><br />Everybody knows it's an ROI [calculation]. It's a tradeoff where you're saying: is it <em>sufficiently</em> fast now that the extra hardware cost for it being 10 or 20 percent slower (or even 2x slower), you know, is outweighed by the productivity gains we get from having dynamic features and expressive languages. That's of course the rational approach that everyone takes, right?<br /><br />No! Lisp has all those parentheses. Of course nobody's gonna look at it. I mean, it's ridiculous how people think about these things.<br /><br />But with that said, these were actually very good languages. And let me tell you something that's NOT in the slides, for all those of you who read them in advance, OK? This is my probably completely wrong... 
it's certainly over-generalized, but it's a partly true take on what happened to languages and language research and language implementations over the last, say 30 years.<br /><br />There was a period where they were kind of neck and neck, dynamic and static, you know, there were Fortran and Lisp, you know, and then there was a period where dynamic languages really flourished. They really took off. I mean, I'm talking about the research papers, you can look: there's paper after paper, proofs...<br /><br />And implementations! StrongTalk was really interesting. They added a static type system, an optional static type system on top of Smalltalk that sped it up like 20x, or maybe it was 12x. But, you know, this is a prototype compiler that never even made it into production. You've gotta understand that when a researcher does a prototype, right, that comes within, you know, fifty percent of the speed gains you can achieve from a production compiler... because they haven't done a tenth, a hundredth of the optimizations that you <em>could</em> do if you were in the industry cranking interns through the problem, right?<br /><br />I mean HotSpot's VM, it's got like ten years of Sun's implementation effort in not one, but two compilers, which is a problem they're trying to address. So we're talking about, you know, a 12x gain really translates to something a lot larger than that when you put it into practice.<br /><br />In case I forget to mention it, all these compiler optimizations I'm talking about, I do mean <em>all</em> of them, are composable. Which is really important. It's not like you have to choose this way or you have to choose that way. They're composable, which means they actually reinforce each other. 
So God only knows how fast these things can get.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjpZml-gxw0gs4Ibg6yHAWvZQdaIWURfQdkRrp2oHMttSWzCgnuttDY5JFaqVz50xwgxfGYnYN7V9LHbYQKUurf1fogVqCNO8rjmu3D6zELSyO0hktHW4exf38eVfUBQKdi-j1eNg/s1600-h/thumbnail008.jpg"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjpZml-gxw0gs4Ibg6yHAWvZQdaIWURfQdkRrp2oHMttSWzCgnuttDY5JFaqVz50xwgxfGYnYN7V9LHbYQKUurf1fogVqCNO8rjmu3D6zELSyO0hktHW4exf38eVfUBQKdi-j1eNg/s320/thumbnail008.jpg" alt="" id="BLOGGER_PHOTO_ID_5199366207843203490" border="0" /></a><br /><br />This is the only interesting... this is actually the only, I would say, probably <em>original</em>, sort of compelling thought for this talk today. I really – I started to believe this about a week ago. All right? Because it's an urban legend [that they change every decade]. You know how there's Moore's Law, and there are all these conjectures in our industry that involve, you know, how things work. And one of them is that languages get replaced every ten years.<br /><br />Because that's what was happening up until like 1995. But the barriers to adoption are really high. One that I didn't put on the slide here, I mean obviously there's the marketing, you know, and there's the open-source code base, and there are legacy code bases.<br /><br />There's also, there are also a lot more programmers, I mean many more, orders of magnitude more, around the world today than there were in 1995. Remember, the dot-com boom made everybody go: "Oooh, I wanna be in Computer Science, right? Or I just wanna learn Python and go hack." OK? Either way. (The Python hackers probably made a lot more money.)<br /><br />But what we wound up with was a bunch of entry-level programmers all around the world who know <em>one</em> language, whichever one it is, and they don't want to switch. 
Switching languages: the second one is your hardest. Because the first one was hard, and you think the second one's going to be that bad, and that you wasted the entire investment you put into learning the first one.<br /><br />So, by and large, programmers – you know, the rank-and-file – they pretty much pick a language and they stay with it for their entire career. And that is why we've got this situation where now, this... See, there's plenty of great languages out there today. OK?<br /><br />I mean obviously you can start with Squeak, sort of the latest Smalltalk fork, and it's beautiful. Or you can talk about various Lisp implementations out there that are smokin' fast, or they're smokin' good. Or in one or two cases, both.<br /><br />But also there's, like, the <a href="http://boo.codehaus.org/">Boo</a> language, the <a href="http://www.iolanguage.com/">io</a> language, there's the <a href="http://www.scala-lang.org/">Scala</a> language, you know, I mean there's <a href="http://nice.sourceforge.net/">Nice</a>, and <a href="http://en.wikipedia.org/wiki/Pizza_%28programming_language%29">Pizza</a>, have you guys heard about these ones? I mean there's a bunch of good languages out there, right? Some of them are really good dynamically typed languages. Some of them are, you know, strongly [statically] typed. And some are hybrids, which I personally really like.<br /><br />And nobody's using <em>any</em> of them!<br /><br />Now, I mean, Scala might have a chance. There's a guy giving a talk right down the hall about it, the inventor of – one of the inventors of Scala. And I think it's a great language and I wish him all the success in the world. Because it would be nice to have, you know, it would be nice to have that as an alternative to Java.<br /><br />But when you're out in the industry, you <em>can't</em>. You get lynched for trying to use a language that the other engineers don't know. Trust me. I've tried it. 
I don't know how many of you guys here have actually been out in the industry, but I was talking about this with my intern. I was, and I think you [<em>(point to audience member)</em>] said this in the beginning: this is 80% politics and 20% technology, right? You know.<br /><br />And [my intern] is, like, "well I understand the argument" and I'm like "No, no, no! You've never been in a company where there's an engineer with a Computer Science degree and ten years of experience, an architect, who's in your face <em>screaming</em> at you, with spittle flying on you, because you suggested using, you know... <a href="http://www.digitalmars.com/d">D</a>. Or <a href="http://www.haskell.org/">Haskell</a>. Or Lisp, or <a type="amzn" asin="193435600X">Erlang</a>, or take your pick."<br /><br />In fact, I'll tell you a funny story. So this... at Google, when I first got there, I was all idealistic. I'm like, wow, well Google hires all these great computer scientists, and so they must all be completely language-agnostic, and ha, ha, little do I know... So I'm up there, and I'm like, we've got this product, this totally research-y prototype type thing, we don't know. We want to put some quick-turnaround kind of work into it.<br /><br />But Google is really good at building infrastructure for <em>scaling</em>. And I mean scaling to, you know, how many gazillion transactions per second or queries per second, you know, whatever. They scale like nobody's business, but their "Hello, World" takes three days to get through. At least it did when I first got to Google. They were <em>not</em> built for rapid prototyping, OK?<br /><br />So that means when you try to do what Eric Schmidt talks about and try to generate luck, by having a whole bunch of initiatives, some of which will get lucky, right? Everybody's stuck trying to scale it from the ground up. And that was unacceptable to me, so I tried to... 
I made the famously, horribly, career-shatteringly bad mistake of trying to use Ruby at Google, for this project.<br /><br />And I became, very quickly, I mean almost overnight, the Most Hated Person At Google. And, uh, and I'd have arguments with people about it, and they'd be like Nooooooo, WHAT IF... And ultimately, you know, ultimately they actually convinced me that they were right, in the sense that there actually <em>were</em> a few things. There were some taxes that I was imposing on the systems people, where they were gonna have to have some maintenance issues that they wouldn't have [otherwise had]. Those reasons I thought were good ones.<br /><br />But when I was going through this debate, I actually talked to our VP Alan Eustace, who came up for a visit to Kirkland. And I was like, "Alan!" (after his talk) "Let's say, hypothetically, we've got this team who are really smart people..."<br /><br />And I point to my friend Barry [pretending it's him], and I'm like: "Let's say they want to do something in a programming language that's not one of the supported Google languages. You know, like what if they wanted to use, you know, Haskell?"<br /><br />What I really wanted to do at the time was use Lisp, actually, but I didn't say it. And [Alan] goes, "Well!" He says, "Well... how would <em>you</em> feel if there was a team out there who said they were gonna use... LISP!" [<em>(laughter)</em>]<br /><br />He'd pulled his ace out of his [sleeve], and brandished it at me, and I went: "that's what I wanted to use." And he goes, "Oh." [<em>(turning away quickly)</em>] And that was the end of the conversation. [<em>(laughter)</em>]<br /><br />But you know, ultimately, and it comes up all the time, I mean we've got a bunch of famous Lisp people, and (obviously) famous Python people, and you know, famous language people inside of Google, and of course they'd like to do some experimentation. 
But, you know, Google's all about getting stuff done.<br /><br />So that brings us full circle back to the point of this topic, which is: the languages we have today, sorted by popularity at this instant, are probably going to stay about that popular for the next ten years.<br /><br />Sad, isn't it? Very, very sad. But that's the way it is.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjBndPYVkNpFclsc6w6P5l43zhyphenhyphenobSv3bOb6KpLiVlp3-HE9idWChURk-VtSIKG8dv95D8ZwF7iDa3u-uBCk7g2t1tezr1LJcYVrocOeEhmXe7AtOfsUflO9KLFYlw6108KhKBIaQ/s1600-h/thumbnail009.jpg"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjBndPYVkNpFclsc6w6P5l43zhyphenhyphenobSv3bOb6KpLiVlp3-HE9idWChURk-VtSIKG8dv95D8ZwF7iDa3u-uBCk7g2t1tezr1LJcYVrocOeEhmXe7AtOfsUflO9KLFYlw6108KhKBIaQ/s320/thumbnail009.jpg" alt="" id="BLOGGER_PHOTO_ID_5199366212138170802" border="0" /></a><br /><br />So how do we fix them?<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjlisXbRr3RY9I7XbznTwwYVEIhSKjjsxJRfpPCM-liVKjVi9Onx5_UbHmaKMraWCaks9pLVS7psQae1IBPnHx1RE7PwB8YqbBtaapFFpQIt58Jr6Hs3wV1Vrr0Vayqp2gnro3fqg/s1600-h/thumbnail010.jpg"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjlisXbRr3RY9I7XbznTwwYVEIhSKjjsxJRfpPCM-liVKjVi9Onx5_UbHmaKMraWCaks9pLVS7psQae1IBPnHx1RE7PwB8YqbBtaapFFpQIt58Jr6Hs3wV1Vrr0Vayqp2gnro3fqg/s320/thumbnail010.jpg" alt="" id="BLOGGER_PHOTO_ID_5199367526398163394" border="0" /></a><br /><br />How – how am I doing for time? Probably done, huh? Fifteen minutes? 
[<em>(audience member: no, more than that)</em>] OK, good.<br /><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgr5kmwGkK91ASkc8LauWU5SSAZRP6y8-ladRgvuehiZXnwMWyGI8ox-39huWF_lslfiEf_npII5_wxj2mNGsLl8h3z0J8wlugf3JzrnhYbkaR7GPpCLHL3WDUzxBNnvYlj82eKHw/s1600-h/thumbnail011.jpg"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgr5kmwGkK91ASkc8LauWU5SSAZRP6y8-ladRgvuehiZXnwMWyGI8ox-39huWF_lslfiEf_npII5_wxj2mNGsLl8h3z0J8wlugf3JzrnhYbkaR7GPpCLHL3WDUzxBNnvYlj82eKHw/s320/thumbnail011.jpg" alt="" id="BLOGGER_PHOTO_ID_5199367526398163410" border="0" /></a><br /><br />So! I'm gonna talk a little bit about tools, because one interesting thing I noticed when I was putting this thing together, right, was that the ways you solve tools problems for dynamic languages are very similar to the way you solve perf problems. OK? And I'm not going to try to keep you guessing or anything. I'll tell you what the sort of... kernel of the idea is here.<br /><br />It's that... the notion of "static" versus "dynamic", where you kind of have to do all these optimizations and all these computations statically, on a language, is very old-fashioned. OK? And increasingly it's becoming obvious to <em>everybody</em>, you know, even the C++ crowd, that you get a lot better information at run-time. *Much* better information.<br /><br />In particular, let me come back to my inlining example. Java inlines polymorphic methods! Now the simplest way to do it was actually invented here at Stanford by Googler Urs Hoelzle, who's, you know, like VP and Fellow there, and it's called, it's now called Polymorphic Inline Caching. He called it, uh, <a href="http://research.sun.com/self/papers/pldi94.ps.gz">type-feedback compilation</a>, I believe is what he called it. Great paper. And it scared everybody, apparently. 
The rumors on the mailing lists were that people were terrified of it, I mean it seems too hard. And if you look at it now, you're like, dang, that was a good idea.<br /><br />All it is, I mean, I told you the compiler doesn't know the receiver type, right? But the thing is, in computing, I mean, heuristics work pretty well. The whole 80/20 rule and the <a href="http://en.wikipedia.org/wiki/Power_law">Power Law</a> apply pretty much unilaterally across the board. So you can make assumptions like: the first time through a loop, if a particular variable is a specific instance of a type, then it's probably going to be [the same type] on the remaining iterations of the loop. OK?<br /><br />So what he [Urs] does, is he has these counters at hot spots in the code, in the VM. And they come in and they check the types of the arguments [or operands]. And they say, all right, it looks like a bunch of them appear to be class B, where we thought it might be class A.<br /><br />So what we're gonna do is generate this fall-through code that says, all right, if it's a B – so they have to put the guard instruction in there; it has to be <em>correct</em>: it has to handle the case where they're wrong, OK? But they can make the guard instruction very, very fast, effectively one instruction, depending on how you do it. You can compare the address of the intended method, or you can maybe do a type-tag comparison. There are different ways to do it, but it's fast, and more importantly, if it's <em>right</em>, which it is 80-90% of the time, it falls through [<em>i.e., inlines the method for that type - Ed.</em>], which means you maintain your processor pipeline and all that stuff.<br /><br />So it means they have <em>predicted</em> the type of the receiver. They've successfully inlined that. I mean, you can do a whole bunch of branching, and they actually found out through some experimentation that you only need to do 2 to 4 of these, right, before the gain completely tails off. 
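<br /><br />[<em>Ed.: the mechanics can be sketched in plain JavaScript – a toy simulation of what the VM emits as machine code, with invented names like <code>makeCallSite</code>; a real PIC patches the call site itself rather than searching a list:</em>]

```javascript
// Toy polymorphic inline cache: each call site remembers the last few
// receiver types (constructors stand in for type tags here) and the method
// it resolved for each. The guard is a single pointer comparison; a miss
// falls back to the generic lookup.
const MAX_ENTRIES = 4; // the gains tail off after 2-4 entries, per the talk

function makeCallSite(methodName) {
  const cache = []; // [{ctor, method}, ...] — one entry per type seen so far
  return function (receiver, ...args) {
    const ctor = receiver.constructor;
    for (const entry of cache) {
      if (entry.ctor === ctor) { // the fast guard
        return entry.method.apply(receiver, args); // "inlined" fast path
      }
    }
    // Slow path: full method lookup, then grow the cache (up to a limit).
    const method = receiver[methodName];
    if (cache.length < MAX_ENTRIES) cache.push({ ctor, method });
    return method.apply(receiver, args);
  };
}

class A { area() { return 1; } }
class B { area() { return 2; } }
const callArea = makeCallSite("area");
console.log(callArea(new A())); // 1 — miss; the site caches type A
console.log(callArea(new A())); // 1 — hit on the one-comparison guard
console.log(callArea(new B())); // 2 — miss; the site is now polymorphic
```

[<em>Ed.: note the guard keeps the cache correct even when the prediction is wrong – a miss just takes the slow lookup, exactly as described above.</em>]<br /><br />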
So you don't have to generate too much of this. And they've expanded on this idea now, for the last ten years.<br /><br />Getting back to my point about what's happening [over the past 30 years], there was an AI winter. You all remember the AI winter, right? Where, like, investors were pumping millions of dollars into Smalltalk and Lisp companies who were promising they'd cure world hunger, cure cancer, and everything?<br /><br />And unfortunately they were using determinism!<br /><br />They're using heuristics, OK, but you know... before I came to Google, you know, I was really fascinated by something <a href="http://en.wikipedia.org/wiki/Peter_Norvig">Peter Norvig</a> was saying. He was saying that they don't do natural language processing deterministically any more. You know, like maybe, conceivably, speculating here, Microsoft Word's grammar checker does it, where you'd have a Chomsky grammar, right? And you're actually going in and you're doing something like a compiler does, trying to derive the sentence structure. And you know, whatever your output is, whether it's translation or grammar checking or whatever...<br /><br />None of that worked! It all became way too computationally expensive, plus the languages kept changing, and the idioms and all that. Instead, [Peter was saying] they do it all probabilistically.<br /><br />Now historically, every time someone came along and just obsoleted a decade of research by saying, "Well, we're just gonna kind of wing it, probabilistically" — and you know, Peter Norvig was saying they get these big data sets of documents that have been translated, in a whole bunch of different languages, and they run a bunch of machine learning over it, and they can actually match your sentence in there to one with a high probability of it being this translation.<br /><br />And it's usually right! 
It certainly works a lot better than deterministic methods, and it's computationally a lot cheaper.<br /><br />OK, so whenever you do that, it makes people MAD.<br /><br />Their first instinct is to say "nuh-UUUUUUH!!!!" Right? I'm serious! I'm serious. It happened when John von Neumann [and others] introduced <a href="http://en.wikipedia.org/wiki/Monte_Carlo_method">Monte Carlo methods</a>. Everyone was like "arrgggggh", but eventually they come around to it. They go "yeah, I guess you're right; I'll go back and hit the math books again."<br /><br />It's happening in programming languages <em>today</em>. I mean, as we speak. I mean, there's a paper I'm gonna tell you about, from October, and it's basically coming along and... it's not <em>really</em> machine learning, but you're gonna see it's the same kind of [data-driven] thing, right? It's this "winging it" approach that's actually much cheaper to compute. And it has much better results, because the runtime has all the information.<br /><br />So let me just finish the tools really quick.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjHeWikO-IMR5ATA920AxuodFKV28guTj6gsCM7TQwhjaLhs6Ei1vx_G0E-BuRfgp1v67WjSfPg5_Ps5_ySe2XPYv8bMondU8KBkP6Bl7r5JfwvkCuDcqdh2dfOmCULH4Om10CxqQ/s1600-h/thumbnail012.jpg"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjHeWikO-IMR5ATA920AxuodFKV28guTj6gsCM7TQwhjaLhs6Ei1vx_G0E-BuRfgp1v67WjSfPg5_Ps5_ySe2XPYv8bMondU8KBkP6Bl7r5JfwvkCuDcqdh2dfOmCULH4Om10CxqQ/s320/thumbnail012.jpg" alt="" id="BLOGGER_PHOTO_ID_5199367530693130722" border="0" /></a><br /><br />And I'm not talking to you guys; I'm talking to the people in the screen [i.e. watching the recording] – all these conversations I've had with people who say: "No type tags means no information!" 
I mean, effectively that's what they're saying.<br /><br />I mean...<br /><pre>function foo(a, b) { return a + b; }<br /><br />var bar = 17.6;<br /><br />var x = {a: "hi", b: "there"};</pre> What's <code>foo</code>? It's a <em>function</em>. How did I know that? [<em>(laughter)</em>] What's <code>bar</code>? What's <code>x</code>? You know, it's a composite type. It's an Object. It has two fields that are strings. Call it a record, call it a tuple, call it whatever you want: we know what it is.<br /><br />The syntax of a language, unless it's Scheme, gives you a lot of clues about the semantics, right? That's actually the one place, maybe, where lots of syntax actually wins out [over Scheme]. I just thought of that. Huh.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhKrT5BeaOU-VLl9W8fZ41WcaXkDD684B8Zii1as3ZwTpHTfkuNC8mY7oJNXRHG1DSwE-2IO-6NX8YFeTocELNQSWmiCYHmqx94rWmsJJIF6eiektp5dk-hO616GBdx1ywZf9E0XA/s1600-h/thumbnail013.jpg"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhKrT5BeaOU-VLl9W8fZ41WcaXkDD684B8Zii1as3ZwTpHTfkuNC8mY7oJNXRHG1DSwE-2IO-6NX8YFeTocELNQSWmiCYHmqx94rWmsJJIF6eiektp5dk-hO616GBdx1ywZf9E0XA/s320/thumbnail013.jpg" alt="" id="BLOGGER_PHOTO_ID_5199367534988098034" border="0" /></a><br /><br />OK, so... then you get into dynamic languages. This [code] is all JavaScript. This is actually something I'm working on right now. I'm trying to build this JavaScript code graph, and you actually have to know all these tricks. And of course it's undecidable, right, I mean this is, you know, somebody could be defining a function at the console, and I'm not gonna be able to find that.<br /><br />So at some point you've gotta kind of draw the line. What you do is, you look at your corpus, your code base, and see what are the common idioms that people are using. 
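<br /><br />[<em>Ed.: the <code>foo</code>/<code>bar</code>/<code>x</code> point – that untagged values still carry recoverable type information – can be sketched with a toy classifier. Here runtime inspection stands in for what a static analyzer would read straight off the literal syntax, and <code>inferType</code> is an invented name:</em>]

```javascript
// Toy type "inference": classify values the way a reader classifies the
// literals that produced them — no type annotations anywhere in sight.
function inferType(v) {
  if (typeof v === "function") return "function";
  if (typeof v === "number") return "number";
  if (typeof v === "string") return "string";
  if (Array.isArray(v)) return "array";
  if (v !== null && typeof v === "object") {
    // A composite type (record/tuple/Object): describe each field.
    const fields = Object.entries(v).map(([k, f]) => `${k}: ${inferType(f)}`);
    return `{${fields.join(", ")}}`;
  }
  return "unknown";
}

function foo(a, b) { return a + b; }
var bar = 17.6;
var x = { a: "hi", b: "there" };

console.log(inferType(foo)); // "function"
console.log(inferType(bar)); // "number"
console.log(inferType(x));   // "{a: string, b: string}"
```
<br /><br />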
In JavaScript, you've got a couple of big standard libraries that everybody seems to be including these days, and they all have their slightly different ways of doing function definitions. Some of them use Object literals; some of them use the horrible <code>with</code> statement, you know, that JavaScript people hate.<br /><br />But your compiler can figure all these out. And I was actually going through this <a type="amzn" asin="0321486811">Dragon Book</a>, because they can even handle aliasing, right? Your IDE for JavaScript, if I say "<code>var x = some object</code>", and you know...<br /><br />Did I handle this here [in the slides]?<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEivfsE6ZyYDzLMEFE3xeeQzKFROAqCVgSc1mZ7rRPAaWs1-mRdv1-YZezJO9Y50KPH8r62dotCMqAIifAz075qNeBMR1iKmF1EI5JJfZWfhVmmaIE5hIP6y3eDbifUqakKm1U8qOg/s1600-h/thumbnail014.jpg"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEivfsE6ZyYDzLMEFE3xeeQzKFROAqCVgSc1mZ7rRPAaWs1-mRdv1-YZezJO9Y50KPH8r62dotCMqAIifAz075qNeBMR1iKmF1EI5JJfZWfhVmmaIE5hIP6y3eDbifUqakKm1U8qOg/s320/thumbnail014.jpg" alt="" id="BLOGGER_PHOTO_ID_5199367539283065346" border="0" /></a><br /><br />Yeah, right here! And I say, <code>foo</code> is an object, <code>x</code> is <code>foo</code>, and I have an alias now. The algorithm for doing this is right here in the <a type="amzn" asin="0321486811">Dragon Book</a>. It's data-flow analysis. Now they use it for compiler optimization to do, you know, live variable analysis, register allocation, dead-code elimination, you know, the list kind of goes on. It's a very useful technique. You build this big code graph of basic blocks...<br /><br />So it's actually one of the few static-analysis techniques that's actually carrying over in this new dynamic world where we have all this extra information. 
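<br /><br />[<em>Ed.: a minimal version of that alias bookkeeping, as a toy pass over straight-line assignments – an invented representation, nothing like the Dragon Book's full data-flow framework with basic blocks and fixpoints:</em>]

```javascript
// Toy alias analysis: walk a list of assignments and track which variables
// point at the same abstract object, so a fact learned about `foo` (say,
// that one of its fields holds a function) transfers to its alias `x`.
function analyzeAliases(statements) {
  // statements: [{lhs, rhs}] where rhs is "new" (fresh object) or a var name
  const pointsTo = new Map(); // variable name -> abstract object id
  let nextId = 0;
  for (const { lhs, rhs } of statements) {
    if (rhs === "new") {
      pointsTo.set(lhs, nextId++); // allocation site: fresh abstract object
    } else if (pointsTo.has(rhs)) {
      pointsTo.set(lhs, pointsTo.get(rhs)); // copy: lhs now aliases rhs
    }
  }
  return pointsTo;
}

const facts = analyzeAliases([
  { lhs: "foo", rhs: "new" }, // foo = some object literal
  { lhs: "x", rhs: "foo" },   // var x = foo — x is an alias of foo
  { lhs: "y", rhs: "new" },   // y = a different object
]);
console.log(facts.get("x") === facts.get("foo")); // true — same abstract object
console.log(facts.get("y") === facts.get("foo")); // false
```
<br /><br />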
But you can actually use it in JavaScript to figure out function declarations that didn't actually get declared until way later in the code.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhEs4gdPMchFO0ytjNFMqney-3QCcrRKqglxVu_F2YHXxVLCa5F9N_4TQsNBs07OVLTjSVMQb0znfDK2LaVmwEr8P-M8XW6eGevqT9QfGFwbIxjyL4llU8c4wovU62Ja7mQ9s46bg/s1600-h/thumbnail015.jpg"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhEs4gdPMchFO0ytjNFMqney-3QCcrRKqglxVu_F2YHXxVLCa5F9N_4TQsNBs07OVLTjSVMQb0znfDK2LaVmwEr8P-M8XW6eGevqT9QfGFwbIxjyL4llU8c4wovU62Ja7mQ9s46bg/s320/thumbnail015.jpg" alt="" id="BLOGGER_PHOTO_ID_5199368338146982418" border="0" /></a><br /><br />Another big point that people miss is that the Java IDEs, you know, that are supposedly always right? They're wrong. If you miss <em>one time</em>, you're wrong. Right? In Java Reflection, obviously, the IDE has no information about what's going on in that string, by definition. It's a string: it's quoted; it's opaque.<br /><br />And so they always wave their hands and say "Ohhhhh, you can't do Rename Method!"<br /><br />Even though Rename Method came from the <a type="amzn" asin="0201136880">Smalltalk</a> environment, of course, right? And you say, "It came from the Smalltalk environment, so yes, you can do Rename Method in dynamic languages."<br /><br />And they say "NO! Because it'll miss sometimes!"<br /><br />To which, I say to you people in the screen, you'd be <em>astonished</em> at how often the Java IDEs miss. They miss every single instance of a method name that shows up in an XML configuration file, in a reflection layer, in a database persistence layer where you're matching column names to fields in your classes. Every time you've deployed some code to some people out in the field...<br /><br />Rename Method only works in a small set of cases. 
These Refactoring tools that, really, they're acting like are the Holy Grail – you can do ALL of that in dynamic languages. That's the proof, right? [<em>I.e., static langs miss as often as dynamic – Ed.</em>]<br /><br />It's not even a very interesting topic, except that I just run across it all the time. Because you ask people, "hey, you say that you're ten times as productive in Python as in your other language... why aren't you using Python?"<br /><br />Slow? Admittedly, well, we'll get to that.<br /><br />And tools. Admittedly. But I think what's happened here is Java has kind of shown the new crop of programmers what Smalltalk showed us back in the 80s, which is that IDEs can work and they can be beautiful.<br /><br />And <em>more importantly</em> – and this isn't in the slides either, for those of you who cheated – they <em>have</em> to be tied to the runtime. They complain, you know, the Java people are like "Well you have to have all the code loaded into the IDE. That's not scalable, it's not flexible, they can't simulate the program just to be able to get it correct."<br /><br />And yet: any sufficiently large Java or C++ system has health checks, monitoring, it opens sockets with listeners so you can ping it programmatically; you can get, you know, debuggers, you can get remote debuggers attached to it; it's got logging, it's got profiling... it's got this long list of things that you need because the static type system failed.<br /><br />OK... Why did we have the static type system in the first place?<br /><br />Let me tell you guys a story that, even if you know all this stuff, is still going to shock you. I credit Bob Jervis for sharing this with me (the guy who wrote Turbo C).<br /><br />So javac, the Java compiler: what does it do? Well, it generates bytecode, does some optimizations presumably, and maybe tells you some errors. And then you ship it off to the JVM. And what happens to that bytecode? 
First thing that happens is they build a tree out of it, because the bytecode verifier has to go in and make sure you're not doing anything [illegal]. And of course you can't do it from a stream of bytes: it has to build a usable representation. So it effectively rebuilds the source code that you went to all that effort to put into bytecode.<br /><br />But that's not the end of it, because maybe javac did some optimizations, using the old <a type="amzn" asin="0321486811">Dragon Book</a>. Maybe it did some constant propagation, maybe it did some loop unrolling, whatever.<br /><br />The next thing that happens in the JVM is the JIT undoes all the optimizations! Why? So it can do <em>better</em> ones because it has runtime information.<br /><br />So it undoes all the work that javac did, except maybe tell you that you had a parse error.<br /><br />And the weird thing is, Java keeps piling... I'm getting into rant-mode here, I can tell. We're never going to make it to the end of these slides. Java keeps piling syntax on, you know, but it's not making the language more expressive. What they're doing is they're adding red tape and bureaucracy for stuff you could do back in Java 1.0.<br /><br />In Java 1.0, when you pulled a String out of a Hashtable you had to cast it as a String, which was really stupid because you said <pre>String foo = (String) hash.get(...)</pre> You know, it's like... if you had to pick a syntax [for casting], you should at least pick one that specifies what you think it's supposed to be, not what it's becoming – <em>obviously</em> becoming – on the left side, right?<br /><br />And everybody was like, "I don't like casting! I don't like casting!" So what did they do? What they <em>could</em> have done is they could have said, "All right, you don't have to cast anymore. We know what kind of variable you're trying to put it in. 
We'll cast it, and [maybe] you'll get a <code>ClassCastException</code>."<br /><br />Instead, they introduced generics, right, which is this huge, massive, <a type="amzn" asin="0262660717">category-theoretic</a> type system that they brought in, where you have to under[stand] – to actually use it you have to know the difference between <a href="http://en.wikipedia.org/wiki/Covariance_and_contravariance_%28computer_science%29">covariant and contravariant</a> return [and argument] types, and you have to understand why every single mathematical... [<em>I tail off in strangled frustration...</em>]<br /><br />And then what happens on mailing lists is users say: "So I'm trying to do X." And they say: "WELL, for the following category-theoretic reasons ...there's no way to do it." And they go: "Oh! Oh. Then I'm gonna go use JavaScript, then." Right?<br /><br />I mean, it's like, what the hell did this type system do for Java? It introduced inertia and complexity to everybody who's writing tools, to everybody who's writing compilers, to everybody who's writing runtimes, and to everybody who's writing code. And it didn't make the language more expressive.<br /><br />So what's happening? Java 7 is happening. And I encourage you all to go look at <em>that</em> train wreck, because oh my God. Oh, God. I didn't sleep last night. I'm all wired right now because I looked at Java 7 last night. And it was a <em>mistake</em>. 
[<em>(laughter)</em>] Ohhh...<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg3ejvNKj1UL3NxYhh1cHIKZ_uTV3NYxm1R_KQrlpB_XkPyzdltNd73-gVtQAl0pEVhCSmMGwzuwHlcacSmhIdaQSSi4YYy5LPEw_rurE2gfKLZDE_VVUYnHmm3hlU66kmFFe1NtQ/s1600-h/thumbnail016.jpg"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg3ejvNKj1UL3NxYhh1cHIKZ_uTV3NYxm1R_KQrlpB_XkPyzdltNd73-gVtQAl0pEVhCSmMGwzuwHlcacSmhIdaQSSi4YYy5LPEw_rurE2gfKLZDE_VVUYnHmm3hlU66kmFFe1NtQ/s320/thumbnail016.jpg" alt="" id="BLOGGER_PHOTO_ID_5199368342441949730" border="0" /></a><br /><br />OK. So! Moving right back along to our <em>simple</em> dynamic languages, the lesson is: it's not actually harder to build these tools [for dynamic languages]. It's different. And nobody's done the work yet, although people are starting to. And actually <a type="amzn" asin="1932394443">IntelliJ</a> is a company with this <a href="http://www.jetbrains.com/idea/features/javascript_editor.html">IDEA</a> [IDE], and they... my friends show off the JavaScript tool, you know, and it's like, man! They should do one for Python, and they should do one for every single dynamic language out there, because they kick butt at it. I'm sure they did all this stuff and more than I'm talking about here.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgFK-6pyUd1PDrQXmDlkN6ColkjMnjJQlOaeF7qVH2L44fa1TvLZMoQJ9kW03skXloIcdZkPKDuwXr1o93zvsKF16bWv2lnQTM9qejyTN_QpbOzow2MDQ65avJITA0PtNF8ExhBLQ/s1600-h/thumbnail017.jpg"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgFK-6pyUd1PDrQXmDlkN6ColkjMnjJQlOaeF7qVH2L44fa1TvLZMoQJ9kW03skXloIcdZkPKDuwXr1o93zvsKF16bWv2lnQTM9qejyTN_QpbOzow2MDQ65avJITA0PtNF8ExhBLQ/s320/thumbnail017.jpg" alt="" id="BLOGGER_PHOTO_ID_5199368342441949746" border="0" /></a><br /><br />All right. Now we can talk about perf. 
This is the Crown Jewels of the talk. Yeah. So... unfortunately I have to make the disclaimer that everybody thinks about performance wrong, except for you guys 'cuz you all know, right? But seriously, I mean, you know, you understand, I started out of school... <em>*sigh*</em><br /><br />OK: I went to the University of Washington and [then] I got hired by this company called Geoworks, doing assembly-language programming, and I did it for <em>five years</em>. To us, the Geoworkers, we wrote a whole operating system, the libraries, drivers, apps, you know: a desktop operating system in assembly. 8086 assembly! It wasn't even good assembly! We had four registers! [Plus the] si [register] if you counted, you know, if you counted 386, right? It was <em>horrible</em>.<br /><br />I mean, actually we kind of liked it. It was Object-Oriented Assembly. It's amazing what you can talk yourself into liking, which is the real irony of all this. And to us, C++ was the ultimate in Roman decadence. I mean, it was equivalent to going and vomiting so you could eat more. They had IF! We had jump CX zero! Right? They had "Objects". Well we did too, but I mean they had syntax for it, right? I mean it was all just such weeniness. And we knew that we could outperform any compiler out there because at the time, we could!<br /><br />So what happened? Well, they went bankrupt. Why? Now I'm probably disagreeing – I know for a fact that I'm disagreeing with every Geoworker out there. I'm the only one that holds this belief. But it's because we wrote fifteen million lines of 8086 assembly language. We had really good tools, world class tools: trust me, you need 'em. But at some point, man...<br /><br />The problem is, picture an ant walking across your garage floor, trying to make a straight line of it. It ain't gonna make a straight line. And you know this because you have <em>perspective</em>. 
You can see the ant walking around, going hee hee hee, look at him locally optimize for that rock, and now he's going off this way, right?<br /><br />This is what we were, when we were writing this giant assembly-language system. Because what happened was, Microsoft eventually released a platform for mobile devices that was much faster than ours. OK? And I started going in with my debugger, going, what? What is up with this? This rendering is just really slow, it's like sluggish, you know. And I went in and found out that some title bar was getting rendered 140 times every time you refreshed the screen. It wasn't just the title bar. Everything was getting called multiple times.<br /><br />Because we couldn't see how the system worked anymore!<br /><br />Small systems are not only <em>easier</em> to optimize, they're <em>possible</em> to optimize. And I mean globally optimize.<br /><br />So when we talk about performance, it's all crap. The most important thing is that you have a small system. And then the performance will just fall out of it naturally.<br /><br />That said, all else being equal, let's just pretend that Java can make small systems. Heh, that's a real stretch, I know. Let's talk about actual optimization.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjmWgbo6XDMbwvXAFbp_UBFbt_EXTHPWmAzH4gJzHZkOlwxxhVVAosdUENPlQbYGHJE18Ui3UKMK9aKZSOZJaggmHAiJlhc4TvM8eaOa_56nN3pZpjGP8wAzFrBSLIvK0ORjbEn8w/s1600-h/thumbnail018.jpg"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjmWgbo6XDMbwvXAFbp_UBFbt_EXTHPWmAzH4gJzHZkOlwxxhVVAosdUENPlQbYGHJE18Ui3UKMK9aKZSOZJaggmHAiJlhc4TvM8eaOa_56nN3pZpjGP8wAzFrBSLIvK0ORjbEn8w/s320/thumbnail018.jpg" alt="" id="BLOGGER_PHOTO_ID_5199368346736917058" border="0" /></a><br /><br />And by the way, here are some real examples, sort of like the Geoworks one, where a slower language wound up with a faster system. 
It's not just me. I've seen it all over the place. Do you know why this one happened? Why was the <a type="amzn" asin="0977616630">Ruby on Rails</a> faster than Struts? This started one of the internet's largest flamewars since Richard Stallman dissed Tcl back in the 80s, you know. You guys remember that? [<em>(laughter)</em>]<br /><br />I mean, the Java people went <em>nuts</em>, I mean really really nuts, I mean like angry Orcs, they were just like AAAaaaaauuuugh, they did NOT want to hear it. OK? It was because they were serializing everything to and from XML because Java can't do declarations. That's why. That's the reason. I mean, stupid reasons, but performance comes from some strange places.<br /><br />That said, OK, disclaimers out of the way...<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgrcjWo22mpytxBqTmi84bWy7Y44-9jt1od8cSwX8ZIdlW1zX_P-y3ybu1UIfxx7ipeIBLDeBtlNDCx7AgN3wWrzkW0mKWS4sxvzgDASUDg1-9KSsCy3yzviraucebytPUawemR7g/s1600-h/thumbnail019.jpg"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgrcjWo22mpytxBqTmi84bWy7Y44-9jt1od8cSwX8ZIdlW1zX_P-y3ybu1UIfxx7ipeIBLDeBtlNDCx7AgN3wWrzkW0mKWS4sxvzgDASUDg1-9KSsCy3yzviraucebytPUawemR7g/s320/thumbnail019.jpg" alt="" id="BLOGGER_PHOTO_ID_5199368346736917074" border="0" /></a><br /><br />Yeah yeah, people are using them.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhuBGNblCaLzKrIBvwy4M64p0Zqfj2hbYHuExhH-B0-Hb2WsgcAU8xGU4CFtrLV5zfcy6SsuIIWPUbbq52G2mpO6BqhKERDNXVqwsTjHY5wnJue3AqnP7Wm5VkrYd7Sb_ve-fpctA/s1600-h/thumbnail020.jpg"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhuBGNblCaLzKrIBvwy4M64p0Zqfj2hbYHuExhH-B0-Hb2WsgcAU8xGU4CFtrLV5zfcy6SsuIIWPUbbq52G2mpO6BqhKERDNXVqwsTjHY5wnJue3AqnP7Wm5VkrYd7Sb_ve-fpctA/s320/thumbnail020.jpg" alt="" 
id="BLOGGER_PHOTO_ID_5199370528580303458" border="0" /></a><br /><br />Um, yeah. So JavaScript. JavaScript has been really interesting to me lately, because JavaScript actually does care about performance. They're the first of the modern dynamic languages where performance has become an issue not just for the industry at large, but also increasingly for academia.<br /><br />Why JavaScript? Well, it was Ajax. See, what happened was... Lemme tell ya how it was supposed to be. JavaScript was going away. It doesn't matter whether you were Sun or Microsoft or anybody, right? JavaScript was going away, and it was gonna get replaced with... heh. Whatever your favorite language was.<br /><br />I mean, it wasn't actually the same for everybody. It might have been C#, it might have been Java, it might have been some new language, but it was going to be a <em>modern</em> language. A fast language. It was gonna be a scalable language, in the sense of large-scale engineering. Building desktop apps. That's the way it was gonna be.<br /><br />The way it's <em>really</em> gonna be, is JavaScript is gonna become one of the smokin'-est fast languages out there. And I mean <em>smokin'</em> fast.<br /><br />Now it's not the only one that's making this claim. There's actually a lot of other... you guys know about <a href="http://codespeak.net/pypy/dist/pypy/doc/home.html">PyPy</a>? Python in Python? Those crack fiends say they can get C-like performance. Come on... COME ON! They... I mean, seriously! That's what they say.<br /><br />Here's the deal, right? They're saying it because they're throwing all the old assumptions out. They can get this performance by using these techniques here, fundamentally. 
But if nobody believes them, then even when they achieve this performance it's not gonna matter because still nobody's gonna believe them, so all of this stuff we're talking about is a little bit moot.<br /><br />Nevertheless, I'm going to tell you about some of the stuff that I know about that's going on in JavaScript.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg-ucwNxDXwTGPkFzHlftOuJdDCUX0VAX6CWByAnhdAmER8ALV3-VElVvr05m_uWAqpdb4Z-8SXwTZhl43E_xoaj53QOuAn0ICEPCmj2jdpCRt-SsfXJgmZnc-PAgwiwCKKOh4Zfg/s1600-h/thumbnail021.jpg"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg-ucwNxDXwTGPkFzHlftOuJdDCUX0VAX6CWByAnhdAmER8ALV3-VElVvr05m_uWAqpdb4Z-8SXwTZhl43E_xoaj53QOuAn0ICEPCmj2jdpCRt-SsfXJgmZnc-PAgwiwCKKOh4Zfg/s320/thumbnail021.jpg" alt="" id="BLOGGER_PHOTO_ID_5199370537170238066" border="0" /></a><br /><br />So type inference. You <em>can</em> do type inference. Except that it's lame, because it doesn't handle weird dynamic features like upconverting integers to Doubles when they overflow. Which JavaScript does, interestingly enough, which is I guess better behavior than... I mean, it still overflows eventually, right?<br /><br />We overflowed a <code>long</code> at Google once. Nobody thought that was possible, but it actually happened. 
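[<em>Ed.: a concrete illustration of the widening I was describing, added while transcribing. JavaScript numbers are semantically 64-bit floats, so integer arithmetic never traps or wraps; engines such as V8 keep small integers in a compact tagged representation internally and quietly widen to a heap double when a result leaves that range. The flip side is that past 2^53 the double can no longer represent every integer exactly:</em>]

```javascript
// Small-integer arithmetic silently widens instead of wrapping around:
const maxInt32 = 2 ** 31 - 1;          // fits a typical engine's tagged-int range
console.log(maxInt32 + 1);             // 2147483648 -- no wraparound, no trap

// But a 64-bit float only carries 53 bits of integer precision, so very
// large "integers" quietly stop being exact:
const big = Number.MAX_SAFE_INTEGER;   // 2^53 - 1
console.log(big + 1 === big + 2);      // true! both sums round to 2^53
```

[<em>Ed.: this is exactly the kind of behavior that defeats naive static type inference: the "type" of a variable can change mid-loop based on its runtime value.</em>]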
I'll tell you about that later if you want to know.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiYXUYgRww1ICYukdElTZ5-9KEcbZsdWWUxF7Ri4L8wxTAzKMV-zwYMtAhWjDr7loux9YFpNtuJ5rOLeiOSMqY3edkY2AsVBJxbE8grhxkh7YJu5rySDxZfi5IR7DvThxvoo8gSsg/s1600-h/thumbnail022.jpg"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiYXUYgRww1ICYukdElTZ5-9KEcbZsdWWUxF7Ri4L8wxTAzKMV-zwYMtAhWjDr7loux9YFpNtuJ5rOLeiOSMqY3edkY2AsVBJxbE8grhxkh7YJu5rySDxZfi5IR7DvThxvoo8gSsg/s320/thumbnail022.jpg" alt="" id="BLOGGER_PHOTO_ID_5199370541465205378" border="0" /></a><br /><br />So... oh yeah, I already talked about Polymorphic Inline Caches. Great! I already talked about a lot of this stuff.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEii-kSNKMArDvj-RZrkpvpsQw9Vwrdba0oh1IJA6UDTxN7c4FC177CWKF2mGXEThX3lJO5KlWD_zWSfKXRM6B8kCXMBwjHUzF5vYqujNoZ4F4S-OGCuBRzyXCO5DAuS4VrMZzj4Og/s1600-h/thumbnail023.jpg"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEii-kSNKMArDvj-RZrkpvpsQw9Vwrdba0oh1IJA6UDTxN7c4FC177CWKF2mGXEThX3lJO5KlWD_zWSfKXRM6B8kCXMBwjHUzF5vYqujNoZ4F4S-OGCuBRzyXCO5DAuS4VrMZzj4Og/s320/thumbnail023.jpg" alt="" id="BLOGGER_PHOTO_ID_5199370541465205394" border="0" /></a><br /><br />This one's really cool. This is a trick that somebody came up with, that you can actually – there's a <a href="http://www.ics.uci.edu/%7Efranz/Site/pubs-pdf/ICS-TR-07-10.pdf">paper</a> on it, where you can actually figure out the actual types of any data object in any dynamic language: figure it out the first time through by using this double virtual method lookup. They've boxed these things. 
And then you just expect it to be the same the rest of the time through [the loop], and so all this stuff about having a type-tag saying this is an <code>int</code> – which might not actually be technically correct, if you're going to overflow into a Double, right? Or maybe you're using an int but what you're really using is a byte's worth of it, you know. The runtime can actually figure things out around bounds that are undecidable at compile time.<br /><br />So that's a cool one.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgJCoM9qEKJYJXUxaK5rf3d_lcITo0JUSXwoAWDW4ycx3_gm5Q3EHIYLOp3bAdFsU_2pb9uzU1jgVKBV0d_xjflRcDxXsEhmmAsCIiUlCAB3pPNb0oXJxqEdwfkg-IKcRahPyIAvA/s1600-h/thumbnail024.jpg"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgJCoM9qEKJYJXUxaK5rf3d_lcITo0JUSXwoAWDW4ycx3_gm5Q3EHIYLOp3bAdFsU_2pb9uzU1jgVKBV0d_xjflRcDxXsEhmmAsCIiUlCAB3pPNb0oXJxqEdwfkg-IKcRahPyIAvA/s320/thumbnail024.jpg" alt="" id="BLOGGER_PHOTO_ID_5199370545760172706" border="0" /></a><br /><br />This is the really cool one. This is the really, really cool one. Trace trees. This is a <a href="http://www.ics.uci.edu/%7Efranz/Site/pubs-pdf/C44Prepub.pdf">paper that came out in October</a>. This is the one, actually... I'll be honest with you, I actually have two optimizations that couldn't go into this talk that are even cooler than this because they haven't published yet. And I didn't want to let the cat out of the bag before they published. So this is actually just the tip of the iceberg.<br /><br />But trace trees, it's a really simple idea. What you do is your runtime, your VM, you know, it's interpreting instructions and can count them. Well, it can also record them! So any time it hits, basically, a branch backwards, which usually means it's going to the beginning of a loop, which usually means it's going to be a hot spot, especially if you're putting a counter there... 
Obviously [in] the inner loops, the hot spots will get the highest counts, and they get triggered at a certain level.<br /><br />It turns on a recorder. That's all it does. It starts recording instructions. It doesn't care about loop boundaries. It doesn't care about methods. It doesn't care about modules. It just cares about "What are you executing?"<br /><br />And it records these tree – well actually, traces, until they get back to that point. And it uses some heuristics to throw stuff away if it goes too long or whatever. But it records right through methods. And instead of setting up the activation, it just inlines it as it goes. Inline, inline, inline, right? So they're big traces, but they're known to be hot spots.<br /><br />And even here in the <a type="amzn" asin="0321486811">Dragon Book</a>, Aho, Sethi and Ullman, they say, you know, one of the most important things a compiler can do is try to identify what the hot spots are going to be so it can make them efficient. Because who cares if you're optimizing the function that gets executed once at startup, right?<br /><br />So these traces wind up being trees, because what can happen is, they branch any time an operand is a different type. That's how they handle the overflow to Double: there'll be a branch. They wind up with these trees. They've still got a few little technical issues like, for example, growing exponentially on the Game of Life. There's a <a href="http://andreasgal.com/2008/02/28/tree-folding/">blog about it</a>, um... I'm sorry, I've completely forgotten his name [<em>Andreas Gal</em>], but I will blog this. 
And the guy that's doing these trace trees, he got feedback saying that they've got exponential growth.<br /><br />So they came up with this novel way of folding the trace trees, right, so there are code paths that are almost identical and they can share, right?<br /><br />It's all the same kind of stuff they were doing with <em>these</em> [Dragon Book] data structures back when they were building static compilers. We are at the very beginning of this research! What has happened is, we've gone from Dynamic [to] AI Winter... dynamic research stopped, and anybody who was doing it was sort of anathema in the whole academic [community]... worldwide across all the universities. There were a couple of holdouts. [<a href="http://en.wikipedia.org/wiki/Daniel_P._Friedman">Dan Friedman</a> and] <a href="http://en.wikipedia.org/wiki/Matthias_Felleisen">Matthias Felleisen</a>, right, the <a type="amzn" asin="0262560992">Little Schemer</a> guys, right? Holding out hope.<br /><br />And everybody else went and chased static. And they've been doing it like crazy. And they've, in my opinion, reached the theoretical bounds of what they can deliver, and it has FAILED. These static type systems, they're WRONG. Wrong in the sense that when you try to do something, and they say: No, category theory doesn't allow that, because it's not elegant... Hey man: who's wrong? The person who's trying to write the program, or the type system?<br /><br />And some of the type errors you see in these Hindley-Milner type [systems], or any type system, like "expected (int * int * int)", you know, a tuple, and "but got (int * int * int)", you know [<em>(clapping my hands to my head)</em>] it's pretty bad, right? I mean, they've, I think they've failed. Which is why they're not getting adopted.<br /><br />Now of course that's really controversial. 
There are probably a bunch of type-systems researchers here who are really mad, but...<br /><br />What's happening is: as of this Ajax revolution, the industry shifted to trying to optimize JavaScript. And that has triggered what is going to be a landslide of research in optimizing dynamic languages.<br /><br />So these tricks I'm telling you about, they're just the beginning of it. And if we come out of this talk with one thing, it's that it's cool to optimize dynamic languages again! "Cool" in the sense of getting venture funding, right? You know, and research grants... "Cool" in the sense of making meaningful differences to all those people writing Super Mario clones in JavaScript.<br /><br />You know. It's <em>cool</em>.<br /><br />And so I encourage you, if you're a language-savvy kind of person, to jump in and try to help. Me, I'm probably going to be doing grunt implementations, since I'm not that smart.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh_TaNgqaNwJIe2eirNalrW-TyNqEnCNnwgAe9I-UgKppAnJFweQ6pYmvvrSbwx6oMdoUdOAM1jfuNUFd5o4G135Qs6I1eUdq5K9dQBes2eCk4L3Eibp6z5E-omSdNj4pGBJS1o4Q/s1600-h/thumbnail025.jpg"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh_TaNgqaNwJIe2eirNalrW-TyNqEnCNnwgAe9I-UgKppAnJFweQ6pYmvvrSbwx6oMdoUdOAM1jfuNUFd5o4G135Qs6I1eUdq5K9dQBes2eCk4L3Eibp6z5E-omSdNj4pGBJS1o4Q/s320/thumbnail025.jpg" alt="" id="BLOGGER_PHOTO_ID_5199371031091477170" border="0" /></a><br /><br />And I don't even need to talk about this [last optimization — Escape Analysis], since you already knew it.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgZ1EDB94yzNycRetmvg7jj_nTwD6xVKs7RlnRzdtHNoA1h3dLOxMeQDDUYjerWgHRZ2vSrwDKHpDkqY3MYixdEi6IMRsx-14NxUrJAr9FJ6d7HYUraSufDl36unmc0Djs9UWkj9g/s1600-h/thumbnail026.jpg"><img 
src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgZ1EDB94yzNycRetmvg7jj_nTwD6xVKs7RlnRzdtHNoA1h3dLOxMeQDDUYjerWgHRZ2vSrwDKHpDkqY3MYixdEi6IMRsx-14NxUrJAr9FJ6d7HYUraSufDl36unmc0Djs9UWkj9g/s320/thumbnail026.jpg" alt="" id="BLOGGER_PHOTO_ID_5199371031091477186" border="0" /></a><br /><br />All right! So that's it. That's my talk. CPUs... you get all the good information about how a program is running <em>at run time</em>. And this has huge implications for the tools and for the performance. It's going to change the way we work. It's eventually – God, I hope sooner rather than later – going to obsolete C++ finally.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh1_1dtdrDkDmy5i5Wj_sxfYorGLCmBGw4vwXMIRChxNXBUEv825u2DKSiEkkHV0eAr0KdcrZNmeq4sXMp7zsoXsP5GoAaRj9kF1cA-RtxI7jaXuCG_uhUrEUmG2nTkZHGLAMqBKQ/s1600-h/thumbnail027.jpg"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh1_1dtdrDkDmy5i5Wj_sxfYorGLCmBGw4vwXMIRChxNXBUEv825u2DKSiEkkHV0eAr0KdcrZNmeq4sXMp7zsoXsP5GoAaRj9kF1cA-RtxI7jaXuCG_uhUrEUmG2nTkZHGLAMqBKQ/s320/thumbnail027.jpg" alt="" id="BLOGGER_PHOTO_ID_5199371035386444498" border="0" /></a><br /><br />It's going to be a lot of work, right?<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiShjyOSZHDDfSLNCW13Mv3AY7_ae6ym3tk_RRRgK1sLAy77TQEZbCWg7gG-eMNyTCGj1C_mDyX-_gwI6fhmWYLpYBL6fb7LwzsxugG4pDbmLzCvRiah2utClR24bFQwXSrUncGfQ/s1600-h/thumbnail028.jpg"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiShjyOSZHDDfSLNCW13Mv3AY7_ae6ym3tk_RRRgK1sLAy77TQEZbCWg7gG-eMNyTCGj1C_mDyX-_gwI6fhmWYLpYBL6fb7LwzsxugG4pDbmLzCvRiah2utClR24bFQwXSrUncGfQ/s320/thumbnail028.jpg" alt="" id="BLOGGER_PHOTO_ID_5199371039681411810" border="0" /></a><br /><br />And then, when we finish, nobody's going to use it. [<em>(laughter)</em>] Because, you know. 
Because that's just how people are.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiHwXn59q8qp-da5PhmWW5QG_cl8NkIt92miad64LP-sbI909KDofkVr0HPDX7q1VPdIIEcLca0x8ysDU0b4xWJc7ceie-JEPR97n2DFHW7exAeU1fA2oCQyW842WO0mUkQ_trXNg/s1600-h/thumbnail029.jpg"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiHwXn59q8qp-da5PhmWW5QG_cl8NkIt92miad64LP-sbI909KDofkVr0HPDX7q1VPdIIEcLca0x8ysDU0b4xWJc7ceie-JEPR97n2DFHW7exAeU1fA2oCQyW842WO0mUkQ_trXNg/s320/thumbnail029.jpg" alt="" id="BLOGGER_PHOTO_ID_5199371039681411826" border="0" /></a><br /><br />That's my talk! Thanks. [<em>(applause)</em>]<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhkoUHFp5JkKaT3s9DU-mVHO1TlRAY9c2KQMZWg9b9gqXqrCYyw9ogAntCrIZWSHPKinC5vJ1m2qx_WZuvFEhyOzhszsj3QQuHcbSXUFYVjRuztz5OCbWKpHaSXuep3dIIyy-1fZA/s1600-h/thumbnail030.jpg"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhkoUHFp5JkKaT3s9DU-mVHO1TlRAY9c2KQMZWg9b9gqXqrCYyw9ogAntCrIZWSHPKinC5vJ1m2qx_WZuvFEhyOzhszsj3QQuHcbSXUFYVjRuztz5OCbWKpHaSXuep3dIIyy-1fZA/s320/thumbnail030.jpg" alt="" id="BLOGGER_PHOTO_ID_5199371396163697410" border="0" /></a><br /><br />Questions? No questions? I think we're out of time, right? [<em>(audience: no, we have half an hour)</em>]<br /><br /><b>Q: What's your definition of marketing?</b><br /><br />Hey man, I'm doing it <em>right now</em>. [<em>(laughter)</em>]<br /><br />I am! In a sense, right? I mean, like, Perl was a marketing success, right? But it didn't have Sun or Microsoft or somebody hyping it. It had, you know, the guy in the cube next to you saying "Hey, check out this Perl. I know you're using Awk, but Perl's, like, weirder!"<br /><br />The marketing can happen in any way that gets this message across, this meme out to everybody, in the Richard Dawkins sense. That's marketing. 
And it starts from just telling people: hey, it's out there.<br /><br /><b>Q: Do you see any of this stuff starting to move into microprocessors or instructions?</b><br /><br />Ah! I knew somebody was going to ask that. So unfortunately, the JITs that are doing all these cool code optimizations could potentially be running into these weird impedance mismatches with microprocessors that are doing their own sets of optimizations. I know <em>nothing</em> about this except that it's... probably gonna happen. And, uh, God I hope they talk to each other. [<em>Editor's note: after the talk, I heard that trace trees started life in hardware, at HP.</em>]<br /><br /><b>Q: You could imagine CMS (?) pulling all these stunts and looking at stuff and saying, "Oh, I know that this is just machine language... oh, look! That's an int, and..."</b><br /><br />Yes. I do know... that there's a compiler now that compiles [machine code] into microcode, a JIT, you know, I was reading about it.<br /><br /><b>Q: So one problem with performance is that it's not just fast performance vs. slow performance. What they're having a lot of trouble with is that a function one time takes a millisecond or a microsecond, and another time it takes 300 or 500 or 1000 times longer. [<em>part of question muted</em>] Any thoughts on how to improve the performance predictability of dynamic languages?</b><br /><br />Yeah... <em>*sigh*</em>. Well, I think for the foreseeable future, I mean honestly having talked to several of the VM implementers, they're not making any claims that JavaScript's going to be as fast as C any time soon. Not for the foreseeable future. It's going to be very fast, right, but it's not going to be quite... they're not going to make the crazy promises that Sun did.<br /><br />Which means that these dynamic speedups are primarily going to be useful in long-running distributed processes, for which a little glitch now and then isn't going to matter in the grand scheme of the computation. 
Or, they're going to be, you know, the harder one is in clients, where you've got a browser app, and you're hoping that the glitch you're talking about isn't on the order of hundreds of milliseconds.<br /><br />Generational garbage collectors are the best answer I've got for that, because they reduce the pauses, and frankly, the garbage collectors for all the [new] dynamic languages today are crap. They're mark-and-sweep, or they're reference counted. They've got to fix that. Right out of the starting gate, that's gonna nullify 80 percent of that argument.<br /><br />For the rest, I don't know. It's up to you guys.<br /><br /><b>Q: You also have to look at your storage management; you have to understand your storage in your program, and have some sort of better control over that... </b><br /><br />That's right. As you point out, it's domain-specific. If your bottleneck is your database, all bets are off.<br /><br /><b>Q: You seem to be equating dynamic language with dynamic encoding, that you have "dynamic language equals JIT".</b><br /><br />For this talk, yes. [<em>(laughter)</em>]<br /><br /><b>Q: ...but the same thing can be done for static languages.</b><br /><br />Yeah, absolutely!<br /><br /><b>Q: ...and as soon as the marketing starts getting some market penetration, the C++ people will just simply come around and say, "You can have maintainability <em>and</em> performance".</b><br /><br />Yep! They can, actually. That's what they'll say. And I'll say: "All right. I'll give you a thousand dollars when you're done." OK? Because the C++ people have actually shot themselves in their own foot. By adding so many performance hacks into the language, and also actual features into the language for performance, like pointer manipulation, the language itself is large enough that it's very difficult. It's much more difficult to envision doing a JIT that can handle pointers properly, for example, right? You can do it! 
It's just a <em>lot</em> more work than it is for these simple languages. [<em>In retrospect, I'm not so sure about this claim. Trace trees may not care about pointers, so maybe it wouldn't be that hard? Of course, they'd have to move to a JIT first, requiring an initial slowdown, so it'd probably never happen. -Ed.</em>]<br /><br />So they're winding up... they're winding up in a situation where they're gonna have to weigh it carefully, and say, "OK: when all is said and done, is my language actually gonna be faster than these other languages that have gone through this process already?" Because now we're on a more level playing field.<br /><br />Especially as it's getting increasingly difficult to predict exactly what the hardware architecture is going to be, and those mismatches tend to have a huge impact on what the JIT actually can do. I mean, hardware's getting really out there now, and the compiler writers are still trying to figure out what to do about it. I mean, even the stuff they're doing in the JITs today might not apply tomorrow.<br /><br />So I realize it's a weak answer, but I'm saying, you know, it's a hard proposition for me to imagine them doing. They'll try! Maybe they'll succeed.<br /><br /><b>Q: The other side of dynamic languages is second-order systems: the ability to do an eval. And the difficulty with that is intellectual tractability. Most people use second-order languages to write first-order programs. Is there any real reason to even have a second-order language for writing Cobol?</b><br /><br />Can they hear these questions in the audio? Because this is a really good point.<br /><br />So this is, I mean, I don't know the answer to this. This is a hard question, OK?<br /><br />Java has kind of gotten there without even having eval. They've tiered themselves into sort of second-order people who know how to manipulate the type-theory stuff, you know, right? 
People go off to them with batch requests: "Please write me a type expression that meets my needs". And it comes back. So we're already in sort of the same situation we were in with <a type="amzn" asin="1590592395">hygienic Scheme macros</a>, you know, or with any sort of macro system, or any eval system. Which is that really only a few people can be trusted to do it well, and everybody else kind of has to... right?<br /><br />So I don't know, maybe it's just a cultural... maybe it's already solved, and we just have to live with the fact that some programming languages are going to have dark corners that only a few people know. It's unfortunate. It's ugly.<br /><br />[<em>My PhD languages intern, Roshan James, replies to the questioner: Your usage of the phrase 'second-order', where does that come from? A comment as to what you're telling us, which is that some sort of phased evaluation, specific to Scheme at least, we're saying... some would say the complexity of writing Scheme macros is roughly on the order of writing a complex higher-order procedure. It's not much harder. A well thought out macro system is not a hard thing to use.</em>]<br /><br />...says the Ph.D. languages intern! He's representing the majority viewpoint, of course. [<em>(laughter)</em>] I'll tell you what: I'll let you two guys duke it out after the talk, because I want to make sure we get through anybody else's questions.<br /><br /><b>Q: You're assuming you have a garbage-collected environment. What do you think about reference counting, with appropriate optimization?</b><br /><br />Ummmm... No. [<em>(laughter)</em>] I mean, come on. Garbage collection... you guys know that, like, it's faster in Java to allocate an object than it is in C++? They've got it down to, like, three instructions on some devices, is that right? And the way the generational garbage collector works is that 90% of the objects get reused. 
Plus there's fine-grained interleaving with the way the memory model of the operating system works, to make sure they're dealing with issues with, whaddaya call it, holes in the heap, where you can't allocate, I mean there's a whole bunch of stuff going on. [<em>This was me doing my best moron impersonation. Sigh. -Ed.</em>]<br /><br />So, like, it works. So why throw the extra burden on the programmer? Even [in] C++, by the way, <a href="http://www.artima.com/cppsource/cpp0x.html">Stroustrup wants to add garbage collection</a>!<br /><br /><b>Q: If you believe your other arguments, you can do the reference counting or local pooling, and point out when it's actually wrong.</b><br /><br />Right. The philosophical answer to you guys is: compilers will eventually get smart enough to deal with these problems better than a programmer can. This has happened time and again. [For instance] compilers generate better assembly code [than programmers do].<br /><br />All the "tricks" that you learned to optimize your Java code, like marking everything final, so that the compiler can inline it – the VM does that for you now! And it puts [in] some <code>ClassLoader</code> hooks to see if you load a class that makes it non-final, and then [if the assumption is invalidated later] it undoes the optimization and pulls the inlining out.<br /><br />That's how smart the VM is right now. OK? You only need a few compiler writers to go out there and obsolete <em>all</em> the tricks that you learned. All the memory-pooling tricks...<br /><br />Hell, you guys remember <code>StringBuffer</code> and <code>StringBuilder</code> in Java? They introduced <code>StringBuilder</code> recently, which is an unsynchronized version, so they didn't have to have a lock? Guess what? <a href="http://java.sun.com/performance/reference/whitepapers/6_performance.html">Java 6 optimizes those locks away</a>. Any time <em>you</em> can see that the lock isn't needed, they can see it. 
[<em>Editor's Note: See "Biased locking" in the linked perf whitepaper. It's an astoundingly cool example of the predictive-heuristic class of techniques I've talked about today.</em>]<br /><br />So now all these tricks, all this common wisdom that programmers share with each other, saying "I heard that this hashtable is 1.75 times faster than blah, so therefore you should...", all the micro-optimizations they're doing – are going to become obsolete! Because compilers are smart enough to deal with that.<br /><br /><b>Q: You didn't mention APL, which is a very nice dynamic language...</b><br /><br />I didn't mention <a href="http://en.wikipedia.org/wiki/APL_%28programming_language%29">APL</a>!? Oh my. Well, I'm sorry. [<em>(laughter)</em>]<br /><br /><b>Q: The thing is, well – several of the APL systems, they incorporated most of the list of program transformations you're talking about. And they did it several decades ago.</b><br /><br />Yeah, so... so you could've said Lisp. You could've said Smalltalk. "We did it before!" And that was kind of, that was one of the important points of the talk, right? It <em>has</em> been done before. But I'm gonna stand by my – especially with APL – I'm going to stand by my assertion that the language popularity ranking is going to stay pretty consistent. I don't see APL moving 50 slots up on the [list].<br /><br />I'm sorry, actually. Well not for that case. [<em>(laughter)</em>] But I'm sorry that in general, the languages that got optimized really well, and were really elegant, arguably more so than the languages today, you know, in a lot of ways, but they're not being used.<br /><br />I tried! I mean, I tried. But I couldn't get anybody to use them. I got lynched, time and again.<br /><br /><b>Q: So what's the light at the end of the tunnel for multithreading?</b><br /><br />Oh, God! You guys want to be here for another 2 hours? 
[<em>(laughter)</em>] I read the scariest article that I've read in the last 2 years: an interview with, I guess his name was Cliff Click, which I think is a cool name. He's like the HotSpot -server VM dude, and somebody, Kirk Pepperdine was <a href="http://www.theserverside.com/tt/knowledgecenter/knowledgecenter.tss?l=MetalMeetsJVM">interviewing him on The Server Side</a>. I just found this randomly.<br /><br />And they started getting down into the threading, you know, the Java memory model and how it doesn't work well with the actual memory models, the hardware, and he started going through, again and again, problems that every Java programmer – like, nobody knows when the hell to use <code>volatile</code>, and so all of their reads are unsynchronized and they're getting stale copies...<br /><br />And he went through – went through problems to which he does not know the answer. I mean, to where I came away going Oh My God, threads are irreparably busted. I don't know what to do about it. I really don't know.<br /><br />I do know that I did write a half a million lines of Java code for this game, this <a href="http://www.cabochon.com/">multi-threaded game I wrote</a>. And a lot of weird stuff would happen. You'd get <code>NullPointerException</code>s in situations where, you know, you thought you had gone through and done a more or less rigorous proof that it shouldn't have happened, right?<br /><br />And so you throw in an "if null", right? And I've got "if null"s all over. I've got error recovery threaded through this half-million line code base. It's contributing to the half million lines, I tell ya. But it's a very robust system.<br /><br />You <em>can</em> actually engineer these things, as long as you engineer them with the certain knowledge that you're using threads wrong, and they're going to bite you. And even if you're using them right, the implementation probably got it wrong somewhere.<br /><br />It's really scary, man. I don't... 
I can't talk about it anymore. I'll start crying.<br /><br /><b>Q: These great things that IDEs have, what's gonna change there, like what's gonna really help?</b><br /><br />Well, I think the biggest thing about IDEs is... first of all, dynamic languages will catch up, in terms of sort of having feature parity. The other thing is that IDEs are increasingly going to tie themselves to the running program. Right? Because they're already kind of doing it, but it's kind of half-assed, and it's because they still have this notion of static vs. dynamic, compile-time vs. run-time, and these are... really, it's a continuum. It really is. You know, I mean, because you can invoke the compiler at run time.<br /><br /><b>Q: Is it allowed at Google to use Lisp and other languages?</b><br /><br />No. No, it's not OK. At Google you can use C++, Java, Python, JavaScript... I actually found a legal loophole and used server-side JavaScript for a project. Or some of our proprietary internal languages.<br /><br />That's for production stuff. That's for stuff that armies of people are going to have to maintain. It has to be high-availability, etc. I actually wrote a <a href="http://steve-yegge.blogspot.com/2007/06/rhino-on-rails.html">long blog about this</a> that I'll point you to that actually... Like, I actually came around to their way of seeing it. I did. Painful as it was. But they're right.<br /><br /><b>Q: [<em>question is hard to hear</em>]</b><br /><br />[<em>me paraphrasing</em>] Are we going to have something Lisp Machines didn't?<br /><br /><b>Q: Yes.</b><br /><br />Well... no. [<em>(loud laughter)</em>]<br /><br />I say that in all seriousness, actually, even though it sounds funny. I, you know, I <em>live</em> in <a type="amzn" asin="188211485X">Emacs</a>. And Emacs is the <a type="amzn" asin="1882114566">world's last Lisp Machine</a>. All the rest of them are at garage sales. But Emacs <em>is</em> a Lisp Machine. 
It may not be the best Lisp, but it is one.<br /><br />And you know, T. V. Raman, you know, research scientist at Google, who, he doesn't have the use of his sight... he's a completely fully productive programmer, more so than I am, because Emacs is his window to the world. It's his remote control. <a href="http://emacspeak.sourceforge.net/">Emacspeak</a> is his thesis. It's amazing to watch him work.<br /><br />Emacs, as a Lisp Machine, is capable of doing <em>anything</em> that these other things can. The problem is, nobody wants to learn Lisp.<br /><br /><b>Q: And it doesn't have closures.</b><br /><br />And it doesn't have closures, although you can fake them with macros.<br /><br />I'm actually having lunch with an [ex-]Emacs maintainer tomorrow. We're going to talk about how to introduce concurrency, a better rendering engine, and maybe some Emacs Lisp language improvements. You know, even Emacs has to evolve.<br /><br />But the general answer to your question is No. Lisp Machines pretty much had it nailed, as far as I'm concerned. [<em>(shrugging)</em>] Object-oriented programming, maybe? Scripting? I dunno.<br /><br /><b>Q: Many years ago, I made the great transition to a fully type-checked system. And it was wonderful. And I remember that in the beginning I didn't understand it, and I just did what I had to do. And one dark night, the compiler gave me this error message, and it was right, and I thought "Oh wow, thank you!" I'd suddenly figured it out.</b><br /><br />Yes! "Thank you." Yes.<br /><br />Although it's very interesting that it took a long time before it actually told you something useful. I remember my first experience with a C++ compiler was, it would tell me "blublblbuh!!", except it wouldn't stop there. 
It would vomit for screen after screen because it was <a href="http://en.wikipedia.org/wiki/Cfront">Cfront</a>, right?<br /><br />And the weird thing is, I realized early in my career that I would actually rather have a runtime error than a compile error. [<em>(some laughs)</em>] Because at that time... now this is way contrary to popular opinion. Everybody wants early error detection. Oh God, not a runtime error, right? But the debugger gives you this ability to start poking and prodding, especially in a more dynamic language, where you can start simulating things, you can back it up... You've got your time-machine debuggers like the <a type="amzn" asin="159059620X">OCaml</a> one, that can actually save the states and back up.<br /><br />You've got amazing tools at your disposal. You've got your print, your console printer, you've got your logging, right? [<em>Ummm... and eval. Oops. -Ed.</em>] You've got all these assertions available. Whereas if the compiler gives you an error that says "expected expression angle-bracket", you don't have a "compiler-debugger" that you can shell into, where you're trying to, like – you could fire up a debugger on the compiler, but I don't recommend it.<br /><br />So, you know, in some sense, your runtime errors are actually kind of nicer. When I started with Perl, which was pretty cryptic, you know, and I totally see where you're coming from, because every once in a while the compiler catches an error. But the argument that I'm making is NOT that compilers don't occasionally help you catch errors. The argument that I'm making is that you're gonna catch the errors one way or the other. Especially if you've got unit tests, or QA or whatever.<br /><br />And the problem is that the type systems, in programming in the large, wind up getting in your way... way too often. 
Because the larger the system gets, the more desperate the need becomes for these dynamic features, to try to factor out patterns that weren't evident when the code base was smaller. And the type system just winds up getting in your way again and again.<br /><br />Yeah, sure, it catches a few trivial errors, but what happens is, when you go from Java to JavaScript or Python, you switch into a different mode of programming, where you look a lot more carefully at your code. And I would argue that a compiler can actually get you into a mode where you just submit this batch job to your compiler, and it comes back and says "Oh, no, you forgot a semicolon", and you're like, "Yeah, yeah, yeah." And you're not even really thinking about it anymore.<br /><br />Which, unfortunately, means you're not thinking very carefully about the algorithms either. I would argue that you actually craft better code as a dynamic language programmer in part because you're <em>forced</em> to. But it winds up being a good thing.<br /><br />But again, I – this is all very minority opinion; it's certainly not majority opinion at Google. All right? So this is just my own personal bias.<br /><br /><b>Q: [<em>question too hard to hear over audio, something about is it possible for the compiler at least to offer some help</em>]</b><br /><br />You know, that's an interesting question. Why do compiler errors have to be <em>errors</em>? Why couldn't you have a compiler that just goes and gives you some advice? Actually, this is what IDEs are excelling at today. Right? At <em>warnings</em>. It's like, "ah, I see what you're doing here, and you don't really need to. You probably shouldn't."<br /><br />It's weird, because Eclipse's compiler is probably a lot better than javac. Javac doesn't need to be good for the reasons I described earlier, right? It all gets torn down by the JIT. But Eclipse's compiler needs to give you that exact help. 
The programmer help, the day-to-day help, I missed a semicolon, I missed this, right? And Eclipse and IntelliJ, these editors, their compilers are very very good at error recovery, which in a static batch compiler usually just needs to be: BLAP, got an error!<br /><br />OK? So to an extent I think we <em>are</em> getting tools that come along and act like the little paper-clip in Microsoft Office. You know. Maybe not quite like that.<br /><br /><b>Q: The only thing I worry about is that there's a chunk of code that you really want to work sometimes, but the error-recovery parts are hard to test.</b><br /><br />That's the part you <em>have</em> to do at runtime, right? Well, I mean, when you get into concurrency you're just screwed, but if you're talking about situations where it's very difficult to... I mean, it's computationally impossible to figure out whether all paths through a code graph are taken. I mean, it's NP-complete, you can't do this, right? But the VM can tell you which code paths got taken, and if it doesn't [get taken], you can change your unit test to force those code paths to go through, at which point you've now exercised all of your code. Right? That's kind of the way, you know, they do it these days.<br /><br />And I would say it's a pain in the butt, but I mean... it's a pain in the butt because... a static type-systems researcher will tell you that unit tests are a poor man's type system. The compiler ought to be able to predict these errors and tell you the errors, way in advance of you ever running the program. 
And for the type systems they've constructed, this is actually true, by and large, modulo assertion errors and all these weird new runtime errors they actually have to, heh, inject into your system, because of type-system problems.<br /><br />But by and large, I think what happens is unless the type system actually delivers on its promise, of always being right and always allowing you to model any lambda-calculus computation your little heart desires, OK? Unless it can do that, it's gonna get in your way at some point.<br /><br />Now again, this is all personal choice, personal preference. I think, you know, static compilers and error checkers, they have a lot of value and they're going to be around for a long time. But dynamic languages could get a lot better about it.<br /><br />I'm not trying to refute your point. I'm just saying that... there are tradeoffs, when you go to a dynamic language. I have come around... I've gone from dynamic to static and back to dynamic again, so I've done the whole gamut. And I've decided that for very large systems, I prefer the dynamic ones, in spite of trade-offs like the one you bring up.<br /><br /><b>Q: Actually you said, for hybrid systems, where pieces of it are dynamic and pieces are static...</b><br /><br />I think some of it has been pretty... there's a great paper from Adobe about it, right? <a href="http://www.ecmascript.org/es4/spec/evolutionary-programming-tutorial.pdf">Evolutionary programming</a>? Talks about how you prototype your system up, and then you want to lock it down for production, so you find the spots where there are API boundaries that people are going to be using. You start putting in contracts, in the way of type signatures and type assertions.<br /><br />Why not build that functionality into the language? Then you don't have to build a prototype and re-implement it in C++. That's the thinking, anyway. It seems like a great idea to me, but it hasn't caught on too much yet. We'll see. 
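<br /><br />[<em>Editor's aside: a hypothetical sketch of that "prototype loose, then lock it down" progression, translated into Java terms. All the names here are invented for illustration. In the prototype phase a shape is just a map, and mistakes only surface at run time; locking it down turns the same contract into a type signature the compiler checks at the API boundary. -Ed.</em>]

```java
import java.util.Map;

public class Evolve {
    // Prototype phase: loosely shaped data. A missing or misspelled key
    // only blows up when this line actually runs.
    static int areaLoose(Map<String, Object> shape) {
        return (Integer) shape.get("w") * (Integer) shape.get("h");
    }

    // Locked-down phase: the boundary now carries an explicit contract,
    // and malformed callers are rejected before the program ever runs.
    record Rect(int w, int h) {}

    static int areaTyped(Rect r) {
        return r.w() * r.h();
    }

    public static void main(String[] args) {
        System.out.println(areaLoose(Map.of("w", 3, "h", 4))); // 12
        System.out.println(areaTyped(new Rect(3, 4)));         // 12
    }
}
```
<br /><br />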
[<em>Editor's note: The main hybrid examples I'm aware of are StrongTalk, Common Lisp, Groovy and EcmaScript Edition 4. Any others?</em>]<br /><br />All right, well, now we're definitely over time, and I'm sure some of you guys want to go. So thank you very much!<br /><br /><hr /><br /><br />At this point I got mobbed by a bunch of grad students, profs, and various interested people with other questions. They dumped an absolutely astounding amount of information on me, too – papers to chase down, articles to follow up on, people to meet, books to read. There was lots of enthusiasm. Glad they liked it!<br /><br /><hr /><br /><br /><b>XEmacs is Dead. Long Live XEmacs!</b><br /><em>Steve Yegge, April 28, 2008</em><br /><br /><table><tr><td width="10%"> </td><td>"We're going to get lynched, aren't we?" — <em>Phouchg</em></td></tr></table><br /><br />And you thought I'd given up on controversial blogs. Hah!<br /><br /><b>Preamble</b><br /><br />This must be said: Jamie Zawinski is a hero. A living legend. A major powerhouse programmer who, among his many other accomplishments, wrote the original Netscape Navigator and the original <a href="http://en.wikipedia.org/wiki/XEmacs">XEmacs</a>. A guy who can use the term "<a href="http://www.jwz.org/doc/java.html">downward funargs</a>" and then glare at you just daring you to ask him to explain it, you cretin. A dude with arguably the best <a href="http://jwz.livejournal.com/">cat-picture blog</a> ever created.<br /><br />I've never met him, but I've been in awe of his work since 1993-ish, a time when I was still wearing programming diapers and needing them changed about every 3 hours.<br /><br />Let's see... that would be 15 years ago. I've been slaving away to become a better programmer for fifteen years, and I'm still not as good — nowhere <em>near</em> as good, mind you — as he was then. 
I still marvel at his work, and his shocking writing style, when I'm grubbing around in the guts of the Emacs-Lisp byte-compiler.<br /><br />It makes you wonder how many of him there are out there. You know, programmers at that level. He can't be the only one. What do you suppose they're all working on? Or do they all eventually make 25th level and opt for <a href="http://www.dnalounge.com/">divine ascension</a>?<br /><br />In any case, I'm sad that I have to write the obit on one of his greater achievements. Sorry, man. Keep up the cat blog.<br /><br /><b>Forking XEmacs</b><br /><br />I have to include a teeny history lesson. Bear with me. It's short.<br /><br /><a href="http://www.xemacs.org">XEmacs</a> was a fork of the GNU Emacs codebase, created about 17 years ago by a famous-ish startup called <a href="http://en.wikipedia.org/wiki/Lucid_Inc.">Lucid Inc.</a>, which, alas, went Tango Uniform circa 1994. As far as I know, their two big software legacies still extant are a Lisp environment now sold by <a href="http://www.lispworks.com/products/lispworks.html">LispWorks</a>, and XEmacs.<br /><br />I'd also count among their legacies an absolutely outstanding collection of software essays called <a type="amzn" asin="0195121236">Patterns of Software</a>, by Lucid's founder, <a href="http://en.wikipedia.org/wiki/Richard_p_gabriel">Richard P. Gabriel</a>. I go back and re-read them every year or so. They're that good.<br /><br />Back when XEmacs was forked, there were some fireworks. Nothing we haven't seen many times before or since. Software as usual. But there was a Great Schism. Nowadays it's more like competing football teams. Tempers have cooled. At least, I think they have.<br /><br />As for the whole sordid history of the FSF-Emacs/XEmacs schism, you can read about it online. I'm sure it was a difficult decision to make. There are pros and cons to forking a huge open-source project. 
But I think it was the right decision at the time, just as decommissioning it is the right decision today, seventeen years later.<br /><br />XEmacs dragged Emacs kicking and screaming into the modern era. Among many other things, XEmacs introduced GUI widgets, inline images, colors in terminal sessions, variable-size fonts, and internationalization. It also brought a host of technical innovations under the hood. And XEmacs has always shipped with a great many more packages than GNU Emacs, making it more of a turnkey solution for new users.<br /><br />XEmacs was clearly an important force helping to motivate the evolution of GNU Emacs during the mid- to late-1990s. GNU Emacs was always playing catch-up, and the dev team led by <a href="http://en.wikipedia.org/wiki/Richard_stallman">RMS</a> (an even more legendary hacker-hero) complained that XEmacs wasn't really playing on a level field. The observation was correct, since XEmacs was using a <a href="http://catb.org/~esr/writings/cathedral-bazaar/">Bazaar</a>-style development model, and could move faster as a direct consequence.<br /><br />A lot of people were switching over to XEmacs by the mid-1990s: the fancy widgets and pretty colors attracted GNU Emacs users like moths to a bug-zapper.<br /><br />Problem was, it could actually zap you.<br /><br /><b>The downside of the Bazaar</b><br /><br />I personally tried to use XEmacs many times over a period of many years. I was jealous of its features.<br /><br />However, I never managed to use XEmacs for very long, because it crashed a lot. I tried it on every platform I used between ~1996 and 2001, including HP/UX, SunOS, Solaris, Ultrix, Linux, Windows NT and Windows XP. XEmacs would never run for more than about a day under moderate use without crashing.<br /><br />I've argued previously that one of the most important survival traits of a software system is that it <a href="http://steve-yegge.blogspot.com/2007/01/pinocchio-problem.html">should never reboot</a>. 
Emacs and XEmacs are at the leading edge of living software systems, but XEmacs has never been able to take advantage of this property because even though it can live virtually forever, it's always tripping and falling down manholes.<br /><br />Clumsy XEmacs. Clumsy!<br /><br />I assume its propensity for inopportune heart attacks is a function of several things, including (a) old-school development without unit tests, (b) the need to port it to a gazillion platforms, including many that nobody actually uses, (c) a culture of rapid addition of new features. There are probably other factors as well.<br /><br />I'm just speculating though. All I know is that it's always been very, very crashy. It doesn't actually matter what the reasons are, since there's no excuse for it.<br /><br />Interestingly, most XEmacs users I've talked to say they don't notice the crashing. I'm sure this is because it's all relative. XEmacs doesn't crash any more often than Firefox, for instance. Probably less often. When Firefox crashes I make a joke about it and restart it, because the crashing rarely has an impact. It even restores your state properly most of the time, so it's just a minor blip, an almost trivial inconvenience, so long as whatever text field you happen to be editing has an auto-save feature. And most of the good ones do.<br /><br />XEmacs may crash even less than Eclipse and IntelliJ. Crashing editors usually aren't a big problem. Programmers all learn the hard way to save their buffers frequently. For me, saving is like punctuation; I save whenever my typing pauses, out of reflex. Doesn't matter whether it's Emacs or NeoOffice or GMail or... do I use any other apps? Oh yeah, or the Gimp. When I pause, I save, and if you're a programmer I bet you do too. So occasional crashes may seem OK.<br /><br />Another reason the crashes aren't called out more often is that most Emacs and XEmacs users are at best casual users. 
They open up an {X}Emacs session whenever they need to edit a file, and close it when they're done. It's just Notepad with colors and multi-level Undo.<br /><br />If your average session length is shorter than the editor's MTBF, then yeah, you're not going to notice much crashing.<br /><br />In contrast, your more... ah, <em>seasoned</em> (read: fanatical) Emacs users gradually come to live in it. Anything you can't do from within Emacs is an annoyance. It's like having to drive to a government building downtown to take care of some random paperwork they should have been offering as an online service a decade ago. You can live with it, but you're annoyed.<br /><br />Even Firefox, the other big place I live, really wants to be Emacs. Tabs don't scale. Tabbed browsing was revolutionary in the same way adding more tellers to a bank was revolutionary: it's, like, 4x better. w00t. Emacs offers the unique ability to manage its open buffers in another first-class buffer, as a <em>list</em>. Imagine what your filesystem life would be like if the only view of a directory was one tab per file. Go to your pictures directory and watch it start vomiting tabs out like it tried to swallow a box of chiclets. Fun!<br /><br />I feel sad when I see Eclipse users with fifty open tabs, an army of helpful termites eating away at their screen real-estate and their time.<br /><br />I have a feeling I've veered off course somewhere... where was I? Oh yeah. Crashing.<br /><br />So XEmacs has never been a particularly good tool for serious Emacs users because even though it's written in C, it crashes like a mature C++ application. You know the drill: major faceplants, all the fugging time.<br /><br />Your ability to become an inhabitant of Emacs is gated by how stable it is. GNU Emacs has always been famously stable. Sure, the releases happen less frequently than presidential inaugurations. Sure, for a long time it always lacked some XEmacs feature or other. But it's really, really stable. 
Its MTBF is measurable in weeks (or even months, depending on what you're doing with it) as opposed to hours or days.<br /><br />Emacs, like Firefox, can be configured to back up your state periodically, so that in theory it can recover after a crash. That's part of the problem: you didn't actually have to configure Firefox to get that behavior. It does it automatically. And to be honest, I've never had much luck with the Emacs save-state facilities. I'm a pretty competent elisp hacker these days, but the <code>desktop.el</code> package has never worked reliably for me. I could probably get it to work, but I've always found it easier to write specialized startup "scripts" (lisp functions) that load up particular favorite configurations.<br /><br />If I can't get desktop-save working, I'd guess that fewer than 1/10th of 1 percent of Emacs users use that feature. So crashes blow everything away.<br /><br />If the state isn't being auto-saved, the next best thing is for it not to crash.<br /><br />XEmacs never got that right.<br /><br /><b>Don't get me wrong...</b><br /><br />I just realized I'm going to get screamed at by people who think I'm just an XEmacs-hater slash GNU-fanboy.<br /><br />Make no mistake: I'm a fan of XEmacs. I think it was a great (or at least, necessary) idea in 1991. I think the execution, aside from the stability issue, was top-notch. I think it had a good architecture, by and large, at least within the rather severe constraints imposed by Emacs Lisp. I think it spurred competition in a healthy way.<br /><br />I think the XEmacs development team, over the years, has consisted of engineers who are ALL better than I am, with no exceptions. And I even like certain aspects of the interface better, even today now that GNU Emacs has caught and surpassed XEmacs in features. 
For instance, I like the XEmacs "apropos" system better.<br /><br />If you're going to scream at me for irrational reasons, it really ought to be for the <em>right</em> irrational reasons. Legitimate dumb reasons for screaming at me include: you're lazy and don't want to learn anything new; you invested a lot of time in XEmacs and don't see why you should be forced to switch; you are a very slow reader, causing you to skip three out of every five words I write, resulting in your receipt of a random approximation of my blog content, with a high error bar; you're still mad about my OS X blog. All good bad reasons.<br /><br />Heck, you could even scream for rational reasons. Perhaps you have a philosophical beef with the FSF or GPL3. Perhaps XEmacs still has some vestiges of feature support that do not yet exist in GNU Emacs, and you truly can't live without them. I would think you're being a teeny bit uptight, but I would respect your opinion.<br /><br />Whatever you do, just don't yell at me for thinking I'm dissing XEmacs or taking some sort of religious stance. Far from it. I just want a unified Emacs-o-cratic party.<br /><br /><b>XEmacs vs. GNU Emacs today</b><br /><br />GNU Emacs pulled into the lead in, oh... I'd say somewhere around maybe 2002? 2003? I wasn't really keeping track, but one day I noticed Emacs had caught up.<br /><br />Even today I maintain XEmacs/FSF-Emacs compatibility for my elisp files – some 50k lines of stuff I've written and maybe 400k lines of stuff I've pilfered from <a href="http://www.emacswiki.org/cgi-bin/wiki">EmacsWiki</a>, friends, and other sources. I still fire up XEmacs whenever I need to help someone get un-stuck, or to figure out whether some package I've written can be coerced to run, possibly in restricted-feature mode, under XEmacs.<br /><br />For years I chose stability over features. And then one day GNU Emacs had... well, everything. 
Toolbars, widgets, inline images, variable fonts, internationalization, drag-and-drop in and out of the OS clipboard (even on Windows), multi-tty, and a long laundry-list of stuff I'd written off as XEmacs-only.<br /><br />And it was still stable. Go figure.<br /><br />I don't have the full feature-compatibility list. Does it even exist? You know, those tables that have little red X's if the Evil Competitor product is missing some feature your product offers, and little green checkmarks, and so on. We ought to make one of those. It would be useful to know what (if any) XEmacs features are preventing the last holdouts from migrating to FSF Emacs.<br /><br />But for the past five years or so, just about every time an XEmacs user on a mailing list has mentioned a feature that's keeping them from switching, it's been solved.<br /><br />If GNU Emacs isn't a perfect superset of XEmacs yet, I'm sure we could get it there if we had the big unified-platform carrot dangling in front of us. And I bet it's pretty close already.<br /><br />Features and stability aside, XEmacs is looking pretty shabby in the performance department. Its font-lock support has never been very fast, and a few years back GNU Emacs took a giant leap forward. XEmacs can take 4 or 5 seconds or longer to fontify a medium-sized source file. Sure, it shows that big progress bar in the middle of the screen, so you know it's not dead, but when you're used to it being almost instantaneous, coming back to XEmacs is a real shocker.<br /><br />And XEmacs has bugs. Man, it has a lot of bugs. I can't begin to tell you how many times I've had to work around some horrible XEmacs problem. It has bugs (e.g. in its fontification engine and cc-engine) that have been open for years, and they can be really painful to work around. 
I've had to take entire mode definitions and <code>if-xemacs</code> them, using an ancient version of the mode for XEmacs because nothing even remotely recent will run.<br /><br />You may not notice the bugs, but as elisp developers, we feel the pain keenly.<br /><br /><b>Fundamental incompatibilities</b><br /><br />As if issues with stability, performance and bugs weren't enough, XEmacs has <em>yet another</em> problem, which is that its APIs for dealing with UI elements (widgets and input events, but also including things like text properties, overlays, backgrounds and other in-buffer markup) are basically completely different from their GNU-Emacs counterparts. The two Emacsen share a great deal of common infrastructure at the Lisp level: they have mostly compatible APIs for dealing with files, buffers, windows, subprocesses, errors and signals, streams, timers, hooks and other primitives.<br /><br />But their APIs range from mildly to completely different for keyboard and mouse handling, menus, scrollbars, foreground and background highlighting, dialogs, images, fonts, and just about everything else that interfaces with the window system.<br /><br />The GUI and display code for any given package can be a significant fraction of the total effort, and it essentially has to be rewritten from scratch when porting from GNU Emacs to XEmacs or vice-versa. Unsurprisingly, many package authors just don't do it. The most famous example I can think of is James Clark's nxml-mode, which claims it'll never support XEmacs. I found that pretty shocking, since I thought it was basic Emacs etiquette to try to support XEmacs, and here James was cutting all ties, all public about it and everything. Wow.<br /><br />But I totally understand, since I really don't want to rewrite all the display logic for my stuff either.<br /><br />I'll be the first to admit: the API discrepancies are not XEmacs's fault. 
I can't see how they could be, given that for nearly all these features, XEmacs had them first.<br /><br />For a developer trying to release a productivity package, it doesn't really matter whose fault it is. You target the platform that will have the most users. I don't know what XEmacs's market share is these days, but I'd be very surprised if it's more than 30%. That's a big number, but when you're an elisp hacker creating an open-source project in your limited spare time, that number can start looking awfully small. Teeny, even.<br /><br /><b>XEmacs should drop out of the race</b><br /><br />At this point it's becoming painful to watch. GNU Emacs is getting all the superdelegates. That warmonger VIM is sitting back and laughing at us. But XEmacs just won't quit!<br /><br />I'm sure there are a few old-timers out there who still care about the bad blood that originally existed between the two projects. To everyone else it's ancient history. As far as I can tell, there has been an atmosphere of polite (if subdued) cooperation between the two projects. Each of them has incorporated some compatibility fixes for the other, although it's still mostly up to package authors to do the heavy lifting of ensuring compatibility, especially for display code.<br /><br />I haven't seen any XEmacs/GNU-Emacs flamewars in a long time, either. We're all just *Emacs users, keeping our community alive in the face of monster IDEs that vomit tabs, consume gigabytes of RAM, and attract robotic users who will probably never understand the critical importance of customizing and writing one's own tools.<br /><br />When the Coke/Pepsi discussion comes up these days, it's usually an XEmacs user asking, in all seriousness, whether they should transition to GNU Emacs, and if so, would someone volunteer to help migrate their files and emulate their favorite behaviors.<br /><br />Yes, someone will volunteer. 
I promise.<br /><br /><b>The dubious future of Emacs</b><br /><br />I've got good news and bad news.<br /><br />The good news is: Emacs is a revolutionary, almost indescribably QWAN-infused software system. Non-Emacs users and casual users simply can't appreciate how rich and rewarding it is, because they have nothing else to compare it to. There are other scriptable applications and systems out there — AppleScript, Firefox, things like that. They're fun and useful. But Emacs is <em>self-hosting</em>: writing things in it makes the environment itself more powerful. It's a feedback loop: a recursive, self-reinforcing, multiplicative effect that happens because you're enhancing the environment you're using to create enhancements.<br /><br />When you write Emacs extensions, sometimes you're automating drudgery (always a good thing), sometimes you're writing new utilities or apps, and sometimes you're customizing the behavior of existing utilities. This isn't too much different from any well-designed scriptable environment. But unlike in other environments, sometimes you're improving your editing tools and/or your programming tools for Emacs itself. This notion of self-hosting software is something I've been wanting to blog more about, someday when I understand it better.<br /><br />Eclipse and similar environments <em>want</em> to be self-hosting, but they're not, because Java is not self-hosting. In spite of Java's smattering of dynamic facilities, Java remains as fundamentally incapable of self-hosting as C++. Self-hosting only works if the code can "fold" on itself and become more powerful while making itself smaller and cleaner. I'm not really talking about macros here, even though that's probably the first thing you thought of. 
I'm thinking more along the lines of implementing JITs and supercompilers in the hosted runtime, rather than in the C++ or Java "hardware" substrate, which is where everyone puts them today.<br /><br />I suspect (without proof) that in self-hosted environments, you can eventually cross a threshold where your performance gains from features implemented in the hosted environment outpace the gains from features in the substrate, because of this self-reinforcing effect: if code can make _itself_ faster and smarter, then it will be faster and smarter at making itself faster and smarter. In C++ and Java, making this jump to the self-reinforcing level is essentially intractable because, ironically, they have so many features (or feature omissions) for the sake of performance that they get in their own way.<br /><br />To be sure, Emacs, the current crop of popular scripting languages, and other modestly self-hosting environments are all pretty far from achieving self-reinforcing <em>performance</em>. But Emacs has achieved it for <em>productivity</em> – at least, for the relatively small percentage of Emacs users who learn enough elisp to take advantage of it. There are just enough of us doing it to generate a steady supply of new elisp hackers, and the general-purpose artifacts we produce are usually enough to keep the current crop of casual users happy.<br /><br /><b>The bad news: the competition isn't the IDEs</b><br /><br />I've argued that Emacs is in a special self-reinforcing software category. For productivity gains, that category can <em>only</em> be occupied by editors, by definition, and Emacs is currently way ahead of any competition in most respects. So most Emacs users have felt safe in the assumption that IDEs aren't going to replace Emacs.<br /><br />Unfortunately, Emacs isn't immunized against obsolescence. It still needs to evolve, and evolve fast, if it's going to stay relevant. The same could be said of any piece of software, so this shouldn't be news. 
But it's particularly true for Emacs, because increasing numbers of programmers are being lured by the false productivity promises of IDEs.<br /><br />They really are false promises: writing an Eclipse or IntelliJ (or God help you, Visual Studio) plugin is a monumental effort, so almost nobody does it. This means there's no culture of building and customizing your own tools, which has long been the hallmark of great programmers. Moreover, the effort to create a plugin is high enough that people only do it for really significant applications, whereas in Emacs a "plugin" can be any size at all, from a single line of code up through enormous systems and frameworks.<br /><br />Emacs has the same learning-curve benefit that HTML had: you can start simple and gradually work your way up, with no sudden step-functions in complexity. The IDEs start you off with monumental API guides, tutorials, boilerplate generators, and full-fledged manuals, at which point your brain switches off and you go over to see what's new on reddit. ("PLEASE UPMOD THIS PIC ITS FUNNY!")<br /><br />And let's not even get into the Million Refactorings yet. It's a blog I've been working on for years, and may never finish, but at some point I'd like to try to show IDE users, probably through dozens or even hundreds of hands-on examples I've been collecting, that "refactoring" is an infinite spectrum of symbol manipulation, and they have, um, twelve of them. Maybe it's thirteen. Thirteen out of infinity – it's a start!<br /><br />Programmers are being lured to IDEs, but the current crop of IDEs lacks the necessary elements to achieve self-hosting. So the only damage to Emacs (and to programmers in general) is that the bar is gradually going down: programmers are no longer being taught to create their own tools.<br /><br />IDEs are draining users away, but it's not the classic fat-client IDEs that are ultimately going to kill Emacs. It's the browsers. 
They have all the power of a fat-client platform and all the flexibility of a dynamic system. I said earlier that Firefox wants to be Emacs. It should be obvious that Emacs also wants to be Firefox. Each has what the other lacks, and together they're pretty damn close to the ultimate software package.<br /><br />If Emacs can't find a way to evolve into (or merge with) Firefox, then Firefox or some other extensible browser is going to eclipse Emacs. It's just a matter of time. This wouldn't be a bad thing, per se, but there's a good chance it would be done poorly, take forever, and wind up being less satisfying than if Emacs were to sprout browser-like facilities.<br /><br /><b>Emacs as a CLR</b><br /><br />So Emacs needs to light a fire and hurry up and get a better rendering engine. Port to XUL, maybe? I don't know, but it's currently too limited in the application domains it can tackle. I realize this is a very hard problem to solve, but it needs to happen, or at some point a rendering engine will emerge with just enough editing power to drain the life from Emacs.<br /><br />Emacs also needs to take a page from the JVM/CLR/Parrot efforts and treat itself as a VM (that's what it is, for all intents and purposes) and start offering first-class support for other languages. It's not that there's anything wrong with Lisp; the problem is <em>X programmers</em>. They only want to use X, so you have to offer a wide range of options for X. Emacs could be written in any language at all, take your pick, and it wouldn't be good enough.<br /><br />RMS had this idea a long, long time ago (when he was making the rather controversial point that Tcl isn't a valid option for X), and it eventually led to Guile, which led more or less nowhere. Not surprising; it's a phenomenally difficult challenge. There are really only two VMs out there that have achieved even modest success with hosting multiple languages: the CLR and the JVM. 
CLR's winning that race, although it's happening in a dimension (Windows-land) that most of us don't inhabit. Parrot is... trying really hard. Actually, I should probably mention LLVM, which (like Parrot) was designed from the ground up for multi-language support, but took a lighter-weight approach. So let's call it four.<br /><br />In any case, it's a very small group of VMs, and they still haven't quite figured out how to do it: how to get the languages to interoperate, how to get languages other than the first to perform decently, and so on.<br /><br />This is clearly one of the hardest technical challenges facing our industry for the next 10 years, but it's also one of the most obviously necessary. And Emacs is going to have to play that game. I'm not talking about hacked-together process bridges like PyMacs or el4r, either — I mean first-class support and all that it entails.<br /><br />I've mentioned the rendering engine and the multi-language support; the last major hurdle is concurrency. I don't know the answer here, either, but it needs an answer. Threads may be too difficult to support with the current architecture, but there are other options, and someone needs to start thinking hard about them. Editing is becoming a complicated business — too complicated for hand-rolling state machines.<br /><br /><b>Compete or die</b><br /><br />So Emacs has some very serious changes ahead.<br /><br />Let's face it: we're not going to see real change unless ALL the Emacs developers out there – today's crop of JWZs – band together to make it happen. But today we're divided. Two groups of brilliant C hackers working on separate, forked code bases? That's bad. Two groups of maniacal elisp hackers working on incompatible packages, or at best wasting time trying to achieve compatibility? 
Also bad.<br /><br />Developers are starting to wake up and realize that the best "mainstream" extensible platform (which excludes Emacs, on account of the Lisp) is Firefox or any other non-dead browser (which excludes IE). Dynamic typing wins again, as it always will. Dynamic typing, property-based modeling and non-strict text protocols won the day for the web, and have resisted all incursions from heavyweight static replacements. And somehow the web keeps growing, against all the predictions and lamentations of the static camp, and it still works. And now the browsers are starting to sprout desktop-quality apps and productivity tools. It won't be long, I think, before the best Java development environment on the planet is written in JavaScript.<br /><br />Emacs has to compete or die. If Firefox ever "tips" and achieves even a tenth of the out-of-the-box editing power of Emacs, not just for a specific application but for all web pages, widgets, text fields and system resources, Emacs is going to be toast. I may be the last rat on the ship, but I'm sure not going down with it; even _I_ will abandon Emacs if Firefox becomes a minimally acceptable extensible programmer's editor. This is a higher bar than you probably think, but it could happen.<br /><br />We no longer need XEmacs to spur healthy competition. The competition is coming in hard from entirely new sources. What we need now is unity.<br /><br /><b>Then why not unify behind XEmacs?</b><br /><br />I threw this in just in case you blew through the article, which I'd find perfectly understandable. To summarize, I've argued that XEmacs has a much lower market share, poorer performance, more bugs, much lower stability, and at this point probably fewer features than GNU Emacs. When you add it all up, it's the weaker candidate by a large margin.<br /><br />Hence there's only one reasonable strategy: Hill, er, I mean XEmacs has to drop out of the race.<br /><br />I'm really sorry about this. 
I'm a close personal friend of XEmacs, but I just can't endorse it anymore. I used to be a laissez-faire kinda guy, as long as you were using <em>some</em> flavor of Emacs. But at this point, if you're using XEmacs you're actively damaging not only your long-term productivity, but mine as well. So I'd like to ask you to think long and hard about switching. Soon.<br /><br />If you're a local Emacs-Lisp guru, please offer your services to XEmacs users who would like to switch over. The more pain-free the migration is, the faster it will happen.<br /><br />If you're a graphic artist, consider making a nice, tasteful "Euthanize XEmacs!" logo. Not that message, precisely, but something along those lines. Make sure it's tasteful. Perhaps "XEmacs is dead – long live XEmacs"? Yes, I think that would do nicely.<br /><br />If you happen to know someone on the XEmacs development team, send them some chocolates, or movie tickets, or something. A thank-you, even. We should honor their service. But those guys are the most qualified on the planet to step in and start helping drive GNU Emacs forward, assuming the FSF team will have them. Emacs is in very bad shape indeed if they will not.<br /><br />If you're a local system administrator, consider <code>sudo rm -rf xemacs</code>. Sorry, I mean consider politely asking your emacs-users mailing list if they might be willing to set a timeline for deprecating XEmacs, thereby starting the most massive flamewar in your company's history. Can't hurt!<br /><br />If you're seeing red, and you skipped most of this article so you could comment on how amazingly lame this idea is, I recommend taking a little walk and getting some fresh air first.<br /><br />If you're RMS, thank you for making Emacs and all that other stuff, and for keeping it free. Please be nice to those who wish to help. 
You're scary to the point of unapproachability, just 'cuz you're you.<br /><br />XEmacs team, JWZ, and XEmacs package authors: thank you all for helping drive progress in the greatest piece of software of all time. I can only hope that someday I may have chops like that.<br /><br />Now how about we turn this into the most famous reverse-<a href="http://en.wikipedia.org/wiki/Category:Software_forks">fork</a> in history?Steve Yeggehttp://www.blogger.com/profile/14812997485690838920noreply@blogger.com74