Dr. Alan Kay
The traditional form of programming that came about partly because of the separation of computers into processors and memories is a bit like recipes that a very quick chef has to follow. You basically have an active principal, and everything else is kind of inert material that gets pushed around: the data. The recipes are the programs. The interesting thing is that if you back away from a computer, you can't see either the recipes or the programs. You can only see behavior.
The only thing you can do with a computer from the outside is send it a message of some kind, and it may or may not send you a message back … This seemed quite reasonable in the '50s. In fact, most programming today is still done like that. Even the operating systems on your laptop have 60 or 70 million lines of code. The problem is that this way of doing things doesn't scale well.

Is this why your team decided to come up with a better way?
In graduate school, I had to deal with some new ideas [as part of] one of the ARPA (Advanced Research Projects Agency) research projects. ARPA at that time was basically trying to invent interactive computing, driven by a wide variety of needs, along with widespread networking.
People were trying to figure out how to put graphic displays in front of people and let them make amplifiers for their various endeavors. At that time, they were trying to invent what they called an intergalactic network, a network that would be as pervasive as the power plugs in the wall. Today, that network is called the Internet. Back then it was called the ARPAnet and it hadn't been done yet.
Every graduate student was thinking about how you could make a network that would scale by 12, 13, 14, even 15 orders of magnitude. At some point, it just occurred to me that if you used a computer as a building block … you could model every hardware component, including your computer, you could model every software thing, you could get rid of data, you could get rid of procedures, and you'd have a kind of universal system-building element that would model everything from the smallest things to the largest.
My original thought was to have something like recursive biological cells. We have one hundred trillion cells in our body. That is a hell of a lot more cells than there are nodes on the Internet. Those cells spend almost all of their effort keeping themselves normal. … They're self-repairing, and you don't have to stop the organism in order to effect repairs. And then there are some interesting mathematical properties of this kind of thing that also occurred to me, and I called those things objects.
Well, the difficult thing about theoretical ideas that also have some power is that they also need to be practical. So it actually took a while, some years in fact, before a real practical version of that was done. The first really comprehensive, practical version that spanned everything was Smalltalk. That happened at Xerox PARC in the early '70s, and it also gave rise to the overlapping window interface that we use today, desktop publishing, screen painting, all that stuff.

Is Smalltalk still in use today?
It has been for many years. One of the largest implementations is IBM's; it's called VisualAge, and there is another version called VisualWorks. Some very big systems on Wall Street were done in it.

Your institute works to help children better understand computing. Why is that so important to you?
An interesting thing to look at is the influence of the printing press on our entire civilization. For the first 100 or 150 years, we were pretty much printing the old discourse, and then, in the 17th century, people started arguing and thinking in a different way, in part because of the printing press. The world we live in today came from that influence.
One of the reasons we were doing all this stuff in the '60s is that the computer allows you to argue about important ideas in new and more powerful ways than standard media. The thought was that this would lead to a new style of thinking and more powerful ways of thinking about a lot of different things. I realized that Seymour Papert's idea about teaching children powerful ideas with the help of a computer is one of the best ideas anyone has ever had [in terms of] what a computer is good for. That led to a whole bunch of things. [For instance,] the overlapping windows interface was originally done for children.

Was the idea for the laptop aimed at children as well?
The idea that I came up with in the '60s was a two-pound device, what we call a laptop today, that I called the Dynabook. That was prompted by thinking about what a child would carry around and what they would be able to do with it.

Are you actually credited with building the first laptop?
Well, we couldn't build one back then, and success has a thousand fathers. … There were about a hundred people who counted back then, and I was one of them.

Do you have any advice for up-and-coming developers?
I think the thing the enterprise developer should actually do is look to the past a little bit more. One of the guys who was incredibly inspiring and had one of the best conceptions of personal computing was a guy by the name of Douglas Engelbart. He was the guy who invented the mouse. He invented it not just to point at things on the screen, but to try and provide people in enterprises with a way of boosting the collective IQ of their group.
Since most adults do things by working together, one of the best things you can ever do with a computer is to try and figure out ways to augment groups of adults working together.
One of the most interesting things about today, in 2004, is that almost nothing being done has taken heed of Engelbart's ideas. Virtually everything out there today is far inferior to the stuff he showed us in the late '60s. This is what you might call the difficulty Americans, or perhaps businesspeople in general, have with being anything other than active. Most people are tremendously active, maybe as a way of avoiding getting sophisticated, whereas they could profitably spend a lot more time understanding what the best ideas are and designing things.
But, in fact, the urge of everyone is to just implement. You know, talent comes in a bell curve; the middle of the bell curve, within one standard deviation of the mean, is about 68%. When you have 2 million people implementing without drawing on the best ideas from among those 2 million, you're going to get maybe 1.4 million bad systems.