The vision has been clear all along, but vision is hard to critique effectively. The various implementations we have done, on the other hand, are complete earthly artifacts, and thus admit of criticism both by ourselves and others, and this has helped to move us forward, both on the earth and in our vision. – Dan Ingalls, 2005
Ingalls’ quote speaks to an important distinction in this study: between the Dynabook and Smalltalk itself, between the vision and what Ingalls has called the image.1 The Dynabook vision emerged powerfully and clearly in Kay’s writings in the early 1970s, and he was able to coalesce a team of colleagues around him—PARC’s Learning Research Group (LRG)—on the strength of that vision. But we cannot follow the trajectory of the vision itself. If we are to follow the actors, in Latour’s phrase, we have to look for tangible or visible manifestations. Fortunately, in the case of the Dynabook story, the tangible and visible is provided by Smalltalk, the programming language Kay designed in 1971 and which was soon after made real by Ingalls. Smalltalk is not merely an effect of or a spin-off of the Dynabook idea; it is in many ways the embodiment of a major portion of the Dynabook—enormously conveniently so for this story. But, of course, Smalltalk itself is not the Dynabook: it is the software without the hardware, the vehicle without the driver, the language without the literature. Nevertheless, Smalltalk and its well-documented evolution provide an enormously valuable vector for the telling of the Dynabook story.
From the very beginning, there seems to have been an essential tension within Smalltalk and within its community. The tension concerns Smalltalk as the articulation of an educational vision—that is, its utopian idealism—vs. Smalltalk as a powerful innovation in computer programming and software engineering—that is, its sheer technical sweetness.2 That being said, among the key characters in Smalltalk’s history—Alan Kay, Dan Ingalls, Adele Goldberg, Ted Kaehler, and a host of others—it is difficult to label anyone clearly on one side or the other of this seeming divide. While Alan Kay has remained overtly focused on the educational vision for 35 years now, there can be no denying his role as a computer scientist, both in Smalltalk’s early design and in any number of evolutionary moves since. Adele Goldberg, hired on at Xerox PARC in the early 1970s as an educational specialist, ironically became the chief steward of Smalltalk’s trajectory into industry a decade later. Even Dan Ingalls, the programmer who actually built all the major versions of Smalltalk over the years, has written perhaps more eloquently than anyone about Smalltalk’s purpose to “serve the creative spirit in everyone” (Ingalls 1981). But at several key moments in the project’s history, the appeal of the educational or the technical ideal has pulled it in one direction or another. With each movement, Smalltalk has been translated somewhat, into a slightly new thing.
To trace these movements is to watch the expansion of Smalltalk’s ‘network’ in a wide variety of directions, but also to watch the translation of elements of the vision into more durable but decidedly different things. Arguably, the sheer variety of these translations and alignments—and the absence of any one clearly dominant thrust—has led to Smalltalk’s marginality in any of its realms. Arguably too, this variety is what keeps it alive. I would like to note and trace here a few key translations, and to take the opportunity with each to point out the resulting conceptual “black boxes” that result and which go on to set the conditions for subsequent shifts. Each translation represents the ecological shifting of aspects of the project—adding new allies; allowing for new inputs and influences; conforming or reacting to constraints and threats—and each of these shifts results in a notable difference in what Smalltalk is. As Smalltalk changes, so subtly does the Dynabook vision. We will begin at Xerox PARC in the mid 1970s.
Origins: Smalltalk at PARC in the Early Years
By 1973, the Learning Research Group at Xerox PARC had an “interim Dynabook” to serve as the basis of their research efforts. The Alto minicomputer—arguably the first “personal computer”—had begun to be manufactured in small quantities and distributed within Xerox PARC. Kay remembered, “It had a ~500,000 pixel (606×808) bitmap display, its microcode instruction rate was about 6 MIPS, it had a grand total of 128k, and the entire machine (exclusive of the memory) was rendered in 160 MSI chips distributed on two cards. It was beautiful” (1996, p. 534).3 Dan Ingalls ported the Smalltalk-72 system to the Alto (they had been developing it previously on a minicomputer), thereby establishing a basic platform for the next six years’ work. Kay’s team originally had 15 Alto computers, and they immediately put children in front of them, though this was difficult, owing to tensions between Xerox corporate and the relatively chaotic atmosphere at PARC. Kay writes:
I gave a paper to the National Council of Teachers of English on the Dynabook and its potential as a learning and thinking amplifier—the paper was an extensive rotogravure of “20 things to do with a Dynabook.” By the time I got back from Minnesota, Stewart Brand’s Rolling Stone article about PARC (Brand 1972) and the surrounding hacker community had hit the stands. To our enormous surprise it caused a major furor at Xerox headquarters in Stamford, Connecticut. Though it was a wonderful article that really caught the spirit of the whole culture, Xerox went berserk, forced us to wear badges (over the years many were printed on t-shirts), and severely restricted the kinds of publications that could be made. This was particularly disastrous for LRG, since we were the “lunatic fringe” (so-called by the other computer scientists), were planning to go out to the schools, and needed to share our ideas (and programs) with our colleagues such as Seymour Papert and Don Norman. (Kay 1996a, p. 533)
To compensate, the LRG team smuggled Alto computers out of PARC (strictly against corporate regulations) and into a Palo Alto school, and also brought local kids in to work with the machines (p. 544).
Figure 4.1: Kids in front of Alto computer (from Goldberg 1988)
Adele Goldberg writes:
Most of the educational experimentation was done with specially conducted classes of students ages 12–13. These classes were held in cooperation with a local high school’s mentally gifted minors program. The students were driven to PARC during the school day. Saturday classes were held for the children of PARC employees. (Goldberg 1998, p. 62)
Smalltalk-72 running on the Alto machines proved good enough for the first round of research. Kay and LRG colleague Diana Merry first worked on implementing an overlapping-window mouse-driven screen interface, with text in proportional fonts. LRG team member Steve Purcell implemented the first animation system, and Ted Kaehler built a version of turtle graphics for Smalltalk. Larry Tesler created the first WYSIWYG page-layout programs. Music synthesis had already been implemented before the Alto, and so this was moved over and substantially developed on this first generation platform.
Figure 4.2: Original overlapping-window interfaces (from Kay & Goldberg 1976, p. 16).
All of this work was considerably enhanced when Ingalls, along with Dave Robson, Steve Weyer, and Diana Merry, re-implemented Smalltalk with various architectural improvements (a version unofficially referred to as Smalltalk-74), making the system enormously faster (Kay 1996a, pp. 542–543; Ted Kaehler, personal communication, Nov 2005).
With the Smalltalk-72 system, Adele Goldberg worked substantially on a scheme merging turtle graphics and the new object-oriented style, using the simple idea of an animated box on screen (named “Joe”). The box could be treated like a Logo turtle—that is, given procedural commands to move around the screen, grow and shrink, and so on; but it could also act as a ‘class’ from which specialized kinds of boxes could be derived.
Figure 4.3: Adele Goldberg’s Joe Box in action (Kay & Goldberg 1976, p. 47)
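The two ideas in the Joe box—turtle-like commands and the derivation of specialized kinds of boxes—can be sketched in a few lines. The following Python analogy is only that: Smalltalk-72 syntax was quite different, and all names here are invented for illustration.

```python
class Box:
    """A 'Joe box': an on-screen square that answers simple commands."""
    def __init__(self, x=0, y=0, size=50):
        self.x, self.y, self.size = x, y, size

    def move(self, dx, dy):
        # Turtle-style relative motion across the screen.
        self.x += dx
        self.y += dy

    def grow(self, amount):
        # Grow (or, with a negative amount, shrink) the square.
        self.size += amount

class NamedBox(Box):
    """A specialized kind of box, derived from the humble Box."""
    def __init__(self, name, **kwargs):
        super().__init__(**kwargs)
        self.name = name

joe = Box()
joe.move(10, 20)
joe.grow(25)

jill = NamedBox("jill")
jill.grow(-10)    # inherits Box's behaviour unchanged
```

The point of the exercise was exactly this doubleness: the same object is at once a thing to command, like a Logo turtle, and a class to specialize, from which children’s own variations could spring.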
Kay later reflected,
What was so wonderful about this idea were the myriad of children’s projects that could spring off the humble boxes. And some of the earliest were tools! This was when we got really excited. For example, Marion Goldeen’s (12 yrs old) painting system was a full-fledged tool. A few years later, so was Susan Hamet’s (12 yrs old) OOP illustration system (with a design that was like the MacDraw to come). Two more were Bruce Horn’s (15 yrs old) music score capture system and Steve Putz’s (15 yrs old) circuit design system. (Kay 1996, p. 544)
Figure 4.4: Marion’s painting system (from Kay & Goldberg 1976, p. 33)
As exciting as these early successes must have been (and frankly, as impressive as they still sound today), the limitations of this early work—appearing along two fundamentally different axes—would significantly shape further development and point it in divergent directions.
A serious problem Kay’s team encountered was the extent to which children and novice users hit a “wall” or critical threshold in the complexity of their designs and constructions.4 Kay reflects:
The successes were real, but they weren’t as general as we thought. They wouldn’t extend into the future as strongly as we hoped. The children were chosen from the Palo Alto schools (hardly an average background) and we tended to be much more excited about the successes than the difficulties. … We could definitely see that learning the mechanics of the system was not a major problem. The children could get most of it themselves by swarming over the Altos with Adele’s JOE book. The problem seemed more to be that of design. (Kay 1996a, p. 544)
Kay agonized over the difficulty of teaching a group of “nonprogrammer adults” from PARC to construct a simple rolodex-like information manager application in Smalltalk. Kay noted how they moved more quickly than the children, but this critical threshold still appeared earlier than he had anticipated:
They couldn’t even come close to programming it. I was very surprised because I “knew” that such a project was well below the mythical “two pages” for end-users we were working within. Later, I sat in the room pondering the board from my talk. Finally, I counted the number of nonobvious ideas in this little program. They came to 17. And some of them were like the concept of the arch in building design: very hard to discover, if you don’t already know them.
The connection to literacy was painfully clear. It isn’t enough to just learn to read and write. There is also a literature that renders ideas. Language is used to read and write about them, but at some point the organization of ideas starts to dominate mere language abilities. And it helps greatly to have some powerful ideas under one’s belt to better acquire more powerful ideas. (p. 545)
Despite the intractability of this problem, even three-and-a-half decades later, Kay puts the focus in the proper place: the issue is a cultural one, rather than a technical problem that can be fixed in the next version. This is an issue that still hasn’t been significantly addressed in the educational technology community, despite any number of attempts to create computing environments for children or ‘novices.’ Goldberg’s emphasis on design, as in the “Joe box” example, seemed to Kay to be the right approach. But it was clear that the specifics of just how to approach the issue of design eluded them, and that they had a very long way to go.
As innovative and successful as Smalltalk-72 had proved, its weaknesses soon showed through: weaknesses inherent in Kay’s original parsimonious design. In Smalltalk-72, the actual syntax of the language in any particular instance was defined by the code methods attached to the particular object receiving the message. “This design came out of our assumption that the system user should have total flexibility in making the system be, or appear to be, anything that the user might choose” (Goldberg & Ross 1981, p. 348). The trouble with this approach is that the flexibility of the system tends to preclude consistency. The team agreed that the flexibility of Smalltalk-72 was beyond what was desirable (Kay 1996a, p. 547). Adele Goldberg and Joan Ross explained:
Our experience in teaching Smalltalk-72 convinced us that overly flexible syntax was not only unnecessary, but a problem. In general, communication in classroom interaction breaks down when the students type expressions not easily readable by other students or teachers. By this we mean that if the participants in a classroom cannot read each other’s code, then they cannot easily talk about it. (Goldberg & Ross 1981, p. 348)
A new Smalltalk seemed to be the next step. But the team were not in total agreement about how exactly this should be accomplished.
Smalltalk’s Initial Transformation at Xerox PARC
Translation #1: A Personal Computer for Children of All Ages becomes Smalltalk-80
In early 1976, Alan Kay worried that his project was getting off track, and concerns with the design and implementation of the system were leading the LRG farther away from research with kids. He wanted to refocus, and so he took his team on a three-day retreat under the title “Let’s Burn Our Disk Packs”—in other words, let’s scrap what we have now, return to first principles, and begin again (Kay 1996a, p. 549). He explained his feeling with reference to Marshall McLuhan’s chestnut, “man shapes his tools, but thereafter his tools shape him,” and wrote, “Strong paradigms like Lisp and Smalltalk are so compelling that they eat their young: when you look at an application in either of these two systems, they resemble the systems themselves, not a new idea” (p. 549).
Not surprisingly, the people who had spent the past four years building Smalltalk were not keen on throwing it all away and starting from scratch. Dan Ingalls, especially, felt that a new Smalltalk was indeed the right direction, and by now he had some well-developed ideas about how to do it better. Ingalls thus began the design of a major new version, called Smalltalk-76. This proved to be a turning point, as Ingalls’ new thrust with Smalltalk would generate enormous momentum, cementing the technical foundations of a whole paradigm of computer programming.
Conceived as a response to the technical limitations the LRG had found in their work with Smalltalk-72, Smalltalk-76 established itself as the paradigm for object-oriented programming. The Smalltalk-76 language and environment was based on a cleaner and more consistent architecture than Smalltalk-72’s: here, everything in the system is an object; objects communicate by passing messages; and objects respond to messages sent to them via the code in their methods. Furthermore, every object is an instance of a class; “the class holds the detailed representation of its instances, the messages to which they can respond, and methods for computing the appropriate responses” (Ingalls 1978, p. 9). These classes are arranged in a single, uniform hierarchy of greater and greater specialization. Much of the actual practice of programming in such a system is the definition of new (lower) levels of this hierarchy: “subclassing,” that is, taking a class which provides some functionality and extending it by defining a new class which inherits the old functionality plus some specialization. Goldberg and Ross wrote:
The Smalltalk-76 system was created primarily as a basis for implementing and studying various user-interface concepts. It gave the users, mostly adult researchers, further ability in refining existing classes through the use of subclassing. This meant that the programmer could now modify a running model without creating a change to already existing examples of that model. Programming-by-refinement, then, became a key idea in our ability to motivate our users. (Goldberg & Ross 1981, p. 354)
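The dispatch rule at the heart of this architecture—an object responds to a message via a method held by its class, searched up the hierarchy—can be sketched in Python. This is an illustration of the idea only, not of Smalltalk-76’s actual implementation, and all class and method names are invented for the example.

```python
class Object_:
    """Every object is an instance of a class; the class holds the methods."""
    def greet(self):
        return "generic object"

class Point(Object_):
    """Subclassing: inherit the old functionality plus some specialization."""
    def __init__(self, x, y):
        self.x, self.y = x, y

    def greet(self):
        # Refine only this one response; everything else is inherited.
        return f"point at ({self.x}, {self.y})"

def send(receiver, message, *args):
    # A message send: find the method via the receiver's class
    # (searching up the hierarchy) and evaluate it with the receiver.
    method = getattr(type(receiver), message)
    return method(receiver, *args)

reply = send(Point(3, 4), "greet")
```

Programming-by-refinement falls out of this design: a subclass like `Point` overrides one response while existing classes, and existing instances of them, are left untouched.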
Ingalls and his colleagues made the system into more and more of a working environment for themselves; more and more of the development of the system was done within the Smalltalk-76 system itself. A key point which is often overlooked in glosses of Smalltalk’s capabilities is that the system was completely live or “reactive” (Ingalls 1978), and so such changes to the system could be made on-the-fly, in real time (imagine changing how Microsoft Word works in the middle of writing a paragraph). In fact, parts of current Smalltalk environments were developed in the Smalltalk-76 environment at Xerox PARC.5
There emerged a unique community of practice within Xerox PARC surrounding the Smalltalk-76 environment. Even Kay was “bowled over in spite of my wanting to start over. It was fast, lively, could handle big problems, and was great fun.” The momentum of this community of practice firmly established some of Smalltalk’s enduring and influential features: a multi-paned class “browser” which allowed one to quickly and easily traverse all the classes and methods in the system; an integrated, window-based debugging environment, in which errors in the system pointed the user directly to the methods requiring fixes; and the first consistent, system-wide windowing user interface, the prototype for the ones we all use today.
Figure 4.5: A Smalltalk “browser,” showing classes and methods arranged in arbitrary categories. This browser is a direct descendant of the ones developed by Larry Tesler in Smalltalk-76.
By the late 1970s, Smalltalk’s generic qualities had begun to be evident: the consistency of the system and the design methodologies it encouraged had an effect on the practices of its users. I deliberately mean to treat Smalltalk as an actor in this sense; here is an example of a tool which, once shaped, turns and shapes its users—or “eats its young,” as Kay put it. This is the system which excited computer scientists about object-oriented programming, and the user-interface genre established by Smalltalk-76 was adhered to in Apple and Microsoft’s later systems. A broad set of practices and approaches coalesced around Ingalls’ new design:
When programmers started writing class definitions in these browsers, a new era of design began. The average size of a method tended to correspond to the screen space available for typing the method text (which consisted of around seven message expressions in addition to the method header information). Software evolved (actually it felt like software was molded like clay). The programmer could write a partial class description, create an instance to try out its partial capabilities, add more messages or modify existing methods, and try these changes out on that same instance. The programmer could change the behavior of software describing an instance while that instance continued to exist. (Goldberg 1998, p. 64)
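The “molded like clay” quality Goldberg describes—changing a class while its instances continue to exist—can be mimicked in any language with open classes. A minimal Python sketch follows (an analogy only, with invented names; in Smalltalk this was not a trick but the pervasive, browser-mediated way of working):

```python
class Account:
    """A partial class description: it can hold a balance, nothing more."""
    def __init__(self, balance):
        self.balance = balance

acct = Account(100)    # an instance already exists and is in use

# "Modify the running model": add a method to the class after the fact.
def deposit(self, amount):
    self.balance += amount

Account.deposit = deposit    # existing instances respond to it immediately

acct.deposit(50)
```

The programmer could thus write a partial class, try it out on a live instance, and extend the class without ever destroying and recreating that instance.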
The educational research that followed on the development of Smalltalk-76, however, was not aimed at children, but at adult end-user programming. In 1978, Adele Goldberg led the LRG team through a watershed experience: they brought in Xerox upper management and taught them to construct a business simulation using Smalltalk. A key departure here was that these end-users were not exposed to the entire Smalltalk-76 environment, but to a special simulation framework designed for the exercise (Kay 1996a, p. 556ff; Goldberg 1998, pp. 65–66). Much of the research in the late 1970s seems to have taken the form of such domain-specific environments, focusing on design rather than on teaching the programming environment itself (Goldberg 1979; Goldberg & Ross 1981).
Parallel to the Smalltalk-76 development was Kay’s work on a machine called the NoteTaker. This was the first “portable” computer in the sense in which we understand it today; it was a lot bigger than a laptop, but Kay writes that he did use it on an airplane (1996a, p. 559). This isn’t just a footnote; the important point about the NoteTaker, from the standpoint of Smalltalk’s trajectory, is that for the first time since the Alto’s introduction, Smalltalk was made to run on a non-Xerox processor. Ingalls and colleague Bruce Horn ported the Smalltalk-76 system over to the NoteTaker (making Smalltalk-78), which was built with the new, inexpensive microprocessor chips (such as would appear in the first microcomputers). So, despite the NoteTaker project being officially cancelled by Xerox management in 1978, Smalltalk had taken its first steps toward portability—that is, independence from Xerox’ own hardware. Goldberg reports that further work toward making Smalltalk run on other vendors’ hardware was key to its continued evolution (1998, p. 69ff), convincing the team that
…Smalltalk would run well on standard microprocessors. We no longer needed to rely on microcoding Xerox’ proprietary machines, so we decided it was time to expand the audience for Smalltalk. We decided to create a Smalltalk that the rest of the world could use. [...] In 1979 we asked Xerox for the right to publish Smalltalk, the language and its implementation and the applications we had built to test the Smalltalk model of computing. Xerox officially gave this permission, remarking that no one inside Xerox wanted Smalltalk. (Goldberg 1998, p. 71)
Outside Xerox, there was interest, and the popular history of computing records the occasion of Apple Computer’s Steve Jobs and his team visiting Xerox in 1979, and coming away with substantial inspiration for their Macintosh project. Interestingly, what Jobs and his team really took away from their 1979 visit was the look of what Kay’s team had designed and not so much how it worked; Jobs was so bowled over by the windows-and-menus interface that he ignored the dynamic object-oriented development environment and the local-area network connecting the Xerox workstations.
The symbolic importance of this event relates to these ideas ‘escaping’ from Xerox. The distillation of the Smalltalk-76 environment over the following three years into Smalltalk-80 made this motif central. In preparing Smalltalk-80 for release, what was required was to abstract the language from any hardware assumptions in order to allow implementations on any number of target platforms (Goldberg 1998, p. 73).
Significantly, Kay was not a part of this development. In 1979, Kay took a sabbatical from Xerox, and did not return. Adele Goldberg led the group toward the definition of Smalltalk-80 and the publication of a series of books (Goldberg & Robson 1983; Krasner 1983; Goldberg 1984) and a special issue of BYTE magazine (Aug 1981) with a cover illustration showing a colourful hot-air balloon ascending from a tiny island with an ivory tower. But Smalltalk was escaping to where? Certainly not to schools and schoolchildren; rather, Smalltalk-80 was headed for professional systems programming—electronics, banking, shipping—and academic computer science research. The flexibility and elegance of Smalltalk’s development environment won it a small but dedicated following of systems programmers; this would be what Smalltalk was known for in programming circles. The resulting “black box” (in Latour’s sense) was Smalltalk as an interesting dynamic programming environment for research and systems modelling, but far from the mainstream of either professional software development or personal computing.
Translation #2: From educational research platform to software development tool
Xerox licensed Smalltalk in 1980 to four hardware companies which had their own software divisions and could therefore participate in the documentation of its implementation in different contexts: Hewlett Packard (HP), DEC, Apple, and Tektronix. Of these, Tektronix (an electronic instrument manufacturer rather than a computer company per se) did the most with Smalltalk, offering it with a short-lived line of research workstations and also embedding it in the hardware of its popular line of oscilloscopes (Thomas n.d.). Significantly, Smalltalk got a bigger boost in the late 1980s with the formation of a spinoff from Xerox called ParcPlace Systems, in which Goldberg and colleagues commercialized Smalltalk, selling licenses to more companies and maintaining a portable base system which would hedge against the language’s fate being tied to any one hardware platform (Goldberg 1998, p. 80ff). Ultimately, two Smalltalk licensees came to dominate: IBM and Digitalk. The latter, a spinoff from the Italian business products company Olivetti, ultimately merged with Goldberg’s ParcPlace Systems; the merged company was acquired in 1999 by Cincom, a major software consulting house.
If this historical detail sounds somewhat arcane and far removed from the trajectory of the Dynabook vision, it should. This later history of Smalltalk, from the early 1980s on, has a decidedly different character from that which preceded it. The focus had shifted entirely away from education (with a handful of minor exceptions6) and toward leading-edge computer science research and development. Much of the activity in the Smalltalk community was academic, centered around teams at the Xerox PARC of the 1980s as well as research at the universities of Massachusetts, Washington, Carleton, Tokyo, Dortmund, and others worldwide (ibid.). Ironically, far from its origins in personal computing, Smalltalk in use is found in the realm of big systems development: banking and finance, importing/exporting and shipping, health care, insurance, and so on.7 The website for Cincom Smalltalk boasts of how “the French fries you get at McDonalds are sorted by Cincom Smalltalk.”
Smalltalk’s greatest impact on the computing world, however, was its role in the establishment of object-oriented programming and design, which by now has become one of the major genres of contemporary information technology. Smalltalk may have been the technology at the core of this movement in the 1980s, but it was quickly overtaken by much larger populations of developers working in the C++ language (which, strictly speaking, was derived from the earlier Simula language rather than from Smalltalk, and which added object and class constructs to the popular C programming language). C++ was a much smaller conceptual and practical leap for mainstream programmers used to working in static, procedural languages like C or Pascal, though a consequence of that ‘shorter leap’ is that C++ has been called the worst of both worlds. Despite this, C++ grew in the 1990s to be the dominant object-oriented language, and its popularity was such that object-oriented programming became the new mainstream. Consider, as a measure of this, the fact that the U.S. “Advanced Placement” curriculum in computer science shifted from Pascal to C++ in 1999.
In 1996, Sun Microsystems released the Java language and development platform, an attempt to re-invent software development with the Internet in mind. Java is an object-oriented language much closer in spirit to Smalltalk, at least in that it was designed from the ground up with objects in mind (unlike C++, which was an adaptation of an older language and conceptual model), and with a virtual-machine architecture like Smalltalk’s to ensure portability across a wide variety of platforms. According to programmer mythology,8 Sun Microsystems wanted “an industrial strength Smalltalk, written in C++.” They got neither, but what Java did represent after its late-’90s release was an enormous shift of programming practice—and, perhaps more importantly, discourse—away from C++ (the U.S. Advanced Placement curriculum abandoned C++ for Java in 2003). Sun Microsystems spent the better part of the next decade fighting with Microsoft over this shift and who would be in control of it. Meanwhile, the Smalltalk community—consultants and developers at IBM, Digitalk, and a few others—continued on in the shadows of these enormous efforts. It is worth noting that while millions of people work in C++ and Java and Microsoft’s related .NET on a daily basis, the almost ubiquitous characterization of these environments is that they are overly complex, badly designed and implemented, and a general curse on their users. The way Smalltalk developers talk of their chosen environment couldn’t be more different.
The second major translation of Smalltalk, then, is from a research project—at its origin an educational research project—to its marginal place within a much larger current of industrial practice in object-oriented programming. Smalltalk’s status within this larger current sometimes reads like an origin myth (“In the beginning, there was Smalltalk…”). The related black-boxing of Smalltalk in the context of this historical shift relegates it to an intellectually interesting but ultimately ‘academic’ system, too far from evolving mainstream concerns to make much practical difference. Far from being the revolutionary step Kay had hoped, Smalltalk was merely subsumed within the emerging object-oriented paradigm.
Translation #3: From “designers” to “end-users”
It is against these large-scale, corporate systems trends that the Dynabook’s trajectory through the 1980s and 1990s must be evaluated. After 1980, Smalltalk almost completely shed its educational connections; very little of Smalltalk-80 was ever seen by children. Ironically, it was Adele Goldberg, who came to Xerox PARC as an educational specialist and who led the research with children there for years, who now led Smalltalk’s move into the wider world of professional programming.9 It is important to reflect on just how far Smalltalk had travelled from the Dynabook vision, and it would have been a reasonable observation in the mid 1980s that the two ideas had finally parted. Benedict Dugan’s commentary on this shift invokes Frankfurt-school theories of “technical rationalization”:
Clearly, at some point, the original, idealistic goals of Kay and company became commercialized. By commercialized, I mean that the design focus shifted away from social and political concerns, to an interest in efficiency. By exploiting the ability of class hierarchies to organize knowledge and share code, the designers created a language which was promoted for its ability to facilitate extremely efficient software engineering. Lost was the powerful notion of a programming system which would amplify the human reach and make it possible for novices to express their creative spirit through the medium of the computer. (Dugan 1994)
Despite the finality of Dugan’s statement, Smalltalk was far from finished; the contributions to computer science embodied in Smalltalk are still being realized and reconsidered today. I will not go into detail here on the long-term impact of Smalltalk on software engineering. Instead, I want to focus specifically on the rise of user-interface development.
Conventionally, the graphical user-interface as we know it today descends more or less from the Smalltalk environments at Xerox in the 1970s, via Steve Jobs’ visit to Xerox PARC and subsequent design of Apple’s Macintosh computers; what Xerox couldn’t bring to market, Apple could, and in a big way. The actual story is a little more complicated than this, not surprisingly.
In the first place, Xerox did attempt to commercialize some of the personal computing research that came out of PARC. A machine called the Xerox Star was developed in the late 1970s and sold in the early 1980s.10 The Star was an attempt to market what Kay calls the “PARC genre” of computing: a pointer-driven graphical user interface, rich document-production tools (“desktop publishing”), peer-to-peer networking, and shared resources like a laser printer. The Star’s commercial failure probably had more to do with its selling price—close to $20,000 each, and they were sold in clusters of three along with a laser printer—than anything else, in an era when the “personal computer” was being defined by Apple’s and IBM’s machines costing around $3,000. Nevertheless, the Star is the machine which first brought the modern graphical desktop environment to market; while overlapping windows and menu-driven interaction were pioneered in Smalltalk, the work of turning these ideas into a packaged product sold to an audience happened with the development of the Star. Interestingly, the use of icons—not a feature of the Smalltalk interface—was pioneered in the Star interface as a means of giving a direct-manipulation interface to mundane, hardware-defined things like disks and printers and files.
I do not mean to give the impression that the Xerox Star was completely distinct from the Smalltalk project—there was some significant overlap in the personnel of the two projects—but rather to point out yet another considerable translation which occurred in the packaging and marketing of the Star. In Kay’s Dynabook concept, end-users were seen as designers and developers; system tools were accessible from top to bottom, and the “late binding” philosophy led to an expectation that the details of how a user actually worked with a system would be defined, on an ongoing basis, by that user.11 This clearly presents difficulties from a business perspective: how on earth does one market such a concept to a potential audience? It is far too vague. The Xerox Star, then, was a distillation of one possible scenario of how typical office-based end-users would work. The Star operating system was not Smalltalk-based, so its configuration could not easily be changed; instead, the Star’s developers worked according to a now-commonplace model: usability research. They were able to draw upon several years of internal Xerox use of the Alto computers, with and without Smalltalk—there were over 2000 of them in use at Xerox in the 1970s—and they developed a detailed model of tasks and use cases: what would end-users want to do with the Star, how would they go about it, and how should the user interface be structured to enable this?
The shift here is from a notion of participatory designers (in Kay’s conception) to end-users as we understand the term now. For Thierry Bardini and August Horvath, in their article, “The Social Construction of the Personal Computer User” (1995), this is the point where the end-user role is formally established. Employing the language of actor-network theory, they write,
The first wave of researchers from SRI to PARC helped in opening the concept of design by the reflexive user, and the second wave got rid of the reflexive user to create a methodology of interface design based on a user model and task analysis. In this last translation, the very utility of the reflexive user … was questioned.
The result was a new set of principles for the design of the user interface and its new look and feel: icons and menus. The first step of this new methodology is also the last that we consider for this part of the history. Here begins the negotiation with real users over the script of personal computing. (Bardini & Horvath 1995 [italics added])
There are two substantial black boxes emerging here: first is Bardini & Horvath’s notion of “real users;” second, and at least as influential, is the trope of “user friendliness,” which is the target of user-centered design and possibly the key selling point of the microcomputer revolution, especially since the Macintosh.
But this is all too pat, both as history and as pedagogy. Bardini and Horvath seem content to close the box and write the history as done at this point—what follows is the unfortunate popular history of personal computing with its monolithic end-users and engineers. I am not prepared to close the box here, and neither is the community of people surrounding Kay and the Dynabook idea, as we shall see. In my reading of this history, it is imperative that we question whether the “realization” (in Bardini and Horvath’s language) of the User and the attendant reification of the qualities of user-friendliness (ease of use, ease of learning, not demanding too much of the user) is something which we are prepared to accept. It seems to me that the “reflexive users” of the early PARC research are in fact more real than the hypothetical one(s) inscribed in User-Centered Design—which performs a substitution not unlike what focus groups do for markets or audiences. The earlier reflexive users at least had agency in their scenarios, as they actively shaped their computing media. The later Users, though in vastly greater numbers, must be satisfied with being folded into a pre-shaped role. It is, of course, indisputable that this latter version has become the dominant one. But one of the consequences of this closure is the rise of a genre of computing literature (especially within education) which diagnoses the problems stemming from the cultural disconnect between “engineers” and “end users” (e.g., see Shields 1995; Rose 2003). This diagnostic tendency is rarely constructive (see Papert’s 1987 defense of computer cultures); rather, it more effectively serves to further reify these oppositional roles.
It is significant, I think, that Kay’s original (and ongoing) conception of personal computing (especially in education) is an alternative to the now-commonplace notion of end-users. Bardini and Horvath suggest that Kay’s “reflexive users” are an artifact of history—back when computers were only for ‘computer people’—but I am not yet/quite convinced. Nor are the proponents of a movement in technology design and policy which emerged in Scandinavia in the late 1970s and early 1980s called participatory design (Ehn 1988), which saw a specifically political dimension in the division of labour—and power—between workers (increasingly seen as end-users) and the designers and engineers representing the interests of corporate power. The participatory design movement sought to address this head-on, with direct involvement from labour unions.12 It is, I think, instructive that a movement which significantly addresses issues of power in computing environments should seek to re-inscribe the user.
The Microcomputer Revolution of the Late 1970s
The history of the advent of the “PC”—the personal microcomputer13 as we have come to know it—has been copiously documented and is not a topic I will devote much time to here; of the several standard histories, the PBS documentary series Triumph of the Nerds (Cringely 1996) is probably sufficient as a touchstone. The storyline has become mostly conventional: unkempt hackers in their (parents’) northern California garages discovered what IBM—the market leader in computing—had missed, and, as a result, tiny startup companies like Apple and Microsoft had the opportunity to make hay for themselves, eventually eclipsing IBM. Significantly, these companies—aided in no small part by IBM itself—succeeded in making the personal computer a necessary part of modern life: we soon became convinced that we needed them in every office, every home, every classroom.
Translation #4: From a software research tradition to a “gadget” focus
The interesting thing about the conventional story of the microcomputer vanguard is that what had been accomplished at Xerox PARC in the 1970s is almost entirely absent; the new machines’ creators (that is, proto-billionaires like Steve Jobs and Bill Gates) operated in a world nearly perfectly isolated from the kind of thinking Alan Kay was engaging in. Apple Computer’s promethean role is mostly as a hardware manufacturer: they created devices—boxes—that a computer hobbyist could afford. Microsoft’s part was to market a rudimentary operating system for IBM’s entry into the PC market. Both of these contributions—significant as they were in hindsight—were astonishingly unsophisticated by PARC’s standards. Kay reportedly “hated” the new microcomputers that were coming out: “there was no hint that anyone who had ever designed software was involved” (1996a, p. 554).
But this is not simply a matter of economics: the difference is not explained by the size of the budgets that separated, for instance, Xerox from Apple in 1978 or 1979. It is rather a cultural difference; Kay’s work—and that of his colleagues at PARC—drew upon a lengthy academic tradition of computing: these were all people with PhDs (and not necessarily in computer science, as Kay points out, but in “established” disciplines like mathematics, physics, engineering, and so forth). Apple founders Jobs and Wozniak were hobbyists with soldering irons, more in the tradition of hot rodding than systems engineering. Bill Gates was a Harvard dropout, a self-taught programmer who saw the business potential in the new microcomputers.
Not that these self-styled pioneers were positioned to draw on PARC’s research; Xerox publications were few and far between, and while the Dynabook and Smalltalk work was not secretive, what was published broadly was not the sort of thing that a self-taught, garage-based hacker could work with, despite Kay’s best intentions. Even today, with the accumulated layers of three decades of computing history at our fingertips, much of the PARC research comes across as slightly obscure, and much of it is still very marginal to mainstream computing traditions. The microcomputer revolution was primarily about hardware, and there is no doubt that much of its early energy was based in a kind of gadget fetishism. As this early enthusiasm matured into a market, the resulting conceptual black box was the PC as a thing on your desk, a commodity. Who could conceive of how software might become a marketable item? It must have seemed more of a necessary evil than something important in and of itself, at least until the hardware market was sufficiently established for tools like VisiCalc—the first “killer app”—to be appreciated.
Translation #5: From a research focus to a market focus
The pioneers of the microcomputer succeeded most importantly as marketers, turning the hobbyist’s toy into something that ‘everybody’ needed. In this respect their place in history is secured. The scaling-up of the world of computing from what it looked like in 1970—according to PARC lore, of the 100 best computer scientists in the world, 80 of them were working at PARC—to its size even a decade later, is difficult to encapsulate, and I won’t try. Suffice it to say that it complicated Kay’s vision enormously. But wouldn’t, one might argue, the appearance of a personal computer on every desk be right in line with what Kay was driving at? Here, seemingly, was the prophecy fulfilled; what’s more, right from the beginning of the microcomputer age, advocates from both sides of the table—at schools and at technology companies—were trying to get them in front of kids.
Necessary, perhaps, but not sufficient. To look at it a little more closely, the microcomputer revolution of the late ’70s and early ’80s represents more of a cusp than a progression. As for the details—frankly, the early microcomputers were pretty useless: their hobbyist/tinkerer heritage made them more like gadgets than the personal media tools Kay had envisaged. Market pressures kept them woefully underpowered,14 and the lack of continuity with the academic tradition meant the software for the early microcomputers was uninspired,15 to say the least. The process of turning the microcomputer into an essential part of modern life was a much bumpier and more drawn-out process than the popular mythology suggests. The question, “what are these things good for?” was not convincingly answered for a good many years. Yet the hype and promise dealt by the early advocates was enough to drive things forward. Eventually, enough black boxes were closed, the answers were repeated often enough to begin to seem right (“yes, I do need Microsoft Word”), and the new application genres (spreadsheets—desktop publishing—video games—multimedia—etc.) were layered thickly enough that it all began to seem quite ‘natural.’ By the time the Internet broke through to public consciousness in the early 1990s, the personal computer was all but completely established as an indispensable part of daily life, and the rhetoric of determinism solidly won out: you really can’t function without one of these things; you really will be left behind without one; your children will be disadvantaged unless you get on board.
The resulting black box from this translation was the identification of the computer and computer industry as the “engine of the economy,” with the various elements of computing firmly established as market commodities. But what had happened to the Dynabook?
The Dynabook after Xerox PARC
Alan Kay went on sabbatical from PARC in 1979 and never came back. The period from 1979 to 1984 would witness a mass exodus from Xerox PARC (Hiltzik 1999 describes at length the complex dynamics leading to this). While Kay’s former colleagues, under Adele Goldberg’s leadership, worked to prepare Smalltalk for an audience beyond PARC, Kay took the opportunity to become chief scientist at Atari, which in the early 1980s was the rising star in the nascent video game industry.
Kay spent four years at Atari, setting up research projects with long-range (7–10 year) mandates, but has described his role there as a “trojan horse.” Inside each video game machine is a computer, and therein lies the potential to go beyond the video game. This period at Atari is represented in the literature more as a collection of war stories and anecdotes (see Rheingold 1985; Stone 1995) than of significant contributions from Kay himself.
A few important characters emerged out of Kay’s team at Atari: notably Brenda Laurel, who went on to be a leading user-interface design theorist (Laurel & Mountford 1990; Laurel 1993) and later head of Purple Moon, a software company that targeted adolescent girls; and Ann Marion, whose “Aquarium” simulation project at Atari became the prototype for the research project which would define Kay’s next phase. Ultimately, though, nothing of educational significance came out of Kay’s term at Atari, and in 1984, corporate troubles ended his time there.
In late 1985 Kay took a research fellowship at Apple Computer that—along with the patronage of new CEO John Sculley—seems to have given him a great deal of personal freedom to pursue his educational ideas. Kay stayed at Apple for a decade, firmly establishing a link between himself and the popular company. Apple had become known for its emphasis on educational markets, which included large-scale donations of equipment to schools and research programs like “Apple Classrooms of Tomorrow” (Apple Computer 1995). Apple’s presence in the education sector was key to their branding in the 1980s (as it remains today).
When Kay arrived, Apple had just released the first-generation Macintosh computer, the culmination of the work that had been inspired by the famous visit to Xerox PARC in 1979. The Mac was positioned as the alternative to the paradigm of personal computing defined by IBM’s PC; the Mac was branded as “the computer for the rest of us.” It was certainly the closest thing to the PARC genre of computing that the general public had seen. That the Mac was indeed different needs no re-telling here; the cultural and marketing battle between Macs and PCs (originally inscribed as Apple vs. IBM, later Apple vs. Microsoft) was the dominant metanarrative of 1980s computing. But despite the Mac’s mouse-and-windows direct manipulation interface, it remained a long way from the kind of personal computing Kay’s team had in mind (and had been literally working with) in the mid 1970s. The “look and feel” was similar, but there was no facility for the user shaping her own tools; nor was the Mac intended to be part of a network of users, as in the PARC vision. Nevertheless, Kay’s oft-quoted pronouncement was that the Mac was the first personal computer “good enough to critique.” And, as evidenced by the number of Kay’s colleagues who came to work at Apple in the 1980s, it must have seemed that the company was headed in the right direction.16
Kay’s first year at Apple seems to have been spent writing, furthering his thinking about education and computing and projects. An article Kay had published in Scientific American (Kay 1984) gives a sense of where his thinking was. One of the key innovations of the microcomputer revolution—and the first really important answer to the “what are they good for” question—was a software application called VisiCalc, the first dynamic spreadsheet program, introduced in the late 1970s by Dan Bricklin and Bob Frankston. The spreadsheet is an interesting example of a computing application that was born on microcomputers; it is significantly absent from the lengthy collection of innovations from Xerox PARC, and PARC designers were very impressed when they saw it (Hiltzik 1999, p. 357). The spreadsheet concept clearly impressed Kay, too, and he framed it as a key piece of end-user empowerment:
The dynamic spreadsheet is a good example of such a tissuelike superobject. It is a simulation kit, and it provides a remarkable degree of direct leverage. Spreadsheets at their best combine the genres established in the 1970s (objects, windows, what-you-see-is-what-you-get editing and goal-seeking retrieval) into a “better old thing” that is likely to be one of the “almost new things” for the mainstream designs of the next few years. (Kay 1984, p. 6)
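The “simulation kit” quality Kay describes—cells as formulas that recompute automatically whenever their inputs change—can be suggested in a few lines of modern Python. This is an illustrative reconstruction of the general idea only; the names and design here are this sketch’s own, not VisiCalc’s actual mechanism:

```python
# A minimal dynamic-spreadsheet sketch: each named cell holds either a plain
# value or a formula (a function of the sheet), and every read reflects the
# current state of the cell's inputs. Hypothetical names, not VisiCalc's.

class Sheet:
    def __init__(self):
        self.cells = {}  # cell name -> value, or callable(sheet) -> value

    def set(self, name, value):
        self.cells[name] = value

    def get(self, name):
        v = self.cells[name]
        return v(self) if callable(v) else v

sheet = Sheet()
sheet.set("price", 120)
sheet.set("qty", 3)
sheet.set("total", lambda s: s.get("price") * s.get("qty"))

print(sheet.get("total"))  # 360
sheet.set("qty", 5)        # changing an input...
print(sheet.get("total"))  # ...changes every dependent cell: 600
```

The point of the sketch is the one Kay makes above: the user builds a small, live model by declaring relationships, not by writing a program in the conventional sense.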
Kay and his colleagues had come to recognize that the microcomputer was a serious force in the development of personal computing and not just a hobbyist’s niche. The extent of Kay’s engagement with the spreadsheet idea shows, if nothing else, that a current of new ideas from outside sources was a welcome addition, after a decade of research within PARC.
The Vivarium Project
Alan Kay’s work at Apple was characterized by a single-minded return to the problem of how to bring scientific literacy to children via computing. His work on Smalltalk was over, and the corporate politics of Xerox were long gone. Kay took up his research fellowship at Apple by dedicating himself to working with kids again, something which had practically eluded him since the mid 1970s at Xerox PARC.
The project which most defined Kay’s tenure at Apple through the mid and late 1980s and into the early 1990s was the Vivarium: a holistic experiment in technology integration that is possibly unparalleled in its scope. Kay’s team moved into the Los Angeles Open School for Individualization—LA’s first “magnet school”—and stayed there for seven years. The scale of Apple’s investment of time and resources in the school, and the Open School’s contributions to Kay’s research, make the Vivarium a very special educational technology project. Though little has been written about the Vivarium—compared, say, with the Apple Classrooms of Tomorrow (ACOT) program, which had a much higher public profile—it has had a lasting impact on Kay’s work, educational computing research, and, of course, the LA Open School.
Ann Marion was the project’s manager, and the basic idea for the project had come from her Masters thesis, written while working with Kay’s team at Atari: to build a game from semi-autonomous cartoon characters. She wrote extensively about the project in a summative report for Apple entitled Playground Paper (Marion 1993). Marion decided on an ecological setting, with fish interacting with (e.g., eating) one another, and this became the core storyline behind the ambitious educational program at the Los Angeles Open School. Within an explicitly progressivist and flexible educational setting at the Open School, Vivarium put the design and development of a complex ecological simulation in primary-school children’s hands. The challenge for the kids was to create more realistic interactions among the fish while learning about ecological models along the way; the challenge for Kay’s team was to extend computing technology to the children so that they could effectively carry this out. Larry Yaeger, one of the team members, later wrote:
The literal definition of a “Vivarium” is an enclosure or reserve for keeping plants and animals alive in their natural habitat in order to observe and study them. The Apple Vivarium program is a long-range research program with the goal of improving the use of computers. By researching and building the many tools necessary to implement a functioning computer vivarium, an ecology-in-the-computer, we hope to shed light on many aspects of both the computer’s user interface and the underlying computational metaphor. We are exploring new possibilities in computer graphics, user interfaces, operating systems, programming languages, and artificial intelligence. By working closely with young children, and learning from their intuitive responses to our system’s interface and behavior, we hope to evolve a system whose simplicity and ease of use will enable more people to tailor their computer’s behavior to meet their own needs and desires. We would like untrained elementary school children and octogenarians to be able to make specific demands of their computer systems on a par with what today requires a well trained computer programmer to implement. (Yaeger 1989)
The project was as broad-based as it was ambitious; it boasted an advisory board composed of various luminaries from the world of cognitive science, Hitchhiker’s Guide to the Galaxy author Douglas Adams, and even Koko, the famous gorilla (at one point, the team was engaged in creating a computer interface for Koko). The computer-based work was enmeshed in a much larger, exploratory learning environment (the school featured extensive outdoor gardens that the children tended) at the Open School. Yaeger wrote:
The main research test site of the Vivarium program is a Los Angeles “magnet” school known as the Open School. Alan chose this primary school, grades 1 through 6 (ages 6 through 12), because of their educational philosophy, founded on the basic premise that children are natural learners and that growth is developmental. Based on Piaget’s stages of cognitive development and Bruner’s educational tenets, the Open School was seen not as an institution in need of saving, but as an already strong educational resource whose fundamental philosophies aligned with our own. With the support of the Open School’s staff, some 300 culturally and racially mixed children, and our principal liaison with the school, Dave Mintz, we have developed an evolving Vivarium program that is included in their Los Angeles Unified Public Schools curriculum. (Yaeger, 1989)
The LA Open School had been established in 1977, “by a group of parents and teachers who wanted an alternative to the ‘back-to-basics’ approach that dominated the district at that time. The group wanted to start a school based on the principles of Jerome Bruner and the practices of the British infant schools” (SRI International 1995). It was the LA Unified School District’s (LAUSD) first magnet school, mandated to pursue a highly progressive agenda, with few of the restrictions that district schools worked within. The school held 384 students (K–5) and 12 teachers, arranged in 2-year multigraded “clusters,” team-taught by two teachers with 62 kids in each (BJ Allen-Conn, personal communication, Nov 2004). Ann Marion characterized the school setting:
The L.A. school we chose to work with we believed had no need of “saving.” The Open School for Individualization, a public magnet school in Los Angeles, emphasizes theme-based projects around which children learn by bringing all the classroom subjects together in service of the theme. Construction projects evoke a whole person approach to learning which engages many different mentalities. In this regard we share the common influence of Jerome Bruner which is evident throughout these different activities. We find models of group work. Variety and effectiveness of working groups are to be seen of different sizes, abilities and ages, in which children collaborate and confront each other. (Marion 1993, ch 2, p. 1)
By 1985—before Apple’s involvement—the school had apparently already begun to integrate microcomputers, and the teachers had used Logo. At that time there was already a strong notion of how computers should be used: in the classroom, not in labs, and for creative work, as opposed to drill-and-practice work or games (Allen-Conn, personal communication). In that year, Alan Kay had contacted the Board of Education looking for a school to serve as a research bed; he had also inquired at the Museum of Science and Industry, and made a list of possible schools. After visiting several, Kay felt that the Open School was philosophically closest to what he had in mind, that the school was indeed the right kind of environment for his research. Originally, Kay and Marion planned to run the Vivarium in one classroom, but principal Bobby (Roberta) Blatt insisted that any resources be used for the entire school, arguing that it would disrupt the democratic and consensual nature of the school to have one class with an inordinate amount of technology. Kay’s research could focus on one classroom (it largely did), but the resources had to be managed across the whole school (Bobby Blatt, personal communication, Nov 2004).
Blatt, the teachers, and the parents at the Open School negotiated extensively with Kay to establish the terms of the relationship; while they were open and excited by the possibilities of using technology intensively, they were concerned that Apple’s involvement would change the “tenor” of the school (Blatt, personal communication); instead, they wanted the technology to be “invisible” and fully integrated into the constructivist curriculum they were creating. The focus had to be on the children and their creations. For instance, the school had a policy against video games on the school’s computers—unless the children themselves created the games. Blatt reports that this spawned a culture of creating and sharing games, mostly developed in Apple’s HyperCard authoring software. In January 1986, when the Vivarium project was launched, Apple Computer installed one computer per two children,17 began training the teachers and staff and provided a technical support staffperson at the school.
The Open School also negotiated an investment by Apple into maintaining the school’s arts, music, and physical education curriculum—since arts curriculum funding was under the axe in California. Kay was more than happy to comply with this, and so an investment on the order of $100,000 was made annually to the Open School for curriculum and activities that had nothing directly to do with computing, but served to keep academics, arts, and technology in balance—and which also provided time for the core teaching staff at the Open School to do group planning. Both Blatt and long-term teacher BJ Allen-Conn (personal communication, Nov 2004) reported that once everyone got to know Kay, their fears of being overwhelmed by the technological agenda quickly abated, owing to Kay’s “respectful attitude.” Ongoing collaboration between Kay’s group and the Open School teachers took place at monthly “brown bag” lunches and teas.
By the late 1980s, the twelve teachers at the Open School were working part-time with Kay’s team—10 hours per week plus 6–10 weeks per summer, as well as conferences and other professional-development events through the year (Marion 1993, ch. 2, p. 10)—and being paid consulting fees to help develop curriculum for themes at the Open School. Kim Rose claims that some of the teachers bowed out after a few years of this, simply because they wanted to have regular summer vacation for a change (Rose, personal communication, Oct 2004), but this general pattern continued right into the early 1990s.
What is obvious here, but which bears dwelling upon for a moment, is that the relationship between Kay’s team at Apple and the LA Open School was an unprecedented and unparalleled situation of support, funding, and devotion to pursuing the ends of technology integration. It is hard to imagine any other school situation even comparable to this. But rather than thinking of the Vivarium project at the Open School as any sort of model for technology integration, we should consider the Open School part of Kay’s research lab. At Xerox PARC, it had been a challenge to sustain access to children and school settings, or conversely to sustain children’s access to Xerox labs. At Apple, Kay’s agenda seems to have been to set up a rich and ongoing relationship with a school as a foundational element of the project, and then to move forward with the technological research—somewhat the reverse of the arrangement at Xerox PARC.
The Vivarium project itself had a broad and holistic vision; it modelled both an exploratory educational vision and a way of integrating technology with education. Apple management seems to have given Kay the room to pursue a pure research agenda, but this statement needs qualification: the Open School was in every way a ‘real’ and applied setting. Rather, the research of Kay’s colleagues there seems to have had little impact on Apple’s products or its ostensible presence in the marketplace, and the ‘purity’ of the research should be seen on this corporate level rather than on the decidedly messy pedagogical level. Ann Marion’s summative report on the project goes into some detail about the practical difficulty of attempting to keep the technology subservient to curriculum ends. But Vivarium was a low-profile ‘skunkworks’ research project, interested in furthering blue-sky research into a wide variety of computing themes—simulation environments, animation systems, user-interface techniques (both hardware and software)—in comparison with the much higher-profile “Apple Classrooms of Tomorrow” program, which sought to deploy existing Apple technologies to schools and to focus on “technology transfer” (Ann Marion, personal communication, Nov 2004).
The Vivarium project had run out of steam by 1993, when Apple fell on hard times economically and organizationally (Kay’s patron, John Sculley, was ousted as CEO in 1993). Bobby Blatt reports that Apple’s pullout from the LA Open School was conducted with lots of advance warning, and that the parents were motivated to keep the same level of computer integration at the school, having “tasted the wine” (Blatt, personal communication), a challenge which they have apparently succeeded at. In 1993, principal Blatt, looking at retirement and anticipating Apple’s withdrawal from the school, pursued Charter School status for the LA Open School, which would ensure its continued autonomy; she succeeded, and it became the Open Charter School in 1994, continuing along the same lines today. Apple Computer’s investment of people and research has not been duplicated. However, Kay’s own team has maintained some level of involvement with the Open School (and in particular, with teacher BJ Allen-Conn) ever since. In fact, the foundation of a research group that would provide Kay’s working context for the next decade was by this point established. Kim Rose, for instance, was hired on to the Vivarium project in 1986, and remains Kay’s closest working partner today.
The research conducted through the Vivarium years seems to have two facets: the first, with the Open School in mind, was the creation of a simulation environment (of an underwater ecology) in which primary-school kids could act as designers and systems modellers as they developed their understanding of ecosystem dynamics. The second, which potentially had more application to Apple’s own agenda, was an investigation of end-user programming, with a definition of “end-user” beyond the children at the LA Open School.
The simulation research drew largely on the work Kay’s team had done with Smalltalk while at Xerox; the essential challenge is to figure out what sort of basic scaffolding will allow children to work at the level of design and problem-solving rather than wrestling with the syntax and mechanics of the environment (Kay & Goldberg 1976; Goldberg 1979; Goldberg & Ross 1981; Marion 1993). The Vivarium project engaged several teams of developers (often drawn from the MIT Media Lab) to try out various approaches to this challenge. Mike Travers’ MS Thesis from MIT, entitled “Agar: An Animal Construction Kit” (1988) was the result of one early project. Agar provided a customizable, agent/rules-based environment for setting up autonomous virtual actors and scripting their prototypical reactions to one another. Another example is Jamie Fenton and Kent Beck’s first-generation “Playground: An Object Oriented Simulation System with Agent Rules for Children of All Ages” (1989); Playground—in the Fenton and Beck version and in subsequent versions developed by Scott Wallace—was a more ambitious agent/rules system which prototyped a scripting language and environment designed for children to describe interrelationships between actors. There were numerous other prototypes, including icon-based graphical programming environments and a variety of other ideas. The Playground system eventually emerged as the dominant platform for the simulation projects at the Open School. Ann Marion characterized it thus:
The playground was chosen as our metaphor for a computer programming environment … The playground is a place where rule-governed activities have a natural place, involving play, invention, and simulation. On the playground, children assume roles which limit their behavior to that of defined and shared characters. Rules and relationships are endlessly debated and changed. The nature and structure of playground play resembles some of the strategy children might exercise on the computer, to set up computer instructions in construction and play with simulations of multiple players. (Marion 1993, preface, p. 3.)
One way to think about Playground is as having a spreadsheet view, a HyperCard view, and a textual programming view, all simultaneously available, where the user can make changes in whatever view seems easiest to work with, and have all views updated appropriately. (ch 1. p. 1)
A point which is easy to overlook from the vantage point of the 21st century is the sheer challenge of making systems of this sophistication workable on the computers of the mid 1980s. The later versions of the Playground software were developed using a version of Smalltalk for the Macintosh. This might seem like a straightforward thing to do, given the historical sequence, but performance limitations of 1980s-era Macintosh computers meant that this must have been a constant headache for the team; what had been possible on expensive, custom-designed hardware at Xerox was not nearly as practical on relatively inexpensive Macs, even a decade later. At one point, a special Smalltalk accelerator circuit-board had to be installed in the Macs at the Open School to get an acceptable level of performance in Playground. In a sense, the Vivarium project can be seen as a massive logistical challenge for Kay: how to move from a context in which all the technical facets are (reasonably speaking) within his team’s control to one where one’s ideas are constantly running up against basic implementation obstacles. At the same time, of course, the engagement with the Open School and the children there was far deeper and longer-term than anything Kay had experienced at Xerox.
Figure 5.6: Playground environment, circa 1990 (from Marion 1993)
The other research avenue, into end-user programming as a general topic, is an example of Kay’s body of well thought-out ideas coming in contact with a wealth of related ideas from others and other contexts. Clearly, after a decade of work with Smalltalk, and having had the opportunity to define much of the problem space (of how personal computing would be done) from scratch, Kay and his colleagues from PARC had done a huge amount of thinking already. Within Apple Computer, though, were a number of people who had come at the topic of end-user programming from different perspectives. An Apple “Advanced Technology Research Note” from the early 1990s (Chesley et al. 1994) reveals a rich and fecund discourse going on within Apple, despite a relative dearth of results being released to the PC marketplace and computer-buying public. Apart from the already-mentioned spreadsheet model, the standout example was HyperCard. Kay and his team had the opportunity to engage with and learn from some substantial development efforts, and to watch how real end-users—both children and adults (teachers among them)—reacted to various systems.
Message-passing vs. Value-pulling
The Playground environment (developed twice: in the late 1980s by Jay Fenton and Kent Beck, and in the early 1990s by Scott Wallace) ironically represents a conceptual reversal of the computing model Kay had pioneered in the 1970s. Kay’s fundamental contribution to computer science (via Smalltalk) is the centrality of message-passing objects. The term is “object orientation,” but Kay has noted that the more important concept is that of message passing. In a 1998 mailing-list posting, Kay clarified:
The Japanese have a small word—ma—for “that which is in between”—perhaps the nearest English equivalent is “interstitial.” The key in making great and growable systems is much more to design how its modules communicate rather than what their internal properties and behaviors should be. (Kay 1998b)
Playground, however, completely eschewed message-passing. Inspired by the spreadsheet model, Playground was made up of objects which had various parameters. The values of these parameters could then be looked up, and chains of effect could be built out of such retrievals (just as in a complex spreadsheet). Instead of an object actively sending a message to another object, objects continually watched for changes in other objects, just as a dynamic formula cell in a spreadsheet watches for changes in the data cells it takes as input. When the data cells change, the dynamic formula updates its own value.
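The distinction can be sketched in a few lines of Python; the class and method names below are illustrative inventions, not drawn from Playground or Smalltalk themselves. The first class responds to messages actively sent to it; the second recomputes its value by polling the cells it watches, as a spreadsheet formula cell does:

```python
# Message-passing: the sender actively invokes behaviour in the receiver,
# and the receiver decides how to respond to each message.
class Fish:
    def __init__(self):
        self.energy = 10

    def receive(self, message):
        if message == "eat":
            self.energy += 1


# Value-pulling: a formula cell "notices" changes in the cells it depends
# on, the way a spreadsheet formula tracks its input cells.
class Cell:
    def __init__(self, value=0):
        self.value = value


class FormulaCell:
    def __init__(self, inputs, formula):
        self.inputs = inputs          # cells being watched
        self.formula = formula        # how this cell derives its value
        self.value = formula(inputs)

    def tick(self):
        # Polled each cycle: recompute from the watched inputs.
        self.value = self.formula(self.inputs)


# Usage: the total recomputes itself from the cells it observes;
# nothing is ever "sent" to it.
a, b = Cell(3), Cell(4)
total = FormulaCell([a, b], lambda cells: sum(c.value for c in cells))
a.value = 10
total.tick()    # total.value is now 14
```

Note that in the value-pulling model the chain of effect is driven entirely by the observer, whereas in message-passing it is driven by the sender; this reversal is the conceptual inversion described above.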
The architecture was perhaps apt. Playground was designed for a marine ecosystem simulation, and, arguably, plants and animals do more observation—“noticing”—of each other’s states than sending messages (Ted Kaehler, personal communication, Oct 2004). Playground provided a wealth of insights, but programming complex behaviour in it proved difficult. Later software projects returned to the message-passing model, combined with pieces of the dynamic-retrieval model.
Keeping the black boxes open
What must be underscored in this discussion about Alan Kay’s tenure at Apple Computer is that despite the low profile of the Vivarium project18 and the dearth of overt outcomes (software, publications, further research programs), Kay must be credited with sticking tightly to his agenda, resisting the translation of the project into other trajectories, be they corporate or technical, as had happened at Xerox. In short, Kay appears to have fought to keep the black boxes open at all costs (Ann Marion’s report supports this). Ultimately, though, by the mid 1990s, with the Vivarium project wound up and Apple in corporate trouble, Kay found himself with rather less social capital than he would have liked; without over-dramatizing too much, we might conclude that Kay’s efforts to keep his research project “pure” risked its ongoing support at Apple—and perhaps beyond. The ultimate outcomes of the Vivarium project had impacts on the persons involved, and we shall see how this affects Kay’s subsequent work, but it is hard to see the broader educational (or even technical) results—to say nothing of the influence—of Vivarium. This observation is, I believe, in line with the general sense of Latour and Callon’s network theory: it is in being translated that a project or system gains greater currency and interconnectedness. A project which remains tightly defined perhaps does so at the expense of its larger reach.
HyperCard and the Fate of End-User Programming
In assessing the fate of Kay’s projects and ideas while at Apple, it is instructive to consider the contemporaneous example of HyperCard, a piece of personal media software similar in spirit to parts of the Dynabook vision. HyperCard was relatively successful in comparison, but had a decidedly different history and aesthetic. Like the dynamic spreadsheet, HyperCard forced Kay and his team to take notice, and to look very closely at its success. HyperCard’s ultimate fate, however, points to larger cultural-historical trends which significantly affected Kay’s projects.
Translation #6: From media environment to “Multimedia Applications”
HyperCard was a pet project of Bill Atkinson, who was perhaps the key software architect of the original Macintosh and its graphic interface (he had been part of the team from Apple that went to Xerox PARC to see Smalltalk in 1979). Atkinson’s place in Macintosh history was further cemented with his release of MacPaint, the original Macintosh graphics application and the ancestor of software like Adobe Photoshop. After MacPaint, Atkinson began playing with the notion of an interactive presentation tool called WildCard. WildCard—rebranded HyperCard in 1985—was based on the metaphor of a stack of index cards which could contain any combination of graphic elements, text, and interactive buttons. The main function of such interactive buttons was to flip from one ‘card’ view to another, thereby making HyperCard a simple hypermedia authoring tool. HyperCard put together an unprecedented set of features—graphics tools like MacPaint, a simple text editor, support for audio, and, with the addition of a scripting language called HyperTalk in 1987, a simple and elegant end-user programming environment.
Despite what the historical sequence might suggest, and despite some similarities which appear in HyperCard (a rough object model, message passing, and the “HyperTalk” scripting language), Smalltalk was not a direct influence on HyperCard; rather, HyperCard apparently came more-or-less fully formed from Bill Atkinson’s imagination. Atkinson had of course seen Smalltalk, and there were notable ex-PARC people including Alan Kay and Ted Kaehler at Apple (and even involved in HyperCard’s development), but HyperCard and its workings were Atkinson’s own (Ted Kaehler, personal communication, July 2004).
HyperCard’s great innovation was that it brought the concept of hypermedia authoring down to earth; it was the first system for designing and creating non-linear presentations that was within the reach of the average PC user. A designer could put any combination of media elements on a given card, and then create behaviours which would allow a user to move between cards. The system was simple to grasp, and in practice, proved easy for users of all ages to create “stacks,” as HyperCard documents were called.
Key to HyperCard’s success was Apple’s decision to pre-install HyperCard on all new Macintosh computers after 1987. The result was a large HyperCard community that distributed and exchanged thousands of user-created HyperCard stacks, many of which took the form of curriculum resources for classrooms.19 Alternatively, HyperCard was seen as a multimedia authoring toolkit and was put to use as a writing and design medium (or multimedium, as it were), again, often in classrooms; the genre of multimedia “authoring” was first established in this period, indicating the design and construction of hypermedia documents in a tool such as HyperCard. Ambron & Hooper’s 1990 book, Learning with Interactive Multimedia, is a snapshot of the kinds of uses to which HyperCard was being put in the late 1980s.
Not surprisingly, HyperCard was introduced very early on to the teachers and staff at the Open School, and met with considerable zeal; the teachers there could quickly see applications for it, and could quickly figure out how to realize these. The children were able to work with it easily, too. This experience was in some contrast with Playground and the simulation environments, which, although much more sophisticated, were barely usable on the limited hardware of the day. This, in combination with HyperCard’s elegant balance of simplicity and flexibility, proved to be a lesson Kay took to heart; here was a system that managed to achieve the low threshold of initial complexity that Kay had been shooting for over a decade or more.
Still, HyperCard’s limitations frustrated Kay. As elegant in conception and usability as it was, HyperCard was nowhere near the holistic media environment that Smalltalk had been. And while Kay praised HyperCard for its style and its obvious appeal to users, he railed against its limited conceptual structures: “That wonderful system, HyperCard, in spite of its great ideas, has some ‘metaphors’ that set my teeth on edge. Four of them are ‘stack,’ ‘card,’ ‘field,’ and ‘button’” (Kay 1990, p. 200)—the entirety of HyperCard’s object model! This is not mere griping or sour grapes on Kay’s part; the object paradigm that Smalltalk pioneered meant that objects were fundamental building blocks for an unlimited range of conceptual structures; to restrict a system to four pre-defined objects misses the entire point. That said, that HyperCard in its conceptual simplicity was an immediate success—not just at the Open School, but with users around the world—was not lost on anyone, least of all Kay, and its influence would be felt in his later work.
HyperCard’s limitations were felt by others, too. It became clear that, though HyperCard could do animation, a dedicated tool like VideoWorks (the prototype for Macromedia’s “Director” software) was a better animation tool. Similarly, MacPaint and like graphics programs were more flexible than HyperCard (which, despite its decade-long life, never went beyond black-and-white graphics). The very idea of an all-encompassing media environment like Smalltalk, and, to a lesser extent, HyperCard, ran up against the trend toward a genre of discrete application programs: individual word processors, spreadsheets, paint programs, and so on. If nothing else, a dedicated tool like a paint program was easier to market than an open-ended authoring environment. The “what is it good for” question is precisely the point here. This drawing program is excellent for making diagrams; this word processor is excellent for writing letters; but what was HyperCard good for, exactly? Its hundreds of thousands of users all had an idea, but it is important to remember that none of these early users had to make a purchasing decision for HyperCard, since it had been given away with new Macs. Even HyperCard ultimately had to justify its existence at Claris, the software company spun off from Apple in the early 1990s, by claiming to be a “multimedia” toolkit, and was branded by Apple as a “viewer” application for HyperCard stacks; the authoring functionality was sold separately. HyperCard was ultimately abandoned in the 1990s.
HyperCard’s relative success in the personal computing world underscores two points, which, seen through the lenses of Latour’s translations and subsequent black boxes, appear thus: first is the shift from media environment to “application toolkit,” with the attendant metaphors: standardized palettes, toolbars, and the establishment of the commodity “application” as the fundamental unit of personal computing. The second shift is from a generalized media environment to that of “Multimedia” applications, reifying “Multimedia” as an industry buzzword. As these concepts become commoditized and reified in the marketplace, the interesting work of defining them shifts elsewhere.
Translation #7: From epistemological tools to “Logo-as-Latin”
Seymour Papert’s work with the Logo programming language had begun as early as 1968, but despite its significant impact on Alan Kay’s personal mission and despite a good number of published articles from the early 1970s, Logo made very little impact on the public imagination until 1980, with the publication of Papert’s signature work, Mindstorms: Children, Computers, and Powerful Ideas. The appearance of this book set the stage for a significant commercialization and marketing effort aimed at getting Logo onto the new personal microcomputers. Programming in Logo grew into a popular computing genre through the early 1980s. A look at library holdings in the LB1028.520 range reveals a huge surge of output surrounding Logo in the classroom in the mid 1980s. Papert and Logo had become practically synonymous with educational technology in these years. But of course any substantial movement of an idea—let alone a technological system—into very different (and vastly larger) contexts brings with it necessary translations. In the case of Logo, this shift was in the form of branding. What was Logo, that it could be rapidly picked up and spread across school systems in North America and Europe in just a few short years (Aglianos, Noss, & Whitty 2001)? Chakraborty et al. (1999) suggest that the effort to make Logo into a marketable commodity effectively split the Logo community into “revolutionists” like Papert, interested in a radical redefinition of mathematics pedagogy, and more moderate “reformers,” who were more interested in spreading Logo as widely as possible.
This means that what Logo became in the marketplace (in the broad sense of the word) was a particular black box: turtle geometry; the notion that computer programming encourages a particular kind of thinking; that programming in Logo somehow symbolizes “computer literacy.” These notions are all very dubious—Logo is capable of vastly more than turtle graphics; the ‘thinking skills’ strategy was never part of Papert’s vocabulary; and to equate a particular activity like Logo programming with computer literacy is the equivalent of saying that (English) literacy can be reduced to reading newspaper articles—but these are the terms by which Logo became a mass phenomenon. Papert, for better or worse, stuck by Logo all the while, fighting something of a rear-guard action to maintain the complex and challenging intellectual foundation which he had attempted to lay. It was perhaps inevitable, as Papert himself notes (1987), that after such unrestrained enthusiasm, there would come a backlash. It was also perhaps inevitable given the weight that was put on it: Logo had come, within educational circles, to represent computer programming in the large, despite Papert’s frequent and eloquent statements about Logo’s role as an epistemological resource for thinking about mathematics. In the spirit of the larger project of cultural history that I am attempting here, I want to keep the emphasis on what Logo represented to various constituencies, rather than appealing to a body of literature that reported how Logo ‘didn’t work as promised,’ as many have done (e.g., Sloan 1985; Pea & Sheingold 1987). The latter, I believe, can only be evaluated in terms of this cultural history.
Papert indeed found himself searching for higher ground, as he accused Logo’s growing numbers of critics of technocentrism:
Egocentrism for Piaget does not mean “selfishness”—it means that the child has difficulty understanding anything independently of the self. Technocentrism refers to the tendency to give a similar centrality to a technical object—for example computers or Logo. This tendency shows up in questions like “What is THE effect of THE computer on cognitive development?” or “Does Logo work?” … such turns of phrase often betray a tendency to think of “computers” and “Logo” as agents that act directly on thinking and learning; they betray a tendency to reduce what are really the most important components of educational situations—people and cultures—to a secondary, facilitating role. The context for human development is always a culture, never an isolated technology. (Papert 1987, p. 23)
But by 1990, the damage was done: Logo’s image became that of a has-been technology, and its black boxes closed: in a 1996 framing of the field of educational technology, Timothy Koschmann named “Logo-as-Latin” a past paradigm of educational computing. The blunt idea that “programming” was an activity which could lead to “higher order thinking skills” (or not, as it were) had obviated Papert’s rich and subtle vision of an ego-syntonic mathematics.
By the early 1990s, the literature on educational technology had shifted; new titles in the LB1028.5 section were scarce, as new call numbers (and thus new genres) were in vogue: instructional design (LB1028.38); topics in the use of office productivity software (LB1028.46) and multimedia in the classroom (LB1028.55). Logo—and with it, programming—had faded. This had obvious effects for other systems—like HyperCard (Ted Kaehler, personal communication). In fact, HyperCard’s rise to relative popularity in this same period (and in similar call numbers) is probably despite its having a “programming” component; its multimedia strengths carried it through the contemporary trend. To my knowledge, there is no scholarship tracing the details of HyperCard’s educational use historically, but one piece of evidence is a popular competitor (perhaps it would be better to say “successor”) to HyperCard called HyperStudio. HyperStudio featured roughly the same stack-and-cards metaphor, and added colour graphics, but dropped HyperCard’s elegant scripting language. In fact, and somewhat ironically, later releases of HyperStudio incorporated a language called “HyperLogo” (perhaps to flesh out the program’s feature list), though it was not particularly well integrated,21 and there is little evidence that it made much of an impact on HyperStudio’s use.
Similarly, a new genre of simulation environments for teaching systems concepts (e.g., SimCalc) eschewed the notion of ‘whole’ environments, preferring instead to provide neatly contained microworlds with a minimum of dynamic scope; these are obviously quicker to pick up and easier to integrate into existing curriculum and existing systems.22
The message—or black box—resulting from the rise and fall of Logo seems to have been the notion that “programming” is over-rated and esoteric, more properly relegated to the ash-heap of ed-tech history, just as in the analogy with Latin. Moreover, with the coming of “multimedia” as the big news in early-1990s educational computing, the conclusion had seemingly been drawn that programming is antithetical to ‘user-friendliness’ or transparency. How far we had come from the Dynabook vision, or any kind of rich notion of computational literacy, as diSessa called it:
The hidden metaphor behind transparency—that seeing is understanding—is at loggerheads with literacy. It is the opposite of how media make us smarter. Media don’t present an unadulterated “picture” of the problem we want to solve, but have their fundamental advantage in providing a different representation, with different emphases and different operational possibilities than “seeing and directly manipulating.” (diSessa 2000, p. 225)
The Dynabook vision seemed further away than ever! Smalltalk, no matter what you may call it, is a programming language. Or is it? To answer that question, we first need a more comprehensive assessment of what personal computing means to us today.
1 Interestingly—and almost undoubtedly coincidentally—a Smalltalk environment saves its data, state, programs, and entire memory in a file called an “image.”
2 Arnold Pacey, in The Culture of Technology, wrote of the notion of technical sweetness, “the fact remains that research, invention, and design, like poetry and painting and other creative activities, tend to become compulsive. They take on purposes of their own, separate from economic or military goals” (1983, p. 81).
3 Compare the Alto’s specs with Apple Computer’s first-generation Macintosh, designed a decade later. According to PARC lore, the Alto was conceived—like Smalltalk—as a result of a bet, and the bravado of its creators. With money diverted from the LRG budget, Chuck Thacker from PARC’s Computer Science Lab initiated the project while the executive in charge of the lab was away, having boasted that they could create a whole machine in three months (Kay 1996, p. 532).
4 Ted Kaehler more dramatically called it “the cliff” (Kaehler, personal communication, July 7, 2004).
5 The significance of this is easy to miss, but the claim about lineage is not a trivial thing. Nearly all programming environments distinguish between a program—normally treated as a static text—and its execution. Smalltalk, like Lisp before it, can be written while the program is running, which means that software can be modified from within. Since Smalltalk is a live computing environment in which code dynamically co-exists with transient objects like state information and user input, later versions of Smalltalk were created within a Smalltalk-76 environment, and more recent versions constructed within these. In a sense, the Smalltalk-76 environment has upgraded itself a number of times in the past thirty years. See Ingalls 1983, pp. 24–25.
6 Adele Goldberg and Joan Ross wrote an article in the 1981 BYTE special issue on Smalltalk entitled “Is the Smalltalk-80 System for Children?”—the answer was a qualified yes, but it would appear that this article serves mostly to establish Smalltalk-80’s intellectual tradition rather than to introduce new material. Smalltalk found favour as a teaching language in a few academic computer science departments, but remained very far from the mainstream.
8 My source for this ‘mythology’ is the collected wisdom and commentary on contemporary programming practice at the original WikiWikiWeb (http://c2.com/cgi/wiki). The WikiWikiWeb was begun in the mid 1990s by Ward Cunningham, a Smalltalk programmer at Tektronix who wanted to host a collaboratively authored and maintained collection of software “design patterns”—a methodology inspired by architect Christopher Alexander’s A Pattern Language (1977). Cunningham and various colleagues (none of whom are still at Tektronix) became key figures in the OOP community, associated with the “design patterns” movement (see Gamma et al. 1995), and are also key figures in the newer “eXtreme Programming” movement (see, e.g., Beck 2000). The WikiWikiWeb remains a central repository of commentary on these topics, boasting over 30,000 ‘pages’ of collected information.
9 In 1984, Adele Goldberg became president of the Association for Computing Machinery, computing’s largest professional association. Goldberg did remain connected to education, however; her work in the 1990s with NeoMetron employed Smalltalk in the design of learning management systems (Goldberg et al. 1997).
10 The Xerox Star was sold in reasonably large quantity for the day; the Wikipedia entry on the Star states that about 25,000 of the machines made it to market—as corporate office technology. http://en.wikipedia.org/wiki/Xerox_Star (Retrieved Sept 15, 2005).
11 Note that in this formulation, “user” refers to an actual individual, as opposed to a hypothetical “User” for whom the system has been designed.
12 Interestingly, one of the leading figures in the early 1970s Scandinavian “experiment” was Kristin Nygaard, who had co-designed the original object-oriented Simula programming language a decade before.
13 I will use the term “microcomputer” here to distinguish between Alan Kay’s conceptualization of a “personal” computer and the small, inexpensive hobby- and home-targeted microcomputers which emerged in the late 1970s and early 1980s. The latter came to be known as “personal computers” especially after IBM branded their microcomputer offering as such in 1981.
14 Although Kay’s team in the 1970s foresaw a $500 personal computer, what they were actually working with cost vastly more; the transition to mass-produced (and therefore inexpensive) machines was not well thought out at Xerox. What was possible to create and bring to market for a few thousand dollars in the early 1980s was still a far cry from the Altos.
15 This reference to inspiration refers to software traditions beyond PARC too; it was not until the early 1990s (and the publicly accessible Internet) that Unix-based operating systems—another software tradition with roots in the 1970s—made any real impact on the PC market.
16 Core members of Alan Kay’s team—Larry Tesler, Ted Kaehler, and Dan Ingalls—went to Apple Computer in the early 1980s. Kay himself moved to Atari in 1980 and then Apple in 1984. Other ex-Xerox personalities spread to other key IT companies: word-processing pioneers Charles Simonyi and Gary Starkweather went to Microsoft, as did Alto designers Chuck Thacker and Butler Lampson. John Warnock and Charles Geschke, who worked on laser printing and the foundations of desktop publishing, founded Adobe Systems. Bob Metcalfe, who invented Ethernet, founded 3Com.
17 The computers at the LA Open School were installed inside the desks, and the desktops replaced with a piece of plexiglass. This allowed the computers to be installed in regular classrooms without rendering the desks unusable for other purposes. (see Kay 1991)
18 The published literature on the Vivarium is very thin, given that the project ran for almost a decade. Notable is an article Kay wrote for Scientific American in 1991, “Computers, Networks and Education,” which is heavy on Kay’s educational philosophy and light on project details. The project gets a brief mention in Stewart Brand’s popular book, The Media Lab: Inventing the Future at MIT (1987), due to the involvement of several Media Lab researchers.
19 HyperCard was used early on to control a videodisc player attached to one’s Macintosh; this gave HyperCard the ability to integrate large amounts of high-quality multimedia content: colour images and video, for instance.
20 LB1028.5 is listed in the Library of Congress Classification as “Computer assisted instruction. Programmed instruction”—ironic, given Papert’s comments on children programming computers and vice-versa.
21 I had the opportunity to write high-school Information Technology curriculum materials for distance education in the late 1990s. HyperStudio was a popular resource and I was encouraged to write it into my materials. However, the HyperLogo implementation so underwhelmed me that I rejected it in favour of a plain and simple Logo implementation (UCBLogo) for a module on introductory programming.
22 Interestingly, Jeremy Roschelle and colleagues on the SimCalc project argue persuasively for a “component architecture” approach as an alternative to all-encompassing systems (Roschelle et al. 1998), but this must be read in historical context, appearing in a time when networking technology was re-appearing as a fundamental component of personal computing and monolithic application software was being challenged.