January 30, 2008

by Noah Wardrip-Fruin · 6:37 am

When I was a teenager — in the 1980s — my mother bought a personal computer. It was an impressive machine for the day, decked out with two floppy drives, a dot matrix printer, a Hayes modem, and a monochrome amber display. At first I only used the machine for some minor programming experiments (in Basic and later Pascal), writing for school (in WordStar), and a few games. But that mysterious modem sat there. It was probably intended to let my mother exchange data with the big Digital Equipment Corporation machines in her lab at the university, but I knew modems could also be used for other things.

This was about a decade before the Internet began to make its way into homes like ours, and I had no interest in the manicured gardens of services like The Source or CompuServe. Rather than any long-distance journey, I wanted to use the modem to explore the local wilderness, to visit the unruly home BBS scene sprouting in the dens and basements of my neighbors.



While largely forgotten today, a BBS — short for “Bulletin Board System” — was the online destination of choice for 1980s teenagers. Most were run by individuals out of their homes: computer enthusiasts with machines much more powerful than ours, hooked to one or more dedicated phone lines. A user like me could call into a BBS, read messages, leave messages, download and upload files, play text-based games, and (if the owner of the BBS was at their computer, or if someone called in to one of the other phone lines) have real-time conversations, with total strangers, in text. In other words, the BBS wasn’t just a file repository. It was a window into what has now become obvious: the incredible social potential of combining computers and networks, which has given us email, instant messaging, wikis, blogs, social networking websites, and much more.

Having glimpsed this potential, I found that a BBS with multiple lines could feel a little lonely when no one else was on. But then one day I was over at the house of a childhood friend (we no longer went to the same school) and he showed me that, on his computer, conversation was always waiting. He showed me a program he’d downloaded from a BBS. He introduced me to Eliza.



Eliza today

Eliza — or, more properly, Eliza/Doctor — is a groundbreaking system created by computer science researcher Joseph Weizenbaum at MIT in the mid-1960s. In the two decades between Weizenbaum’s creation of the system and my encounter with it at my friend’s house, it had become one of the world’s most famous demonstrations of the potential of computing. First unveiled two years before HAL 9000’s screen debut in 2001: A Space Odyssey, Eliza seemed to make it possible to have a real conversation with a computer.

In the computer science literature, under the name Eliza, Weizenbaum’s system is a contribution to the field of natural language processing. On the other hand, when Eliza plays Doctor it is a well-known computer character, famous far beyond computer science, often also known by the name Eliza. And Eliza also has a third common usage in the computer world: “the Eliza effect.” This term has generally been used to describe the not-uncommon illusion that an interactive computer system is more “intelligent” (or substantially more complex and capable) than it actually is. One of my purposes in this chapter is to revisit the Eliza effect and give it a further nuance, so that it names not only this initial illusion but also the authorial choice that comes with it: severely restricted interaction (on the one hand) or eventual breakdown that takes a form based on the actual underlying processes (on the other).
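
To make the gap between illusion and underlying process concrete, here is a rough sketch, in Python, of the style of processing that Eliza-like programs perform: keyword matching, pronoun swapping, and template reassembly. The rules below are invented for illustration and are far cruder than Weizenbaum’s actual script; they are only meant to show how little machinery can sustain the initial illusion, and how the same machinery produces the characteristic breakdown when no rule matches.

```python
import re

# Hypothetical Eliza-style rules (not Weizenbaum's actual script):
# each rule pairs a keyword pattern with a response template.
PRONOUN_SWAPS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i am (.*)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"i feel (.*)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"my (.*)", re.IGNORECASE), "Tell me more about your {0}."),
]

# When nothing matches, fall back to a content-free prompt --
# the point at which the illusion starts to break down.
DEFAULT = "Please go on."

def swap_pronouns(fragment: str) -> str:
    """Swap first-person words for second-person ones, word by word."""
    return " ".join(PRONOUN_SWAPS.get(w.lower(), w) for w in fragment.split())

def respond(utterance: str) -> str:
    """Return a reply built from the first matching rule, or the default."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(swap_pronouns(match.group(1).rstrip(".!?")))
    return DEFAULT

print(respond("I am sad about my exams"))  # -> How long have you been sad about your exams?
print(respond("The weather is nice"))      # -> Please go on.
```

The first reply can feel uncannily attentive; the second exposes the emptiness underneath. That contrast is the Eliza effect in miniature.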

In the next chapter, with an examination of the Eliza effect as background, I will look at the options selected by today’s authors of digital fictions — particularly for computer games. These generally put aside the Eliza effect in favor of systems that more clearly communicate their structures to audiences. However, there are two problems with these that I will consider. Some of them employ processes that, while legible, tend toward a non-Eliza form of breakdown. Others, while avoiding breakdown, have very low ambitions in their use of computational processes. More ambitious routes will be the focus of the remaining chapters of this book.

But, for now, I’ll start with the illusion.

Comments


nick on whole page:

Noah, I just wanted to comment, as we start reading the second chapter, that I’m really enjoying reading your book collaboratively with many who are a part of Grand Text Auto. I’m not sure how useful I will be as an online reviewer, because the incremental upload of the book prevents me from getting the whole picture of your overall project. At best we seem to be nitpicking, as Doug Sery at MIT Press aptly predicted and as the interface and serial method of posting encourages. But I nevertheless love the experience of reading along with others who have commented and the many others who haven’t. Progressing through your book together with these others gives me the sense of being part of a reading community.

January 30, 2008 7:16 am
noah:

Nick, I’m glad you’re enjoying the sense of reading community. My hope is to make Expressive Processing part of the flow of the blog — and so far that seems to be working.

One interesting thing, however, is that positioning this as a “review” seems to have raised an expectation of formality that isn’t usually present on the blog. Some GTxA readers seem more comfortable sending me backchannel responses via email, rather than having their comments be part of the public review.

In one sense that’s fine, because all these comments help with my work and thinking. On another hand it’s a shame, because wider discussion of these issues could have broader benefit. But obviously people should participate however they are moved.

That said, I’d disagree that the public comments, so far, are nit-picking. I think they range from the very high-level (Lev and Matt bringing up big issues for the field, which I need to bear in mind while viewing the project as a whole) to the very detailed (use of the word “early”) and also various spots in-between (Markku and Bryan asking how particular critical concepts relate to what I’ve written). Maybe the more detailed comments could be called nit-picking, but that has a negative feel — and I’m happy with any comment that helps the project improve.

As for it being hard to get a broader picture, I think that was particularly true of the introductory chapter. Hopefully that chapter has now provided a little background against which what comes can be read. But I’m perfectly happy if people wait for there to be another whole chapter, or two, or three, before going back and commenting on the very first section of the first chapter. This project has more than two months to go, and the ideal is to learn something from it, from people who comment when and how they feel motivated. I’m watching all the comments, not just those on the last couple posts.

January 30, 2008 7:59 am
Terry on paragraph 3:

Ahh, I remember the all powerful Sysop. You mention text games, but I also accessed graphical games as well.

January 30, 2008 9:57 am
Jim Whitehead on paragraph 3:

The main reason I never used The Source much was due to its pricing model. I knew if I used it much, I’d easily end up spending all of my allowance and grass-cutting income on online fees. Worse yet, I’d have to justify time spent online with my parents, who received the bill. Far better to frequent BBSs where I could contact other Apple users to arrange for disk exchanges.

January 30, 2008 9:58 am
Terry on paragraph 5:

Will Eliza’s connection to the Turing test be in the next installment?

January 30, 2008 10:00 am
noah:

Actually, the Turing test doesn’t get a full treatment until the chapter after next. This chapter presents, and then complicates, the traditional Eliza effect. The next chapter talks about how most computer games have pursued something different from the Eliza effect, even when including characters and stories. Finally, the following chapter gets into a more complex discussion of the relationship between surface interaction and underlying models — which is key to the questions raised by Turing’s game. Hopefully you’ll find it worth the wait.

January 30, 2008 10:58 am
Terry on paragraph 5:

Yeah, I’m in it for the long haul. Also, I’m really enjoying this collaborative reading process. I could see a school using a model like this to stimulate discussion about a book — maybe a future incarnation of Project Gutenberg will have something similar.

January 30, 2008 11:15 am
noah on paragraph 3:

Yes, for me the BBS world is a set of fond — and, admittedly, somewhat hazy — memories. (I can’t remember if I ever played a graphical game over a BBS connection.) Glad to see it evokes some nostalgia in others as well!

For those without personal BBS memories, I hope this section still helps set the stage by establishing the idea that I am among the many who have written about Eliza who first experienced it in a computing context quite different from the predominant present ones.

But it also strikes me that, for the BBS world more specifically, it would also be good to have a reference or two. I should obviously point people to Jason Scott’s documentary. Anything else spring to mind?

January 30, 2008 11:51 am
Mark Bernstein on paragraph 3:

> a groundbreaking system created by computer science researcher Joseph Weizenbaum at MIT in the mid-1960s

Eliza was a small program, even by the standards of the 1960s; I think the date could be pinned more precisely to its first publication.

> one of the world’s most famous demonstrations of the potential of computing

It is not clear to me that the text makes it clear that this is directly contrary to the intent of the program.

> HAL 9000

“Years before” might be “two years” (Eliza published in 1966, Kubrick’s film debuted in 1968 and so was filmed about the same time). But conversations with computers have a much longer history, of course; Heinlein’s MOON IS A HARSH MISTRESS is 1966, Star Trek has a computer testifying at a trial in 1967.

January 31, 2008 8:41 am
noah:

Mark, thanks for your careful reading! I’ll respond to each of your points in turn.

First, I consider Eliza/Doctor “groundbreaking” because it was so influential, rather than because it was particularly large.

Second, Weizenbaum says of Eliza, “The work was done in the period 1964-1966.” Since I was talking about its creation, rather than its publication, I chose “mid-1960s” — but if people find that confusing I could change it.

Third, turning to “the potential of computing” and “the intent of the program,” my impression is that Weizenbaum was interested in exploring possibilities for natural language processing. He was shocked to find that people viewed Eliza/Doctor’s simple, playful transformations as something very different (and much more significant). But he didn’t create the system with the intent of exposing these misunderstandings.

Finally, I certainly didn’t mean to imply that the HAL 9000 was the first representation of a talking computer. I was just referencing it as a famous talking computer of, as you say, roughly the same era. You’re right that changing “years before” to “two years before” would make this clearer for those who don’t know the release date of the film.

January 31, 2008 11:35 am
Matt K. on paragraph 3:

Noah, you might take a look at the pages on the RAMAC in Mechanisms; no NLP there at all, just brute force information retrieval, but the effect on people was still one of interacting with a computational persona.

January 31, 2008 4:58 pm
noah:

Yes, I very much enjoyed reading about that in Mechanisms. Good place for a footnote!

February 2, 2008 5:36 pm

Bonnie Ruberg on whole page:

Noah, I wonder if you’re familiar with these (much less successful, but still interesting) Eliza-based AI sessions testing the ability of a bot to have cybersex.
http://virt.vgmix.com/jenny18/logs/

Interesting stuff!

February 2, 2008 7:14 am
noah:

A great example. The key to the initial “boom” of the Eliza effect is the expectations of the audience. From all the stories of people being fooled in online chat sessions (see, for example, those reported in Turkle’s Life on the Screen) it appears that the force of expectation may be extremely strong in such situations. Or, to look at it more as Murray does, perhaps these are cases in which it’s especially fun (or, um, otherwise stimulating) to play along and deliberately maintain suspension of disbelief. That said, looking at the Jenny18 transcripts, it does seem as though the Eliza effect’s breakdown would have been hard for even the most motivated to ignore all the way through some of the interactions.

February 2, 2008 6:04 pm
Ishmael on paragraph 5:

Sorry to nit-pick, when I’m thoroughly enjoying the book as a whole. However:

“First unveiled years before HAL 9000’s screen debut in 2001: A Space Odyssey, it seemed that Eliza made it possible to have a real conversation with a computer.”

As I understand it, Eliza was first unveiled in 1966, with 2001 having its first screening in 1968, so the use of “years before” here is a bit disingenuous.

Actually, this may be a more interesting connection, given the close proximity of the two. Clarke and Kubrick employed Marvin Minsky as an advisor on the set of 2001, and it seems quite likely (though I haven’t checked) that Minsky could have heard about Weizenbaum’s creation, such that it informed the treatment of Hal in the film. Hal seems to have quite a modulated, dispassionate voice, not unlike a psychotherapist. And (if I remember rightly) during the scenes as Bowman prepares to “lobotomise” Hal, the computer reverts to asking questions, akin to the Eliza technique of interrogation to conceal its lack of actual intelligence.

I’ve just started working on 2001 and Hal, so if I come across anything more concrete I will be happy to pass it on.

April 2, 2008 7:50 am
noah on paragraph 5:

Ishmael, yes, the fact that it’s only two years before is also something that Mark Bernstein thought worth mentioning. I’ve changed it in the final text.

I think you’re right that there may be an interesting connection between Eliza and HAL. Have you done any more work on this? Also, any thoughts on Michael’s related chapter (“Reading Hal: Representation and Artificial Intelligence”)?

June 25, 2008 1:53 pm
Ishmael on paragraph 5:

Noah,
Thanks for your reply to my earlier comment, and glad the text stands corrected.

I *have* been doing more work on HAL and Eliza, but have been a little frustrated. Having looked through most of Kubrick’s and Clarke’s recollections of the filming, I have found no explicit mention of Eliza. On the other hand, the fact that Marvin Minsky was an advisor on the set makes it not unlikely that he mentioned Weizenbaum’s report in the ACM to someone on the set.

Even if I could not find that vital historical fingerprint, however, I do think there is a more ephemeral linkage that can be drawn. In Clarke’s novelised version, he says that Hal is modelled on the human brain (following Minsky’s work) but that the interconnections are so numerous no one has any idea how the machine produces consciousness. In my argument, therefore, Hal stands as a critique of the behaviourist model of AI – this is, of course, precisely what the Turing test is all about (and also, as Michael’s article points out, chess playing computers), and it was Eliza’s ability to dupe human interlocutors that hammered home the limitations of this epistemological approach (likewise chess computers which became more powerful but therefore less like heuristic human players).

So even if Eliza is not the specific inspiration for Hal, my understanding is that Eliza is simply the most prominent manifestation of that period’s move away from Turing’s benchmark – to model the machine intelligence on the human – towards self-emergent intelligences and the honing of specialised processing systems rather than general problem solvers. I’ve got a fairly polished thesis chapter on 2001, and though it takes a slightly different tack to Hal than you are probably concerned with, I’d be happy to pass it along if you’d like.

Best,
Ishmael

June 26, 2008 1:45 am
noah:

Yes — I’d love to see that chapter. While you’re right that my main argument doesn’t pass through HAL and 2001, it sounds like an opportunity for a good read and useful footnote. Thanks!

June 30, 2008 6:05 pm