‘This strange process of typing on a glowing glass screen’: An Interview with Matthew Kirschenbaum
Manuel Portela
CLP | University of Coimbra

 

Matthew G. Kirschenbaum.

 

Matthew G. Kirschenbaum is Professor of English at the University of Maryland. He is also Associate Director of the Maryland Institute for Technology in the Humanities (MITH, an applied think tank for the digital humanities), and he served as the first Director of Digital Cultures and Creativity, a new “living/learning” program in the Honors College. He is an affiliated faculty member with the Human-Computer Interaction Lab at Maryland, and was Vice-President of the Electronic Literature Organization. Kirschenbaum specializes in digital humanities, electronic literature and creative new media (including games), textual studies, and postmodern/experimental literature. He has a Ph.D. in English from the University of Virginia, and was trained in humanities computing at Virginia’s Electronic Text Center and Institute for Advanced Technology in the Humanities (where he was the Project Manager of the William Blake Archive). With Pat Harrigan, he has recently co-edited Zones of Control: Perspectives on Wargaming (MIT Press, April 2016). Kirschenbaum has lectured as guest professor for the PhD Program in Materialities of Literature, and has been a project consultant for the “Book of Disquiet Digital Archive”, both at the University of Coimbra.

In April 2016, Track Changes: A Literary History of Word Processing was published by Belknap Press of Harvard University Press. This work follows Mechanisms: New Media and the Forensic Imagination (MIT Press, 2008), another major work in which Kirschenbaum applied a social text rationale to digital inscriptions, analyzing the multilayered nature of the computer as a writing technology [1]. With his analysis of hard-drive inscriptions, Kirschenbaum offered a convincing theory about the specifics of the digital computer as a writing technology materially dependent on the interfacing of physics and mathematics (matter and code) that enabled a cascade of symbolic processes from writing symbols through programming languages, through machine language, through differential voltages. Word processing is just another textual practice where that particular ontology of the computer can be observed and described.

Track Changes tells the early history of word processing, roughly situated between 1964—when the IBM Magnetic Tape/Selectric Typewriter (MT/ST) was advertised as a word processing system for offices—and 1984—when the Apple Macintosh generalized the graphical user interface in personal computers. The history of word processing both as technological process and mode of textual production is deeply entangled with the changes in the technologies of writing as they reflect and contribute to efficiency and control in increasingly bureaucratic processes of social administration and organization. The literary history of word processing can be situated within this general computerization of the modes of production of writing. Kirschenbaum’s methods combine archival work in special collections and writers’ archives, oral interviews with writers and engineers, and hands-on descriptions of historical word processing machines. Track Changes is the subject of this interview [2].

 

1.

“The story of writing in the digital age is every bit as messy as the ink-stained rags that littered the floor of Gutenberg’s print shop or the hot molten lead of the Linotype machine.” This sentence suggests that word processing as a writing technology has a material and social history that is not particularly different from earlier writing and printing technologies. What is this messiness about word processing and digital writing that your research has uncovered?   


Word processing was no more inevitable than any other writing technology. We sometimes imagine it must have been—not only because almost everyone appreciates the ease and convenience of writing on a computer, but because of the physical resemblance between a typewriter and a computer keyboard. So word processing is just like a typewriter, only better, right? Typing++. But it’s not: some early adopters thought that their word processor was actually more similar to writing longhand—the complete freedom of movement you had as you sent the cursor zipping around the document, as opposed to the relentlessly linear logic of typewriting, the carriage trundling forward one line at a time. One of my favorite technical details in the book concerns IBM’s first word processor with a screen, a now forgotten unit called the System 6; the screen was optimized to display six lines of text. Why? Because that was about the number that a typist would typically see as the paper in the rollers bent backwards under its own weight. So there was no easy, linear adoption curve between typewriting and computing. Yet another dimension of the “messiness” was the sheer variety of different word processing programs that once competed on the market, scores of them, all of them with different features and affordances, many of them mutually incompatible. The wrong choice could spell disaster for an aspiring writer who had just invested their savings in one.

There were also different kinds of social expectations about what the technology could or couldn’t do. Word processing manuals used to routinely explain that the program would not actually write the author’s text! (Nowadays, of course, that’s becoming an issue of concern once again, with auto-completion and other natural language algorithms.) Users also needed to be reassured that their text wasn’t really gone once it scrolled off the edge of the screen. When you or I delete a word or a passage (as I’ve done many times in preparing my answers for this interview) we take for granted that we can do it with a couple of mouse clicks or keystrokes; but early word processing programs frequently required much more complex sequences of input, setting parameters for the selection, and so forth—still regarded as near effortless at the time, but unthinkably cumbersome by today’s standards. So yes, all of this is what I mean by messiness, all of these minute material details that help us to see word processing not as a quantum leap, and not as inevitable or preordained, but as an incremental outgrowth of engineering, design, and socialization.

2.

Would you say that Ellenor Handley’s word processing of Len Deighton’s Bomber (published in 1970) using the IBM Magnetic Tape/Selectric Typewriter in the years 1968-69 (which your book suggests as the most likely candidate for being the first novel entirely written with a word processing machine) is the historical equivalent of Mark Twain’s typewritten (also typed by an assistant) Life on the Mississippi (1883)? What parallels are there between the early adoption of typewriters in the last decades of the nineteenth century and the early adoption of word processing for literary writing in the 1970s and 1980s? For instance—when one considers the gendered organization of the writing scene, or when one looks at the imagined effects of the mechanization of writing?


There are some striking parallels. Deighton, like Twain, was a popular writer, even something of a celebrity. In material terms, this meant he could afford the stratospheric price-tag on the IBM machine, $10,000 at the time (in fact he leased it). Most striking, of course, is the gendered relationship between author and typist, which holds true in both instances. In their book on the literary secretary, Leah Price and Pamela Thurschwell wryly note that the “opposite of genius is typist.” This speaks not only to questions of gender, but also labor: writing was (and is) hard work. By this I mean not just the emotional labor involved in the creative process, but the actual physical labor of typing and retyping draft after draft, not to mention filing, correspondence, keeping the books, etc. Many successful, high-volume writers retained secretaries, and many still do—a successful commercial writer is often something much more like an office manager than a Romantic solitary genius. For both Twain and Deighton, writing was a labor of love, but it was labor still, and it was labor that they outsourced to other people’s bodies and to machines. In both cases, the actual processing of their texts—polishing and perfecting the text, transforming the prose into a readily reproducible format—was done by someone else.

3.

Your book focuses mostly on the years between 1964 and 1984, before word processing becomes a dominant practice and a naturalized writing tool. What are the significant moments in the gradual adoption of word processing for literary writing? How does it change the production process? Are there stages in the development of particular assemblages of hardware and software, on the one hand, and modes of interaction between those assemblages and particular writers or writing practices, on the other?


The change-over happened very fast by most any historical standard: before 1979, someone writing with a word processing program was a pioneer; by 1983 or 1984, they were merely typical. But within that radically foreshortened timeframe there were an amazing number of innovations and landmark products: the first integrated systems, like the Apple II, TRS-80 Model I, and Commodore PET, appearing in 1977; WordStar in 1979; WordPerfect in 1980; the IBM PC and the Osborne 1 (the first “luggable” computer) in 1981; the Kaypro in 1982; Microsoft Word in 1983; the Macintosh in 1984. But linear history can also be deceiving: when Microsoft released Word 3.0 for Macintosh in 1987, its chief architect, Charles Simonyi, spoke of it as an asymptote, approaching—but never quite attaining—a vision for word processing originally imagined at Xerox PARC back in the 1970s.

The term WYSIWYG was coined there, during a market demo that involved mirroring a sheet of company letterhead on the Alto computer’s vertically oriented display screen and its laser-printed output. “What you see is what you get” was a line that at the time had been popularized by the comedian Flip Wilson, and someone in the audience was said to have shouted it out when they saw that page and screen were identical: WYSIWYG. This professional appearance also became a stigma of word processing, however, the suspicion being that writers would delude themselves into thinking that their work was more finished and polished than it really was, just because it looked so good. (Sometimes writers even hid the fact that they were using a word processor from their editor or agent.) Ironically, as anyone who has published something professionally will know, a publisher rarely wants writers to attempt laying out their own prose—authors are instructed to use minimal settings on their word processor and leave the formatting up to the pros. So while there are definitely technical landmarks to point to, the history itself is rarely one of simply linear progress.

4.

Another aspect that you have uncovered relates to conflicting representations of writing with word processors. There were writers who immediately grasped the freedoms of writing with light—which was seen as particularly liberating for the revision process, but also for the actual textual and structural composition; and there were those who feared the disembodied strangeness created by the layers of coding that had made writing processable. They were aware that a certain loss of grasp came with the encoding of characters, and they resisted imagining the word processor as something other than a typewriter. This self-awareness of the changing ontology of the written inscription was metaphorically used in stories and poems, for instance in works by Stephen King or John Updike. You also show how these conflicting representations circulated in the general culture—in industry advertisements, management textbooks, personal computing magazines. How was the discourse around word processing structured, and how did it evolve after the first introduction of the concept and its initial technical implementations? What was the contribution of the office efficiency discourse in creating new representations of writing practices?


Writers loved working mention of their new writing machines into their fiction. Umberto Eco did it. So did Anne Rice. And Stephen King, and others. Tracking down these “Easter Eggs” (as I thought of them) never got old. But there are other ways to look at word processing in relation to the image of authorship. 1984 was the year the illustrator David Levine began sometimes drawing authors with computers instead of typewriters or fountain pens in his caricatures for the New York Review of Books. Isaac Asimov, Robert Ludlum, Gordon Dickson, and others, meanwhile, appeared in advertising spots for companies like Radio Shack and Atari. In a magazine like Writer’s Digest, images of fountain pens were overtly used to garnish advertisements for word processors in order to provide visual continuity with the predecessor technology. Word processors also began appearing in interviews in venues like the Paris Review, and in the images captured by literary photographers like Jill Krementz and Nancy Crampton, becoming increasingly commonplace by the end of the 1980s. Gag pieces positing feature-laden fountain pens or pencils as fully equipped “word processors” were a staple of the computer press. By the time R. Crumb drew Charles Bukowski in front of his Macintosh in 1995, computers were fully assimilated into the stock of cultural imagery around literature and literary authorship.

But writers themselves also mythologized or romanticized the technology. Stephen King played with the idea of an author copy-editing his own life on a paranormal word processor, and John Updike wrote a poem in which each.word.was.separated.by.a.spacing.dot, just like on his word processor. Its title: INVALID.KEYSTROKE. In it, he meditates on the possibility of the word processor erasing him. Writers were simultaneously captivated and a little terrified by the prospect of consigning their prose to the mutely glowing glass screen, wondering what would happen once the pixels went out.

5.

Track Changes brings together different methods: archival research, oral interviews, close readings of various types of text (novels, short stories, poems, advertisements, office handbooks, magazine articles), technical descriptions of many storage and processing technologies according to media archaeological methods, and—what I would describe as its unifying perspective—a social-material theory of textual production. Do you think this particular choice of methods and theory contains a new research model for making literary and cultural history? Was it the particular object of inquiry that led you to this eclectic and inflected approach? Why do you think the literary history of word processing had to be told in this particular way? Are the stories that you tell as arbitrary as you suggest in your Preface— “arbitrary in the sense that these were the stories that were recoverable to me in the course of my research” (xiii-xiv)? Or is this a way of highlighting the medial nature of literary and textual processes?


Well, no, of course they’re not entirely arbitrary, or at least I hope not! But there was a lot of serendipity involved, and criteria for inclusion are always going to be arbitrary to some extent. By stopping the historical master narrative in the mid-1980s—a moment when, according to statistics, nearly half the writers in the US had switched over to word processors—I felt like I could cover most of the early adopters with a fairly high degree of confidence. After that the storylines just become too diffuse as word processing becomes commonplace.

The big burst of visibility the book had early on in my research process was invaluable, garnering me numerous additional contacts and research directions to run down. In this the research proved once again my rule that it’s always better to be working out in the open, where people can see you, than to stay tucked away in a library carrel (or private account) out of fear that someone will run away with your “ideas.”

I did a fair amount of work in archival collections for the book, ranging from the Houghton Library at Harvard to Microsoft Corporation in Redmond, Washington, but it was also clear to me early on that much of this history still resided in the memories of individuals. To that end I eventually conducted some three dozen oral history interviews, something for which I had no formal training but which was one of the most fascinating and enjoyable parts of the research. Finally, I built up a collection of old computers and software so I could try out different historical word processors for myself, literally hands-on research—the physical feel of the different platforms was very important to me.

6.

What also strikes me as one of the defining features of your book is your ability to integrate an ethnographic attention to the minute particulars of multiple writing scenes with a strong sense of narrative structure—with flashbacks, flash forwards, interpolated stories, recapitulations, micro-plots. There are many moments in the book that read almost like embedded short stories—such as the communication exchanges between Arthur C. Clarke (in Sri Lanka) and the director Peter Hyams (in Los Angeles) writing the movie adaptation of Clarke’s 2010: Odyssey Two, described in Chapter 3, or John Hersey’s use of the DEC PDP-10 machine and the LINTRN software to write My Petition for More Space at Yale University, described in Chapter 6. How do you see your own writing in this book in relation to the textual and literary history that you are trying to make here? Is it just a question of making the text more readable or do you see it as a methodological aspect of your research?


Thank you, this was an aspect of the book I worked hardest on—beyond the underlying research itself of course—and it’s been picked up on by many of the reviewers. The bottom line is that I enjoyed these stories so much, and felt that they had so much to offer—often in the unexpected details—that I wanted to relay them to the reader. The book took on a very deliberate curatorial aspect, which also included relying heavily on quotation. Listening in as a writer like Michael Crichton tries to articulate what word processing is—this strange process of typing on a glowing glass screen—reminds us of just how strange an experience it once was, and that was what I wanted to recapture in the book, that moment when the technology arrived humming, glowing, whirring, and vibrating on the writer’s desktop.

7.

You refrain from offering an overall theory of how word processing affected the practices and forms of writing. Although a social text rationale in the analysis of textual production and textual transmission connects this work to your previous book (Mechanisms, 2008), in that earlier work you offered a general theory of the computer as a writing technology. Here you seem to resist making a theoretical leap of the kind that Kittler, for instance, has made when he described the printing of the microchip circuits as the moment when all human symbolic production enters the loop of automation. Is it just because the actual word processing practices, when observed at the scale of the daily messiness of writing in progress, are too varied and too rich, and they resist abstraction? Or are there larger patterns—stylistic, structural, cultural—when we track all these changes?


That’s the question, isn’t it? Nietzsche really had the first and last word here: “Our writing tools are shaping our thoughts.” It’s interesting in this context that the German word he uses, Gedanke, is “thoughts” and not, say, “style.” I think the answer to whether word processing changed an individual writer’s style would have to be addressed through close textual analysis of each individual corpus. Undoubtedly some writers did change not only how but also what they wrote, but I found no conclusive evidence to suggest a single, unified arc of change. One also has to remember that what writers write is dictated by myriad factors—technology, yes, but also what the marketplace will bear, and of course by the circumstances of their own life, history, and creative practice. (Kittler is finally too much the determinist for my taste.)

What word processing is absolutely changing, I believe, is what the composition theorist Christina Haas has called the “sense of the text.” This is the mental model we hold of the document space we are constructing. Word processing allowed writers an unprecedented level of access to the textual field in its entirety, essentially allowing it to be folded and extruded through what Jerry McGann might call n-dimensions. In practical terms, this manifested through features as basic as search. The ability to instantly locate instances of specific words and phrases allowed writers a remarkable degree of control of the texture of their prose, its contours and rhythms. Where typewriting enforced linearity on the writing process, word processing meant you didn’t necessarily have to begin at the beginning or end with the end—the document space was instantly centered on any place that the cursor could be inserted, and what came before and after would be reimagined accordingly.

8.

In the last chapters you raise questions related to the paradox of the excess of information—which is a consequence of the vestigial and self-documenting presence of the micro-events of writing tracked by the machine—and the loss of information—caused by difficulties in recovering particular word processing technologies and preserving digital information in general. How do you see this paradox impacting the methods for critical and genetic editing, and for textual scholarship? Do you see methods of macro-analysis and pattern-finding being applied to authors’ word-processed archives? Will these analyses change our understanding of writing processes, both at the individual and social levels?


I would like to think so, but I also honestly don’t know. We haven’t seen it yet, though there is promising work that has been done. The most advanced example of a genetic edition based on forensic computing techniques is likely the ongoing work of Thorsten Ries on the German poet Thomas Kling. Doug Reside has similarly demonstrated the art of what’s possible with recoveries of some alternate versions of Jonathan Larson’s lyrics from RENT, and I’ve delved into some of John Updike’s digital remains. And Adam Bradley has done pioneering work on the posthumous reconstruction of Ralph Ellison’s unfinished second novel, aided in large part by digital files from Ellison’s diskettes (Ellison acquired a word processor in 1982, a relatively early adopter). I think further instances are inevitable, given the simple reality that so many prominent writers nowadays have born-digital materials in their archival collections: Salman Rushdie, Gabriel Garcia Marquez, Lucille Clifton, and David Foster Wallace to name just a few. The most exciting development I know of at this time is the opening of Toni Morrison’s papers at Princeton: this collection includes some 150 floppy diskettes, many of them containing the only extant copies of drafts and records whose paper incarnation was lost in a house fire. In future years, we will surely see the textual scholar with a hex editor open, much as a portable collator occasionally graces the tabletops of the reading room now.

9.

One last question that goes beyond a strict notion of “word processing” as a tool for literary writing. We could say that word processing has also become a naturalized and dominant form of interaction in the global instantaneous and mobile communications network. The words that we constantly process as writing subjects in cloud computing systems are now part of the feedback loop that increases the efficiency of machine-learning algorithms, recommendation systems, customized advertisements, surveillance methods, and other forms of behavioral and social control. Would you agree with the idea that word processing is now also part of the internet infrastructure? In other words, the processing of written language is an essential component that sustains the network as an ensemble of disciplining practices?


A timely question to end on, given the recent news of Microsoft’s acquisition of LinkedIn and the promise (threat?) of integration with Microsoft Word in order to bring one’s professional network into direct contact with the composition space. This has stirred memories of the infamous Clippy in the popular press, but in truth I see it as merely symptomatic of exactly the phenomenon you describe—I like the idea of word processing, in all its guises and incarnations, as an essential element of network infrastructure. For all of its visual footprint, the internet is still held together by text—true not only of human-readable documents, but also of the transactions between browsers and servers. This is to say nothing of the algorithmic engines that increasingly shape the contours of the Web itself, from search results and recommendations to actual content scripted by automated journalism. To the extent surfing the Web is a textual transaction—and we live, after all, in a time when text itself has become a verb—the Web is a medium we process through and with words.

At a book talk recently I suggested that there might be some forms of writing which humans didn’t need to do, and that this was okay. After all, if a machine can compose my next email, that gives me more time to write my next book!

 


Notes

[1] Reviewed by Manuel Portela in Digital Humanities Quarterly 4.1 (2010):
http://www.digitalhumanities.org/dhq/vol/4/1/000087/000087.html.

[2] This interview was conducted through email, in May and June 2016.