Post-Digital Literary Studies
Florian Cramer
Rotterdam University of Applied Sciences


Digitization of literature

The terms “digital humanities” and “digital literary studies” are technically imprecise as well as historically questionable. They may well also be flawed on a scholarly level, since they both use the word “digital” in its colloquially common (yet scientifically incorrect) sense of information processed as zeros and ones by electronic computers. From this academically problematic perspective, the history of “digital literature” can be said to have begun in the period from the 1940s to the 1970s in university research laboratories [1] before evolving into more widespread forms of literary writing in the 1980s and 1990s, made possible by the emergence of personal computing and home Internet connections. Digital humanities and digital literary studies can then be understood as part of this same historical narrative.

However, according to a more precise technical definition of the word “digital”, digital information is not necessarily encoded as zeros and ones, nor does it need to be processed by any kind of computing device, whether electronic or non-electronic. Rather, “digital” refers much more broadly to any kind of information that is, to use Nelson Goodman’s term, “differentiated” (1968: IV, 8): that is, divided up into (a) unambiguously countable units that (b) stem from a finite repertoire of symbols. Saussure’s and Jakobson’s structuralist models of speech, understood as the paradigmatic selection of elements from a language-toolbox and the syntagmatic combination of those selected elements into sequences (words, sentences etc.), are in fact digital models of language, at least if one reads them literally as technical proposals with no room for interpretation and no ambiguity regarding the differentiation of elements (Jakobson, 1960: 55; De Saussure, 2011: 122). Structuralist linguistics and poetics thus not only resembled digital literary studies, they could in fact be digital literary studies, at least in theory and insofar as their efforts were limited to the analysis of strictly syntactical operations within a text, such as letter selection and combination. In order to practice digital poetics in the literal sense of the term, it would suffice to technically analyze the syntagmatic and paradigmatic letter, syllable and word combinatorics of, for example, Gertrude Stein’s Tender Buttons ([1914] 2014) or Velimir Khlebnikov’s Incantation by Laughter (Khlebnikov, Douglas and Schmidt, 1990: 20) using Saussure’s and Jakobson’s toolbox. (It is no coincidence that Jakobson had collaborated with Khlebnikov prior to his academic career.)
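Read literally as a technical proposal, this selection-and-combination model can be implemented in a few lines. The following Python sketch is purely illustrative (the three-letter repertoire and the sequence length are arbitrary choices, not drawn from Saussure or Jakobson): it shows a finite, countable repertoire of discrete symbols, and the exhaustive syntagmatic combination of selections from it.

```python
from itertools import product

# Paradigmatic axis: a finite, countable repertoire of discrete symbols.
# This satisfies Goodman's "differentiated" criterion: unambiguous units
# drawn from a finite set.
repertoire = ["a", "b", "c"]

# Syntagmatic axis: combination of selected elements into sequences.
# Here: every possible two-symbol sequence over the repertoire.
syntagms = ["".join(seq) for seq in product(repertoire, repeat=2)]

print(syntagms)
# Every sequence is "digital" in the strict sense: countable, and built
# exclusively from the finite repertoire.
assert len(syntagms) == len(repertoire) ** 2
assert all(symbol in repertoire for word in syntagms for symbol in word)
```

No computing machinery is presupposed by the model itself; the code merely makes its combinatorial character explicit.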

Older anagrammatic and permutational poetry could just as well be subjected to such a philology. This applies for example to medieval Kabbalism and Renaissance Lullism, both of which featured combinatorial writing as well as combinatorial analysis. Yet this would, once again, narrow down the concept of digital literary studies to merely computational poetics. Digital linguistics in the broadest sense begins with the alphabet itself, which is a paradigmatic example of how digital information can be differentiated, even computable, while still existing independently of any computation or computational machines. The alphabet is a digital system, since it consists of letters as its countable elements in a finite set. Nevertheless, alphabetical texts would rarely be subjected to computation outside of specialized applications such as cryptography, at least until the invention of computer word processing and later the widespread development and application of Internet data mining by corporations and governments.

In the case of European languages, the digitization of writing can be described as a process that has been intrinsically and historically connected to three distinct technological innovations. First, the codification of an alphabet as such, which was often linked to the economic requirements of particular writing technologies, for example stone carving in antiquity or calligraphy in medieval scriptoria; the standardization of letters was a prerequisite for the practical implementation of such proto-industrial writing factories. A second milestone in the normalization of the alphabet arose as a result of the invention and development of the printing press; from Gutenberg’s movable type to 20th-century typography, any ambiguities in non-discrete (and hence non-digital) letter elements, such as ligatures and diacritical marks, were gradually simplified or removed. Thirdly and finally, the emergence of electronic computing and the standardization of the 7-bit/128-character ASCII code in 1963 (which is still the base encoding of all electronic text, including the Web’s HTML) eliminated any remaining ambiguities in the differentiation and computability of the Latin alphabet.


Digital literary studies in the 20th century

If the history of the digitality of language and literature does indeed go back more than 2,500 years, how then could the philological study of digitality remain a largely esoteric practice, limited until the 20th century to the fields of mystical theology and speculative science? Upon closer scrutiny, this historical time frame in fact shrinks to only the past few decades; structuralism, despite its combinatorial language model, has rarely engaged with the purely digital aspects of the structures of writing. The syntagms and paradigms studied by structuralists had mostly to do with the semantics of texts, for example figures of speech [2]. Though literary structuralism may at first sight appear to be an antithesis to hermeneutics, in fact it maintained hermeneutic interpretation in its analysis of semantic structures, and thus did not eliminate the general semantic bias of literary studies.

A radically anti-semantic, computational approach to literary studies was developed as early as the 1950s by the German philosopher Max Bense. Bense was a pre-eminent theorist and practitioner of concrete poetry, and, as a teacher during the early years of the Ulm School of Design, also an influential figure in post-war modernist design in Germany. Bense combined Claude Shannon’s information theory with Charles Sanders Peirce’s semiotics into something which he called “information aesthetics” (1969) – some fifty years before Lev Manovich independently coined the same term. At its core was a concept he called “statistical aesthetics”, which he defined as the formal analysis of artworks according to technically measurable parameters of “innovation”, “information” and “communication” (Bense, 1969). To this effect, Bense drew on Shannon’s and Andrei Markov’s information theories, which maintained that the information contained in a data set can be measured according to the transition probabilities [3] of this set’s elements. If for example the data was a poem, then the average transition probability between its words and/or its letters would indicate its degree of information according to Markov and Shannon, or its degree of information-aesthetic innovation according to Bense. Using this criterion, James Joyce’s novel Finnegans Wake would show a high degree of innovation [4], while a run-of-the-mill newspaper article would not.

Aesthetic innovation could, in other words, be technically measured according to redundancy and compression ratios. Today, this method can be easily replicated on any home computer simply by “zipping” text files: If two plain-text files contain the same number of characters, the one that would yield the larger file size after compression would contain lower transition probabilities (according to Markov), less redundancy and more information (according to Shannon), and more aesthetic innovation (according to Bense). A ranking system for aesthetic innovation of literature could thus be easily programmed by running a collection of electronic books encoded in plain-text format through a generic data compression algorithm such as LZW (whose successor LZMA, the “Lempel-Ziv-Markov chain algorithm” [Wikipedia, 2015], actually implements Markov’s transition probabilities) or Zip’s DEFLATE algorithm, and then comparing the compression ratios of the different books.
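Such a ranking can be sketched in a few lines of Python using the standard library’s zlib module (a DEFLATE implementation). This is a toy illustration of the principle, not a reconstruction of Bense’s actual procedure; the sample texts are arbitrary stand-ins for a repetitive and a low-redundancy book.

```python
import random
import string
import zlib

def compression_ratio(text: str) -> float:
    """Compressed size divided by original size. A higher ratio means
    less redundancy, i.e. more information (Shannon) and, in Bense's
    terms, more 'aesthetic innovation'."""
    data = text.encode("utf-8")
    return len(zlib.compress(data, level=9)) / len(data)

def rank_by_innovation(corpus: dict) -> list:
    """Rank titles from most to least 'innovative' (Bense's criterion)."""
    return sorted(corpus,
                  key=lambda title: compression_ratio(corpus[title]),
                  reverse=True)

# Highly redundant text: compresses well, hence a low ratio.
repetitive = "a rose is a rose is a rose is a rose. " * 50

# Low-redundancy text (pseudo-random words): compresses poorly.
random.seed(42)
unpredictable = " ".join(
    "".join(random.choices(string.ascii_lowercase, k=6)) for _ in range(300)
)

corpus = {"repetitive": repetitive, "unpredictable": unpredictable}
print(rank_by_innovation(corpus))  # → ['unpredictable', 'repetitive']
```

As the essay goes on to argue, this measure rewards statistical unpredictability rather than literary quality, which is precisely its weakness when applied to, say, Gertrude Stein’s deliberately repetitive prose.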

By taking an information model originally designed for telecommunications engineering (namely, by Shannon in his capacity as an employee of the American telephone company AT&T) and seamlessly applying it to the field of art and art criticism, Bense intended to abolish “the old categorical difference between form and contents that keeps classical aesthetics and the art theories of the traditional humanities still alive” (1965: 277) [5].

By the 1960s and 1970s, statistical analysis of literature was no longer esoteric or provocative, but a well-established form of computational linguistics; in fact, a positivist branch of cybernetics and structuralism. Italo Calvino parodied this trend in his 1979 novel If on a Winter’s Night a Traveler with the character Lotaria, a literature student who claims to no longer read novels the conventional way, but rather only on the basis of their computed word statistics:

That way I can have an already completed reading at hand […]. An electronic reading supplies me with a list of the frequencies, which I have only to glance at to form an idea of the problems the book suggests to my critical study. Naturally, at the highest frequencies the list records countless articles, pronouns, particles, but I don’t pay them any attention. I head straight for the words richest in meaning; they can give me a fairly precise notion of the book. (186)

The next three pages of the novel reproduce Lotaria’s software-computed word lists. Rather than making up fictitious statistics, Calvino literally quoted part of an actual data set from a 1973 scholarly book by the pioneering computer linguist Mario Alinei (1971-1973). Since the 2000s, Franco Moretti has been defining his quantitative philology of “distant reading” (2005: 1) with claims similar to those made by Calvino’s character Lotaria for her statistical reading method, though Moretti’s intentions have nothing to do with parody. His “materialist conception of form” (92) is effectively an updated restatement of Bense’s information aesthetics.

It could be argued that computers have been instrumentalized as assault weapons in a series of philosophical crusades against hermeneutics, from Bense to Moretti as well as in the media theory of Friedrich Kittler. While hermeneutics, by their very definition and etymology, necessarily show a semantic bias, computers are not merely biased toward syntax, but are in fact limited to syntactical processing by their very design as calculating machines. The semantics of a text therefore only becomes computable after it has been converted into syntax, a process that can only be done by applying heuristics. For example, a computer program can only guess, by applying predefined probability criteria, whether the word “present” found in a sentence is the verb “present”, the adjective “present”, or the noun “present”. Though it may be relatively simple to design a heuristic for this particular problem, a further determination of whether the noun “present” refers to the moment in time or to a synonym of “gift” will require more complex artificial intelligence programming, with a high probability of false results.
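The “present” example can be sketched as a toy heuristic in which hand-written rules stand in for the predefined probability criteria mentioned above; the context words and rules here are invented for illustration, and real taggers use learned transition probabilities instead.

```python
def guess_pos_of_present(prev_word: str, next_word: str) -> str:
    """Toy heuristic guessing the part of speech of 'present' from its
    immediate context. Hand-written rules illustrate the principle of
    converting semantics into computable syntax, and how easily such
    heuristics fail."""
    if prev_word in {"to", "will", "shall", "would", "may"}:
        return "verb"        # "to present the findings"
    if next_word in {"moment", "day", "time", "situation", "tense"}:
        return "adjective"   # "the present moment"
    if prev_word in {"a", "the", "this", "my", "her", "his"}:
        return "noun"        # "a present" - but a gift or the current moment?
    return "unknown"

print(guess_pos_of_present("to", "the"))      # → verb
print(guess_pos_of_present("the", "moment"))  # → adjective
print(guess_pos_of_present("a", "for"))       # → noun
# Whether the noun means a gift or the current moment remains undecided:
# that distinction requires semantic, not merely syntactic, information.
```

The last comment marks exactly the boundary discussed in the text: the step from part-of-speech disambiguation to word-sense disambiguation requires far more complex, and more error-prone, artificial intelligence.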

These are, in short, “big data” technologies exactly like those currently being developed and applied by corporations and government agencies, from Google to the NSA. The more domain-specific and formalized a language is, the more reliable the results produced by semantic analysis algorithms will be. Therefore, an automatic translation of medical reports is quite feasible, as is “robot journalism” which uses the statistics of a football game to generate a news report in plain English for the sports section of a newspaper or news website. Computer linguistic analysis is, however, doomed to fail when applied to any semantics that deviates from the norm. The poetry of Velimir Khlebnikov and Gertrude Stein is a textbook case of how computer-linguistic algorithms for recognizing grammar and lexis can be made to fail: consider for example the sentence “A shawl is a hat and hurt and a red balloon and an under coat and a sizer a sizer of talks” from Tender Buttons (Stein 2014: 29).

By this virtue of failure, computer science for the first time makes it possible to state more precisely the limits of formal analysis and the necessities of hermeneutic interpretation, as opposed to vague or stereotyped romanticist criteria for assessing the literary or poetic quality of a text. This by implication retrospectively legitimizes Bense’s attempts at measuring aesthetic innovation, although the methods he chose for information-aesthetic analysis were at best questionable and at worst useless; for example, an analysis of Markov chain transitions and data compression rates applied to Stein’s writing would only reveal a high degree of redundancy, and thus a low degree of aesthetic innovation.

These issues are not only relevant to experimental poetry and language poetry (as arguably extreme cases). What the novelist John Barth calls the “experience of language, which can take us beyond the possibilities of reality” (1997: 165) applies just as well to realist prose, the most obvious example being literature in the fantastic genre. Computational semantic analysis and semantic metatagging of language based on information ontologies (i.e. Semantic Web technology) are, by design and definition, limited to predefined models of reality. They are generally unable to deal with ambiguity or with metaphor, metonymy, synecdoche and irony, the basic tropes of figurative speech (Vico, [1725] 1968: 129-131). In any digital project of edition philology, scholars will face situations in which the choice of a markup tag must necessarily involve subjective judgment. This is already the case when one must choose between the “<i>” and “<em> [emphasis]” tags as the HTML/XML equivalent of italic text in a print book. In such cases, humanities computing becomes underground hermeneutics.

After several decades of unrealized promises in the field of artificial intelligence, there is no reason to expect that computer programs can overcome their systemic limitation to syntactical processing, at least not if these programs are to run on computers as we currently understand them. Barth ([1975] 1997: 165) cites popular American song lyrics alongside Lewis Carroll as examples of pure literariness:

“Sun so hot I froze to death; Susanna don’t you cry.” “’Twas brillig, and the slithy toves / Did gyre and gimble in the wabe. / All mimsy were the borogoves, / And the mome raths outgrabe.”

Barth’s conclusion, “[t]ry making a movie out of those”, could be rephrased four decades later as “try to fit those into your metadata ontology”, or “try to translate them with Google Translate, or to summarize them using Microsoft Word’s automatic summary function”. Contrary to traditional hermeneutic positions, none of these challenges would be impossible, just as some of Barth’s claims were invalidated by the television adaptation of Carroll’s poem Jabberwocky in an episode of The Muppet Show [6]; but they would necessarily involve a degree of interpretation (if not poetry) that would effectively refute any assumption of digital philology as a purely formalist or quantitative endeavor.

Though other disciplines of humanities and cultural studies face similar issues in making use of quantitative analysis, the hurdle of semantics is particularly problematic in the case of philology. Computational analysis of instrumental music (whether of score notation or sound recordings), for example, is much less problematic than text analysis, since it deals with non-semantic data. Computational musicology has been practiced since the 1950s and is arguably the most well-established form of digital-computational humanities [7], but has as yet neither dominated nor revolutionized musicology and music criticism. Computational analysis of images, strongly advocated by Manovich (2009) under the concept of “cultural analytics”, remains problematic since any digitization of images must necessarily deal with technical issues related to scanning and camera/lens artifacts, optical resolution and color fidelity. Therefore most digitized images – especially those publicly available on the World Wide Web – do not provide reliable material to serve as data sets for quantitative analysis. The issue of semantics affects images just as much as text: wherever artificial intelligence algorithms are used for pattern and object detection, they (a) involve error-prone heuristics and (b) can only recognize the obvious: the established norm according to a predefined pattern. A hypothetical computer algorithm programmed in the early 20th century for recognizing visual art would not have recognized Malevich’s Black Square as a painting, or Duchamp’s readymades as sculptures. Likewise, an object recognition algorithm written in the early 20th century, or using heuristics based on early 20th-century visual data, would not have recognized an Eames or Marcel Breuer chair as a chair.
These limits of computational analysis correspond to what Calvino ([1967] 1986: 10) considers the limitation of computational synthesis, namely the “style of a literary automaton” whose “true vocation would be classicism”.

But if computational analysis cannot yield more than a preprocessing, or filtering, for subsequent hermeneutic interpretation, wouldn’t this then relegate digital literary studies in particular, and digital humanities in general, to the status of an auxiliary science? This was in fact the traditional status of edition philology; and the creation of digital scholarly text editions was already an established practice decades before the term “digital humanities” was even coined, with the Text Encoding Initiative (TEI) and its markup language dating back to 1987.

Chun (2013) addressed the problem of digital humanities as precarious labor, pointing out that its scholars risk the same kind of precarization as most other computer-based workers [8]. For example, while being a system administrator has gradually become a typical middle-class job, the NSA’s decision, immediately after Edward Snowden blew the whistle on its mass surveillance programs, to render 90% of its own system administrators obsolete and replace them with automated systems is an indication of what could soon happen to the profession in the rest of the labor market (Allen, 2013). In the field of commercial print and electronic publishing, a substantial proportion of document engineering tasks – including digital transcriptions of print books and manuscripts, as well as XML and EPUB authoring – was outsourced more than a decade ago to low-wage countries such as India. This also raises serious questions regarding the professional prospects, in Western countries, of students of computer administration, low-level computer programming, and digital humanities in general.


Post-digital literary studies

If “digital humanities” is a contentious term in light of the correct technical definition of “digital”, then “digital literature” is no less problematic in its contemporary, colloquial sense of poetry and fiction written for electronic computers – or more precisely, for computer display screens. Unlike computer music and new media art, such literature has largely remained a product of university literature departments. As Seiça (2016) shows, it is largely a self-referential system; not only are the authors and the critics usually one and the same individuals, in fact most of these critics go so far as to refer to their own literary work in their research papers. This is also a familiar issue in other disciplines of the humanities where the traditional divide between art theory and art practice has been removed. Practice-based Ph.D. theses of visual artists, for example, tend to end up being extended artists’ statements. This shows how difficult it is to truly overcome the divide between poetry and poetics (a divide which became institutionalized only in the 18th and early 19th centuries with the establishment of national philologies, and which is typical of the continental European university system where being simultaneously a literary writer and a philologist can still taint one’s scholarly reputation). Conversely, the current academic field of electronic literature involves the risk of literature departments fabricating their own objects of study (not unlike baroque scientists who fabricated the dragons for their cabinets of curiosities).

So far, the single exception of digital-age experimental writing reaching a broader, non-academic audience has been the “uncreative” poetics of Kenneth Goldsmith [9]. Goldsmith’s Internet-age conceptual poetry differs from academic electronic literature in much the same way as the contemporary visual art known as “Post-Internet” differs from electronic media art: as an art “after” the computer revolution rather than “on” the computer itself [10]. Goldsmith’s poetry and poetics are actually more digital (in the literal sense of the term) than much electronic poetry, since they are not concerned with screen typography and screen visuals – which in themselves are not even digital properties at all – but rather with the selection and combination of found alphabetical signifiers. Goldsmith is by no means the first poet to do so; as he clearly states, his approach is consistent with an established avant-garde writing tradition, from Lautréamont to conceptual art and concrete poetry. Electronic copy-and-paste, however, accelerates and intensifies this poetics beyond classical collage and montage to the point where it can, for the first time, truly transgress regimes of authorship and intellectual property in the same way that MP3 file sharing has done with music. In the avant-garde poetry of the early 20th century, when Tristan Tzara ([1920] 1975) proposed to create poetry by randomly rearranging single words cut out from newspaper articles, this transgression remained symbolic.

In his manifesto Being Boring (2004), Goldsmith claims that

I am the most boring writer that has ever lived. If there were an Olympic sport for extreme boredom, I would get a gold medal. My books are impossible to read straight through. In fact, every time I have to proofread them before sending them off to the publisher, I fall asleep repeatedly. You really don’t need to read my books to get the idea of what they’re like; you just need to know the general concept.

Yet Goldsmith’s reading performance undermines the ostensibly disembodied conceptualism of his “uncreative” poetics. His work The Body of Michael Brown, and the media outrage it provoked, is a case in point. At the Interrupt 3 experimental poetry symposium at Brown University in March 2015 (the title alludes to aesthetic interruption and disruption, as well as to the technical computer term “interrupt”), Goldsmith read a poem that was simply a rearrangement of the autopsy report of Michael Brown, the black teenager shot dead by police in Ferguson, Missouri in August 2014, an event which resulted in a wave of street protests and riots across America. Participants of the symposium described the atmosphere during the reading and subsequent discussion as calm, concerned and reflective [11]. Arguably, the artistic montage of such a document is not categorically different from Andy Warhol’s silkscreen reproductions of news photos showing police violence against black American protesters in the 1960s; artworks which have never generated controversy among civil rights activists.

The outrage over the alleged racism of Goldsmith’s poem and performance began on social media and was soon picked up by mainstream news media (Frank, 2015; Flood, 2015); Goldsmith subsequently apologized and would later call the poem a “flawed work” [12]. Most of Goldsmith’s critics insisted that there can never be such a thing as disembodiment in writing – a point which is central to postcolonial and feminist criticism. In 2015 and in the United States of America, Goldsmith could not have picked a text charged with more political and cultural semantics than Michael Brown’s autopsy report. His poem could be considered a laboratory experiment on the extremes to which a disembodied, dispassionate and quasi-machinic poetics can be combined with explosive material. The critical reactions suggest that there can be no truly disembodied or de-subjectified processing of text and symbol. The same argument has long been raised in critical media and software studies, for example regarding the politics of search engines [13].

Another reason why Goldsmith’s reading cannot be considered disembodied is the manner in which it was performed. The issue of ethnicity, gender and class – of the privileged standpoint of the writer vs. the subject of his poem – cannot be avoided when the reciting subject is Kenneth Goldsmith. His self-characterization as “the most boring writer that has ever lived” is undermined by his charisma as a public speaker and his mastery as a performer. In public performance, Goldsmith is – in opposition to his own poetics, if taken at face value – a storyteller. His work thus both practices and sabotages a digital poetics of linguistic selection and combination. Its medium is not what it seems, since his writing ends up serving as a notational score for a performance that is neither uncreative, nor copy-pasteable. It is highly embodied, unlike the silkscreened canvases made in the Warhol Factory – and unlike Warhol’s own public readings, performed by other people wearing Warhol’s trademark silver wigs.

Thus the critique that The Body of Michael Brown cannot claim conceptualist detachment from identity politics is not without merit. However, the poem’s flaw lies not in its conceptualism and uncreativity, but in its lack thereof. Of course, a piece of writing can never be fully uncreative, since the choice of selection is already creative. Yet in the case of The Body of Michael Brown, the creative elements which the poet chose to employ are precisely what render the piece problematic; all the more if one considers that ultimately, the meaning of a work constitutes itself after the fact, not in its signifiers but in its interpretations. In this sense, the actual performance and poetry of The Body of Michael Brown was the social media outrage it generated.

As an engaging performer who is a “boring writer”, Goldsmith practices in literature what has become a norm for other art forms after the Internet changed the rules of their production and distribution. Today, most professional musicians derive their income from live performances and sell records only as promotional material [14], as opposed to the period from the 1980s to the late 1990s, when concert tours were loss-making promotional vehicles for record sales. A similar development can be observed in independent and documentary filmmaking, where filmmakers no longer derive income from theater distribution and DVD sales, but rather as public speakers presenting their films on college campuses and in arthouse movie theaters [15]. It is entirely conceivable that the economics of literature will be similarly transformed, with writers making a living from public readings rather than from book sales. In poetry, this turn from mass media to performance as the business model began in the late 1980s and early 1990s with the emergence of poetry slams. Slam poets are arguably among the most truly popular poets today, but they rarely publish books.

A corresponding opposite trend of augmenting the reproducible medium rather than phasing it out can be observed in the field of literature, where books are increasingly becoming typographical and visual objects [16]. For most of the 20th century, this type of publishing was limited to a particular scene of visual artists making small-edition books, but since the turn of the millennium, such books have developed into their own genre of graphic design and contemporary literature [17]. Boutique-type stores for artists’ books can now be found in most major European, American, Australian and Asian cities. Likewise, zines (i.e. self-published do-it-yourself periodicals) have enjoyed a global comeback as a publishing format for writers, visual artists, designers and social activists [18]. The visual, tactile and craft qualities of these books and zines often overshadow or even completely supersede the writing. In other words, their analog information constituents marginalize their digital information constituents, much as in Goldsmith’s performance of his “uncreative” poetry. Such bookworks resist digital philology, just as computational word analysis is unable to process ambiguity and complex figures of speech (as pointed out earlier). A TEI XML edition of, for example, Dieter Roth’s Literaturwurst (1961), a bookwork created by mincing and rolling up book and magazine pages like a sausage, would be pointless if not impossible [19].

Such developments in music, film and publishing amount to a new contemporary dialectics of media: what at first glance seems to be a digital age, with a historical high point of digital textuality, is at the same time a period of crisis for literature in the form of written alphabetical (hence digital) text – arguably even for any literature focused on mass-reproduction media. Electronic literature of the variety produced in university literature departments thus ends up in a void between (a) the tendency for mainstream, mass-market digital electronic books to do away with visuality, typography and haptics altogether, and (b) the re-discovery of paper as “rich media” in the field of experimental audio-visual literature and arts. This contradicts the basic assumption from the 1990s and early 2000s of “rich media” as a product of the combination of digital software and electronic computer displays [20]. In fact, digital electronic devices – smartphones, e-readers, tablets – are now used mainly for “poor media” reading. Rather than taking over the market for illustrated books, they have mostly replaced ephemeral publishing formats such as newspapers, magazines and paperbacks.

The same logic of differentiation and hybridization, rather than a simple dichotomy of “old” vs. “new” media, can also be observed in the field of visual art. In contrast with the separate systems of “contemporary art” (typically shown at Biennials) and “media art” (typically shown at media art festivals), “Post-Internet” art effectively overcomes this divide. In his essay-manifesto The Image Object Post-Internet, the notable Post-Internet artist Artie Vierkant (1) points out that

Post-Internet […] serves as an important distinction from […] New Media Art […] New Media is here denounced as a mode too narrowly focused on the specific workings of novel technologies, rather than a sincere exploration of cultural shifts in which that technology plays only a small role. It can therefore be seen as relying too heavily on the specific materiality of its media.

Vierkant calls upon Post-Internet artists to instead “create projects which move seamlessly from physical representation to Internet representation, either changing for each context […] or created with a deliberate irreverence for either venue of transmission” (10).

All of the developments described above – from concert agencies replacing major record labels, to print books as the new multimedia – could by and large be called “post-digital” phenomena. They exemplify a new functional differentiation of publication forms after networked electronic devices have become objects of everyday use. Although the term “post-digital” is prone to misunderstanding and lacks terminological precision, it still usefully describes a contemporary critical revision of “new media” and its progressive Hegelian narrative of techno-cultural upgrade cycles.

If such a techno-Hegelian model is indeed outdated, then this observation in turn calls for a critical examination of the degree to which “digital humanities” and “digital literary studies” still participate in such an outdated narrative of progress; a narrative which increasingly reveals a gap between broader culture on the one hand and, on the other, the world of policy makers and institutional funding bodies that still clings to a 1990s/2000s “new media” belief in economic growth through the development of digital technologies [21].

To summarize: binary juxtapositions of “digital” vs. “analog” (however technically accurate these terms may or may not be) as “new” and “old” media have become problematic if not outdated. Instead, this dichotomy has been superseded by many complex relationships and mutual dependencies between digital, analog, “new”, “old” and in-between forms of production and distribution. In some areas such as music, graphic design and contemporary art, this new post-digital condition is widely acknowledged. The traditional boundaries between media design and graphic design, electronic and non-electronic music, contemporary art and new media art are rapidly collapsing. Terms such as “Post-Internet” and “post-digital” can be used to describe this new situation.

In the case of media studies, this raises the question of whether it is useful to continue pursuing such sub-disciplines as “new media studies” and “software studies”. Conversely, literary studies, as a more established and therefore more conservative discipline, is confronted with the difficulty of phasing out the concept of “new media” while most scholars still live in an old-media world [22].



Literature, too, could benefit from a “sincere exploration of cultural shifts in which […] technology plays only a small role”, to quote Vierkant. The debate as to what exactly qualifies as literature has been simmering since at least the early 20th-century avant-garde movements in the arts, while the structuralist extension of poetics to everyday phenomena, for example by Jakobson (1960) and Barthes ([1957] 1972), was discontinued in literary studies and only taken up by cultural studies in non-philological ways. Contemporary experimental practices such as zine and book making, the proliferation of memes in Post-Internet “surf clubs”, or the writing of Afrofuturist time travel manuals [23], are all indications of how the traditional Western arts system and the corresponding humanities disciplines are once again being confronted with their own structural anachronisms. Just as in the late 18th and early 20th centuries, when hermeneutics and structuralism respectively transformed not only the methodology of literary studies but also its basic concepts, literary studies may well have to reconsider its object of study in the 21st century.

The traditional focus of literary studies on “belles lettres” is also the origin of a hierarchical divide between criticism as fundamental research on one hand, and “digital humanities” as an auxiliary science on the other. While the danger described by Chun (2013), namely that digital humanities students will end up in call-center-type jobs, is real, one may also question this hierarchy and establish practice-oriented literary studies on an equal footing with criticism. Digital Literary Studies could, for example, become an avant-garde of high-quality public domain publishing, not so much of scholarly papers but rather of electronic critical text editions; a kind of philology which currently exists only outside the official humanities, as the samizdat digital humanities of anti-copyright text repositories such as AAAAARG, Monoskop Log and Kenneth Goldsmith’s UbuWeb, all of which cater to an increasing global demand from non-traditional places where the arts and critical theory are studied – from art and design schools without research libraries to artist-run and activist-run spaces in non-Western countries [24].

Such a school of practice-oriented literary studies could be deliberately low-tech, since its technical requirements can be met by existing third-party Open Source software: text editors, distributed revision control systems such as Git, XML validators, XSL processors. This would free digital humanities projects from the burdensome task of having to develop their own tools, only to see most of these fail or become rapidly obsolete.
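To illustrate how low the technical bar actually is, the following is a minimal sketch (the TEI-like fragment and function name are hypothetical, not drawn from any actual edition) of the kind of well-formedness check that Python’s standard library already provides out of the box; full schema validation against a DTD or RELAX NG schema would still rely on the external validators mentioned above, such as xmllint.

```python
# Minimal sketch: checking the well-formedness of a hypothetical
# TEI-like critical-edition fragment with Python's standard library.
# This checks well-formedness only; schema validation requires
# external tools such as xmllint or the lxml package.
import xml.etree.ElementTree as ET

SAMPLE = """<TEI>
  <teiHeader><title>Tender Buttons</title></teiHeader>
  <text><l n="1">A carafe, that is a blind glass.</l></text>
</TEI>"""

def is_well_formed(xml_string):
    """Return True if the string parses as well-formed XML."""
    try:
        ET.fromstring(xml_string)
        return True
    except ET.ParseError:
        return False

print(is_well_formed(SAMPLE))          # well-formed fragment
print(is_well_formed("<l>unclosed"))   # broken fragment
```

Combined with a Git repository for revision history, such a script already covers a large part of the editorial quality control that a digital critical edition requires.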


AAAAARG. 16 July 2015.
AFROFUTURIST AFFAIR (2014). Do-It-Yourself Time Travel (mini zine). Philadelphia: Metropolarity.
ALINEI, Mario (1971-1973). Spogli Elettronici dell’Italiano delle origini e del Duecento. Bologna: Il Mulino.
ALLEN, Jonathan (2013). “NSA to Cut System Administrators by 90 Percent to Limit Data Access.” Reuters. Thomson Reuters. 08 Aug. 16 July 2015.
BARTH, John (1997). The Friday Book. [1975]. Baltimore: Johns Hopkins University Press.
BARTHES, Roland (1972). Mythologies. [1957]. New York: Hill and Wang.
BENSE, Max (1965). Aesthetica. Baden-Baden: Aegis.
___________ (1969). Einführung in die informationstheoretische Ästhetik – Grundlegung und Anwendung in der Texttheorie. Reinbek: Rowohlt.
CALVINO, Italo (1986). The Uses of Literature. [1967]. San Diego, New York, London: Harcourt Brace & Company.
___________ (2010). If On A Winter’s Night A Traveller. [1979]. New York: Random House.
CHUN, Wendy Hui Kyong (2011). Programmed Visions: Software and Memory. Cambridge: MIT Press.
CONNOR, Michael (2013). “What’s Postinternet Got to Do with Net Art?” 01 Nov. 16 July 2015.
COOPMAN, Ted M. (2014) “Rogue Scholar Manifesto 1.0.” 16 July 2015.
DE SAUSSURE, Ferdinand (2011). Course in General Linguistics. [1916]. Trans. Wade Baskin. New York: Columbia University Press.
DRUCKER, Johanna (2004). The Century of Artists’ Books. New York: Granary Books.
EUROPEAN UNION (2013). “Research & Innovation – Participant Portal.” ICT-21-2014. 11 Dec. 16 July 2015.
FLOOD, Alison (2015). “US Poet Defends Reading of Michael Brown Autopsy Report as a Poem.” The Guardian. 16 Mar. 16 July 2015.
FRANK, Priscilla (2015). “What Happened When A White Male Poet Read Michael Brown’s Autopsy As Poetry.” The Huffington Post. 17 Mar. 16 July 2015.
FULLER, Matthew, Andrew Goffey (2012). Evil Media. Cambridge: MIT Press.
GOLDSMITH, Kenneth (2004). “Being Boring.” Electronic Poetry Center Buffalo. 16 July 2015.
GOODMAN, Nelson (1968). Languages of Art: An Approach to a Theory of Symbols. Indianapolis: Hackett Publishing.
HIGGINS, Hannah, Douglas Kahn, eds. (2012). Mainframe Experimentalism: Early Computing and the Foundations of the Digital Arts. Berkeley, CA: University of California Press.
“International – RISM” (2015). RISM. 16 July 2015.
JAKOBSON, Roman (1956). “Two aspects of language and two types of aphasic disturbances.” Fundamentals of Language. Eds. Roman Jakobson and Morris Halle. Gravenhage: Mouton. 54-82.
___________ (1960). “Closing statement: Linguistics and poetics.” Style in Language. Ed. Thomas A. Sebeok. Cambridge: MIT Press. 350-377.
___________, Claude Lévi-Strauss (1962). “‘Les Chats’ de Charles Baudelaire.” L’Homme: 5-21.
KHLEBNIKOV, Velimir, Charlotte Douglas, and Paul Schmidt (1990). The King of Time: Selected Writings of the Russian Futurian. Cambridge: Harvard University Press.
LEEDS, Jeff (2007). “Madonna Nears Deal to Leave Record Label.” The New York Times. 10 Oct. 16 July 2015.
MANOVICH, Lev (2009). “Cultural Analytics: Visualising Cultural Patterns in the Era of ‘More Media’.” Domus March.
MONOSKOP LOG (2015). Monoskop. 16 July 2015.
MORETTI, Franco (2005). Graphs, Maps, Trees: Abstract Models for a Literary History. London and New York: Verso.
“PRINTED MATTER” (2015). 16 July 2015.
SAMORA, Alex (n.d.). “FANZINES.” 16 July 2015.
SCHULMAN, Kori (2011). “A Celebration of American Poetry at the White House.” The White House. 11 May. 16 July 2015.
SEIÇA, Álvaro (2016). “Digital Poetry and Critical Discourse: A Network of Self-References?” MATLIT 4.1: 95-123.
SHANNON, Claude E., and Warren Weaver (2015). The Mathematical Theory of Communication. [1948]. Urbana: University of Illinois Press.
STANDING, Guy (2011). The Precariat: The New Dangerous Class. London: Bloomsbury Academic.
STEIN, Gertrude (2014). Tender Buttons: The Corrected Centennial Edition. [1914]. San Francisco: City Lights Publishers.
TZARA, Tristan (1975). “Pour faire un poème dadaïste.” Oeuvres complètes. [1920]. Paris: Gallimard. 382.
UBUWEB. 16 July 2015.
VICO, Giambattista (1968). The New Science. [1725]. Ithaca: Cornell University Press.
VIERKANT, Artie (2010). “The Image Object Post-Internet.” 16 July 2015.
WIKIPEDIA (2015). “Lempel-Ziv-Markov Chain Algorithm.” Wikimedia Foundation, Inc. 24 June. 16 July 2015.



[1] As reconstructed by Higgins and Kahn (2012).

[2] Prominent examples include Jakobson’s (1956) own definition of metaphor and metonymy; the analysis by Jakobson and Lévi-Strauss (1962) of the metonymical structure of Baudelaire’s Les Chats; and the structural analysis of everyday cultural phenomena by Barthes ([1957] 1972).

[3] I.e. the statistical likelihood of changing from one state to another. In the English language for example, there is a high transition probability that the letter “e” will be followed by the letter “r”, and a low transition probability for “e” to be followed by “k”.
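The notion of transition probability can be made concrete with a small sketch (the function name and sample string are purely illustrative, not from any corpus discussed here): estimating transition probabilities as relative frequencies of adjacent letter pairs.

```python
# Sketch: estimating letter transition probabilities from a sample
# text, i.e. the likelihood that one letter is followed by another.
from collections import Counter, defaultdict

def transition_probabilities(text):
    """Map each ordered pair (a, b) to P(next letter is b | current letter is a)."""
    pairs = Counter(zip(text, text[1:]))   # counts of adjacent letter pairs
    totals = defaultdict(int)              # occurrences of each first letter
    for (first, _second), n in pairs.items():
        totals[first] += n
    return {(a, b): n / totals[a] for (a, b), n in pairs.items()}

probs = transition_probabilities("the letters here")
# e.g. probs[("e", "r")] estimates the probability that "e" is followed by "r"
```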

[4] Shannon and Weaver noted the “compression of semantic content” of Finnegans Wake ([1948] 2015: 56).

[5] “[…] die alte kategoriale Differenz zwischen Inhalt und Form, von der die klassischen Ästhetiken und Kunsttheorien der traditionellen Geisteswissenschaften immer noch leben” (“the old categorical difference between content and form, on which the classical aesthetics and theories of art of the traditional humanities still subsist”).

[6] “Episode 5.06”. Dir. Jim Henson and Frank Oz. The Muppet Show. 1980. Television.

[7] For example, The International Inventory of Musical Sources (RISM, 2015), now a digital humanities/musicology project, was founded in 1952 and includes working groups in 36 countries.

[8] The concepts of “precarization” and a social class of the “precariat” describe a new type of lower class that is no longer defined by industrial wage labor but by a permanently insecure status as an underpaid independent contractor. These concepts have their origin in social movements and are more systematically described by Standing (2011).

[9] As demonstrated for example by the fact that Goldsmith was invited to read his work at A Celebration of American Poetry hosted by U.S. President Barack Obama and First Lady Michelle Obama in the White House in 2011 (Schulman, 2011).

[10] In 2005, the artist Marisa Olson characterized the poetics of Post-Internet art as follows: “What I make is less art ‘on’ the Internet than it is art ‘after’ the Internet. It’s the yield of my compulsive surfing and downloading. I create performances, songs, photos, texts, or installations directly derived from materials on the Internet or my activity there” (quoted in Connor, 2013).

[11] Johanna Drucker in an e-mail to the author, June 2015.

[12] During a panel discussion after his reading at the Poetry International festival in Rotterdam, The Netherlands, June 11th, 2015.

[13] For example by Chun (2011) and Fuller and Goffey (2012).

[14] A pioneer of this development was Madonna who, as reported by Leeds (2007), discontinued her record label contract in 2007 to exclusively sign up with a concert agency.

[15] The author of this article knows several independent U.S. filmmakers who make their living according to this business model.

[16] See Drucker (2004).

[17] As demonstrated for example by the New York Art Book Fair, an event dedicated to artists’ books rather than conventional exhibition catalogs or books on art, and with 27,000 visitors annually according to Printed Matter (2015).

[18] As documented, for example, on the blog by Samora (n.d.).

[19] In November 2014, Leuphana Universität, in Lüneburg, Germany, hosted a conference titled The Post-Digital Scholar on the particular media economics of scholarly publishing. Since most of the debates focused on Open Access humanities publishing and the challenge of making such publishing compatible with existing academic reputation systems and career paths, it seems as though a loss of economic value in publishing, comparable to that caused by MP3 and music streaming services, is about to take place in academia. The humanities may well have to invent their own equivalent of vinyl records and artists’ books – perhaps literally if one considers theory bookworks such as McLuhan’s/Fiore’s The Medium is the Massage, Derrida’s Glas and Avital Ronell’s Telephone Book.

[20] The term “rich media” was coined in the 1990s to refer to advertising using visual animations on the World Wide Web; the term was later widely applied to promote web content made for the multimedia browser plugin Flash, which was developed by Macromedia and later acquired by Adobe Systems.

[21] As seen, for example, in the 658 million Euro research grant for “gaming/gamification technologies” within the Horizon 2020 research funding program of the European Union (2013).

[22] Likewise in art and design, “Post-Internet” and “post-digital” are prone to be misunderstood by design and art traditionalists as a license to go back to business as usual.

[23] Afrofuturist Affair (2014).

[24] Some of these points are addressed in the Rogue Scholar Manifesto by Coopman (2014).