Ways of Studying Games from a Communication Perspective (Qualifying Paper #2)

The field now known as game studies gradually formed around individuals writing about digital video games. These individuals, some writing since the 1990s, came together with the creation of journals (Game Studies in 2001; Games and Culture in 2006), edited volumes (Cassell and Jenkins 1998; Wolf and Perron 2003; Wardrip-Fruin and Harrigan 2004), and organizations with conferences (DiGRA in 2003; FDG in 2009). To create a history, the field pulled certain writings on digital software from the 1990s, a host of Internet and computer culture works from the 1970s to 1990s, select portions of the study of play and leisure, and the history of material video games (1950s-present) and board games (~3000BC-present). As a formation consisting of individuals from different disciplinary backgrounds, game studies is interdisciplinary. This paper questions the extent of that interdisciplinarity by looking at how the current moment of game studies is still tied to disciplinary subfields.

In an introductory article to their edited issue of the journal Games and Culture, Thomas Malaby and Timothy Burke (2009) argue that game studies is at a crossroads: it has been interdisciplinary, but it is also at the cusp of cementing and sedimenting into a discipline. The field is full of voices from different disciplinary backgrounds, but the conversations move forward due to a mutual understanding of the game artifact. This mutual understanding across different backgrounds is the main reason the field claims interdisciplinarity. And yet, Malaby and Burke write both that certain groups within the field have already sought to lay down disciplinary borders and fences, and that the creation of stable disciplines is historically normal and professionally safe. Other fields have sought (or fought about) the disciplinary route (Communication among them), and younger academics tend to seek some sort of disciplinary home for career safety. In contrast to this drive toward disciplinarity, Malaby and Burke highlight the maintained interdisciplinarity of the authors within their edited journal issue. However, their enthusiasm for the various methods they consider legitimate for the study of virtual worlds misses the decidedly disciplinary outlook of the other subfields within game studies, and the lack of discussion between those subfields. Where they see interdisciplinarity, I see enclaves of disciplinarity.

As I see them, these enclaves of disciplinarity exist as topics of research: ontological studies of play through philosophy, where play is linked to biologically universal theories of development, and universality through studies of games as code; art, rhetoric, persuasion, and the question of what games are and what they do through art history, design, and critical studies of communication; media effects such as violence and addiction through psychology, cognitive science, and traditional communications; gaming cultures, virtual worlds, and the collapse of real and virtual boundaries through anthropology, economics, and sociology; and political and cultural issues as translated between real and virtual worlds through cultural studies and critical studies of communication. Each topical enclave is dominated by disciplinary methods; each tends to signal the importance of a particular moment in the life of games over the other moments; and each has a different focus. The different foci form the subsections of this paper: what games are, what games do, what games do to players, what we do in games, and how games and the world interact.

In this paper I will outline game studies as it is now, how these enclaves stem from certain disciplinary origins, and how there are built-in universalities that I argue are a problem. However, I also write of how the field might be heading in certain new directions away from reductive universalities. These new directions imply a resurgence of particularity, location and interdisciplinarity, all of which lead to a better understanding of how and why games matter to people and the world.


Ontology – What Games Are

The subfield that makes categorical definitions of play and games is one of the most active and important. Four seminal theorists are Johan Huizinga, a Dutch cultural historian; Roger Caillois, a philosophically and literarily oriented sociologist; Brian Sutton-Smith, who studies play in a general sense from a more interdisciplinary perspective; and Jesper Juul, who works specifically on digital games. The four span the early twentieth century to the early twenty-first century, and they show a general movement in how theory has changed over the duration of the formal study of games.

Writing Homo Ludens in the late 1930s as the world once more approached war, Johan Huizinga attempted to see just “how far culture itself bears the character of play” (Huizinga 1955, ix). Not simply innocent or educative, play is a key element of culture that changes depending on the situation, but is always tied to competition and even war. While Huizinga’s link between play and the ‘natural’ progression of civilization is problematic, his overall claim about the rules of play is often a starting point for later conceptualizations of both play and games. According to Huizinga:

play is a voluntary activity or occupation executed within certain fixed limits of time and place, according to rules freely accepted but absolutely binding, having its aim in itself and accompanied by a feeling of tension, joy and the consciousness that it is “different” from “ordinary life.” (Huizinga 1955, 28)

Game studies takes two important points from Huizinga’s definition: play as free, and the magic circle.

The first is that play is free, but this is not in opposition to payment and work. Seriousness, which Huizinga links in a limited sense to ‘work,’ “seeks to exclude play, whereas play can very well include seriousness” (Huizinga 1955, 45). For Huizinga, “Play is a thing by itself. The play-concept as such is of a higher order than is seriousness” (Huizinga 1955, 45). The second concept taken up by later scholars is Huizinga’s ‘magic circle,’ the arena in which play takes place. The actions of play take place within the circle, and outside of it play cannot exist. It is Roger Caillois who moves the discussion from play to games, and who then makes a hard distinction between games and work. Caillois and later theorists harden the line of Huizinga’s magic circle denoting an inside and an outside, and through further discussions around the topic we arrive at the current generation of scholars’ arguments about playbor (Dyer-Witheford and De Peuter 2009; Dibbell 2006), xreality (Coleman 2011), and the porousness of MMORPGs[1] (Taylor 2006; Boellstorff 2008). These questions of what play is, what purpose it serves, how it relates to work, and whether these are all contingent on a particular world system (be it the neoliberal capitalist model or otherwise) are of increasing importance within a certain subset of game studies.

In the second seminal text of game studies, Man, Play and Games (2001 [1958]), Roger Caillois criticizes Huizinga for focusing particularly on the idea of play as having value, as it led Huizinga to miss alternate forms of play and games. In contrast, Caillois creates a classification of games that, while problematic in how it subsumes certain generic differences under a “fundamental kinship” (Caillois 2001, 13), certainly opens up the discussion to a greater number of types of games. Caillois initially outlines four rubrics, classes, or categories (he uses all three terms at points, which indicates a certain fluidity in the classificatory system) of play: agon (competition), alea (chance), mimicry (simulation), and ilinx (vertigo). These four classes are then subject to a sliding ratio between paidia (free play) and ludus (codified games). While Caillois’ Structuralist classificatory scheme is an interesting expansion of Huizinga’s initial interest in play, Caillois’ location of mimicry and ilinx with primitive civilizations and agon and alea with advanced civilizations, along with his treatment of certain categories as essential for the development of culture, is problematically teleological in the same way as Huizinga’s work decades earlier. He also notes, in a partially essentialist mode, that certain categories have been key to the development of certain cultures. Caillois defines play with six qualities: free (voluntary), separate (circumscribed within certain space-time limits, in a similar magic circle), uncertain (the conclusion is unknown), unproductive (creating neither goods nor wealth), governed by rules (breaking with real-life rules), and make-believe (aware of being outside real life) (Caillois 2001, 10). Key for later game studies scholars are both Caillois’ particular classification scheme and his general definition of play.

A prolific writer on the concept of play, Brian Sutton-Smith is often invoked as a third seminal author in a genealogy of game ontology. In his book The Ambiguity of Play, he understands play as an ambiguous, diverse act depending on the rhetoric being used in the discursive context. In the core of the book, which combines decades of the author’s previous studies, Sutton-Smith elaborates that play is discussed through seven rhetorics: progress, fate, power, identity, imaginary, self, and frivolity. Some of these categories overlap with Caillois’ (power is agon, fate is alea, identity and imaginary contain parts of mimicry, and self is in part linked with ilinx), but progress and frivolity are distinct in their interaction with work and use. In contrast to Huizinga and Caillois, Sutton-Smith does not relegate play to a lack of purpose. Rather, the ambiguity of its purpose is itself a purpose. Sutton-Smith invokes Gregory Bateson’s oft-quoted phrase involving play, dogs, and nipping when he notes very early on that “Animals at play bite each other playfully knowing that the playful nip connotes a bite, but not what a bite connotes.” The assumption that follows is that the meaning is clear to the dogs: the playful act is not the painful act. However, Sutton-Smith points toward the ambiguity of play by invoking performance studies scholar Richard Schechner’s claim that “a playful nip is not only not a bite, it is also not not a bite” (Sutton-Smith 1997, 1). Despite its playfulness, a nip is still not the act of not biting. The act of nipping, of playing, has ambiguous meaning. Sutton-Smith concludes his meta-analysis of the various overlapping and interconnected rhetorics of play by writing that “variability is the key to play, and that structurally, play is characterized by quirkiness, redundancy, and flexibility” (Sutton-Smith 1997, 229).
The variability and quirkiness of play are linked to the biological adaptability of human evolution: “play’s variability acts as feedback reinforcement of organismic adaptive variability in the real world” (Sutton-Smith 1997, 230). Unlike the video game industry, where play and games are understood as relating solely to entertainment and fun, and where the designer’s job is to code a fun escape from life’s problems (Koster 2005), Sutton-Smith links play to the world, people, and biology. Play is not simply fun; play is not ‘not work;’ play is something with purpose.[2]

As the most recent theorist of play and games, Jesper Juul is also the first generally quoted theorist to come from a purely digital game perspective with a doctorate in video game studies. In contrast to the above theorists, Juul attempts to understand what a video game is, not simply what a game is, and not the nature of play. Despite the different goal, Juul’s contribution similarly begins with a definition of what is and is not a game. He takes the definitions of seven previous theorists (including Huizinga, Caillois, and Sutton-Smith) and distills them into what he considers to be the core features. This meta-definition is his “classic game model,” which includes six features: rules; variable, quantifiable outcome; valorization of outcome; player effort; player attached to outcome; and negotiable consequences:

A game is a rule-based system with a variable and quantifiable outcome, where different outcomes are assigned different values, the player exerts effort in order to influence the outcome, the player feels emotionally attached to the outcome, and the consequences of the activity are negotiable. (Juul 2005, 36)

Juul’s definition clearly indicates how ‘play’ has been translated fully into ‘games’ within much of game studies. Noble war, free-form play, and ring-a-ring o’ roses are all excluded from his categorization of games because they fail to include certain of the six key features. Even chance-based gambling, an obvious form of Caillois’ alea, is relegated to borderline game status for Juul. However, while Juul understands the problems of this very tight ‘classic game model,’ in that it ignores certain games and types of play, he ignores that his definition does not allow for the world and context. Juul does not account for Salen and Zimmerman’s (2003) tripartite rubric of game, design, and context (where context is culture), or Raessens and Goldstein’s (2005) similar separation that includes context/culture. In Juul’s final analysis, “the rest of the world” has almost entirely been removed as an “optional” element (41). That Juul pushes context, culture, and the world out of his essential ontology of games is unfortunate, but it is quite common for much of the work in game studies; there is a visible distinction between work in the field that argues for universalities and essential perspectives on the one hand, and work that looks at particular locations and iterations on the other. It is my contention that we need to pay more attention to the latter despite the dominance of the former in the past decade of research.

The main problem with such ontologies of play is that they seek to render play as universal across cultures and locations. While Huizinga has a vaguely located understanding of play, in that he is discussing a Western evolutionary theory, Caillois makes a much more structuralist argument that borders on universality across cultural particulars. This difference between located particularity and general universality is reproduced in later theorists, but the ontologies tend to emphasize the side of universality. There are two anchors of play to universality: the first links play to biology, and the second links digital games to software code.

The first anchor of play to universality is biological. Sutton-Smith’s ambiguity is a built-in element that should allow a more particular understanding of any type of play, but his conclusion links play to biological evolution and naturalized universality. Sutton-Smith’s rhetoric of play as progress is absolutely essential to many subfields of game studies. The rhetoric of play as progress links play to learning how to do things that will be ‘not play’ later: dolls become babies, play becomes housekeeping, practice fighting becomes real fighting, play becomes work, et cetera. Many media effects studies, which I will discuss below, depend on an understanding of play as biologically universal. The active media approach to media effects holds that technological mediation necessarily affects all (passive) players in the same way. The location of play, the particulars of the players, and the details of the game are all subsumed under a reductive understanding that people play, and are affected by, games in the same way.[3] Play, understood through this rhetoric, is universal; we all develop, and we all learn to play in the same progressive way. Therefore there must be a universal meaning of games.

The second anchor to universality runs through the mutable nature of games as coded software.[4] For example, Juul’s idea of rules and code is linked to an understanding of software and the machine as universal. The machine is ordered. This concept comes from two of Lev Manovich’s five principles of new media: modularity and variability.[5] Modularity holds that new media texts, such as games, are comprised of a host of smaller elements. These elements are combined into the larger text, but exist as themselves. This can be easily seen in the way that software is coded: not as a continuous file, but as a file that calls smaller files: subroutines, functions, procedures, or scripts (Manovich 2001, 30-1). Individual modules can be easily replaced, which brings up interesting possibilities regarding versions, originals, and derivatives. Video game localizations serve as an interesting example, as the national linguistic assets of a game are modular and can be swapped out for another set of national linguistic assets. The program itself, the game’s essence, does not change, so the game is considered the same thing.

The principle of modularity links up with Manovich’s fourth principle, variability, to create what I understand as a mutable universality through digital manipulation. Manovich lists numerous examples of variability, but scalability, which he calls the most basic, is also the most understandable (Manovich 2001, 37-9). A digital image can be seen in all of its pixelated glory at full resolution, or reduced to lesser resolutions all the way down to a miniature desktop icon. This is enabled by the scalability of the digital image, which comes from the necessity of new media to be variable. Again, in terms of localization, we can see variability in the use of digital video games’ modularity to enable variations of the application. Modules are replaced to create the idea of a mutable video game. However, key to the difference between old and new media is not that this variability happens, but that “there exists some kernel, some structure, some prototype that remains unchanged throughout the interaction” that can be considered the essence of the new media text (Manovich 2001, 40). While everything else may be manipulated without loss, the kernel is considered the essential core of the text/game, and universally understood.
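The logic of modularity, variability, and the unchanged kernel described above can be sketched in code. The following is a minimal illustration, not drawn from any real game or engine; all names, keys, and strings are hypothetical placeholders invented for this example.

```python
# A minimal sketch of Manovich-style modularity and variability in
# game localization. The locale modules below are interchangeable
# assets; render_title_screen() is the unchanged "kernel."
# All identifiers and strings are illustrative assumptions.

LOCALES = {
    # Each locale is a swappable module of linguistic assets.
    "en": {"start": "Press Start", "win": "You Win!"},
    "ja": {"start": "スタートボタン", "win": "クリア！"},
}

def render_title_screen(locale: str) -> str:
    """The kernel: identical logic regardless of which module is plugged in."""
    assets = LOCALES[locale]          # swap in a locale module
    return f"[TITLE] {assets['start']}"

# Variability: the "same" game appears in different versions,
# while the kernel (the function's logic) never changes.
english = render_title_screen("en")   # "[TITLE] Press Start"
japanese = render_title_screen("ja")  # "[TITLE] スタートボタン"
```

On this view, the localized versions are treated as one game because only modules change; the kernel that Manovich identifies as the text's essence is untouched.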

In contrast to these overarching, universal conceptualizations of play and games, there exist recent studies that delve into the spatially located and contextually specific nature of play and games. Three of these are Mary Flanagan’s work on critical play, Thomas Malaby’s study of gambling in Greece and his contention that games must not be reduced to play, and Alex Galloway’s concept of social realism in games. Each of the following studies problematizes a universal understanding of play, and the ontologies discussed above, but they are not a unified front: Malaby does not follow Galloway’s focus on play and action; Flanagan focuses on play rather than on the video games of Galloway’s work; and Flanagan works on critical design interventions into games, in contrast to Malaby’s anthropological studies of how games change and are changed by groups of people.

As a game designer and theorist, Mary Flanagan looks at located practices of play. As Flanagan notes, “while the phenomenon of play is universal, the experience of play is intrinsically tied to location and culture” (Flanagan 2007, 3). Games are played in particular spatial and cultural contexts, and this cannot be divorced from analysis (or from an ontology of play and games). Her work then moves toward how to design games that deal with meaningful social issues (games for particular places; games that deal with the issues of particular places). In her book Critical Play: Radical Game Design, Flanagan argues that game design practice must start and end with values and goals. These values must come from social contexts, and the act of playing must support the values the designer sets out to interact with. Flanagan looks at a very wide spectrum of cultural ‘games,’ which she defines incredibly broadly as “situations with guidelines and procedures” (Flanagan 2009, 7). This allows her to consider artwork, playing house, board games, alternate reality games, and finally computer-based games. It is important to note that Flanagan, like Huizinga, Caillois, and Sutton-Smith, but unlike Juul and many of the other scholars I will discuss later, does not limit her study to digital games, despite their being the focus of parts of the game studies field at present. Additionally, Flanagan’s tight focus, which runs from design to situated play, leaves little question of the importance of context, but it also means that games, for her, do not necessarily travel between contexts. Translation involves redesign, which leads to a new game.

In Gaming: Essays on Algorithmic Culture, Alex Galloway defines video games as essentially related to action. “If photographs are images, and films are moving images, then video games are actions. Let this be word one for video game theory” (Galloway 2006, 2). Video games run through the actions of a computer, and must be actively played by a user. These two sides can be doubled into diegetic and nondiegetic (acts within and outside the game world) to form a four-quadrant theory: ‘diegetic machine acts’ such as running background noise at certain points, ‘nondiegetic operator acts’ like configuring options and settings, ‘diegetic operator acts’ of playing the game itself, and ‘nondiegetic machine acts’ such as game over or loading screens. So far, this bears much similarity to the universal ontological definitions above, but at the heart of Galloway’s book is a discussion of social realism in games. In that chapter he moves from the argument of visual representation toward including embodied, ludic action. Opposed to the impossible task of ‘realisticness,’ which is the task of mimetically representing the world visually, Galloway understands realism as a form of social critique that is linked to both form and context.[6] Galloway suggests “there must be… some type of fidelity of context that transliterates itself from the social reality of the gamer, through one’s thumbs, into the game environment and back again. This is what I call the ‘congruence requirement,’ and it is necessary for achieving realism in gaming” (Galloway 2006, 78). His point is brought home when he compares the ‘realism’ of two first-person shooter (FPS) games that both have great ‘realisticness’ but differing amounts of ‘realism:’ America’s Army, created in conjunction with the United States Army, and Special Force, created by the Hezbollah Central Internet Bureau. Galloway argues:

video games absolutely cannot be excised from the social contexts in which they are played. To put it bluntly, a typical American youth playing Special Force is most likely not experiencing realism, whereas realism is indeed possible for a young Palestinian gamer playing Special Force in the occupied territories. (Galloway 2006, 84)

In games, realism is inextricable from action in context. In terms of ontology, no meaningful theory of games can be taken away from this contextual action and experience. Games never simply ‘are’; games are always in context.

Thomas Malaby argues that games should not be understood in relation to play or rules, but as contingent cultural practices in the process of becoming. He believes the link of games to play is unhelpful, because play is always assumed to be its own activity. In contrast, Malaby’s dissertation research on gambling in a small Greek town leads him to argue that games and life are not separable: games like poker inform real-life activities like politics. In a short essay following this research, Malaby defines a game as a “semibounded and socially legitimate domain of contrived contingency that generates interpretable outcomes” (Malaby 2007, 96). In opposition to Huizinga’s magic circle and Caillois’ typifications, Malaby argues that games are always integrated with life (there is no magic circle), but in contextual ways (the type of game is related to context, not to an essence of the game itself). Additionally, in opposition to Juul’s focus on set rules, which Malaby calls a “misplaced formalism” (Malaby 2007, 103), he argues that “games are grounded in (and constituted by) human practice and are therefore always in the process of becoming” (Malaby 2007, 103). Their rules are never set, as they depend on context. One major problem with Malaby’s argument is that he is analyzing analog games (gambling) in contrast to Juul’s digital games. While I do not wish to argue that the two are essentially different or similar, I do wish to point out that Juul’s rule sets within digital games must be programmed, and are therefore set (formal, universal), even if they are eternally mutable through patching (still universal through a non-particular quality). While there are local rules for analog games (house rules for gambling), there are no local rule sets for digital games (there are no house rules for StarCraft).
Despite this discrepancy between formal qualities of analog and digital games, Malaby interjects into game studies discourse a much needed focus on both particularity and process.


Art, Rhetoric and Persuasion – What Games Are to What Games Could Do

A second subfield looks at what games are aesthetically. This section traces games and/as art, and the discussion’s current trajectory into questions of what games do rhetorically.

In the introduction to their edited volume Art and Videogames, Andy Clarke and Grethe Mitchell (2007) connect art and games in a series of stages. The first stage involves utilizing, or repurposing, game iconography as art. One example is the street artist Invader, who makes tile mosaics in the shape of characters from the classic 1970s arcade game Space Invaders. He places these mosaics around the world and documents them as ‘invasions.’ In major cities, within a world of increasing migration, these invasions serve to propagate alternate ideas of citizenship and ‘alien’ movement. However, iconography does not need to be visual: there are many music groups that perform covers of video game music, and even concert-hall performances of Uematsu Nobuo’s Final Fantasy music could be considered iconographic art.

The second stage involves game art that utilizes the technology of the game system, usually through internal or external game modifications.[7] Three examples are Brody Condon’s Adam Killer (1999), Anne-Marie Schleiner’s Velvet-Strike (2002), and Cory Arcangel’s Super Mario Clouds (2002). With his Half-Life modification Adam Killer, Condon plays with the ideas of representation, killing, and the FPS by putting the player in a room with innumerable, passive, identical characters where the only thing to do is kill them. Schleiner’s Velvet-Strike is a spray-paint modification for the popular FPS Counter-Strike that allows the player to tag walls with user-generated, anti-war and anti-violence graffiti instead of running around killing other players. Like Adam Killer, Velvet-Strike plays with the FPS genre’s ‘essential’ shooting element. For Super Mario Clouds, Cory Arcangel removed all of the code from the game Super Mario Bros. other than the clouds drifting by to the left in the background, so that the game of running, jumping, collecting coins, and rescuing a princess no longer exists when played. All that remains is the peaceful experience of watching 8-bit clouds drift by.

A third type repurposes gameplay as art. This can be seen in machinima, which create movies from gameplay footage; speedruns, as perfect playthroughs; and performances within game worlds. Machinima (from ‘machine animation’) use video from gameplay, or video using game engines, in order to make movies. Machinima range from United Ranger Films’ original “Diary of a Camper” (1996), which used the Quake engine to create the most basic of narratives, to Jake Hughes’ “Anachronox: The Movie” (2002), which combined the various cutscenes of the game Anachronox (2001) into a full-length movie, to the long-running comedy series Red vs. Blue, which uses various Halo engines,[8] to fan-made music videos such as Oxhorn’s “ROFLMAO!” (which uses the World of Warcraft engine to adapt the skit “Mahna Mahna” from The Muppet Show).[9] Speedruns attempt to link the playing of games to art, so that Andrew Gardikis’ five-minute speedrun of Super Mario Bros. is elevated to artisanal or athletic skill.[10] Finally, there are performances within game environments such as Joseph DeLappe’s dead-in-iraq (2006) or his The Salt Satyagraha Online (2008). For dead-in-iraq, which is still in progress and will continue as long as the United States military is still in Iraq, DeLappe logs into America’s Army, finds a quiet corner, and types out the newly released names of the war dead. He does not take part in the simulated violence. Instead, he types and continues to type until he invariably gets booted from the server for not ‘playing’ the game. He then logs onto a different server and continues until he finishes with the newly released names of the dead. For his reenactment of Gandhi’s Salt March in Second Life, DeLappe rigged up a treadmill to operate as an input device for the game; when DeLappe walked on the treadmill, his character, MGandhi Chakrabarti, moved in the game. Over twenty-six days in 2008, DeLappe walked 240 miles on the treadmill, thereby covering the same distance Gandhi walked during his protest of the British salt tax in 1930. In both instances, the game world is the site of an artistic performance, but the game itself is only minimally used.

While the first three types are mainly alterations of games done primarily by artists and coders, the fourth type moves toward games as art on their own grounds, mainly produced by or with game designers. Art games follow the rules of games, but distance themselves from ‘normal’ games through self-identification as art. Examples include: Eddo Stern’s Waco Resurrection (2004), which plays with embodiment by putting players into the role of David Koresh at the 1993 Branch Davidian standoff in Waco, Texas; Jason Rohrer’s Passage (2007), which uses simple, 8-bit graphics and sounds to play with memory, identity, and life as the player moves forward through life, gets married or doesn’t, scores points or doesn’t, but always dies in the end; and Tale of Tales’ The Path (2009), which plays with gender and the Little Red Riding Hood story to create an experience of introspection instead of action. All three games utilize the game form to enact art; the player’s experience is a central part of the art.

Unfortunately, one of the problems with Clarke and Mitchell’s declaration of ‘art games’ is that it necessitates a category of games that are not art, and this brings the conversation back to what makes certain games ‘art’ and other games ‘not art.’ Is it a matter of high and low culture, or good and bad art? Is it a matter of subjective judgment or professional training? Roger Ebert’s (2010) gatekeeping of modern technology’s interaction with art is one of many places that show this sort of reductive yes/no question is not particularly productive. Instead, we might say that the questions “what is art?” and “are video games art?” are themselves bad questions. Art has shifted in cultural status, material form, and actual purpose across different movements, cultures, and time periods. Instead, Ian Bogost asks what art and video games do, and what connecting movements can be seen in games. Bogost understands video games not as “art,” an amorphously defined battleground of different movements (Bogost 2009), but as a “new domain for persuasion… that uses procedural rhetoric, the art of persuasion through rule-based representations and interactions rather than the spoken word, writing, images, or moving pictures” (Bogost 2007, ix). “Videogames service representational goals akin to literature, art, and film” (Bogost 2007, 45), but the means of attaining this representational goal cannot be the same as in other media. Art for Bogost is tied to various movements, and linked to the gallery and museum; in contrast, video games are tied to their own movements and linked to the arcade and the home. While video games might do similar things as art, they cannot be equated, only compared.

Through Kenneth Burke, Bogost argues that rhetoric is essentially linked to persuasion and meaning: “Wherever there is persuasion… there is rhetoric. And wherever there is ‘meaning,’ there is ‘persuasion’” (Bogost 2007, 21). The particular form this rhetoric takes is procedural, which is to say in and through the game code (Bogost 2007, 14). “Procedural rhetoric is a technique for making arguments with computational systems and for unpacking computational arguments others have created” (Bogost 2007, 3). Bogost initially uses Molleindustria’s The McDonald’s Videogame (2006) as an example of procedural rhetoric at work. In this simulation of a fast food company, the player controls four areas: farmland, slaughterhouse, restaurant, and corporate headquarters. In order to win, the player must understand that the only way to make money is to cut corners on environmental and health-related issues: cutting down rainforests, feeding the cows contaminated beef, bribing politicians, and of course using various forms of advertising and propaganda. The game persuades the player, through gameplay, that the only way for this type of business to succeed is by making unethical choices, and the player succeeds by playing in such an unethical manner. The game’s meaning, then, is that fast food companies are bad. However, Bogost is clear that The Grocery Game, a website for saving money through coupons and stockpiling, uses procedural rhetoric equally well. The code finds the best deals, tells the user what coupons to use and what items to stockpile, and by following this logic the user saves money (Bogost 2007, 37-9).

Part of Bogost’s desire to understand how procedural rhetoric works is to make better games himself. His Facebook game Cow Clicker (2010) is an attempt to mix irony and annoyance at the current breed of click games that place a façade of content over simplistic, repetitive, meaningless, but seemingly rewarded clicking. Cow Clicker allows you to click your cow once every six hours, spend ‘mooney’ (the in-game currency, which can be bought with real money) to buy different cows, and compete for numbers of clicks with friends and strangers. Ironically, the game was a hit despite its attempt at sarcasm. Bogost’s Guru Meditation (2009) iPhone game (it also has a version for the Atari VCS using the Joyboard) similarly uses procedural rhetoric to argue about the meaning of action. The goal of the game is to keep the iPhone as steady as possible so that the built-in accelerometer registers as little movement as possible. Only by keeping the mobile phone immobile can the player succeed in the game. To play the game the player cannot do any of the ordinary required activities of life such as walking to work, taking the bus, or moving in any way: the act of playing the game opposes being active within culture.

The other part of Bogost’s goal is to understand how others make games, which is to say, how to analyze the meaning of a game. This is particularly important in terms of cultural and political discourse, but also in its relationship to art.

September 12th: A Toy World (2003) is a game by Gonzalo Frasca that makes claims about America’s strategies in its never-ending “war on terror.” The player sees an isometric ‘middle eastern’ village with many ‘civilians’ and a few ‘terrorists.’ The player has a targeting reticle that is aimed through mouse movement. Upon clicking the mouse, a missile is launched; it reaches its destination after a few moments. The missile may hit the target, may kill the ‘terrorist,’ may destroy nearby buildings, and/or may kill ‘civilians.’ In any case, the result is that nearby ‘civilians’ gather around the destruction, mourn, and become ‘terrorists’ themselves. Like the 1983 film WarGames, in which an AI learns that in nuclear war “the only winning move is not to play,” September 12th uses procedural rhetoric to persuade the player that, as with tic-tac-toe and thermonuclear war, the only way to win the war on terror is not to play. Of note is that this rhetoric is not found only in games that wear their ideological stripes in their titles; it is also in games that hide their meaning.

Brenda Brathwaite has recently switched from making digital games professionally to creating a series of award-winning board games called “Mechanics is the Message.” The game in the series that has garnered the most attention is called Train. In it, the player tries to get people to a terminus. There is rolling of dice, blocking of other players’ trains, and the usual board game fun. However, when the first player gets his or her first train to the terminus, its name is revealed to be Auschwitz. At this point various details of the game appear in a new light, but the game itself does not stop. Rather, the player is asked to get more passengers, to continue the ‘game.’ Whether the game actually continues or not is up to the players themselves, just as participation in many practices within twentieth-century Europe, including the extermination camps, was up to individuals. The mechanics of a game persuade the player of a message, and in the case of Train the persuasion has been incredibly powerful. Many people do not want to continue and are appalled by their previous excitement; some people try to ‘save’ the passengers through creative play (Brophy-Warren 2009). In a Brainy Gamer podcast appearance on the topic of games and art, art historian John Sharp indicates that Train is art because its mechanics allow a complex emotional experience (Abbot 2009). Whereas Sharp understands post-Renaissance art and culture of the past 500 years to be dominated by the visual, he sees games as a key element of the current change in world culture. The so-called “Ludic Age” is dominated by systems and action, and Sharp sees games as a way to interact with these systems. The user enters the aesthetic experience of understanding the system at work by playing the game, but when a game is in a museum as art it is unplayable and untouchable.
Thus, games are contextually at odds with our current view of art.[11] To Sharp, it is only through playing games that we attain something that is, or could be, similar to art. Unfortunately, the ability to see games as things of artistic expression is hindered by the discursive understanding of games as fun, or as a waste of time; by people refusing the possibility of games as art; and by the places in which people play games: entertainment rooms, lounges, and bedrooms instead of museums, galleries, or even places of worship.

If we put Sharp and Bogost together, we might claim that certain games are now able to approach the status of ‘art’ because they successfully mount claims and are able to persuade their players of these claims. While procedural rhetoric is how games work, not all games are effectively persuasive. Similarly, not all ‘art’ is ‘good art.’ However, persuasion through rhetoric is something games can do to provide a parallel modality to the expressive rhetoric of art.[12] Quake is not particularly artistic or rhetorical, despite its historical importance as the first polygon-based FPS game and the first engine to encourage machinima. In contrast, The McDonald’s Videogame, Train, and September 12th all mount arguments and could be claimed to be ludically aesthetic. One might further claim that America’s Army is just as rhetorically successful, if not more so, due to its overarching integration into American culture. Such a claim is problematic because of the question of where art, aesthetics, and persuasion end and propaganda begins, but this is a key site of research in game studies due to the long relationship between games and commercial industry.[13] I will return to this discussion in the closing section of this paper, with discussions of what we do with games; for now I turn to questions of what games do to us by turning to the subfield of video game effects.


Media Effects – What Games Do to Players

Another large subfield of game studies is media effects research, designed to understand the ways that video games affect players. This subfield should be meaningful and important, as it seeks to understand the physical, mental, and social ways that video games act on players, but it is problematic due to methodological issues and the baggage it carries from past incarnations involving previous media. I include the subfield while pointing out the numerous problems within it, some of which are ignored by its practitioners. I will focus on two research areas: violence and addiction.[14]

While the question of effects is broad, and in certain ways goes back to Plato’s fear that the written word would destroy people’s ability to remember (Phaedrus), it took 20th-century form in fears of how film and television affect their viewers, and 21st-century form in similar fears regarding video games.[15] Examples of 20th-century fears include the Payne Fund studies around 1930, which sought to link movie viewing and youth delinquency, and studies in the 1960s and 1970s that tried to link a rise of documented violence in America to television viewing. Screen and violence studies have continued into the present, particularly in relationship to children. A recent example compared kindergarteners’ aggression levels after watching the television shows Mighty Morphin Power Rangers (a martial arts action show about ‘good’ and ‘evil’) and Barney (about a singing, purple dinosaur) (Singer and Singer 2005). Such studies claim that watching violent television makes children violent. However, the extent and duration of the effect is completely unproven. In part because of decades of inability to prove a causal relationship between screen viewing and violence, and in part because of the belief that ‘actively’ playing games might be different from ‘passively’ watching television, media effects research has moved into the realm of video game research.

The problem with video game effects research is the problem with traditional media effects research. It starts from the assumption that all people are equally affected through playing; it ignores the context of play and any particulars of the players. Despite the claim that because players are active in their playing they are more inclined to be affected by video games, video game “effects” research still works from an ‘active media’ perspective. Active media research assumes a passive user who is affected by the medium in a universal way. Research methods tend to be quantitative, and are designed to find a statistical correlation between playing games and an effect. While there is some ‘active user’ research being conducted to understand the practices of interacting with media from an active perspective, such studies are in part meant as countermeasures to the effects research area itself.

In part because they have no voice to say one way or another, in part because of modern beliefs in what it means to be a child (Buckingham 2000), and in part because moral panic is easily mobilized in order to protect them, children are a primary population studied in relation to media effects. Are children becoming violent by playing games (Anderson)? Are they being indoctrinated into the military through playing propagandistic first person shooters, or ‘murder simulators’ (per the disbarred lawyer Jack Thompson)? Are they becoming addicted to games (Chou and Ting 2003)? Are they becoming obese through playing games (Stettler et al. 2004)? There is even the occasional study of a positive effect, such as whether children gain better visuospatial cognition (the ability to understand spaces through vision) through playing first person games (Spence and Feng 2010). A second common subject population is the military in the context of training, especially for understanding how youth might be indoctrinated into the military through certain games such as America’s Army in the United States.[16] However, because children are still biologically developing (as is understood through the very culturally based concept of childhood), and moral panic is easily mobilized around them, the focus tends to fall on children rather than on the military, or on the moment when children become the military. The correlation between effects research and children is particularly visible in the studies done on games and violence.

Using the General Affective Aggression Model (GAM), Craig Anderson and Karen Dill study the link between playing games, entering a generally more aroused state, and being primed for aggressive actions (Anderson and Dill 2000). In an initial questionnaire they found a correlation between playing violent video games and having an aggressive personality, and between playing violent video games and having a delinquent personality. A laboratory study further correlated playing violent video games with a desire to harm somebody else (represented by holding down a buzzer longer). Finally, in his meta-analysis a year later with Brad Bushman, Anderson argues:

results clearly support the hypothesis that exposure to violent video games poses a public-health threat to children and youths, including college-age individuals. Exposure is positively associated with heightened levels of aggression in young adults and children, in experimental and nonexperimental designs, and in males and females… In brief, every theoretical prediction derived from prior research and from GAM was supported by the meta-analysis of currently available research on violent video games. (Anderson and Bushman 2001)

Anderson is the main voice on the active media side of video game research that tries to show a causal relationship between playing video games and violence, which the above research and quote clearly indicate is, for him, a foregone conclusion.

Anderson’s work is extensive and forcefully written, but it has been shown by numerous researchers to be biased and to stem from faulty research. Opposing studies have shown that longer exposure to violent games correlates with reduced aggression (Sherry 2001, 425), and that the correlation between violence and video games disappears when gender is controlled for (Ferguson 2008; Gentile et al. 2004). Additionally, Anderson and Dill’s study has been attacked for ignoring three of its four violence indicators, none of which point to heightened aggression (Ferguson 2008, 6). In general, the various studies have been criticized because the authors have played up an incredibly weak correlation between violence and games and called it a causal relationship. Finally, it is telling that Anderson, Douglas Gentile, and Katherine Buckley announce that “the scientific debate about whether exposure to media violence causes increases in aggressive behavior is over… and should have been over 30 years ago” (Anderson et al. 2007, 4), but do so while referencing none of the oppositional studies. While one side of the active media research ignores alternate studies in order to produce a damning correlation (which it claims is a cause-and-effect relationship) between playing violent video games and violence, the other side points out the problems of the research and the weak correlation, or even lack of correlation, between games and violence (Egenfeldt-Nielsen et al. 2008; Ferguson 2010; Gauntlett 2005).

Media effects research is tied to what Christopher Ferguson calls the “Moral Panic Wheel” (Ferguson 2008).[17] The wheel turns as follows: (1) Most of the impetus for effects research begins with general societal beliefs that may be informed by cultural, religious, political, scientific, or activist elements. (2) These general societal beliefs lead to media reports on potential adverse effects. (3) In mass media broadcasts, the possibility of violence turns into an implied likelihood or certainty of violence. (4) This dissemination of false information results in a call for research to support the original beliefs. (5) The research promotes fear, and is uncritically supported by the media that called for it in the first place. (6) Politicians then promote the panic and fear in order to advance their own political careers, which loops back around to more media reports on potential fears. Through this cycle, reactionary parental and mass media responses become coupled with advocacy groups and easily funded scientific research, resulting in a majority of scientific publications indicating some sort of correlation. Meanwhile, because of the publication bias toward reporting positive effects, null effects go unpublished and therefore uncited (Ferguson 2010, 6-7).

In the recently released and relatively well-publicized general audience nonfiction book Grand Theft Childhood, Lawrence Kutner and Cheryl Olson argue that while there is a correlation between violence and playing games, it is weakly established at present and far from the sole cause of violence. The authors argue that it would be far more productive to locate the other reasons for violent actions outside of media, and that it is not incidents of extreme violence that have increased (they have decreased), but bullying. However, what is most interesting about Kutner and Olson’s book is its popular orientation. The book is published by a popular press and targeted not at academic or scientific audiences, but at the same audience that is otherwise subject to the wheel of media panic. The book’s second chapter is a cultural history of media panics over the past century in the United States. By showing the link between the current panic and previous ones, they highlight the parallels between what was being hidden and focused upon then, and what is being hidden and focused upon now. Their book thus works to counter the efforts of game naysayers, media voices, and politicians in ways that the academic oppositional studies and meta-analyses have so far failed to do. Essentially, Kutner and Olson work within Ferguson and Gauntlett’s Moral Panic Wheel to slow it down.

While violence has been the most visible topic of video game effects research, it is not the only one. A second controversial effect is addiction. Whether games are addictive, whether this addiction (if it is indeed an addiction) needs to be regulated, and for whom it needs to be regulated are all questions within this branch of effects research.

Like the research on violence, addiction research stems (at least in part) from media panics: the suicide of Shawn Woolley while playing Everquest in 2002; the 2004 suicide of Xiao Yi, with an accompanying note that discusses his addiction and desire to be reunited with other players (Guttridge 2005); the 28-year-old South Korean gamer who collapsed after a 50-hour session in 2005 (BBC News 2005); and the 2010 starvation of a South Korean couple’s real life baby, supposedly caused by the parents’ game addiction (Tran 2010). All of these incidents have been heavily represented in the mass media (Reverend_Danger 2009), and, following Ferguson and Gauntlett’s concept of moral panic, studies have begun to research addiction, but they are as yet inconclusive. While the American Psychiatric Association (APA) has discussed the possibility of listing video game addiction as an official addiction, it has not yet done so for a number of reasons (ScienceDaily 2007; Hartney 2011). One of the main conundrums around the addiction issue involves the difference in bodily reaction between becoming addicted to games and becoming addicted to drinking or gambling. While the societal reaction toward ‘game addiction’ is similar to that toward ‘drug addiction’ (withdrawal from society and an inability to act ‘normally’), the physical manifestation of the addiction is quite different in terms of dependence, withdrawal, and relapse. For the APA these are serious differences, and part of the reason that game addiction is not considered an official addiction.

A second difference involves the relationship between games, addiction, and design practices. If gamers are becoming addicted to playing, this is in part because of particular game designs that draw players in and encourage them to keep playing. If this is the case, then it is not games that are addictive, but particular design practices. In a 2003 study on game addiction, Ting-Jui Chou and Chih-Chen Ting argue that ‘flow experience’ might be a causal link between playing as habit and playing as addiction. The concept of “flow” comes from Mihaly Csikszentmihalyi’s (1990) study of optimal experience: the rare, intense, but happy and pleasurable moments when the body and mind are completely consumed in something. Chou and Ting note that Becker and Murphy’s economic theory of ‘rational’ addiction, as a type of logical habit, is at odds with the psychological and sociological theories that understand addiction as abnormal excess. However, it is within playing as habit that Csikszentmihalyi’s flow state is most likely to occur for players, thus pushing them over into the “quasi-lunatic” type of addiction (Chou and Ting 2003, 674). While the habit style of rational addiction is good for business, an enjoyable experience, and good game design, too much of it might push players over into the more dangerous area of addiction.

Despite this close link between flow and addiction, many games are designed with a type of flow experience in mind. The most obvious examples (and the usual scapegoats) are MMOs like World of Warcraft, which require the player to grind for hours in order to increase his or her level or to gain higher levels of gear. However, clearer and more self-aware examples are Jenova Chen’s games flOw and Flower, which follow from his MFA thesis on “Flow in Games” (2006). Chen sought to adapt Csikszentmihalyi’s flow experience into a workable theory of game design. Chen designs so that the user plays in the ‘zone of flow,’ which lies between excessive challenge and boredom depending on personal skill. The user is never at a loss for what to do, and never in a state that would cause him or her to stop playing from vexation. Flow in these games is about designing for user enjoyment, not long-lasting play through repeated quests.

The other confusing element involving addiction and MMOs is why players continue playing. Do people play MMOs due to a simple, unhealthy addiction, to incredibly good game design, or because interaction with an alternate, online culture is different from and/or better than the offline culture for one reason or another? Unlike the biologically oriented effects research, certain studies on addiction seek to understand the societal reasons that people turn to games.

Edward Castronova, an economist and theorist of virtual worlds, has used the economic “utility function” to understand the playing of MMOs, and to explain what he foresees as a mass migration from the real to the virtual world (Castronova 2007, 65-70). According to economics, the rational action is the one that produces the most value; people are rational; therefore people do the rational thing that leads to the most value. Thus, MMOs must have some sort of value that causes people to rationally play to the point of addiction. While Castronova agrees that there is value that these people seek in the game (freedom, fun, and an interesting experience), part of their decision to flee to virtual worlds resides in the problems of the real world: lack of equality in employment, access, outcomes, wealth, et cetera. Addiction, then, becomes a silly concept, as of course people are ‘addicted’ to something that is fun and good. For Castronova, the answer lies simply in changing the real world to take advantage of the utopian benefits of virtual worlds. He predicts that the exodus to virtual worlds, which stems from problems in the real world, will eventually reverberate back as people change the real world. Whether this happens through making the real world game-like or through changing the bad parts of the real world is as yet unknown.

Flowing directly from the studies of addiction and MMOs are more general studies of virtual worlds. The following section is about gaming cultures studied primarily from an anthropological perspective.


Gaming Cultures – What We Do In Games

One of the first discussions of gaming cultures is Julian Dibbell’s 1993 Village Voice article, “A Rape in Cyberspace,” which is about how rape can exist in a virtual form, how a digital place can be social, and how emotions and affect work between real and virtual environments. Dibbell’s article describes a series of events in the online multi-user domain LambdaMOO, which began with a virtual ‘rape,’ followed with a general meeting and a user-encouraged ‘execution’ (a ‘toading’ that corresponded to the erasure of a character), continued with a ‘resurrection’ (a new character with similar actions), and ended with a character eternally ‘sleeping’ (offline, and away from the game). The two key points of “A Rape in Cyberspace” are the emotional reaction to the rape and the social formation surrounding the decision to toad the rapist. The emotional reaction to the virtual rape is important because it indicated the emotional porousness of virtual/actual borders. Despite its virtuality, the rape was real: it had real effects on both the virtual avatars and on their actual players. The second key point was the social formation that mimicked real culture. MUDs were of course social situations, but the virtual execution of the rapist took place not by a developer’s authoritarian decision, but through the deep deliberation of a constituted social group. This type of cultural formation, deliberation, and action is surprising precisely because it is the same thing that happens in the actual world. The magic circle is no longer a hard line between play and life (if it ever was), but a porous boundary between alternate places. These two points have been reiterated in much of the later work on gaming cultures: the things that happen in games matter; there is no hard line between the online, virtual game and offline, actual life; and cultures develop between the two worlds.

A later, and much-quoted, study in and of a modern MMORPG is T.L. Taylor’s Play Between Worlds (2006).[18] In her ethnographic research on the then-dominant MMORPG Everquest, she draws strong conclusions aligned with Dibbell’s statements of a decade earlier, against the hard boundaries of Huizinga’s magic circle, and for the sociality of games. Opposed to the ‘common sense’ belief that video games are antisocial, solitary endeavors and that video game players are loners, Taylor’s study crosses between the virtual and real worlds, showing how communities were built between the two. Against the belief that the game was a place apart, an encircled, magic world of play, she writes of many real things that happened online, and conversely of virtual things that happened in the game having an effect on the real lives of the players. Taylor concludes, “to imagine we can segregate these things—game and nongame, social and game, on- and offline, virtual and real—not only misunderstands our relationship with technology, but our relationship with culture… [Her] call then is for nondichotomous models” (Taylor 2006, 153). By nondichotomous she means models that do not privilege one side or the other (real or virtual, game or not game, play or work), as these either/or framings falsely simplify the way we as humans interact. While Taylor does not directly reference Sutton-Smith’s ideas of the ambiguity of play here, she is in a way extending this ambiguity to the entire register of game interactions.

Taylor’s study and conclusions support many other research agendas, from the argument that games affect gamers (discussed above), to the methodological expansion of the ethnographic method to alternate sites (Marcus 1998), to the fact that these synthetic worlds matter socially and economically (Castronova 2005; Dibbell 2006). Within game studies, Taylor’s work is indicative of a massive widening of research into a new area of game cultures and gamer cultures: studies of the social cultures of gamers in and out of games. These have included studies of clans in MMOs (Pearce and Artemesia 2009; Nardi and Harris 2006), children and learning (Nardi et al. 2007), and even case modification culture (Simon 2007).

A similar, almost derivative work is Tom Boellstorff’s Coming of Age in Second Life (2008). Boellstorff’s ethnography of the MMO world Second Life is a detailed sketch of the culture. He reiterates some of the claims Dibbell made fifteen years earlier involving the creation of social groups, and he follows Taylor in pointing toward alternate ethnographic field sites. However, the key problems with Coming of Age in Second Life are its complete disregard of any element of the MMO as related to games, and its treatment of Second Life as a self-contained culture. Regarding Second Life as a world (or not even as a world, since he reiterates its titular claim of being a ‘life’), Boellstorff pushes away from the play, or gameness, of the application. Second Life is a place where life happens (Boellstorff 2008, 91). This claim is not wrong, as that is how players use the application. However, it is problematic because the application is fighting for market share with World of Warcraft, Everquest, Ultima Online, and all of the other MMO games. While Second Life is not a game, it still competes against games as an alternative, and as such should be considered as a comparative industrial and cultural artifact. Because of his focus on a self-contained culture, Boellstorff ignores the industrial, cultural, and real world particulars of how Second Life functions. Essentially, Boellstorff creates a magic circle of cultural containment around Second Life players: those who play it are in one culture, even if that culture moves between the two worlds of real and virtual.

Mia Consalvo’s work on cheating (2007) represents a second way to study gaming culture. Consalvo focuses on the very particular, and yet very different, ways that one may cheat within games. Unlike Boellstorff’s narrowly focused online study, Consalvo looks at the multiple places within the register of production and consumption where one might cheat. Her account spans from Easter eggs programmed to cheat the producer or distributor, and strategy guides created by the industry or by the community, to cheating within gameplay through codes or hacking, and even to efforts to police cheating in order to create a fair system. As somebody who understands the range of ways digital gaming happens, Consalvo explores the culture of game cheating in a much less reductive way than Boellstorff’s study of Second Life culture. Her study spans from top-down industry, to resistant fan practices, to mainstream player practices that are either allowed or illicit depending on the game. However, there is a familiar problem in her work: she fails to relate or explore the culture of cheating within different national and regional cultures. Consalvo presents cheating as universal.

Consalvo’s main object of study (Final Fantasy XI) exists in Japanese, United States, and European locales, but she ignores the differences beyond brief mention. For Consalvo, too, gaming culture exists apart from real world cultural matrices, which are somehow assumed to be equally self-contained. In contrast, I would argue that located (often national) cultural understandings of cheating are essential to understanding the notion of cheating in games. The status of hacking, FAQ writing, and gold farming (all forms of cheating discussed by Consalvo) changes depending on the perspective of the subjects involved. Those who buy gold do so because they do not have enough time and consider the game a matter of fun and play value. In contrast, those who oppose the culture of farming and buying gold often consider it cheating, as it ruins their way of playing. While these are conclusions that Consalvo derives, she fails to consider the plethora of freemium games—free to play, but with micro-payments for add-ons—that are currently thriving. Originally prevalent among MMOs in Korea and China, the freemium model opposed the Western-dominant monthly subscription styles that support Consalvo’s conclusions about ‘fairness’ and ‘cheating.’ The freemium model is almost identical to gold farming, except that it is the developers and producers who are ‘ruining’ the game by ‘cheating.’ Is gold farming as cheating somehow integrated into national or regional understandings of cheating? Even though she deals with gold farming, Consalvo fails to look at the different models, or at situated understandings of cheating. She sees a universal gaming culture and a corresponding universal understanding of cheating, instead of understandings that are embedded in regional and national cultures.

While studies of game cultures and gamer cultures break with earlier separations of real and virtual, they also assume an essentialized alternate culture that somehow remains unlinked to the cultures that players might otherwise be a part of (such as national, racial, or religious cultures). While these studies have facilitated the break with the logic of games as separate, pure fun, they have reinforced the ontological universality of games. In the next section I discuss the way that a different subfield of game studies deals with cultural and political issues between the real world and games. In a way, these last two subfields overlap, and it is only the researchers’ agendas and methodologies that determine placement in one or the other. If walls and fences separate the other disciplinary enclaves, these two are separated by train tracks. One is a bit more run-down, a bit dirtier, and a bit more tangled; however, my alliance in the next section is with that dirtier enclave, which more critically goes back and forth between real and virtual worlds, studying their tangled interactions.


Political and Cultural Issues – How Games and the World Interact

The final area that I will discuss here involves the intersection of (supposedly) contained games with (supposedly) external variables, issues, and problems in the world. Like the anthropological studies of game cultures, these studies cross Huizinga’s magic circle, but they work toward showing how games problematically reproduce, reiterate, and facilitate real world issues. Within this final subfield of game studies are issues of empire, economics, race, gender, and my own focus, translation. While this subfield holds promise for future study from a Communication perspective, its studies are relatively limited at present. This looseness is in part because different authors do not necessarily draw from each other, or from the core of game studies. Rather, they often draw from other interdisciplinary origins and discussions. However, if conversations continue to happen, this subfield could turn into a very powerful and productive area, in contrast to the bickering of other areas where either/or arguments tend to dominate.

In their 2003 book Digital Play, authors Stephen Kline, Nick Dyer-Witheford, and Greig De Peuter provide a more “multidimensional approach” to studying games that mixes media studies, political economy, and cultural studies perspectives. From media studies the authors draw on Harold Innis’ analysis of the printing press and technology’s relationship to bias, empire, and knowledge, and Marshall McLuhan’s extension of Innis’ theories toward electronic media. From political economy, the authors create a genealogy from Karl Marx’s critique of capitalist accumulation, through the Frankfurt School’s critique of mass culture, to Herbert Schiller’s linking of capitalism and communication technologies to American empire. However, they focus on Nicholas Garnham’s “circuit of capital,” where selling and advertising feed off of each other (Kline et al. 2003, 39-49). Using cultural studies approaches, the authors problematize the reductive, top-down understanding of the political economic approach, which does not consider agency or the user. This they pull from Stuart Hall’s model of the encoding and decoding of messages, and reception studies’ focus on the viewer/user. However, they also critique cultural studies for ignoring embodied play, underplaying the commercial structure of the industry, and missing the fact that audiences do not simply exist, but rather are constructed in the commercial marketplace (Kline et al. 2003, 45-6). Finally, the authors draw inspiration from Raymond Williams’s complex understanding of television as a continuously shaped and shaping technology that was far from inevitable or predetermined (as media determinists would hold). Through Williams, and more directly Garnham’s “circuit of capital,” the authors combine the three disparate approaches in their three-circuit model of interactivity (Kline et al. 2003, 58). The three approaches form separate circuits of technology, marketing, and culture respectively.
The center where the circuits overlap is the realm of games, and as the circuits are constantly working, the approach integrates a non-universal, non-cemented view of games that incorporates time and change. While their approach has much promise, primarily due to its complexity, it has been little used. In part this is due to the very complexity that they espouse, but in part it is due to their avoidance of the seminal figures of game studies and of the digital/coded aspect. While they discuss designers and programmers, they do so from a historical approach that avoids the more technical aspects of computer games as code. The unfortunate result is that they do not speak to the more technical areas of game studies, which are dominant at present, and those technical areas do not pull from Kline et al.’s complex, interdisciplinary approach. This is further visible in that Dyer-Witheford and De Peuter have co-written a second book, Games of Empire (2009), which goes similarly unmentioned in most of the other literature in game studies. While cited in bibliographies, the two books are little discussed in other studies, and their overarching method is so far rarely followed. This lack of crossover has little to do with the usefulness of their work and everything to do with the disciplinary and methodological biases of the individual people in the field.

There are two places where Kline et al.’s interdisciplinary methodological intervention can be seen to be bearing fruit so far. The first is Aphra Kerr’s (2006) textbook-like overview of games. In The Business and Culture of Digital Games: Gamework/Gameplay, Kerr argues “that digital games are socially constructed artefacts that emerge from a complex process of negotiation between various human and non-human actors within the context of a particular historic formation” (Kerr 2006, 4). It is therefore necessary to study the span of contexts, actors, and artifacts in all of their manifestations: these moments include studying the game as a text to be played, the industry as an object, the global networks of production that make games, and players in their contexts and how they particularize play, which includes counter-gaming strategies. Unfortunately, her study is a broad overview aimed at introductory students in game studies, and it does not follow any particular game through these different moments. As a result, the interdisciplinary methodology remains an ideal or goal.

A second connection is Matthew Payne’s yet-to-be-finished dissertation work, where he follows Kline et al.’s circuits-of-capital methodology to study FPS games.[19] Payne argues that FPS games represent a new form of Ludic Capitalism.[20] The connection between world and game, capitalism and commodity, is key to critical communication studies of games and is one of the key benefits of this area of research. Payne’s extension from these larger connections to a particular genre is a second important point. By focusing on a genre, Payne is (likely) able to get closer to making particular arguments that studies of games in general (such as those of Kline et al. and Kerr) do not approach. Unfortunately, other than Kerr and Payne, few studies so far take up the challenge of moving between the various fields within game studies. As it stands, despite its claims toward interdisciplinarity, game studies is still divided into separate areas, or to continue with Kline et al.’s terminology, separate circuits.

Connected to both Kline et al.’s discussion of culture and circuits and Kerr’s analysis of business economics is a subset of research on game economies. Most striking are Edward Castronova’s early study of Everquest’s GDP, Julian Dibbell’s effort to make a living buying and selling in Ultima Online, and Ge Jin’s documentary work on Chinese gold farmers.

The first of these three examples is Castronova’s study of the economy of Norrath, the world in which the MMORPG Everquest takes place. In his study, Castronova calculates that “the gross national product of Norrath [is] about $135 million… [making Norrath] the 77th richest country in the world, roughly equal to Russia” (Castronova 2001, 33). While the per capita income is above the poverty line, “inequality is significant” both between levels (as designed) and within equal-level characters (signaling poverty where there ought to be equality) (Castronova 2001, 33). Castronova’s study was groundbreaking in showing a system of value within Everquest’s diegetic world (and by extension in any MMO), but it was also groundbreaking for showing that the economies of game worlds are tied to national economies. There is no magic circle to separate work and play, real and virtual lives; rather, they are completely tangled together.

Two years after Castronova’s study, Julian Dibbell published an article (2003) about the sale of real estate in the MMORPG Ultima Online, and a year after that Dibbell began a yearlong attempt to make the buying and selling of goods in Ultima Online his primary source of income, which he documented in a blog and later compiled in a book. In Play Money (2006) he documents the banal process of the various ways he learned to buy and sell in Ultima Online: from farming and selling, to buying and selling items, to simply buying and selling gold. Dibbell’s analysis is interesting in that he proves he can make a reasonable living. However, it also shows dystopian sides: he does not make as much as he did as a writer (itself not a highly paid job), and he encounters the exploitation at “gold farms” in Tijuana, Mexico (Dibbell 2006, 9-29; 88-134). While Dibbell works independently as a buyer and seller, the Tijuana gold farm (which the author hears about, but does not see) is a place of unskilled, exploited labor. Gold farms are incredibly important for the back and forth between world and game; the issues of exploited labor, gold farming, and racism have become more pressing as such gold farms have become more common in places with available and exploitable labor.

A third influential work is Ge Jin’s documentary work on Chinese gold farms (2006, 2007). He has studied the living, working, and discursive conditions of gold farmers in and around MMOs. While some MMO players are against gold farmers, who make in-game currency in order to sell it for real world currency, other MMO players claim gold farmers are simply supplying a demand. Regardless of the official status of their act,[21] gold farmers’ conditions raise critical issues regarding work and play in rich and poor environments, and the visible material connections between virtual and actual worlds. Castronova treats in-game economic disparity as an afterthought at the very end of his early work (2001), but five years later it had become a crucial area of study. While the economics of online worlds is acknowledged, it remains a highly contentious area of game culture: Who is right and who is wrong? How can one play properly? Is a gold farmer’s play work? These and many more problematic areas are touched on through the topic of game economies. However, key to my point is that real world economics is not simply reproduced in the game. Rather, the game affects the real world and the real world affects the game, and to study one it is necessary to study the other.

An area that is heavily discussed, but little advanced, is racial and gendered representation within games, which can produce further racist and sexist societal attitudes.[22] Typically, characters in games have been white and male. This reproduces certain industry structures, as the International Game Developers Association reports its demographics to be 83% white and 88% male (IGDA 2005). However, the character bias (white and male) has also been ‘justified’ by a (mistaken) belief that game players are white and male. Certain studies indicate the ratio of players is 60% male to 40% female (Hewitt 2008), but the ratio has been reported as high as 93% female for certain genres and modes of play (Juul 2009, 80). While the 93% figure is specifically for casual, downloaded games, the generally accepted 60/40 ratio is itself quite different from the mistaken belief that games are played solely by teen boys. Racial and national player base percentages are unclear, but studies note that game expenditure is quite high among African Americans (Nielsen 2010), and the Japanese, Chinese, and Korean game industries are all large. Because of these relatively multinational and multiracial player bases, the unbalanced ratios of represented characters are incredibly problematic.

There have been numerous studies on representational bias, sexism, and racism in games (Everett 2005; Jenkins 2006; Ludica 2004; Nakamura 2002, 2009; Richard and Zaremba 2005). Later studies indicate that along with increased awareness of the issue of representation, there have been more neutral, and less debasing, inclusions of race (Higgin 2009) and gender (Alexander 2010; Kim 2010) in games in recent years as opposed to previous decades. Representation in games has gotten visibly ‘better.’ However, it is far from ‘good,’ and there are still untouched areas that bring out what Ludica in their mission statement call the “re-active,” as opposed to “pro-active,” way these changes have become manifest.[23] Race and gender have been re-actively addressed because they have been rendered visible in more mainstream discourse (Cassell and Jenkins 1998), but national and alternative groups are still unrepresented and unrepresentable in mainstream game discourse. It is necessary to pro-actively approach the topic of representation as a general study.

James Paul Gee’s work on learning and literacy is an interesting alternative, pro-active theorization. Gee notes that games often simply reinforce typical cultural models (Gee 2007, 145-6). These include the racial and gendered models I discussed above, but range to concepts such as good and evil, the idea that working hard makes everything possible, and the notion that ‘aliens’ are necessarily evil and must be exterminated. He also argues that games can challenge cultural models (for good or ill). One such game, Under Ash, challenges the standard FPS model of shooting Arabs/aliens by putting the player into the role of a Palestinian of the intifada fighting Israel (Gee 2007, 155-60). A more childish, but equally interesting, game is Sonic Adventure 2 Battle, which allows the user to play as both Sonic the Hedgehog, a good character, and Shadow the Hedgehog, a bad character; as Sonic, the player saves the world, and as Shadow, the player seeks to destroy the world. On one side, Gee’s six-year-old informant learns the ideological concept of ‘good,’ and on the other the child learns to fight for the group despite its ‘evil’ nature. Both are models that can be learned.

There is a paucity of games that allow the user to experience and learn alternate cultural models. In part this is an industrial/economic issue: the familiar sells; genres sell; companies design what the audience wants. But in part it can be seen as a conservative cultural trend. Oppositional models, which come in many forms, are unacceptable in mainstream culture. The idea that kids could learn useful things from violent video games, the simulation of both sides of an American military conflict, and games about ‘defending’ the border between the United States and Mexico are all oppositional cultural models.[24] However, such alternate models are key to learning for both children and adults. It is unfortunate that they are so rare. To extend Ludica’s comment regarding pro-active and re-active practices beyond gender, pro-active game design is the active inclusion of alternate models. The problem with re-active studies of representation (be it of gender, race, sexuality, or nation) is that they are not productive avenues. However, representation is not the only place where models disappear.

One of the places where cultural models tend to disappear is in the practice of localization, which renders foreign games culturally palatable through a thoroughly domesticating[25] form of translation. The game industry understands good games as those that sell well and are entertaining. As such, the practice of translating a game includes making it easy to consume, and this involves changing alternate cultural elements. Cultural models that are risqué, or assumed to be difficult to understand, are altered or deleted (Chandler 2005; Mangiron 2006; O’Hagan and Mangiron 2004). These changes are justified through the new media principles of modularity and variability discussed above (Manovich 2001). An example is the Japanese Ryu ga gotoku 3 (2009), which was localized in the United States as Yakuza 3 (2010) with numerous Japan-specific elements removed, including hostess bars and pachinko parlors (Ashcraft 2010; DJ Fob Fresh 2010). An alternate, but similarly domesticating, localization strategy would have been to change hostess bars to strip clubs and pachinko parlors to slot machines; however, this would still result in altered cultural models. A way to allow (or force) the user to engage with the alternate cultural models would be to leave the Japanese game assets in place. This could be considered a foreignizing, non-localizing strategy of translation. This alternate strategy would also lead the player to confront real world cultural models that he or she might otherwise not see. This interaction involves learning, but it is also the way one may ethically witness and confront a foreign culture.

Real world culture is inseparable from games, and these studies approach the topic of games in this very tangled way. Unfortunately, this subfield is quite limited at present: it is the ‘wrong side of the tracks,’ but its problems and disorders are much more promising than the ordered existence of the gaming cultures subfield.



In this paper I have discussed the various subfields currently active within the field of game studies. The subfields give the field at large an interdisciplinary appearance, but they tend to have disciplinary underpinnings themselves. As the outline of a class taught in a department of Communication, this paper is intended to highlight the interdisciplinary field, the disciplinary subfields, and the foci of the subfields; it is not just the topic or area of the game that changes between subfields, but the means of approaching that topic.



Abbot, Michael, Brenda Brathwaite, and John Sharp. Brainy Gamer Podcast 26, November 9, 2009.

Alexander, Leigh. “Bayonetta: Empowering or Exploitative?” GamePro, January 6 2010.

Anderson, Craig Alan, and Brad J. Bushman. “Effects of Violent Video Games on Aggressive Behavior, Aggressive Cognition, Aggressive Affect, Physiological Arousal, and Prosocial Behavior: A Meta-Analytic Review of the Scientific Literature.” Psychological Science 12, no. 5 (2001): 353-59.

Anderson, Craig Alan, and Karen E. Dill. “Video Games and Aggressive Thoughts, Feelings, and Behavior in the Laboratory and in Life.” Journal of Personality and Social Psychology 78, no. 4 (2000): 772-90.

Anderson, Craig Alan, Douglas A. Gentile, and Katherine E. Buckley. Violent Video Game Effects on Children and Adolescents: Theory, Research, and Public Policy. Oxford; New York: Oxford University Press, 2007.

Ashcraft, Brian. “Sega, You Are Once Again Making a Giant Mistake.” Kotaku, February 24 2010.

Bartle, Richard. “Hearts, Clubs, Diamonds, Spades: Players Who Suit Muds.” In The Game Design Reader: A Rules of Play Anthology, edited by Katie Salen and Eric Zimmerman. Cambridge: MIT Press, 2006 [1996].

BBC News. “S Korean dies after games session.” BBC News. Posted: August 10, 2005. Accessed: April 7, 2011. http://news.bbc.co.uk/2/hi/technology/4137782.stm

Boellstorff, Tom. Coming of Age in Second Life: An Anthropologist Explores the Virtually Human. Princeton: Princeton University Press, 2008.

Bogost, Ian. Persuasive Games: The Expressive Power of Videogames. Cambridge: MIT Press, 2007.

———. “Persuasive Games: The Proceduralist Style.” Gamasutra, January 21 2009.

Boomen, Marianne van den. Digital Material: Tracing New Media in Everyday Life and Technology. Amsterdam: Amsterdam University Press, 2009.

Brophy-Warren, Jamin. “Speakeasy: The Board Game No One Wants to Play More Than Once.” The Wall Street Journal. Posted: June 24, 2009. Accessed: April 2, 2011. http://blogs.wsj.com/speakeasy/2009/06/24/can-you-make-a-board-game-about-the-holocaust-meet-train/

Brown, Harry J. Videogames and Education, History, Humanities, and New Technology. Armonk, N.Y.: M.E. Sharpe, 2008.

Bryce, Jo, and Jason Rutter. “Gendered Gaming in Gendered Space.” In Handbook of Computer Game Studies, edited by Joost Raessens and Jeffrey H. Goldstein. Cambridge: MIT Press, 2005.

Buckingham, David. After the Death of Childhood: Growing up in the Age of Electronic Media. Cambridge, UK ; Malden, MA: Polity Press, 2000.

Caillois, Roger, and Meyer Barash. Man, Play, and Games. Urbana: University of Illinois Press, 2001.

Cassell, Justine, and Henry Jenkins. From Barbie to Mortal Kombat: Gender and Computer Games. Cambridge, Mass.: MIT Press, 1998.

Castronova, Edward. “Virtual Worlds: A First-Hand Account of Market and Society on the Cyberian Frontier.” CESifo Working Paper Series, no. 618 (2001).

———. Synthetic Worlds: The Business and Culture of Online Games. Chicago: University of Chicago Press, 2005.

———. Exodus to the Virtual World: How Online Fun Is Changing Reality. New York: Palgrave Macmillan, 2007.

Chan, Dean. “Negotiating Intra-Asian Games Networks: On Cultural Proximity, East Asian Games Design, and Chinese Farmers.” The Fibreculture Journal 8 (2006): 1-14.

Chandler, Heather Maxwell. The Game Localization Handbook. Hingham: Charles River Media, 2005.

Chen, Jenova. “Flow in Games.” University of Southern California, 2006.

———. “Flow in Games (and Everything Else).” Communications of the ACM 50, no. 4 (2007): 31-34.

Chou, Ting-Jui, and Chih-Chen Ting. “The Role of Flow Experience in Cyber-Game Addiction.” CyberPsychology & Behavior 6, no. 6 (2003): 663-75.

Clarke, Andy, and Grethe Mitchell. Videogames and Art. Bristol, UK; Chicago: Intellect, 2007.

Coleman, Beth. Hello Avatar. Cambridge: MIT Press, 2011.

Consalvo, Mia. Cheating: Gaining Advantage in Videogames. Cambridge, Mass.: MIT Press, 2007.

Csikszentmihalyi, Mihaly. Flow: The Psychology of Optimal Experience. 1st ed. New York: Harper & Row, 1990.

Dibbell, Julian. “A Rape in Cyberspace.” The Village Voice, December 23 1993.

———. “The Unreal Estate Boom.” Wired 11, no. 1 (2003).

———. Play Money: Or, How I Quit My Day Job and Made Millions Trading Virtual Loot. New York: Basic Books, 2006.

Dyer-Witheford, Nick, and Greig De Peuter. Games of Empire: Global Capitalism and Video Games. Minneapolis: University of Minnesota Press, 2009.

Ebert, Roger. “Video Games Can Never Be Art.” Chicago Sun-Times. April 16, 2010. http://blogs.suntimes.com/ebert/2010/04/video_games_can_never_be_art.html

Egenfeldt-Nielsen, Simon, Jonas Heide Smith, and Susana Pajares Tosca. Understanding Video Games: The Essential Introduction. New York: Routledge, 2008.

Elias, Norbert, and Eric Dunning. “Leisure in the Spare-Time Spectrum.” In Quest for Excitement: Sport and Leisure in the Civilizing Process. Oxford; New York: Basil Blackwell, 1986.

Everett, Anna. “Serious Play: Playing with Race in Contemporary Gaming Culture.” In Handbook of Computer Game Studies, edited by Joost Raessens and Jeffrey H. Goldstein. Cambridge, Mass.: MIT Press, 2005.

Ferguson, Christopher J. “The School Shooting/Violent Video Game Link: Causal Relationship or Moral Panic?” Journal of Investigative Psychology and Offender Profiling 5 (2008): 25-37.

———. “Blazing Angels or Resident Evil? Can Violent Video Games Be a Force for Good?” Review of General Psychology 14, no. 2 (2010): 68-81.

Flanagan, Mary. “Locating Play and Politics: Real World Games & Activism.” Paper presented at the Digital Arts and Culture, Perth, Australia 2007.

———. Critical Play: Radical Game Design. Cambridge, Mass.: MIT Press, 2009.

Fresh, DJ Fob. “A Yakuza 3 Guide to Edits and Cuts.” Segashiro 2010.

Galloway, Alexander R. Gaming: Essays on Algorithmic Culture. Minneapolis: University of Minnesota Press, 2006.

Gauntlett, David. Moving Experiences: Media Effects and Beyond. 2nd ed. Eastleigh: John Libbey Publishing, 2005.

Ge, Jin. “Gold Farmers.” 2006.

Gee, James Paul. Good Video Games + Good Learning: Collected Essays on Video Games, Learning, and Literacy. New York: P. Lang, 2007.

———. What Video Games Have to Teach Us About Learning and Literacy. Rev. and updated ed. New York: Palgrave Macmillan, 2007.

Gentile, Douglas A., Paul J. Lynch, Jennifer Ruh Linder, and David A. Walsh. “The Effects of Violent Video Game Habits on Adolescent Hostility, Aggressive Behaviors, and School Performance.” Journal of Adolescence 27 (2004): 5-22.

Guttridge, Luke. “Chinese Suicide Shows Addiction Dangers.” Play.tm. Posted: June 3, 2005. Accessed: April 7, 2011. http://www.play.tm/news/5928/chinese-suicide-shows-addiction-dangers/

Harrigan, Pat, and Noah Wardrip-Fruin. Third Person: Authoring and Exploring Vast Narratives. Cambridge: MIT Press, 2009.

Hartney, Elizabeth. “Is Video Game Addiction Really an Addiction?” About.com, Addictions. Updated: January 8, 2011. Accessed: April 7, 2011.

Hewitt, Dan. “Women Comprise 40 Percent of U.S. Gamers.” The Entertainment Software Association, 2008.

Higgin, Tanner. “Blackless Fantasy: The Disappearance of Race in Massively Multiplayer Online Role-Playing Games.” Games and Culture 4, no. 1 (2009): 3-26.

Huizinga, Johan. Homo Ludens: A Study of the Play-Element in Culture. Boston: Beacon Press, 1955.

International Game Developers Association. “Game Developer Demographics: An Exploration of Workforce Diversity.” October 2005. http://www.igda.org/game-developer-demographics-report

Jenkins, Henry. “‘Complete Freedom of Movement’: Video Games as Gendered Play Spaces.” In The Game Design Reader: A Rules of Play Anthology, edited by Katie Salen and Eric Zimmerman. Cambridge: MIT Press, 2006.

Juul, Jesper. “The Game, the Player, the World: Looking for a Heart of Gameness.” In Level Up: Digital Games Research Conference Proceedings, edited by Marinka Copier and Joost Raessens, 30-45. Utrecht: Utrecht University, 2003.

———. Half-Real: Video Games between Real Rules and Fictional Worlds. Cambridge, Mass.: MIT Press, 2005.

———. A Casual Revolution: Reinventing Video Games and Their Players. Cambridge, MA: MIT Press, 2010.

Kerr, Aphra. The Business and Culture of Digital Games: Gamework/Gameplay. London; Thousand Oaks: SAGE, 2006.

Kim, Tae K. “Bayonetta: More Substance Than Virtually Any Female Protagonist before Her.” GamePro, January 8 2010.

Kline, Stephen, Nick Dyer-Witheford, and Greig De Peuter. Digital Play: The Interaction of Technology, Culture, and Marketing. Montréal; London: McGill-Queen’s University Press, 2003.

Koster, Raph. A Theory of Fun for Game Design. Scottsdale: Paraglyph Press, 2005.

Ludica, Tracy Fullerton, Jacquelyn Ford Morie, and Celia Pearce. “A Game of One’s Own: Towards a New Gendered Poetics of Digital Space.” In Digital Arts and Culture, 2004.

Malaby, Thomas M. Gambling Life: Dealing in Contingency in a Greek City. Urbana: University of Illinois Press, 2003.

———. “Beyond Play: A New Approach to Games.” Games and Culture 2, no. 2 (2007): 95-113.

Malaby, Thomas M., and Timothy Burke. “The Short and Happy Life of Interdisciplinarity in Game Studies.” Games and Culture 4, no. 4 (2009): 323-30.

Mangiron, Carmen. “Video Games Localisation: Posing New Challenges to the Translator.” Perspectives: Studies in Translatology 14, no. 4 (2006): 306-23.

Manovich, Lev. The Language of New Media. Cambridge: MIT Press, 2001.

Marcus, George E. Ethnography through Thick and Thin. Princeton, N.J.: Princeton University Press, 1998.

Montfort, Nick, and Ian Bogost. Racing the Beam: The Atari Video Computer System. Cambridge, Mass.: MIT Press, 2009.

Murray, Janet Horowitz. Hamlet on the Holodeck: The Future of Narrative in Cyberspace. Cambridge, Mass.: MIT Press, 1998.

Nakamura, Lisa. Cybertypes: Race, Ethnicity, and Identity on the Internet. New York: Routledge, 2002.

———. “Don’t Hate the Player, Hate the Game: The Racialization of Labor in World of Warcraft.” Critical Studies in Media Communication 26, no. 2 (2009): 128-44.

Nardi, Bonnie A., and Justin Harris. “Strangers and Friends: Collaborative Play in World of Warcraft.” In Conference on Computer Supported Cooperative Work. Banff, Alberta, Canada, 2006.

Nardi, Bonnie A., Stella Ly, and Justin Harris. “Learning Conversations in World of Warcraft.” In 40th Hawaii International Conference on System Sciences, 2007.

Niedenthal, Simon. “What We Talk About When We Talk About Game Aesthetics.” In DiGRA 2009: Breaking New Ground: Innovation in Games, Play, Practice and Theory, 2009.

Nielsen. “State of the Media: U.S. TV Trends by Ethnicity.” 2010. http://blog.nielsen.com/nielsenwire/consumer/who-watches-what-and-how-much-u-s-tv-trends-by-ethnicity/

Pearce, Celia, and Artemesia. Communities of Play: Emergent Cultures in Multiplayer Games and Virtual Worlds. Cambridge: MIT Press, 2009.

Poole, Steven. Trigger Happy: Videogames and the Entertainment Revolution. 1st U.S. ed. New York: Arcade Pub., 2000.

Pym, Anthony. “Redefining Translation Competence in an Electronic Age: In Defence of a Minimalist Approach.” Meta 48, no. 4 (2003): 481-97.

Raessens, Joost, and Jeffrey H. Goldstein. Handbook of Computer Game Studies. Cambridge, Mass.: MIT Press, 2005.

Reverend_Danger. “The Top 10 Deaths Caused By Video Games.” Spike.com. Posted: February 24, 2009. Accessed: April 7, 2011. http://www.spike.com/articles/id98jf/the-top-10-deaths-caused-by-video-games

Richard, Birgit, and Jutta Zaremba. “Gaming with Grrls: Looking for Sheroes in Computer Games.” In Handbook of Computer Game Studies, edited by Joost Raessens and Jeffrey H. Goldstein. Cambridge: MIT Press, 2005.

Salen, Katie, and Eric Zimmerman. The Game Design Reader: A Rules of Play Anthology. Cambridge: MIT Press, 2006.

Schleiermacher, Friedrich. “On the Different Methods of Translating.” In The Translation Studies Reader, edited by Lawrence Venuti. New York: Routledge, 2004 [1823].

ScienceDaily. “American Psychiatric Association Considers ‘Video Game Addiction.’” ScienceDaily. Posted: June 26, 2007. Accessed: April 7, 2011. http://www.sciencedaily.com/releases/2007/06/070625133354.htm

Sherry, John L. “The Effects of Violent Video Games on Aggression: A Meta-Analysis.” Human Communication Research 27, no. 3 (2001): 409-31.

Simon, Bart. “Geek Chic: Machine Aesthetics, Digital Gaming, and the Cultural Politics of the Case Mod.” Games and Culture 2, no. 3 (2007): 175-93.

Singer, Dorothy G., and Jerome L. Singer. Imagination and Play in the Electronic Age. Cambridge, Mass.: Harvard University Press, 2005.

Sotamaa, Olli. “When the Game Is Not Enough: Motivations and Practices among Computer Game Modding Culture.” Games and Culture 5, no. 3 (2010): 239-55.

Spence, Ian, and Jing Feng. “Video Games and Spatial Cognition.” Review of General Psychology 14, no. 2 (2010): 92-104.

Stettler, Nicolas, Theo M. Signer, and Paolo M. Suter. “Electronic Game and Environmental Factors Associated with Childhood Obesity in Switzerland.” Obesity Research 12 (2004): 896-903.

Sutton-Smith, Brian. The Ambiguity of Play. Cambridge: Harvard University Press, 1997.

Taylor, T. L. Play between Worlds: Exploring Online Game Culture. Cambridge: MIT Press, 2006.

Tran, Mark. “Girl Starved to Death While Parents Raised Virtual Girl in Online Game.” Guardian.co.uk. Posted: March 5, 2010. Accessed: April 7, 2011. http://www.guardian.co.uk/world/2010/mar/05/korean-girl-starved-online-game

Wardrip-Fruin, Noah, and Pat Harrigan. First Person: New Media as Story, Performance, and Game. Cambridge: MIT Press, 2004.

———. Second Person: Role-Playing and Story in Games and Playable Media. Cambridge: MIT Press, 2007.

Wolf, Mark J. P., and Bernard Perron. The Video Game Theory Reader. New York; London: Routledge, 2003.


Art, Films, and Games:

Afkar Media. Under Ash. Dar al-Fikr. 2001.

Arcangel, Cory. Super Mario Clouds. 2002. http://www.coryarcangel.com/things-i-made/supermarioclouds/

Badham, John. “WarGames.” MGM/UA. 1983.

Blizzard Entertainment. Starcraft. Blizzard Entertainment. 1998.

———. World of Warcraft. Blizzard Entertainment. 2004.

Bogost, Ian. Cow Clicker. 2010.

———. Guru Meditation. 2009.

Brathwaite, Brenda. Train. 2009.

Brown, Heath (writer), et al. Diary of a Camper. United Rangers Films. 1996.

Condon, Brody. Adam Killer. 1999. http://www.tmpspace.com/ak_1.html

Curtis, Pavel. LambdaMOO. 1990.

DeLappe, Joseph. dead-in-iraq. 2006. http://www.unr.edu/art/delappe/gaming/dead_in_iraq/dead_in_iraq%20jpegs.html

———. The Salt Satyagraha Online. 2008. http://www.unr.edu/art/DELAPPE/Gaming/Salt_March_Second_Life/Salt_March_Second_Life_%20JPEGS.html and http://saltmarchsecondlife.wordpress.com/

Electronic Arts. Ultima Online. Electronic Arts. 1997.

Gault, Teri. The Grocery Game. http://www.thegrocerygame.com/

Hezbollah Central Internet Bureau. Special Force. Hezbollah Central Internet Bureau. 2003.

Hughes, Jake. Anachronox: The Movie. 2002. http://www.archive.org/details/JakeHughesAnachronoxTheMovie

id Software. Quake. id Software. 1996.

Ion Storm. Anachronox. Eidos Interactive. 2001.

Linden Lab. Second Life. Linden Lab. 2003.

Molleindustria. The McDonald’s Videogame. 2006.

Nintendo. Super Mario Bros. Nintendo. 1985.

Powerful Robot Games. September 12th: A Toy World. Newsgaming.com. 2003. http://www.newsgaming.com/games/index12.htm

Rohrer, Jason. Passage. 2007.

Schleiner, Anne-Marie, Joan Leandre, and Brody Condon. Velvet-Strike. 2002. http://www.opensorcery.net/velvet-strike/

Sega. Yakuza 3. Sega. 2010 [2009].

Sonic Team. Sonic Adventure 2 Battle. Sega. 2001.

Sony Online Entertainment. EverQuest. Sony Online Entertainment. 1999.

SquareSoft. Final Fantasy XI. SquareSoft. 2002.

Stern, Eddo, Peter Brinson, Brody Condon, Michael Wilson, Mark Allen, and Jessica Hutchins. Waco Resurrection. 2004. http://www.eddostern.com/waco_resurrection.html

Tale of Tales. The Path. Tale of Tales. 2010.

ThatGameCompany. flOw. ThatGameCompany. 2006.

———. Flower. SCEA. 2009.

U.S. Army. America’s Army. U.S. Army. 2002.

Valve Software. Half-Life. Sierra Entertainment. 1998.

Zynga. Farmville. Zynga. 2009.

[1] MMORPG is a common acronym for Massively Multiplayer Online Role-Playing Game. A similar but more general acronym that I use in this paper is MMO (Massively Multiplayer Online), which can have either a game-like (World of Warcraft) or world-like (Second Life) atmosphere. There is also the predecessor term MUD (multi-user dungeon, or multi-user domain) and its successor term MOO (MUD, Object-Oriented), which stresses the more advanced ‘object-oriented’ coding of certain MUDs.

[2] This is similar to Norbert Elias and Eric Dunning’s (1986) work on leisure. They write that it is only in the “spare-time spectrum” that play takes on a value oppositional to work. They are, however, coming from a very different disciplinary perspective than Sutton-Smith.

[3] We might also link play to education and to Piaget’s stages of childhood play, in which children necessarily progress through certain stages regardless of culture.

[4] Code is unimportant for analog games including board games and outdoor play, but it is key for digital games, which are the focus of my own work.

[5] Manovich’s other three principles are numerical representation, automation, and transcoding.

[6] Galloway draws this from critical theory, particularly the work of Fredric Jameson, and from Italian neo-realist cinema such as Vittorio De Sica’s Ladri di biciclette (1948).

[7] While there is a history of fan modifications (Sotamaa 2010), I am focusing on ‘art’ mods.

[8] Rooster Teeth Productions. Red vs. Blue. http://redvsblue.com/home.php

[9] Oxhorn. “ROFLMAO!” Uploaded: April 1, 2007. http://www.youtube.com/watch?v=iEWgs6YQR9A&feature=related

[10] Andrew Gardikis. “Super Mario Bros.” Speed Demos Archive. http://speeddemosarchive.com/Mario1.html

[11] What Sharp implies, but does not fully state, is that other contextual understandings of art, or externally imposed understandings of art, share the same spatial problem. Japanese ukiyo-e were popular prints that became art upon Western instigation; ritual objects lost their purpose when placed in museums; happenings, Fluxus events, and Dada poetry cannot reside within a museum’s permanent collection.

[12] It is possible that there are other things that games can do, or other styles/movements that games will be a part of, but so far games are either pure entertainment or proceduralist.

[13] This is not to indicate that ‘art’ is not commercial. The ties between art and commerce are inextricable. However, visual art at present has been naturalized such that ‘good’ is ‘expensive.’ Video games at present have been naturalized as ‘entertaining,’ but entertainment value is quite subjective (the latest FPS blockbuster is entertaining only to an audience that appreciates the genre). This is quite unlike the supposedly objective, monetarily equated ‘art value.’

[14] Despite the existence of numerous other possible effects, violence and addiction effects research dominates the field. In part, this is due to political and media visibility, and how this visibility funnels funding into further research on violence and addiction.

[15] The history of moral panic extends to novels, plays, poetry and pretty much every other medium; people like to place blame.

[16] The long history linking technology and games with military research facilities is one reason this subject population exists; however, I do not delve too deeply into it.

[17] Ferguson adapts his moral panic wheel from David Gauntlett’s diagram entitled “The spiral of panic about screen violence” (Gauntlett 2005, 127).

[18] There are various studies of games and online games between Dibbell’s earlier work and Taylor’s later work. An earlier example is Richard Bartle’s “Hearts, Clubs, Diamonds, Spades: Players Who Suit MUDs” (1996), in which he creates a typology of player types for online, multi-user domain/dungeon environments/games. He argues that there are four player types (achiever, explorer, killer, and socializer), and that a multi-user game must be designed for optimum balance (not equality) among the four types. What differentiates Taylor’s study from Bartle’s and others’ studies is its focus on player culture and the porousness between worlds instead of on the players themselves and the game as a set environment.

[19] Matthew Payne. “Selling Ludic War: Marketing Military Realism in Call of Duty 4: Modern Warfare.” UCSD Department of Communication Job Talk. February 2, 2011.

[20] It should be noted that Julian Dibbell’s use of ‘ludocapitalism’ (2006) predates Payne’s usage of ‘ludic capitalism.’

[21] This ontological status changes depending on the game: World of Warcraft bans gold farmers, Ultima Online turns a blind eye, and certain games (like Farmville) incorporate buying in-game currency into the design. This was mentioned above regarding ‘freemium’ models.

[22] While race and gender should be split to highlight the particulars of how they function differently in games, they are typically treated as simple representation in the discourse and field. I leave them combined for the sake of space in this paper, but they need to be separated.

[23] Ludica. “Ludica Mission.” http://www.ludica.org.uk/Mission.htm

[24] The first is shown in the effects research above, the second is from a cancelled game called Six Days in Fallujah, and the third is in various games including Smuggle Truck: Operation Immigration (released in early May 2011 as Snuggle Truck due to the media backlash) and Call of Juarez: The Cartel.

[25] ‘Domestication’ is a style of translation in which the translator takes the foreign text and transforms it into the idiom of the domestic audience. Friedrich Schleiermacher (2004 [1823]) coined the term domestication in opposition to ‘foreignization’—where the text stays in its foreign idiom and the domestic audience is forced to work harder to understand the original text and audience—as the principal choice facing the translator.

Toward a Multi-Layered Digital Translation Methodology (Qualifying Paper #1)

In this paper, I approach new ways of translating digital media texts— from digital books, to software applications, but particularly my own focus on video games —by mixing traditional translation theory and new media theory. There are similarities between these two fields, but they do not refer to each other. Translation theory rarely looks to films and television, let alone websites, software and games; new media theory fetishizes the ‘new’ and rarely considers that it’s all been done before.[1] I cross the fields because there are mutual benefits to be had by doing so: translation can get new material practices; new media can get more history. I also cross the fields because that is what I see as the work of Communication. Finally, I cross the fields because my own work on video game translation can emerge from their crossing.

Introduction: ‘From Translation to Traduction’ to Localization


While I have already started this paper with confusion (complexity and fusing togetherness), the word ‘translation’ itself has a confused (or perhaps, defused) past. As Antoine Berman notes, it is only in the modern period (post-1500) that the word (renamed ‘traduction’ in the Romance languages, though not in English) has taken on its present meaning.[2] Previously, the word (‘translation’) had an unstable meaning because writing itself was never considered the originary act of an author. Instead, all writing, from musing, to marginal notation, to transcription, to commentary, to linguistic alteration, was considered translation. We are in the process of discursively moving back to this earlier understanding of the word.

The earlier understanding, ‘translation,’ comes from the Latin translatio, which can include the transportation of objects or people between places, transfer of jurisdiction, idea transfer, and linguistic alteration.[3] As Berman stresses, the premodern understanding of translation is as an “anonymous vectorial movement.”[4] In contrast, the post-1500 term, ‘traduction,’ signifies the “active energy that superintends this transport – precisely because the term etymologically reaches back to ductio and ducere. Traduction is an activity governed by an agent.”[5] For Modernity and its lauded author, this “active movement” through a subjective traducer makes sense, as it distances the iterations by emphasizing a particular hierarchy of original over derivative. However, in a Postmodern culture, where global flows and exchanges have moved well away from the author function and the primacy of the work, it is helpful to understand the elements of translation that were lost “vectors” in the move to traduction.[6]

For Romance languages, where ‘translation’ became ‘traduction,’ certain formal and temporal vectors have been lost and taken up by other concepts such as adaptation, repetition, convergence, and intertextuality. While all of these terms have their particulars, intertextuality is a useful example due to its link with postmodernity and the move away from grand theories.[7] With postmodern intertextuality there is no singularity of a work. Rather, everything is a text built of borrowed themes, images, and sections. Intertextuality follows the formal vector of transformation, which has left translation, but it does not consider power and difference. In the early 21st century United States context, both power and difference are increasingly important and yet elided.

Some vectors were never actually lost in English, as it never switched over to the word traduction. As Berman notes, “English does not ‘traduce,’ it ‘translates,’ that is, it sets into motion the circulation of ‘contents’ which are, by their very nature, translinguistic.”[8] As the problematically designated world language, English sets itself up as a translinguistic universal, but it does so in opposition to a host of other languages that have switched over to thinking about translation as the necessary and active linguistic alteration that moves a text from one place to another. Similarly, while there is an underlying energy that fuels the translational movement of a modern video game over space, there is a simultaneous understanding that nothing the game translator does can change the game, as they are not changing the play level. Just like English, play is translinguistic and universal. Current forms of game translation, then, have retained a link to some of the anonymous vectors of translation.

I define translation as the ‘carrying over’ of a text from one context to another, where context can be understood as spatial, formal, or temporal. This broad definition begins to reclaim previously lost vectors, particularly a criticality necessary for the analysis of video games, which are currently exempt as they reside in an area of pure entertainment. This broad definition allows me to consider other forms of textual manipulation, including video game localization—the process of translating games for new cultural contexts, which includes linguistic, audio, visual, and ludic [play/action] alterations—that has theoretically and practically separated itself from simultaneous interpretation and literary translation. By doing so I wish to force open the definition to include what is already happening, localization, where much of the text is changed for the purpose of a “better” user experience. However, this move also opens a space for what might happen, such as new forms of translation that use unofficial production to destabilize the meaning of the text by building it up.

I link traditional foci of literary translation theory with some of Jacques Derrida’s theories of deconstruction (particularly of ‘trace,’ ‘living-on,’ and ‘relevance’), and Jay David Bolter and Richard Grusin’s concept of remediation, in order to reconnect ‘translation’ with its (not quite) lost vectors.[9] I begin with the standard tropes of translation theory — sense and word, source and target, domestication and foreignization — as they do well to show the different possibilities at play in translation. However, discipline-bound theories are never complete, as they ignore extradisciplinary connections. One such connection is remediation. While the concept comes from a literary origin, remediation exists between literary and new media theories; I believe it can help to combine translation in the two areas and help move understandings of translation toward new alternatives.

I argue that current practices of translation focus on only one side of the literary theories, thereby turning them into mutually exclusive binaries (sense or word, foreignization or domestication, immediacy or hypermediacy). However, Bolter and Grusin show that remediation is not a binary between hypermediacy and immediacy; rather, remediation utilizes both sides of the equation. Essential to new media is the simultaneous existence of both hypermediacy and immediacy. Current translations espouse only one of these sides, and ignore the benefits of the other. Translation can learn from this simultaneity in new media theory. This paper argues through to a material instantiation of new media translation that takes into consideration both sides of these pairings.

In the second section I show how the dominant practice of translation at present utilizes a domesticating, immediate strategy that overwrites (and thereby renders falsely singular) texts, whether they are literary, filmic, or ludic. In contrast, I argue that a foreignizing, hypermediate strategy that layers texts, which has always existed despite its current lack of presence, can facilitate an alternate, much needed ethics of both translation and cultural interaction. I am not arguing for a simplistic multiculturalism where difference can be subsumed under mere celebration, but for a difficult, abusive, and often painful form of interaction with difference that can reveal the actual ways in which culture functions. As Derrida argues, there is violence and pain that comes with eating the other, but there is also a necessity to eat. One must thus eat [ethically] (bien manger).[10] The same holds for translating.


Tenets of Translation

In the following sections I will review the key principles that have been the focus of translators throughout Western translation history. These examples are primarily from a European/English perspective, although I try to use alternative examples where available, applicable, and known. I will begin with the impossibility of a perfect translation. Second, I will elaborate on the ways of escaping this core dilemma, beginning with the argument between sense-for-sense and word-for-word, and ending with the concept of equivalence. Third, I will review the opposing tendencies of domestication and foreignization as an alternate focus on the author and user instead of equivalence’s focus on the text itself. Finally, I will bring up remediation as a concept that helps bridge literary translation with new media and video game translation and transformation. By linking translation with remediation I can, in the latter half of the paper, re-approach Berman’s ‘lost vectors’ of translation, recombine translation and localization, and point out alternate possibilities that are currently unconsidered due to the discursive dominance of fluent translations.


(Im)possibility of Translation

In an almost fetishistic move, translation is known for its parts in lieu of its whole. The whole in this case is a holistic notion of perfect translation that completely reproduces a text in a secondary context. As George Steiner notes:

A ‘perfect’ act of translation would be one of total synonymity. It would presume an interpretation so precisely exhaustive as to leave no single unit in the source text —phonetic, grammatical, semantic, contextual — out of complete account, and yet so calibrated as to have added nothing in the way of paraphrase, explication or variant.[11]

Steiner rightly notes such a task is impossible for both an original interpretation and a translational restatement. In fact, the sole example ever given of a perfect translation is the mythical Biblical Septuagint translation, where 72 individually cloistered translators made 72 simultaneous translations of the Torah from old Hebrew to Greek over 72 days. As the story goes, their translations were exactly the same, indicating divine intervention. However, if one considers the logic of the translation, it was the absence of any particular tenet, or focus, that enabled the translation to be considered perfect. God’s weight on one tenet or another was imperceptible; it is the absence of any particular emphasis that marks the example as perfect. It is the unmarked translation that can be considered perfect, but this does not help with real translations. The practical lesson from the Septuagint is thus that perfect translation is impossible.

The impossibility of a perfect translation has forced all practical translation to focus on certain elements. These elements—sense, rhythm, original meaning, feel, length, and experience—are routinely marked as essential and elevated to primacy. The elements that are considered non-essential are then justifiably negated. One is hard pressed to find some moment, including the present, where this fetishization of certain tenets does not happen.

In contrast to such a partial focus, I hope to encourage a use of materiality that can lead to a fragmented, built translation: imperfect and incomplete, but hopefully offering a partial picture of what could be. Such a postmodern translation is hardly ‘perfect,’ but unlike other forms of translation it does not assume the justifiable negligibility of unconsidered elements.

I argue that digital new media in particular can enable this form of translation. However, this new method is anything but new, just as new media is anything but new. Rather, it borrows from, and builds upon, both Jacques Derrida’s and Walter Benjamin’s theorizations of translation. Derrida, in strict opposition to the dream of perfect translation and meaning, argues for the slippery sliding of signifiers as a way to point back, but never get back, to an originary moment, text, or meaning. In contrast, Benjamin understands the failures of translation as a necessary part of the dream of messianic return in that they build up to perfection. These two provide theoretical groundwork for what can be made possible by the impossibility of translation.

Derrida’s concept of deconstruction is based in Ferdinand de Saussure’s semiotics taken up to postmodern instability instead of the Formalist dream of an ultimately stable meaning. In the Course in General Linguistics Saussure argues that the linguistic sign is arbitrary in that there is no natural relationship between signifier and signified;[12] it is both variable and invariable in that it changes, but nobody controls the change;[13] it exists as a system (la langue) and individual instances (parole), and this duality makes it both synchronic in its permanence related to langue and diachronic in its relation to parole.[14] As Jonathan Culler argues, what is interesting in Saussure’s linguistics is the relational nature of signs, and therefore how “[l]anguage is a network of traces, with signifiers spilling over into one another.”[15] Words do not equal each other. Rather, they stand in positions of relationality that depend on time and space.

While Saussure focused on both the synchronic and the diachronic, stable and unstable, system and individual, ways that language exists, the Russian Formalists after him dreamed of a study of stable signs, a Science. Formalists such as Shklovsky and Jakobson (against which Mikhail Bakhtin later wrote) dreamed of an ultimate equality between signified and signifier, of a way that language made Scientific sense. This impetus toward stability and reason drives a great deal of language usage, and it informs practical translation. However, Derrida takes the instability of language, the ‘traces’ that Culler mentions, and runs with it.[16] There is no formal structure to language, there is no deep structure, there is simply the sliding of signifieds on signifiers as words change meaning over time and between utterances. Derrida represents this by the trace, the word under erasure (‘sous rature’). The word is unstable, but this does not indicate that it is free; rather, the word is loaded down with all of the past meanings, the traces of history (whether we recognize those past meanings or not). For Derrida, like with Saussure, meaning can never be pinned down, which means that words are never singular and always slide back along different signifiers; however, for Derrida, this instability means that a translation is twice as meaningful as the original text itself. It is an added sense above; it is an after erasure, a meaning after the original. In light of such polysemy, translation ultimately does something different than simply move a text between form, time and space: it helps the text “live on.”

In “Des Tours de Babel” Derrida argues that the proper name (Babel, but all names) is the ultimate example of translation’s impossibility. Coming from the Biblical story, Babel is the tower; it is ‘chaos’ (the multiplicity of tongues); but it is also God, the Father.[17] Names remain as they are in translations; they are untranslatable. This is all the more the case with God’s name, and with the tower itself, both of which cannot be translated/written/completed. Ultimately, Derrida argues that translation is the ‘survie,’ the ‘living on’ and ‘afterlife’ of the original text through the translation, but not of the dead, original author, whose sole means of immortality is through ever-transforming literary texts.[18] As he summarizes in his discussion of a ‘relevant’ [meaningful and raising] translation of The Merchant of Venice’s Shylock:

It would thus guarantee the survival of the body of the original… Isn’t that what a translation does? Doesn’t it guarantee these two survivals by losing the flesh during a process of conversion [change]? By elevating the signifier to its meaning or value, all the while preserving the mournful and debt-laden memory of the singular body, the first body, the unique body that the translation thus elevates, preserves, and negates [relève]?[19]

Translation allows a text/body/father, to live on, to survive, but in so doing the original is necessarily changed.

The lesson from Derrida in regard to translation is that it is impossible. This much is obvious. However, impossibility does not mean that it should not be done. Translation is a necessary act despite its flaws: a text would not ‘live on’ without translation, just as we cannot ‘live on’ without eating, consuming, translating the other into sustenance.[20] We can learn two things from Derrida: the first is that deconstruction is about the psychoanalytic working through of trauma, the historical weight embedded in the word due to the impossible overload of meanings. The second, the lesson that I take, is that the failure of translation must be flaunted, highlighted. The Derridian methodology (not deconstruction per se, but the productive theory we may take from deconstruction) is about showing how language and texts have multiple meanings and in fact can never be pinned down to any single meaning. Translation, just like language and original texts, must show this built-in instability. As all language is sliding along unstable signifiers, and all texts float along the backs of others, translation too must show its layeredness, its historicity. However, the instability is not flexibility and freedom, but a painfully historical burden (a ‘haunting,’ even[21]), and Derrida shows this uncomfortable instability by writing with asides, marginal notes, and what Philip Lewis has called abusive translation.[22] Because this abusive, Derridian style of translation is painful and difficult to read, it is not often considered useful to translation practice, which focuses on clarity, consumption, and entertainment.[23] However, the build-up of meaning through layering is a key method for bringing together the various modes of translation, one that I will return to throughout this paper.

Like Derrida, Benjamin argues that perfect translation is impossible, but he does so toward a completely different end. In “The Task of the Translator,” Benjamin argues that the ‘Aufgabe’ [task, giving up, failure] of the translator is impossible, but such failures add up to something more. A translation must not reproduce the original; rather, it must be combined with the original to approach something greater. His master metaphor is of an amphora, representing language, which has been shattered into innumerable pieces:

[A] translation, instead of resembling the meaning of the original, must lovingly and in detail incorporate the original’s mode of signification, thus making both the original and the translation recognizable as fragments of a greater language, just as fragments are part of a vessel.[25]

The amphora is language, and in order to reassemble it, individual, failed translations (and the original) must be undertaken fragment by fragment, piecing the ‘reine Sprache’ [pure language] together. Finally, translations are not necessarily possible in any given time; there is a timeliness, or “translatability,” that allows or prevents certain translations.[26] For Benjamin, no translation is necessarily possible and no translation does everything, but translations must be undertaken for both Messianic (facilitating the return to a pure language) and logistic (enabling the spread of ideas and texts) reasons.

Individual translations do not do everything, but as particular translations in particular contexts they give a glimpse of the pure language. From Benjamin I take the notion of seeing something more even if the singular is not perfect, and I take the idea that particular translations are better in particular contexts. Both of these oppose the idea of a singular, perfect translation, which, like Derrida’s insistence of abuse, is little desired by practitioners of popular translation. However, it is something that has great importance in a world where the difference between believing in a perfect translation and understanding the problems of translation can be the difference between fun and boredom, but also between death and life.[27]

While I do not believe in a Messianic return of an Adamic language, I do agree with Benjamin’s insistence on the unequal benefit of different translations. Certain languages at certain times translate better than others due to contextual issues. This is not to say that translation at any given point is fundamentally impossible, but rather that translations are unequal. While Benjamin might hold that this renders useless certain translations at certain times, I believe that it is possible to use the materiality of new media to combine Derrida’s abusive slipperiness of language with Benjamin’s build-up of languages to create a more complete translation. Such a new form is where this paper will ultimately conclude.


Word, Sense, and Equivalence(s)

While Benjamin, Derrida, and a large number of other theoreticians of translation confront (and embrace) the impossibility of translation, practitioners of translation routinely deny the impossibility by necessity. Translation must (and does) happen, so instead of a holistic notion of perfection, individual elements are highlighted. Historically, the two primary tenets of translation have been the oppositional mandates of translating word-for-word and translating sense-for-sense. However, theorists in the 20th century expanded the either/or of word vs. sense to include a host of other correspondences and equivalences. In the following section I will go over these different forms of practical translation, but I will conclude by pointing out that at issue with all of them is that they naturalize a single element, which forecloses any other options.

The oppositional mandate between word and sense has been a major focus in Western translation since the Greeks, in part because of the importance of the Bible in Western translatology. The conundrum posed within the oppositional mandate is simple: does the translator translate the words in front of him/her [word-for-word], or the meaning of those words as a larger whole [sense-for-sense]? However, because this debate has been contextualized historically within the realm of Bible translations, it has never been a simple question between sense and word, but between worldly sense and divine word.[28]

The ‘first’ Bible translation was the previously discussed Septuagint translation from Hebrew to Greek, which was done ‘by the hand of God,’ but manifested through the separate acts of 72 individual translators. In this instance, the translators create what is known thereafter as a perfect translation. The words are God’s words and can neither be altered nor denied. It is the perfect translation as there was unified meaning between original and translation in word and sense. Such claims for perfect word-for-word and sense-for-sense translation are quite problematic, but they go unquestioned until St. Jerome translates the Bible again, this time into Latin. The problem (or so it is claimed) is that he refers back to the old, pre-Septuagint Hebrew version of the Torah, and in so doing denies the primacy of God’s perfectly translated words. How can the Greek version be perfect, with all of the sense of the original in the new words, if Jerome must go back to the Hebrew?

While St. Jerome argues for sense-for-sense translation, he does so in an interesting bind, having translated while referring back to the older version and highlighting the importance of particular words. He thus pays very close attention to word-for-word ideals, noting the importance of word order to Scripture’s mysteries, but ultimately argues, “in Scripture one must consider not the words, but the sense.”[29]

Word-for-word translation schemes never work, as there are never equivalent words. To show how this works I’ll take the word ‘wine’ between English and Japanese: wine is not blood; wine is not saké; saké is definitely not blood. Wine rhymes with dine and whine, but it is also either white or red and can be related to both debauchery and blood, and even metonymically to Christ’s blood. Of course, wine is the fermented liquid from grapes, but also just the general fermentation process itself so that “rice wine” is fermented rice starch, and “plum wine” is fermented plum liquid, but “grape wine” would be considered redundant. On the other side, saké, the Japanese word from which “rice wine” is often translated, stands as a general word for all alcohol, but nihonshu, or Japanese alcohol, which is the more explanative Japanese word for saké, is unused in English. Finally, there is no link between saké and blood in color, rhyme, or any other mode of meaning. If one single word can cause this (and more) trouble it should come as no surprise that a word-for-word translational scheme must fail.

From Jerome through to the modern period there is a fixation upon sense-for-sense translation, and by the time of John Dryden sense-for-sense translation (except when dealing with mysteries of the divine word) is cemented. While metaphrase, word-for-word translation, is one of Dryden’s three types of translation, it is reserved for extreme cases. The main debate is between paraphrase, sense-for-sense translation with fidelity to the author, and imitation, a type of adaptation that partially betrays the original author.[30] Imitation marks the point where translation diverges toward adaptation, what I note as a carrying over of form where the translator hints at the style, form or sense of an author, but not the content. Between Dryden and the present this form has completely diverged into adaptation and intertextuality, which are considered entirely separate from translation. This is the final splitting point between translation’s original vectors and traduction’s linguistic and authorial focus in the modern period. Paraphrase, in contrast, is the most general concept of sense-for-sense translation: what the author said in one language, said in another.

Paraphrase translation has enjoyed the primary role in translation from the time of Dryden to the present, and has only faced significant opposition during the 20th century from semiotics, formalism, and postmodern ideas of language. All three of these provided different oppositions, but all significantly affected the word/sense divide.

While Jakobson is mainly known within translation studies for his three types of translation (intralingual, interlingual, and intersemiotic), as a formalist he can be understood as one looking at the formal qualities of language, and therefore at what happens to those essential elements in the process of translation within and between languages and forms. Starting from a semiotic understanding of language where “the meaning of any linguistic sign is its translation into some further, alternative sign,” Jakobson argues that there is never complete synonymy as “synonymy, as a rule, is not complete equivalence.”[31] A translation, regardless of word or sense, cannot fully encapsulate the source text. As Jakobson claims, “only creative transposition is possible,” where this creative transposition focuses on something but loses some other specificity.[32] While Derrida and Benjamin represent two possibilities arising from this failure of translation, a more common response is to focus on the creative transposition of one particular element of the text while ignoring the rest. This is most visible in Nida’s ideas of correspondence in Bible translation, Popovič’s four equivalences in literary translation, and finally the current style of game localization.

Eugene Nida is best known for his principles of correspondence, formal and dynamic (or functional) equivalence, which he has primarily enacted in Bible translation. As a translator closely linked to the American Bible Society, most of Nida’s work is also tied to principles of missionary work and the spread of Christianity through rendering the Bible understandable and close to a target audience. His two poles of translational equivalence, formal and dynamic/functional, are quite similar to Dryden’s metaphrase and paraphrase. Formal equivalence focuses on fidelity to the source text’s grammar and formal structure; dynamic equivalence, in contrast, seeks to make the text more readable to a target audience by adapting it to a target context. Nida’s scale of equivalence resembles both the word and sense debate and the domestication and foreignization debate, which I will elaborate below. What matters for the current discussion, however, is that he uses the idea of equivalence in the singular and deliberately notes that one must sacrifice one side or the other.

In a slightly more expanded sense, Anton Popovič writes of four types of equivalence within a text: Linguistic, Paradigmatic, Stylistic (Translational) and Textual (Syntagmatic).[33] The first, linguistic equivalence, is the goal of replacing a word in the source language with an equivalent word in the target language; it differs from the word and sense debate in that it simply indicates that the translator must pay attention to the phonetic, morphological and syntactic level of the text, which is to say the words as written. The following three expand on the idea of equivalence in that a translation may focus on the grammar, the style, or the expressive feeling of the text.

Popovič’s focus is on a very literary understanding of the text. These four methods are for understanding the formal qualities of the written word, and therefore how to translate literary texts. Obviously, these four equivalences do not cover the entire realm of human experience. Other media involve different essential qualities, which have been the focus of those types of translation.

While any medium can offer an example of a different essence, I draw from my own focus on game translation. Game translation highlights experience. Games, as mass produced commodities, are considered interactive entertainment, and the core of the game is the active, fun experience.[34] In light of this gaming essence, the equivalence sought by game translators is the experience of the player in the source culture. As Minako O’Hagan and Carmen Mangiron, two of the few theorists on game translation write:

[T]he skopos of game localization is to produce a target version that keeps the ‘look and feel’ of the original… the feeling of the original ‘gameplay experience’ needs to be preserved in the localized version so that all players share the same enjoyment regardless of their language of choice.[35]

Because the optimal experience when playing a game is entertainment, a good game translation is one that entertains and nothing more.

While Popovič believes there is an “invariant core [meaning]”[36] that remains regardless of any translational variations, one may translate with the goal of rendering equivalent only one of the elements, and in so doing the other three are sacrificed. Such a sacrifice works directly off of the understanding that perfect translation is impossible. Choosing one equivalence over another does not, in itself, elevate it in importance over the others. However, in the practical integration of translation and reception only one rendering of one equivalence is ever seen, and it is thereby retrospectively elevated to the true equivalence. The equivalence highlighted becomes the essence of the text, despite being only one of many, and any other types of translation that highlight other elements of the text are rendered useless. In the case of video games the fetishistic focus on the experience of the player renders invisible and invalid all other levels of the game. As a result, games become pure entertainment and all artistic, political, or cultural levels are ignored.

A text does not have a single essence; it has many different sites of differing importance to different people. The author might intend to highlight one thing; the reading takes another; one cultural context focuses on one element, while another focuses elsewhere. While the essence of a text spreads to innumerable sites (rhyme, look, site, context, etc.), equivalence seeks to focus on one and sacrifices the rest. This sacrifice is naturalized, and the equivalent element is constructed (after the fact) as the ultimate/important thing to be translated. As Lawrence Venuti notes regarding Jerome’s Bible translation, “Jerome’s examples from the gospels include renderings of the Old Testament that do not merely express the ‘sense’ but rather fix it by imposing a Christian interpretation.”[37] Translation does not just move a text from one language, time or place to another; rather, it imposes particular meanings on that text and, through the text, on both the source and target cultures. Translational regimes and translations themselves exist within a political world. Translation is inseparable from power.


Domestication and Foreignization

While equivalence flows logically from the debates of sense-for-sense vs. word-for-word it also comes from the other primary concern in translation, which is between domestication and foreignization.

In an attempt to move beyond the debate between paraphrase (sense) and imitation (adaptation),[38] Friedrich Schleiermacher argued that there were two main ways of translating: either the translator makes the text in the style of the foreign original and forces the reader to move toward that source text and context [foreignization, or Source Text orientation (ST)], or the translator relocates the text into the target culture, pushing the text into the local context and making it easier for a reader to understand [domestication, or Target Text orientation (TT)].[39] Schleiermacher argued that the debate between sense and word was defunct as both fail to bring together the writer and reader. Instead, he contended that the translator needed to decide between foreignization and domestication, as the act of translation was necessarily related not to texts, but to cultures.

Schleiermacher argues that different types of translation are necessary to provoke different reactions in different audiences. Imitation and paraphrase must come first to prepare readers for the higher phases of true translational style: foreignization and domestication. He then argues that writers would be different people were they to write in, or be positioned as if writing in, foreign languages, as domestication claims to do, and that such a repositioning would strip the best elements from the writers.[40] Thus, his argument ultimately supports foreignizing translation.

Antoine Berman understands Schleiermacher’s call for foreignization as a particular moment where an ethics of translation is visible. This ethics relates to the formation of a German language and culture. To Berman, domestication denies the importance of a mother tongue itself, and foreignization has the possibility that the mother tongue is “broadened, fertilized, transformed by the ‘foreign.’”[41] However, he also notes there are extreme risks to such nation building:

inauthentic translation [domestication] does not carry any risk for the national language and culture, except that of missing any relation with the foreign. But it only reflects or infinitely repeats the bad relation with the foreign that already exists. Authentic translation [foreignization], on the other hand, obviously carries risks. The confrontation of these risks presupposes a culture that is already confident of itself and of its capacity of assimilation.[42]

The prime assumption here is that Germany exists on the cusp of the ability to incorporate the foreign tongue in order to grow, but more importantly it also exists in a situation of being dominated by the French. In order to negate the French dominance over the German culture and tongue (that is extended through domesticating translations and bilingualism) it becomes necessary to take the dangerous plunge and move toward a foreignizing form of translation.

Texts do not exist outside of contexts, so any choice is necessarily related to political interests. In the case of Germany in the 19th century it was the relationship between Germany trying to develop against a dominant France. As Lawrence Venuti notes about Berman and Schleiermacher, “The ‘foreign’ in foreignizing translation is not a transparent representation of an essence that resides in the foreign text and is valuable in itself, but a strategic construction whose value is contingent on the current situation in the receiving culture.”[43] In the case of 19th century Germany, Venuti argues that “Schleiermacher was enlisting his privileged translation practice in a cultural political agenda: an educated elite controls the formation of national culture by refining its language through foreignizing translations.”[44] Venuti’s argument requires jettisoning the nationally chauvinistic quality of Schleiermacher’s call for foreignization, but maintaining foreignization’s oppositional quality. To Venuti such a foreignization is necessary to oppose the current discursive regime of transparency that is dominant within the 20th and 21st century United States.

Venuti argues that the dominant discourse of translation within the United States is transparency. The translation must read as if it were written in the local language. This is a modern rendition of Schleiermacher’s domesticating translation that has been normalized to the extent that foreignization as a method is not an alternative, or different choice, but an awkward oddity.[45] As his subtitle “A History of Translation” indicates, Venuti lays out a genealogy that shows the rise of fluent translations in Europe between the early modern period and the late 19th century, and how during this period the translator’s status dropped. By pointing out the constructed nature of the ‘fluency is good’ discourse, Venuti argues for a move away from such fluency. He does so both to raise the status of the translator in relation to the author and originality, and to problematize the relationship of the United States and the English language to other countries and languages. As he writes in his conclusion:

A change in contemporary thinking about translation finally requires a change in the practice of reading, reviewing, and teaching translations. Because translation is a double writing, a rewriting of the foreign text according to values in the receiving culture, any translation requires a double reading… Reading a translation as a translation means not just processing its meaning but reflecting on its conditions – formal features like the dialects and registers, styles and discourse in which it is written, but also seemingly external factors like the cultural situation in which it is read but which had a decisive (even if unwitting) influence on the translator’s choices. This reading is historicizing: it draws a distinction between the (foreign) past and the (receiving) present. Evaluating a translation as a translation means assessing it as an intervention into a present situation.[46]

Writing, translating and reading are contextually contingent acts, and one must be aware of the contexts from which and to which such texts move. Crucially, the discursive regime of domesticating/fluent translation does not allow such historicizing or cultural understanding, as the foreign is simply rendered invisible.

The current regime of translation is one in which the translator has become invisible, and this has negative effects on the translator’s status, but also couches the United States’ translational imperialism. Venuti argues, “Schleiermacher’s theory anticipates these observations. He was keenly aware that translation strategies are situated in specific cultural formations where discourses are canonized or marginalized, circulating in relations of domination and exclusion.”[47] The results of this naturalized, extreme form of domestication are transparent cultural ethnocentrism and domination. These are, as Venuti argues, “scandals” of translation.[48] In opposition to these scandals, a foreignizing translational regime can link up to an “ethics of difference” that “deviate[s] from domestic norms to signal the foreignness of the foreign text and create a readership that is more open to linguistic and cultural differences.”[49] It is Venuti’s argument that acknowledgement and accommodation of difference are sorely lacking within the late 20th and early 21st century United States context, thus requiring the switch to foreignizing translation. However, as previously stated, such a foreignizing method runs completely counter to the dominant trend of the present.

Venuti argues for a switch to foreignization and away from the domestication that has been naturalized. He argues that “invisibility” refers both to the status of the translator as negated under the writer economically and functionally, and to the requirement that translations be presented so fluently, as if they were made in the local language and culture, that the translator is rendered invisible. The invisibility of domestication overlaps in instructive ways with Bolter and Grusin’s concept of immediacy, the transparent side of remediation. Ultimately, remediation is a way out of the problematic discursive regime of translation that Venuti locates.



In their seminal new media text Jay David Bolter and Richard Grusin coined the term remediation in response to what they saw happening with new media at the time, but also how all media had been changing over the twentieth century.[50] For Bolter and Grusin all media are remediated: a medium remediates other media. Web pages have text, icons that tell people to ‘turn to the next page,’ and embedded movies with standard filmic controls; Microsoft Word has a ‘page’ as it remediates writing on paper. This remediation has two qualities, or sides. The first, immediacy, is where the fact of remediation is cut away, or rendered invisible. The HUD (heads up display) of a game is lessened, removed, or rendered diegetically relevant. From a literary standpoint the content and diegesis is all that matters and the user need not leave this place of immediate access to the text. As Bolter and SIGGRAPH director Diane Gromala write a few years later, “we…have lost our imagination and insist on treating the book as transparent….  We have learned to look through the text rather than at it. We have learned to regard the page as a window, presenting us with the content, the story (if it’s a novel), or the argument (if it’s nonfiction).”[51] The second, hypermediation, can be seen in TV phenomena such as showing a miniature window in one corner of the screen and in the scrolling information bar on the bottom of the screen, but it is also footnotes, side notes and commentary with books. For Bolter and Grusin remediation is simply something that happens with all media and has happened since writing remediated speech, much to Plato’s chagrin. However, it has interesting links with translation, particularly in how immediacy can link up with Venuti’s fluency, and how hypermediacy can link up with the possibilities of layered translation, which come from Derrida and Benjamin.

Venuti claims that the current regime of domesticating translation within the United States leads toward a fluency that renders invisible both the translator and the fact of translation. For the majority of American readers, who enjoy this type of translation and experience, such a goal is admirable. According to Venuti, fluency is quite problematic due to the translational ethics of difference involved. Within the logics of remediation, by rendering the translation invisible the original text is made an immediate fact for the reader even though it is not the original text, but the translated version. This type of immediacy materializes in particular ways with particular media: for books it is in a one-to-one fluent translational strategy, with film it is dubbing and remaking, and with video games it is localization. While these fluent/immediate strategies are dominant at present there are alternatives.

For Venuti, the opposite of translational fluency is a foreignization that highlights the ethics of difference. As cited above, most important in this is creating a new style of “double reading” that requires the reader read the text as a translation. However, if we take Bolter and Grusin’s oppositional strategy of remediation, hypermediation, we can see alternative methods of highlighting an ethics of difference. Translational hypermediation would entail highlighting the fact of translation; it could be abusive, Derridian translation; it could be Jerome McGann’s hypermedia work; it could be cinematic subtitles and metatitles; it could be game mods.[52] All of these interact with the medium in a way that utilizes its particular form.

Hypermediated translations of new media could easily exist because of the particularities of digital alterability, but they do not. In the following section I will elaborate the particular way that translation happens materially with books, film and games. Primarily, these current ways are domesticating, fluent and immediate. Then, I will explain how translation could instead bring out a foreignizing, layered, and hypermediated relationship with the text.


Specific Iterations in Media

While the above section has summarized tenets of translation primarily coming from literary studies, the following will elaborate how these different trends intersect with three particular media: books, film and games. These three media are chosen very deliberately. Gaming is my main focus in part because of industry and theoretical denial of its translated nature, and in part due to its ability to lead to new translational possibilities. However, books and film are necessary predecessor forms on the route to games. Books are important as the primary textual form in current Western literary culture. While poetry, newspapers, magazines and other printed forms are also relevant, I limit my analysis to the Modern novel both for reasons of space, and for the novel’s focus on, and obsession with, the author. Secondly, film is important as games have been created in the wake of the 20th century’s cinematic revolution, where the language of games comes in part from the language of cinema: cut scenes, first-person perspective, and an increasing obsession with realism.[53] While the link between gaming and cinema has been critiqued on the grounds of gaming’s material and experiential differences from cinema, such critiques do not deny the historical and stylistic links, however unwieldy their application in games.


Books, Supplementarity, and Digital Culture

Books in the modern period are singular objects created by singular authors. An author has an idea, struggles to bring this (original) idea to paper, and over time eventually uses his or her singular language to write the work. While books are made at one point in time, there is a belief in their timelessness: they are able to stand up to decades, centuries, and millennia (although such durability is also a test of worth) due to their original language (or rather, despite their original language, as it is translation that allows the text to ‘live on’). There is an essential link between author, nation and language, which is brought out in the book, and readers partake in this art when they read the book.

A translation is something that comes chronologically after the book. It is the result of taking the words and sentences (the content), and changing it into another language in order to facilitate the book’s movement over spatial-linguistic borders. The translation’s hierarchical relationship to the original book is derivative, but its material relationship has changed over time. Whereas translations are now a material replacement that comes chronologically after the original, they were at times both simultaneous and supplementary to an original work.

Certain texts needed to be written in certain languages (Latin for religious, philosophic and scientific texts; literary genres in Galician or Arabo-Hebrew, and travel accounts such as Marco Polo’s and Christopher Columbus’ in a hodgepodge), and the idea of deliberately altering a text from one language to another was not a high priority, or even acceptable in some instances.[54] At one point in lieu of translation there was commentary, or Midrash in the case of the Torah. Such commentary was necessarily displayed alongside the original as a supplement. It complicated, but did not replace, the original.

This older form of supplementarity can be linked to the current, but uncommon, practice of side-by-side translations where the original resides on one page and the translation on the other. The original and translation face each other to enable comparison. Biblical and philosophical material is often granted side-by-side translation due to the importance of both individual words and overall sense, or because the question of just what is important is either undecided or unknowable. In the case of popular (low) cultural novels there is less reason to consider the original, and so there is little reason to print it. Cost and size are further reasons that side-by-side translations of important biblical, philosophical, and literary texts still exist while popular novels are almost never given such treatment. Halving the pages printed significantly reduces the cost and size of the book. Only important texts, or political and religious ones where price is not an issue, can justify the additional cost of the doubled pages; popular, semi-disposable entertainment texts are less entertaining as enormous, bulky tomes. What was a complementary relation between original and translation becomes a matter of replacing one with the other.

The shift from supplementary translation to replacement translation, where the translation stands on its own as a complete text, happens at the same time within modernity as the rise of translational equivalences. However, as discussed previously, it is impossible to conduct a perfect translation that conveys word, sense and all equivalences, so one element becomes the focus and under that equivalence the translation replaces the original book. In the case of the 20th to 21st century United States this equivalence is roughly what the author would have written had he or she been from the United States and writing in English. Because the industry follows a replacement strategy that supports fluency and immediacy, books can only follow a single equivalence. However, the book’s materiality can support multiple equivalences through a translational supplementarity that supports an ethics of difference and hypermediacy.

Page-to-page translation and the works of Derrida are examples of how books can support this form of hypermediated translation.[55] The reader can be shown the different words that could have been used throughout the translation. While there are many possibilities for hypermediated translation, there have been few opportunities for it throughout Western translation history. However, this hypermediated style might be coming back into fashion with the advent of new technologies, including the digital book. Digital books also solve the cost and size issues that were partial reasons against side-by-side complementary translations.

While the digital book holds much potential, proprietary design, nationally based sales of content, and Digital Rights Management (DRM) issues plague current eReaders. They are simply an alternate way to read a book, which one must buy from a massive chain store in one language, and nothing more; they are monolingual devices that bring out the same trend of immediacy that I described above. However, the digital book could be programmed to show a multiplicity of versions, iterations, and translations. It could be programmed to be a truly hypermediating experience if only by linking different translations of a text. I will return to this in the final section of my paper, but a hint at this possibility is in Bible applications. YouVersion’s digital Bible application[56] has 49 translations in 21 languages, and this number increases as new versions are added. The Bible is out of copyright, but it would be possible to use a micropayment system that would allow interested patrons to buy linked versions of different book translations in a similar manner. By integrating the different variations a hypermediated experience would be created.
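The linking of translations described above is, at bottom, a simple data-alignment problem. The following is a minimal sketch, in Python, of how a digital book might align several translations of one work segment by segment so a reader could view them side by side; all class and version names here are hypothetical illustrations, not any actual eReader or YouVersion API.

```python
# Hypothetical sketch: aligning multiple translations of one work
# by segment (e.g. verse or paragraph) for side-by-side display.
# Not an actual eReader API; names are illustrative only.

class ParallelText:
    """Holds several translations of one work, aligned by segment."""

    def __init__(self, title):
        self.title = title
        self.versions = {}  # version name -> ordered list of segments

    def add_version(self, name, segments):
        self.versions[name] = list(segments)

    def segment(self, index):
        """Return every version's rendering of one segment, so the
        reader sees the translations alongside one another."""
        return {name: segs[index]
                for name, segs in self.versions.items()
                if index < len(segs)}


book = ParallelText("Genesis 1:1")
book.add_version("KJV", ["In the beginning God created the heaven and the earth."])
book.add_version("Vulgate", ["In principio creavit Deus caelum et terram."])

side_by_side = book.segment(0)
# side_by_side maps each version name to its rendering of the verse
```

A real implementation would of course need segment-alignment data between versions (verse numbering makes the Bible an easy case), but the sketch shows why the hypermediated, multi-translation display is technically trivial once such alignment exists.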


Film, Dubs, Subs, Remakes and Metatitles

The contentious relationship between immediacy and hypermediacy is highly visible in film translation.[57] On the one hand there is a long history of replacement/transparency with multiple language versions (MLVs), dubbing and remaking, but on the other hand there is an equally long history of subtitles. While the debate between subtitles and dubbing is really only solvable by reference to local preference, I argue that the rise of remakes of foreign films, especially in the United States, is a sign of the dominance of replacement and immediacy strategies. In the following section I will outline the history of language in film, then how it intersects with remediation, and finally ways that the lesser-used hypermediacy might bring out alternate forms of film translation.

When cinema was first exhibited there was no call for translation. There was no attached sound and there was no dialogue. The original ‘films’ like the Lumière Brothers’ La Sortie des Usines Lumière (1895), which depicts the workers leaving the Lumière factory, and L’Arrivée d’un train à La Ciotat (1896), which shows the train arriving at the station and people beginning to get off, are good examples of the limited structure and general ‘universality’ of the earliest films. Because there were no complicated plots or multiple scenes it was believed at the turn of the 20th century that cinema, like photography, was merely the “reproduc[tion of] external reality.”[58] At the beginning of the 20th century, cinema was considered outside of language and universal.[59] This understanding was first troubled with the inclusion of intertitles, as they required translation to move the film from one place to another, and from one language to another. However, the rest remained ‘universal.’

The late 1920s brought embedded sound to cinema, and with it came talkies. These talkies necessitated a new level of translation, and both immediacy and hypermediacy translation styles were available: dubbing and subtitling respectively. Subtitling is both hypermediating and foreignizing. It is hypermediating in that it accentuates the fact of translation by putting the translated dialogue on top of the film. It is foreignizing because of the constant, visible disjunction between the words of the actors and the subtitles at the bottom of the screen.[60] The viewer constantly hears the foreign other, and this brings to the forefront the issue of trusting a translator to have translated properly.

In contrast, dubbing is immediate in that it erases the voices of the visible actors and replaces them with other voices in the target language. However, dubbing is not perfectly domesticating as there is a discrepancy between the bodies on screen and the dialogue. This discrepancy is partially the result of lip-syncing issues, and partially the result of differently signified bodies and voices. One of the tasks of dubbers is to forcefully make the dialogue match the lips by altering the linguistic utterances, often quite significantly.

While dubbing can alter the words and voice coming out of the body, it cannot change the bodies themselves. In a realm of racialized nationalism, or, as Appadurai writes, when the hyphen between the nation and state is strong,[61] this discrepancy between a racially different body and the local language is a problem. Because it is assumed that only those with specific bodies speak specific languages, such discrepancy is highlighted.[62] Dubbing thus still has a hypermediated quality to it. A further step toward immediacy is changing the body. There have been two different methods used to make films more immediate by changing the bodies. The first was the early 20th century multi- and foreign-language version, and the second was the much longer-lasting remake.

The understanding of film as universal was further challenged in the 1929-33 period, which saw the introduction of multi- and foreign-language versions. Foreign language versions (FLVs) were recreations of a film made after the fact in a different studio, while multi-language versions (MLVs) were recreations made in the same studio on the same set with different actors, later the same day.[63] The M/FLV highlights that there were people who understood that culturally specific elements are writ large on the body: national culture was inscribed not only in language, but in bodies, clothing, and even story. It was believed that by replacing the body, remaking the film into both the ‘local’ language and the ‘local’ body, the film would be made less foreign. This effort reveals the dominant trends of immediacy and domestication: by replacing both language and body, the text is made even more transparent for the audience. However, the M/FLV did not last long, largely due to the high costs involved. Then as now, a high priority was given to business and the bottom line, and the cost of making multiple movies simultaneously was not economically justifiable, especially when the movie could flop.

While intertitles and the MLV incorporate linguistic and human alteration, what they do not consider are cultural specifics. The content level was not translated or adapted; the stories were not altered. Incredible numbers of stories were adapted and remade again and again, but not because of cultural relativity. This oversight was rectified three decades later with films like Gojira (1954), whose remake was reconceptualized away from the original’s atomic bomb logics. The remake, Godzilla, King of the Monsters! (1956), was reshot and reedited in order to feature an American journalist narrator and highlight the monster genre.[64] Following Godzilla, but primarily at the end of the 20th century, there was a resurgence of remakes linked with cultural translation.[65]

With remaking, not only do the bodies in the film change to locally recognizable ones with their own voices, but the context of the film can be changed from foreign lands to local ones. An example of this is Shall We ダンス (1996), a Japanese movie about a salaryman going through a midlife crisis and learning to dance in an anti-dancing Japanese society, which was remade as Shall We Dance? (2004) with Richard Gere, Susan Sarandon and Jennifer Lopez in a Chicago context.

In one of the most important scenes in the original, Mai is lectured by a possible new dance partner, Kimoto. He proposes they give a demonstration at a local dance hall (night club), but she refuses to dance with “hosts and hostesses,” claiming it isn’t dancing, but cabaret.[66] Mai is obsessed with the foreign, European Blackpool competition and dance floor, which is opposed to the native dance hall with its shorter history and lower culture. Kimoto claims not only that enjoying dance is of primary importance, but that the lowly Japanese dance hall has a history just as important as Blackpool’s. The opposition of high to low (hierarchical) and native to foreign (spatial) is stressed in this interchange. When Mai finally holds a party that signals the restart of her career, it is on the lowly dance hall’s floor, indicating the primacy (or at least equality, as she plans on returning to Europe) of the native over the foreign, and stressing the equality of high and low. In contrast, the remake opposes Miss Mitzi’s relatively unpopular dance studio with the hip Doctor Dance studio and club. The opposition is both temporal and hierarchical: Miss Mitzi is middle-aged and teaches various forms of professional dance, while Doctor Dance is almost always depicted in club/entertainment moments. And when Paulina, Lopez’s adaptation of the Mai character, decides to go study in England (a rather meaningless decision in the context of the remake), her going-away party takes place in an unrecognizable locale. In the original, the Japanese spirit and history are implied to be just as important and meaningful as the European ones. The film is highly nationalist in its context. The remake works to erase such nationalism by placing the theme of global/universal work and the international family man/nuclear family over that of foreign and native. Such movement complies with a universalization of remaking as domestication.
The foreignness of the Japanese original is rendered domestic and immediate with the remake.

A domesticating translation takes the foreign text and moves it into the native context, making the reader’s job easier by forcing the text to speak in a manner the reader is used to. In Hollywood’s domesticating remake of Shall We Dansu, Japan’s troubled interaction with modernity and globalization is removed. The local socio-political particulars of the original film are erased in the service of “universal” generic narratives that satisfy an American audience that rarely interacts with foreign others. Hollywood’s remake process is a systematic erasure of difference and of the foreign other, naturalized under the theory of the remake as cinematic translation, which need only render equivalent one essential element at the expense of all others.

So far I have discussed the currently dominant domesticating and immediate strategies of film translation. Even though I have claimed that subtitling is both foreignizing and hypermediating, it does not use the materiality of the filmic medium to fully bring out the possibilities of hypermediation. No such forms yet exist, but it is not hard to imagine a type of “metatitles” that uses the capacities of the digital cinematic medium to layer translations on the screen in a hypermediating translational style.

In the last few pages of “For An Abusive Subtitling,” Nornes refers to the fan subtitling of Japanese animation that took place largely between the late 1980s and early 1990s in the United States.[67] For difficult-to-translate terms, the fan subtitlers gave extended definitions that covered the screen with words. This translation effort goes well beyond standard translation: it starts with a foreignizing pidgin, but also provides an incredible amount of information that works to bridge viewer and source. While this abusive subtitling is hypermediating in that it layers the text, it could be extended to use the medium further by layering the text using DVD layers. These layers could move from the main textual layer (the visual film) and the verbal audible signs (dialogue and its subtitles) to the hypermediated translational layers: the verbal visible signs (text on screen), the non-verbal audible signs (background noises that need explanation), the non-verbal visible signs (culturally derived, metaphoric camera usage), and any other semiotic layer possible.

Through such layered commentary on the different signs, the screen would quickly fill and overwhelm the viewer as a form of abusive translation. While there is something admirable in completely disrupting visual pleasure, such disruption would never be taken up by the industry: all film layers must be visible either alternately or simultaneously, and at the control of the viewer. As home video watching is generally at the command of a single user or a small number of viewers, the DVD format is uniquely suited to enact metatitles. Due to the increased capacity to store information on DVD, Blu-ray and future technologies, there is no limit to the possibilities of layering.
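As a concrete illustration, the viewer-controlled layering described above can be sketched as data. This is a hypothetical sketch, not an existing subtitle format: the layer names, the Cue/Layer structures, and the example cues are all my own assumptions.

```python
from dataclasses import dataclass

@dataclass
class Cue:
    start: float  # seconds into the film
    end: float
    text: str

@dataclass
class Layer:
    name: str        # one semiotic register, e.g. "dialogue" or "translator notes"
    cues: list
    enabled: bool = False

def active_titles(layers, t):
    """Return the text of every cue in every enabled layer covering time t."""
    return [cue.text
            for layer in layers if layer.enabled
            for cue in layer.cues
            if cue.start <= t < cue.end]

# Hypothetical metatitle layers for a single scene.
layers = [
    Layer("dialogue", [Cue(0.0, 4.0, "It's dangerous to go alone.")], enabled=True),
    Layer("sound cues", [Cue(0.0, 4.0, "[cave wind]")]),
    Layer("translator notes", [Cue(0.0, 4.0, "Note: the source line omits the subject.")]),
]

layers[2].enabled = True  # the viewer toggles the commentary layer on
print(active_titles(layers, 2.0))
```

The point of the design is that every semiotic register is a separate, toggleable track, so the viewer decides how hypermediated the screen becomes at any moment.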

A layered translation uses the capacities of current technology by hovering over the text. But just as a translation can never fully encapsulate the original, metatitling could never fully acknowledge every aspect of the original text: it is a failed translation, just as all translation is failure by virtue of being incomplete. It fails, however, in a foreignizing and hypermediating style that acknowledges its failings and builds toward some ethical ‘more.’


Games and L10n[68]

While film translation retained a complex but persistent relationship to translation theories and literary translation, the move to new media forms has created a chasm between theory and practice, which has resulted in new methods and industries of translation. Both translation theory and localization practice could benefit from cross-pollination, and that is the heart of my work. The shift to digital software has been accompanied by the rise of a software localization industry (of which game localization is an independent but related industry) with its own tools, standards committees and rhetoric. The following section begins by looking at how language intersects with games. I then consider what game localization is and how it succeeds in translating games, but also how it fails to address certain possibilities. One major element is how localization fails to utilize the possibilities of the digital medium to bring about a hypermediated translation, despite the immense amount of hypermediation within the medium itself.

Like films, games have an interesting relationship with the idea of universality. The first computer/digital games such as Tennis For Two (1958) and Spacewar (1962), and even early arcade cabinet games like Pong (1972), Space Invaders (1978) and Donkey Kong (1981), were ‘language’ free. In a similar way that the early films were largely visual amazements, games were computer-programming amazements meant to show off the technology.[69] However, the programming was difficult and took up all or most of the available processing power and programming energy. This meant that early games had little processing power or programming time to spare for story. Many held (and still hold) that these games were universally accessible and understandable, due to the technological and programming limitations coupled with a belief in the universality of play as a social phenomenon. Even now the belief in ludic universality holds, despite theorists problematizing it much as a previous generation of visual culture theorists problematized the universality of vision.[70] For instance, Mary Flanagan has argued, “while the phenomenon of play is universal, the experience of play is intrinsically tied to location and culture.”[71] While she is largely discussing the spatial politics of games existing in certain spaces, the theory can be expanded to indicate that any game, or instance of play, is tied to a cultural context, be it Tennis for Two and the atomic-age weapons research lab in which it was created, Spacewar and masculine science fiction fantasies, Donkey Kong and the origins of the side-scroller as linked to a Japanese aesthetic, or any other game and context. Games are developed, produced and distributed in specific socio-political, temporal and spatial locations and are thus not universal.

However, this believed universality is only now coming into question; it was completely unquestioned from the 1960s to the early 1980s, during the 1st and 2nd generations of computer games. There were no ‘words’ in the early computer games, just crude iconic representations. This meant that within the games themselves there was no ‘language’ needing ‘translation.’ What did need translation were the external titles and instructions. Titles were kept or changed at the desire of the producers and distributors. Pakkuman (1980) turned into Pacman instead of Puckman for fear of malicious pranksters changing the P to an F, but other titles were kept as is or were programmed in roman characters. Instructions for arcades and manuals for home consoles needed more extensive translation, but it was a very limited, technical form of translation. The first generation of computer game translation was thus both limited and little different from the roughest of technical translations, neither ‘literary’ nor ‘political.’

The second generation of game translation came about when games utilized greater processing power and storage capabilities to tell extensive stories. These were early adventure games like Colossal Cave Adventure (1976) and Zork (1977-80), which told second-person adventure narratives, and the more graphical adventure descendants of the 1980s such as Final Fantasy (1987) and King’s Quest (1984). These broke ground in games by normalizing narrative along with play. They also necessitated a new type of game translation that could address more than just the paratextual elements of title and manual.[72] This generation of game translation led to the creation of an industry for game translation.

The rise of linguistic material (stories in and surrounding the games) led to an acknowledged need for translation and the beginnings of the localization industry. Originally, the primary method was what is now called partial localization, where certain things were localized, but most others were not. Thus, the manual, title, dialogue, and menus might be translated, but the HUD might remain in the original language due to the difficulty of graphical alterations. The localization industry evolved in the 1990s to match the growing game industry, and localized elements were expanded from menus and manuals to graphics, voices and eventually even story and play elements.

While the current form of game localization is much expanded from early game translation the basics are the same. According to the Localization Industry Standards Association (LISA[73]), “Localization involves taking a product and making it linguistically and culturally appropriate to the target locale (country/region and language) where it will be used and sold.”[74] Localization is like translation in that it facilitates the movement of software between places, but it is different in that it also allows significant changes in the visual, iconographic and audio registers in addition to the linguistic alteration.

Regardless of how much is translated, game translation involves the replacement of certain strings of code with other strings of code. These strings are usually linguistic: The title The Hyrule Fantasy: Zeruda no densetsu (The Hyrule Fantasy: ゼルダの伝説) becomes ‘The Legend of Zelda,’ and within the game the line “ヒトリデハキケンジャ コレヲ サズケヨウ” [it’s dangerous by yourself, receive this] becomes the meme-worthy “It’s dangerous to go alone. Take this!” But alterations are also graphical: a Nazi swastika is changed into a blank armband for games in Germany. The first is a title, the second is a linguistic asset, and the third is a graphical asset. All assets exist as strings of text in the application code, and by altering the programmed code, each can be changed in the effort to move the game from one context to another. The ability to alter assets is an essential quality of new media.

Along with numerical representation, modularity, automation and transcoding, Lev Manovich argues that one of the primary elements of new media is their variability.[75] This variability exists because new media are tied to digital code, which is adaptable, translatable and transmediatable through the alteration of specific strings. Because the strings, especially linguistic strings, are modular, no single string is treated as essential to a game. With digital games this variability is combined with the discourse of play as universally understandable. Because play is considered universal, the trappings of games (form, content and culture) are considered inconsequential, variable, and localizable to fit a target context in a way that does not change the game’s ludic [play] essence. Thus, any level of alteration in the localization process is fully sanctioned in order to provide the equivalent “experience” to the user.[76]

While asset alteration is possible as an essential quality of digital media, it is not simple: a hard-coded application can only be changed by painstakingly altering countless strings throughout the program. In contrast, an application that calls up assets can keep multiple variations of each individual asset and then choose which assets to call. This practice has been enabled in part by the game production industry embracing internationalization (i18n) as a necessary and regular practice.
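The contrast between hard-coded strings and called-up assets can be sketched in a few lines. This is a minimal, hypothetical illustration rather than any actual engine’s localization system; the asset key, table layout and fallback rule are my own assumptions (the Zelda line quoted earlier stands in for a localizable string).

```python
# Locale-keyed asset tables let localization swap strings without
# touching program logic; adding a language means adding a table.
ASSETS = {
    "ja": {"old_man_gift": "ヒトリデハキケンジャ コレヲ サズケヨウ"},
    "en": {"old_man_gift": "It's dangerous to go alone. Take this!"},
}

def get_asset(locale: str, key: str, source: str = "ja") -> str:
    # Fall back to the source-language asset when no localized one exists.
    return ASSETS.get(locale, {}).get(key, ASSETS[source][key])

print(get_asset("en", "old_man_gift"))  # localized asset
print(get_asset("de", "old_man_gift"))  # falls back to the Japanese source
```

In a hard-coded program the Japanese string would be embedded at every point of use; here the program only ever references the key, which is what makes partial or full localization a matter of swapping tables.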

Internationalization is the practice of keeping as many game assets as possible untied to and unmarked by cultural elements. In his guide to localization, Bert Esselink provides an example of an image of a baby covered in blankets with a separate layer of undefined, localizable text.[77] Unlike pre-internationalization methods, the image and text are not compressed together, which makes it possible and easy to switch the text. While the words are changeable, the image remains the same, as there is an assumption that a smiling child is universal. The assumed universality of such elements is itself an issue. Games move beyond this by retaining almost all elements as changeable assets, whether dialogue, images, Nazi armbands, or realistic representations of military flight simulators, but this changeability brings out other problems.[78] It does not address the elements assumed to be universal that are not, and it positions internationalization as a lead-in to domestication: within the ideal of internationalization, the practice becomes domesticating translation by material and practical necessity. No matter what happens there will be an immediate, replacing, domesticating translation.

If expansive narratives opened games up to larger amounts of translation, a confluence of factors led to the third generation of game translation and the eventual rise of the game localization industry: the rise of the software localization industry with i18n standards, the understanding of variability and the ability to change games, the creation of CD technology with its larger storage capacity, and finally the use of that storage capacity to enable voice acting that highlights the narratives.

While compact disc technology was created in the 1970s and has been a means of distributing music since the early 1980s, it took until the 1990s for games to be distributed on CDs. Beginning in the early 1990s, CD-ROM drives were attached to computers and the PlayStation gaming console, and games began to be distributed on CDs. This move from floppy disks to CDs greatly expanded the size of games, and with it came the inclusion of both cinematics and digitized voices. One famous early example is Myst (1993). Both cinematics and recorded vocals take a large amount of storage capacity, which the CD provides. However, the CD does not provide enough space for multiple languages of vocal dialogue. There was a justified necessity to limit the languages included with a game because of the limited space available. Even when games moved to multiple discs, providing multiple audio tracks would have significantly increased the discs required.

The lack of space for multiple languages forced game translators to decide between subtitling the audio and dubbing it over. While this might have led to an even debate between dubbing and subtitling (as with film translation), the dominance of computer-generated (CG) video over live-action, full-motion video within games actually led to the naturalized dominance of dubbing and replacing.[79]

As CG requires that voices be added, there is little sense that localization replaces anything. There is no ‘natural’ link between the visible body and the audible voice in CG, so dubbing causes fewer problems in gaming than it does in cinema.[80] However, there was not enough space to provide multiple languages on a single CD, which meant that the majority of games carried only one language. Certain European regions provide multiple languages by necessity, but this is far from the norm. Even when the storage and distribution method changed from CD to DVD, there was little movement toward the inclusion of multiple languages. This lack of included languages is also partially due to the business practice of region encoding.

Linguistic multiplicity within games has also been stymied by the practices of video encoding for TV and region encoding for DVD discs. CDs and DVDs are region encoded in order to protect business interests by opposing ‘piracy,’ defined here as the unsanctioned copying, spread and use of software applications.[81] There are two general eras of this encoding. The first was the separation between NTSC (National Television System Committee) and PAL (Phase Alternating Line). These two standards were linked to the televisions distributed in different regions; gaming systems and disks needed to operate in the same encoded manner as the televisions. This made it impossible to play European games (PAL) on an American system (NTSC), but it did not necessarily block out Japanese games (NTSC). This initial form of encoding had less to do with piracy protection than with policing national airwaves. DVDs use a slightly different method, dividing the world into eight regions: US/Canada (1), Europe/Middle East/Japan (2), Southeast Asia (3), Central/South America/Oceania (4), Russia/Africa (5), China (6), undefined (7), and international venues such as airports (8). For video games these region encodings work with and against the standard PAL/NTSC distinction, so that while Europe and Japan are both region 2, there are differences in the ability to play PAL and NTSC and vice versa. In contrast, while NTSC disks work easily in both Japan and the United States, the region encoding limits the ability to play both regions’ disks. Both the PAL/NTSC distinction and region encoding serve multiple purposes, including software piracy prevention, but in terms of translation they legitimize the perceived lack of necessity of translating for multiple regions.

As piracy is a problem for the game industry[82] and large amounts of piracy happen in certain regions (Asian regions especially, due to economic disparity, gray markets and governmental bans on consoles), there is a general belief that not supporting multiple languages will block game piracy: if the gray market version is unintelligible because it is in another language, a user may still buy the version in their own language. In other words, limiting the number of languages available limits the geographical range of a particular version of a game, which works against the black markets and for the game industry. Thus, there is an interesting convergence between business interests, the available technology, the developing techniques of game programming, and the general trend toward translational domestication and immediacy. The storage capacity limitations, coupled with the use of cinematics and voices and the standardized practice of dubbing and replacing, dovetail perfectly with the industry practice of localization as domesticating, immediate translation.

The goal of localization is to make the product ‘appropriate.’ This goal is heavily influenced by the business elements of the localization industry. Localization is about profit, the bottom line, so the goal is to fit user desires. Game localizers identify user desire solely with entertainment.[83] Entertainment, and thus appropriate translation, is here identified as helping the target player have the same experience that the source player had in the source context. Such a singular drive is quite different from literary translations that aim to abuse the user, or from linguistic interpretation and political translations that deal with the problems of modern political interaction. However, at base localization is still a matter of equivalence: the equivalent experience/feeling/affect.[84]

Insofar as the localization industry is a business, there is little one can say against the practices enacted. Only popular games are localized, so translating them with the same money-making “experience” is better business practice. However, when one attempts to move beyond such market logics it is hard not to see the problems. Just as translation needs to be understood as important, powerful and dangerous, so too must localization be understood as a weighty practice. An industry that has globalization (g11n) as one of its prime terms must be aware that there is more to globalization than “the business issues associated with taking a product global.”[85] Just as globalization is a fraught term in the world, it must be problematized beyond its purely business definition within localization.[86] Said simply, there is more to a game than the immediate localization of the foreign user’s experience.

One way in which localization has recently pointed toward both hypermediation and alternate forms of translation is the creation of multilingual editions of games. With the switch from CDs to DVDs and the move to downloadable software, there has been a move to include multiple languages. DVDs have enough storage capacity to house multiple audio tracks, and downloadable software is limited (if time consuming to download) only by the system’s hard drive capacity. One particularly interesting case is Square-Enix’s “international editions.” Notably, the first international editions contained only one language, Japanese, but included a few additional features (Final Fantasy VII: International Edition). They then turned into games that mixed English and Japanese but were released solely in Japan: the audio tracks were English and there were Japanese subtitles, but the rest of the game was in Japanese (Final Fantasy X: International Edition, Kingdom Hearts: Final Mix). Part of the difference between the early and later international editions is the move from CD to DVD; there was little dialogue in the early version, but even in the DVD versions only the audio track was replaced (the Japanese with ‘international’ English). A third movement made both English and Japanese audio tracks available, but only after finishing the game once: the initial playthrough necessitated a mixed English/Japanese experience with Japanese menus, written dialogue and subtitles, but English audio (Kingdom Hearts II: Final Mix+). Finally, a fourth movement is full selectability between English and Japanese audio with various subtitle languages (Star Ocean: Last Hope International).
This progression of styles implies that the international edition was originally a gimmick but has become a marketing decision, based on the knowledge that an audience exists and that this audience has spread outside of Japan.

These international editions have a tangled relationship to the concept of kokusaika [internationalization, or ‘international-transformation’] within Japan. Kokusaika itself is tied to ideas of westernization in the late Tokugawa and Meiji periods, and of Americanization in the post-World War II period. Kokusaika was seen as an important step of modernization in much of the discourse of the 19th and 20th centuries, but it is troubled in nationalist and essentialist discourses in particular.[87] One might argue that the Square-Enix games both support and trouble this kokusaika discourse: they support it, but they maintain the importance of Japanese within the games. While the international edition allows multiple languages, it does so from a Japanese expansionist perspective. Language is never neutral, and by putting the lingua franca and Japanese forward as the only choices (with the other standard gaming languages such as French, German, Spanish and Italian as subtitle options) there is a definite movement to raise the importance and reach of Japanese as a language. Kokusaika is thus maintained, but with a continued presence (and even dominance) of Japanese. While I believe the international editions are on the right track toward a layered, foreignizing style of translation, they still exist in the context of Japanese politics.[88] This is similar to Venuti’s claim that Schleiermacher’s work offers a helpful corrective despite the German author’s 19th century chauvinism.

While the past thirty years have led to increased immediacy and region protections, new forms such as DRM routines and online portals such as Steam indicate a general belief that such region separations have ultimately failed to protect against piracy. Because the region encoding tactics have failed to prevent piracy, it is possible that a new era of localization is coming, but so far it has been relatively limited. Hopefully this is only momentary, and the same hypermediacy that has been blocked out since the beginning of gaming will become visible, along with the existence of difference that is visible in translations and layers. I will discuss some of these possibilities in the final section of this paper.


Possible Futures

I would like to conclude this paper with a discussion of two new trends in translation. Both are postmodern, intentionally unstable, and utilize digital materiality. One trend destabilizes the translator, and the other destabilizes the translation. However, both trends can heighten the feeling of hypermediation and foreignization, which (according to Venuti) is helpful in the current translational climate.[89]


Destabilization of the Translator

The destabilization of the translator involves multiple translators but a single translation. It has its history in the Septuagint, but its present locus is the division of tasks and the post-Fordist assembly-line form of production. Like the Septuagint, where 72 imprisoned scholar-translators translated the Torah identically through the hand of God, the new trend relies on the multiplicity of translators to confirm the validity of the produced translation. The difference is that while the Septuagint produced 72 results that were the same, the new form of translation produces one result that, arguably, combines the knowledge of all translators involved. This trend can be seen in various new media forms and translation schemes such as wikis, the Lolcat Bible, Facebook, and FLOSS Manuals.

Wikis (from the Hawaiian word for “fast”) are a form of distributed authorship. They exist through the effort of a user base that adds and removes small sections on individual pages. One user might create a page and add a sentence, another might write three more paragraphs, a third may edit all of the above and remove one of the paragraphs, and so on. No single author exists, but the belief is that the “truth” will come out of the distributed authority of the wiki. It is a democratic form of knowledge production and authorship that certainly has issues (among them the question of whether wikis are actually democratic and neutral), but for translation it enables new possibilities.[90] While wikis are generally produced in a single language and rarely translated (as a translation could not keep pace with the constant changes), the chunk-by-chunk form of translation has been used in various places.

One form of wiki translation is the Lolcat Bible translation project, a web-based effort to translate the King James Bible into the meme language used to caption lolcats (amusing cat images). The “language” itself is a form of pidgin English in which nonstandard verb forms and misspellings are highlighted for humorous effect. Examples are “I made you a cookie… but I eated it,” “I’z on da tbl tastn ur flarz,” and “I can haz cheeseburger?”[91] The Lolcat Bible project facilitates the translation from King James verse to lolcat meme. For example, Genesis 1:1 is translated as follows:

KING JAMES: In the beginning God created the heaven and the earth

LOLCAT: Oh hai. In teh beginnin Ceiling Cat Maded teh skiez An da Urfs, but he did not eated dem.[92]

While the effort to render the Bible in lolspeak is either amusing or appalling depending on one’s outlook, what is important is the translation method itself. The King James Bible exists on one section of the website, and in the beginning the lolcat side was blank. Slowly, individual users took individual sections and verses and translated them according to their interpretation of lolspeak, thereby filling the lolcat side. These translated sections could then be changed and adapted as users altered words and ideas. No single user could control the translation, and any individual act could be opposed by another translation. According to the homepage, the Lolcat Bible project began online in July of 2007, and a paper version was published through Ulysses Press in 2010. The belief is that if 72 translators and the hand of God can produce an authoritative Bible, surely 72 thousand translators and the paw of Ceiling Cat can as well.[93]

FLOSS (Free Libre Open Source Software) Manuals and their translations are a slightly more organized version of this distributed trend.[94] FLOSS is theoretically linked to Yochai Benkler’s “peer production,” in which people contribute for different reasons (pride, cultural interaction, economic advancement, etc.), and both the manuals and the translations capitalize on this distribution of personal drives.[95] Manuals are created for free and open source software through both intensive drives, where multiple people congregate in a single place and hammer out the particulars of the manual, and follow-up wiki-based adaptation. The translations of these manuals are then enacted as a secondary practice in a similar manner. Key to the open translation process are the distribution of work and the translation memory tools (shared databases of previously used terms and phrases) that enable such distribution; also important is the initial belief that machine translations are currently unusable. It is the problems of machine translation that cause the need for human intervention, be it professional or open.
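The translation memory tools mentioned above can be sketched minimally as a lookup over previously translated segments. The sketch below is purely illustrative (the class, data, and similarity threshold are my own invention, not drawn from any actual FLOSS tool): it suggests a stored translation when a new source segment matches an old one exactly or closely.

```python
import difflib

class TranslationMemory:
    """A minimal translation-memory sketch: a store of previously
    translated segments that suggests reuse for new source text.
    Hypothetical illustration only; real TM tools are far richer."""

    def __init__(self):
        self.memory = {}  # source segment -> stored translation

    def add(self, source, translation):
        self.memory[source] = translation

    def lookup(self, source, threshold=0.8):
        # Exact match first, then the closest fuzzy match above threshold.
        if source in self.memory:
            return self.memory[source], 1.0
        matches = difflib.get_close_matches(source, self.memory,
                                            n=1, cutoff=threshold)
        if matches:
            ratio = difflib.SequenceMatcher(None, source, matches[0]).ratio()
            return self.memory[matches[0]], ratio
        return None, 0.0

tm = TranslationMemory()
tm.add("Save your changes?", "Änderungen speichern?")
# A near-identical new segment retrieves the earlier translation.
suggestion, score = tm.lookup("Save your change?")
```

The point of such a tool is exactly the distribution the paragraph describes: any translator, professional or volunteer, can reuse segments another translator has already settled.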

Finally, Facebook turned translation into a game by creating an applet that allowed users to voluntarily translate the individual strings of linguistic code that they used on a daily basis in English. Particular phrases such as “[user] has accepted your friend request” or “Are you sure you want to delete [object]?” were translated dozens to hundreds of times, and the most recurring variations were implemented in the translated version. The translation was then subject to further adaptation and modification as “native” users joined the fray when Facebook officially expanded into other languages. In Japanese, a direct translation of <LIKE> would have been <好き>, but it was transformed into <いいね!> [good!]. Not only did this process produce “real” languages, such as Japanese, but it also enabled user-defined “languages” such as English (Pirate), with plenty of “arrrs” and “mateys.” The open process created usable material, such as Facebook in Japanese, but also things that would never have happened due to bottom-line considerations, such as pirate, Indian, UK, and upside-down ‘translations’ of English.
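The voting mechanism described here can be sketched as a simple tally over user-submitted variants of a template string. The data and function below are hypothetical illustrations of the principle, not Facebook’s actual system:

```python
from collections import Counter

# Invented submissions: many users translate the same template string;
# the placeholder [user] is preserved across variants.
submissions = {
    "[user] has accepted your friend request": [
        "[user]さんがあなたの友達リクエストを承認しました",
        "[user]さんがあなたの友達リクエストを承認しました",
        "[user]が友達申請を受け入れた",
    ],
}

def winning_translation(source, pool):
    """Return the most recurring submitted variant for a source string."""
    votes = Counter(pool[source])
    variant, _count = votes.most_common(1)[0]
    return variant

chosen = winning_translation("[user] has accepted your friend request",
                             submissions)
```

The destabilization of the translator is visible in miniature: no individual submission is authoritative; the implemented string is whatever the aggregate produces.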

Wikis, FLOSS, and Facebook involve differing levels of user authority, but they all work on the premise that multiple translators can produce a singular, functioning translation. In the case of Facebook, functionality and user empowerment are highlighted, but profitability is always in the background; for FLOSS, user empowerment through translation and publishing is one focus, and the movement away from machine translation is a second; in all cases, but particularly with wikis, the core belief is that truth will emerge out of the cacophony of multiple voices. This is the key tenet of the destabilization of the translator.


Destabilization of the Translation

The other trend is the destabilization of the translation. This form has its roots in the post-divine Septuagint, where all translation is necessarily flawed or partial. Instead of truth emerging from the average of the sum of voices, truth is the build-up, the mass turned back into a literal Tower of Babel: it is footnotes, marginal writing, and multiple layers. Truth here is the cacophony itself. The ultimate text is forever displaced, but the mass implies the whole. The translation is destabilized by using new media’s digital essence to bring out a hypermediating translational style.

This style of translation is not new; it comprises the hypermediated translations that I discussed previously. It is side-by-side pages with marginal notes; it is Derridian translations; it is NINES and other multilayered digital scholarship; it is fan translations and metatitles; it is multilingual editions of games; it is modding. All of these exist, but not as a unified methodology. The destabilization of the translation is a term for grounding these different styles as a new methodology that utilizes forms of peer production (similar to the destabilization of the translator), but fully layers things so that what is visible to the user is not the average but a mountain of possibilities to delve into or climb up. All of these types of translation exist, and the willing translators mentioned above are available, so the difficulty is not in making the many translations happen. Rather, the difficult task is rendering the multiplicity visible.

The main difficulty of the destabilization of the translation is the problem of exhibiting multiple iterations at one time in a meaningful way. How can a reader read, watch, or play two things at once? Books, films, and games provide multiple examples of how to deal with such an attention issue, but in limited ways. Footnotes, side-by-side pages, and subtitles are all hypermediating layers. However, the digital form presents new possibilities: there is no space constraint, and things may be revealed and hidden at the user’s command. There are interesting possibilities for how games can use their digital, programmed form and user/peer production to bring out new levels of the application and the experience. I will review the digital book and the metatitle here, but I will focus on what I see as a new form of game translation that not only uses, but truly thrives off of, fan production.

Books are rather conservative. While many are in some ways open due to lapsed copyright, there is little invention happening to bridge different versions. Resources such as Project Gutenberg have opened thousands of texts to digital reader devices, but these exist as simple text files, just as purchasable e-books exist as simple, immediate remediations of the original book form. A hypermediating variation, however, would link these different versions and translations. At a click the reader could switch between Homer’s Odyssey in Greek and every English translation made in the 20th century; French, Japanese, German, and various other translations would also be available, and the screen could be split to compare any of the above. With a slightly different (slightly less academic) mentality, the reader could peruse Jane Austen’s Pride and Prejudice on the left-hand side of the screen and the recent zombie rewrite Pride and Prejudice and Zombies on the right. This does not particularly advance the technology; it simply implies a different relationship with the text, the author, and the translator. The key is to link the texts and make them available, even if through small micropayments for each edition.

Films are interesting as there are already possibilities in play: multiple subtitle and audio tracks, and commentary tracks by stars, directors, and others. Subtitles are a simple layer that has existed for almost a century. However, with the advent of digital discs the subtitle has been separated from the print itself, allowing the user to hide subtitles or choose which subtitles to view. Shortly after the introduction of DVD technology, better compression algorithms enabled multiple audio tracks, including commentary tracks. We are now in an era of Blu-ray discs with greater storage capacity and downloadable movie sites that allow the user access on demand. These already exist. What would be a step forward is the linking of fan translation and commentary tracks to the digital artifact itself. Files that run in sync with the film but must be started independently already exist. Three examples are the abusive subtitling that I discussed earlier through Nornes; RiffTrax, from the creators of Mystery Science Theater 3000,[96] which overdubs commentary onto various films, creating a sort of meta-humor; and fan commentary from the Leaky Cauldron,[97] one of many prolific Harry Potter fan sites on the Internet. All three are independent fan productions that are partially sanctioned by business. It would be highly beneficial to producers, prosumers, and consumers to enable the direct inclusion of these modifications into the DVDs themselves. It would also enable a new understanding of the film in which meaning is not the surface, but the build-up of meaning provided by both the original creators and all others who play with and add to it.

Finally, we arrive at digital games, where some of the most interesting fan work has been done and partially integrated. This means that the way has been opened for a hypermediated translation, but it has, so far, remained unpaved. The destabilization of the video game translation would combine the burgeoning practice of multilingual editions, where the user has a visible choice between one language version and another, with the practice of allowing and integrating fan mods. Mods are game modifications: additional maps, different physics protocols, alternate graphics, or a host of other types. Some, such as Team Fortress, have been wildly popular. However, ‘mods’ could be expanded to include alternate translations and dialogue tracks. The workers are there and available,[98] but so far these fan productions have faced nothing but cease-and-desist letters, virtual takedowns, and lawsuits.

With digital games the localization process has traditionally replaced one language, along with its library of accompanying files, with another. However, as storage capacity increases, the choice of one language or another becomes less of an issue, and certain platforms, such as the Xbox and the online portal Steam, provide multiple languages with the core software. This gives rise to the language option, where the game can be flipped from one language to another through a menu. Some games put this choice in the options menu at the title screen. Examples[99] are Gameloft’s iPhone games (almost all of them, including Block Breaker Deluxe, Hero of Sparta, and Dungeon Hunter) and Ubisoft’s Nintendo DS game Might and Magic: Clash of Heroes. Others have a hard switch that makes the language of the game correspond to the language of the computer’s system software: a computer running in English would show only English in the game, but if that computer’s OS were switched to Japanese the game would boot with Japanese enabled. Square-Enix’s Song Summoner: Encore, Final Fantasy, and Final Fantasy II iPhone releases automatically switch between English and Japanese depending on which language the iPhone is set to. The Xbox 360 has a similar mechanism that requires the system to be switched to the desired language.[100] Between these two types are games played on Steam, such as Valve’s Portal and Half-Life 2, which allow the user to launch the game in a chosen language without a system-wide switch. Finally, a few games allow the user to switch back and forth between languages at will. Square-Enix’s iPhone game Chaos Rings allows the user to switch between English and Japanese in the in-game menu, enabling a rapid switch at any time not in conversation or battle.
This last example comes closest to a destabilization of the translation, as it allows the near simultaneous visibility of multiple languages.
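The difference between the “hard” OS-locale switch and the in-game “soft” switch described above can be sketched as follows. The class, catalog contents, and keys are invented for illustration and do not reflect any actual game’s code:

```python
# Two switching models: a hard switch that fixes language from the
# system locale at boot, and a soft switch the player can flip from
# an in-game menu (as in the Chaos Rings example).
CATALOGS = {
    "en": {"new_game": "New Game", "options": "Options"},
    "ja": {"new_game": "ニューゲーム", "options": "オプション"},
}

class GameText:
    def __init__(self, system_locale):
        # Hard switch: language is fixed by the OS locale at launch.
        self.locale = system_locale if system_locale in CATALOGS else "en"

    def set_language(self, locale):
        # Soft switch: the player changes language at runtime.
        if locale in CATALOGS:
            self.locale = locale

    def t(self, key):
        return CATALOGS[self.locale][key]

ui = GameText("ja")          # a phone set to Japanese boots in Japanese
before = ui.t("new_game")
ui.set_language("en")        # the player flips the in-game option
after = ui.t("new_game")
```

The soft switch matters for destabilization because both catalogs remain resident in memory, so the flip costs nothing and can happen mid-play.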

Integrating fan-created translational mods into the software itself would further destabilize the already unstable base of multiple visible languages. This integrated form would allow the user to switch from official localization to fan translation to fan mod at whim. The official version ceases to exist, and the user is allowed both to interact with other types of users and to create fully sanctioned alternative semiotic domains. The eventual ability to mix and match a HUD in English, subtitles in Japanese, and a fan translation in Polish would be a true destabilization.[101]
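A minimal sketch of this mix-and-match layering might assign each text channel its own language pack, official or fan-made, chosen independently. All pack names and strings below are hypothetical:

```python
# Each (channel, pack) pair holds its own strings; official and fan
# packs sit side by side rather than replacing one another.
PACKS = {
    ("hud", "en-official"): {"health": "Health"},
    ("subtitles", "ja-official"): {"line1": "行くぞ!"},
    ("subtitles", "pl-fan"): {"line1": "Ruszajmy!"},
}

# The player's per-channel choices: English HUD, Polish fan subtitles.
channel_choice = {"hud": "en-official", "subtitles": "pl-fan"}

def render(channel, key):
    """Look up a string in whichever pack the player chose for a channel."""
    return PACKS[(channel, channel_choice[channel])][key]

hud_text = render("hud", "health")
subtitle_text = render("subtitles", "line1")
```

Because no pack is privileged in the data structure, the “official” localization becomes just one selectable layer among others, which is the destabilization the paragraph describes.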

Both the destabilization of the translator and the destabilization of the translation use new forms of fan and peer production to create a foreignizing, hypermediated translation. Such foreignization could be valuable in the current political moment, which equates difference with terrorism and necessitates the translational replacement of all forms of difference with local variations. Crucially, both destabilizations are not simply utopian fantasies, but legitimately productive and ready to enact. It is my intent to build, and build upon, these possibilities for opening up new forms of translation in digital media in my dissertation project on games and localization.

[1] For an example of the lack of integration of alternate media in translation studies, see: Lawrence Venuti. The Translation Studies Reader. 2nd ed. New York: Routledge, 2004. On a particular attempt to integrate it, see: Anthony Pym. The Moving Text: Localization, Translation, and Distribution. Amsterdam; Philadelphia: John Benjamins Pub. Co., 2004. On the distinct effort to consider ‘old’ media as ‘new’ see: Lisa Gitelman and Geoffrey B. Pingree, eds. New Media, 1740-1915. Cambridge: MIT Press, 2003.

[2] Antoine Berman. “From Translation to Traduction.” Richard Sieburth trans. (unpublished): p. 11.

[3] Serge Lusignan. Parler Vulgairement. Paris/Montreal: Vrin-Presses de l’Université de Montréal, 1986: pp. 158-9. Quoted in Berman. “From Translation,” p. 9.

[4] Berman, “From Translation,” p. 11.

[5] Berman, “From Translation,” p. 11.

[6] Roland Barthes. “From Work to Text.” In The Cultural Studies Reader, edited by Simon During, Donna Jeanne Haraway and Teresa De Lauretis. London: Routledge, 2007. Rosemary J. Coombe. The Cultural Life of Intellectual Properties: Authorship, Appropriation, and the Law. Durham: Duke University Press, 1998. Néstor García Canclini. Hybrid Cultures: Strategies for Entering and Leaving Modernity. Minneapolis: University of Minnesota Press, 2005. Koichi Iwabuchi. Recentering Globalization: Popular Culture and Japanese Transnationalism. Durham: Duke University Press, 2002. Koichi Iwabuchi, Stephen Muecke, and Mandy Thomas. Rogue Flows: Trans-Asian Cultural Traffic. Aberdeen, Hong Kong: Hong Kong University Press, 2004.

[7] See: Barthes, “From Work to Text.” Michel Foucault. “What Is an Author?” In The Essential Foucault: Selections from Essential Works of Foucault, 1954-1984, edited by Paul Rabinow and Nikolas S. Rose. New York: New Press, 2003. Lesley Stern. The Scorsese Connection. Bloomington; London: Indiana University Press; British Film Institute, 1995. Mikhail Iampolski. The Memory of Tiresias: Intertextuality and Film. Berkeley: University of California Press, 1998.

[8] Berman, “From Translation,” p. 14

[9] I use literary theories due to their prevalence within academia, but also because of their political nature. While other conceptualizations of translation avoid politics and ethics (particularly practical understandings of translation) comparative literary theories of translation highlight them: my underlying belief is that translation is both politically and culturally important.

[10] Jacques Derrida. “‘Eating Well,’ or the Calculation of the Subject: An Interview with Jacques Derrida.” In Who Comes after the Subject?, edited by Eduardo Cadava, Peter Connor and Jean-Luc Nancy, 96-119. New York: Routledge, 1991.

[11] George Steiner. After Babel: Aspects of Language and Translation. 3rd ed. Oxford; New York: Oxford University Press, 1998: p. 428.

[12] Ferdinand de Saussure, Charles Bally, Albert Sechehaye, and Albert Riedlinger. Course in General Linguistics. Translated by Roy Harris. LaSalle: Open Court, 1983 [1972]: p. 67.

[13] Saussure, Course, pp. 71-78.

[14] Saussure, Course, pp. 79-98.

[15] Jonathan D. Culler. Ferdinand De Saussure. Rev. ed. Ithaca, N.Y.: Cornell University Press, 1986: p. 132.

[16] Jacques Derrida. Of Grammatology. 1st American ed. Baltimore: Johns Hopkins University Press, 1976.

[17] Jacques Derrida. “Des Tours De Babel.” In Difference in Translation, edited by Joseph F. Graham. Ithaca: Cornell University Press, 1985: pp. 165-7.

[18] Jacques Derrida. “Living On: Border Lines.” In Deconstruction and Criticism, edited by Harold Bloom, Paul De Man, Jacques Derrida, Geoffrey H. Hartman and J. Hillis Miller. New York: Seabury Press, 1979.

[19] Jacques Derrida. “What Is a ‘Relevant’ Translation?” In The Translation Studies Reader: p. 443. (italics and brackets in text)

[20] Derrida, “‘Eating Well.’”

[21] Jacques Derrida. Specters of Marx: The State of the Debt, the Work of Mourning, and the New International. New York: Routledge, 1994.

[22] Philip E. Lewis. “The Measure of Translation Effects.” In Difference in Translation.

[23] Ironically, Spivak’s Derridian translation of Derrida’s Of Grammatology was successful in its abuse, but unsuccessful in getting her further translation jobs of Derrida’s works. Derridian translations are successful when they are unsuccessful.

[24] On the relationship between task, giving up and failure see: Paul De Man. “Conclusions: Walter Benjamin’s ‘the Task of the Translator’.” In The Resistance to Theory. Minneapolis: University of Minnesota Press, 1986: p. 80. For more on Derrida, Benjamin and De Man see: Tejaswini Niranjana. Siting Translation: History, Post-Structuralism, and the Colonial Context. Berkeley: University of California Press, 1992.

[25] Walter Benjamin. “The Task of the Translator: An Introduction to the Translation of Baudelaire’s Tableaux Parisiens.” In The Translation Studies Reader: p. 81.

[26] Benjamin. “The Task of the Translator,” p. 76.

[27] Emily Apter brings this out well in her work on translation and politics. Emily S. Apter. The Translation Zone: A New Comparative Literature. Princeton: Princeton University Press, 2006.

[28] Specifically, Robinson argues for the long lasting presence of Christian asceticism (both eremitic and cenobitic) coming from religious dogma, but leading into the word/sense debate. See: Douglas Robinson. “The Ascetic Foundations of Western Translatology: Jerome and Augustine.” Translation and Literature 1 (1992): 3-25.

[29] Jerome. “Letter to Pammachius.” Kathleen Davis trans. In The Translation Studies Reader: p. 28.

[30] John Dryden. “From the Preface to Ovid’s Epistles.” In The Translation Studies Reader, pp. 38-42.

[31] Roman Jakobson, Krystyna Pomorska, and Stephen Rudy, Language in Literature. Cambridge: Belknap Press, 1987: p. 429.

[32] Jakobson, Language in Literature, p. 434. There are interesting connections between formalism and Laura Marks’ work on digital translation. Marks argues that digitization necessarily robs things of certain qualities and this means they can be translated in interesting, new ways, but that they are forever robbed of originary elements. The digital becomes a universal language. See: Laura U. Marks. “The Task of the Digital Translator.” Journal of Neuro-Aesthetic Theory 2 (2000-02).

[33] Anton Popovič. Dictionary for the Analysis of Literary Translation. Edmonton: Department of Comparative Literature, University of Alberta, 1975: p. 6. Also see Niranjana’s discussion in Siting Translation, p. 57.

[34] I am skipping over large debates within game studies involving the question of the core of gaming: ludology and narratology. Roughly, whether the core of gaming is the ‘play’ or the ‘story.’ I skip this to save space, because it is a dead end that has been generally concluded with the answer of ‘both,’ because ludologists and narratologists are academics, but finally because ‘experience’ encapsulates both play and story.

[35] Carmen Mangiron and Minako O’Hagan. “Game Localization: Unleashing Imagination with ‘Restricted’ Translation.” Journal of Specialized Translation, no. 6 (2006): 10-21. Also see, Minako O’Hagan and Carmen Mangiron. “Games Localization: When Arigato Gets Lost in Translation.” Paper presented at the New Zealand Game Developers Conference, Otago 2004.

[36] Popovič, Dictionary, p. 11.

[37] Lawrence Venuti. “Foundational Statements.” In The Translation Studies Reader: p. 15.

[38] Schleiermacher is working with Dryden’s tripartite: metaphrase, paraphrase and imitation. In his understanding, then, word-for-word has been subsumed (since Jerome) for sense-for-sense, but imitation has been opened up as a larger (maligned) possibility.

[39] Friedrich Schleiermacher. “On the Different Methods of Translating.” In The Translation Studies Reader: p. 49.

[40] Schleiermacher. “On the Different Methods of Translating,” pp. 60-61.

[41] Antoine Berman. The Experience of the Foreign: Culture and Translation in Romantic Germany. Albany: State University of New York Press, 1992: p. 150.

[42] Berman, The Experience of the Foreign, p. 149.

[43] Lawrence Venuti. The Translator’s Invisibility: A History of Translation. 2nd ed. New York: Routledge, 2008 [1994]: p. 15.

[44] Venuti, Translator’s Invisibility, p. 86.

[45] Venuti, Translator’s Invisibility, p. 98.

[46] Venuti, Translator’s Invisibility, p. 276.

[47] Venuti, Translator’s Invisibility, p. 85.

[48] Lawrence Venuti. The Scandals of Translation: Towards an Ethics of Difference. London; New York, NY: Routledge, 1998.

[49] Venuti, Scandals of Translation, p. 87.

[50] J. David Bolter and Richard Grusin. Remediation: Understanding New Media. Cambridge: MIT Press, 1999.

[51] In this later work the metaphor has shifted to interfaces being both windows with immediacy and mirrors with reflection, but it is still connected to remediation with both immediacy and hypermediacy. Jay David Bolter and Diane Gromala. Windows and Mirrors: Interaction Design, Digital Art, and the Myth of Transparency. Cambridge: MIT Press, 2003: p. 82.

[52] Metatitles are an extended form of subtitles that I first discussed in my Master’s thesis; Jerome McGann’s work, including IVANHOE and his Rossetti work, can be found through his website <http://www2.iath.virginia.edu/jjm2f/online.html>; mods are fan/user-created game modifications.

[53]Alexander R. Galloway. Gaming: Essays on Algorithmic Culture. Minneapolis: University of Minnesota Press, 2006: pp. 70-84.

[54] Berman, “From Translation,” p. 6.

[55] Jacques Derrida. Glas. Lincoln: University of Nebraska Press, 1986 [1974].

[56] This application is for various ‘smart’ phones and the iPad, but the technology is still not utilized for eReaders. My point is that this lack is not for technological reasons, but for ways that the eReader is both imagined and actualized.

[57] For a general, early look at film translation see: Dirk Delabastita. “Translation and the Mass Media.” in Susan Bassnett and Andre Lefevere eds. Translation, History and Culture. London: Pinter Publishers, 1990.

[58] Lawrence W. Levine. Highbrow/Lowbrow: The Emergence of Cultural Hierarchy in America. Cambridge: Harvard UP, 1988. Referenced in Jennifer Forrest “The ‘Personal’ Touch: The Original, the Remake, and the Dupe in Early Cinema,” In Jennifer Forrest and Leonard R. Koos eds. Dead Ringers: The Remake in Theory and Practice. Albany: State University of New York Press, 2002: p. 102.

[59] As has been stated by many people in the 20th century, there is nothing objective, or reflective, about representation, and there never was for early cinema; nevertheless, this belief has never really gone away. See: Ella Shohat and Robert Stam. “The Cinema after Babel: Language, Difference, Power.” Screen 26.3-4, 1985: 35-58.

[60] This is regardless of corruption of subtitles per Abé Mark Nornes. Cinema Babel: Translating Global Cinema. Minneapolis: University of Minnesota Press, 2007.

[61] Arjun Appadurai. Modernity at Large: Cultural Dimensions of Globalization. Minneapolis: University of Minnesota Press, 1996: particularly p. 39.

[62] For Japanese this is particularly a problem; for English this is less of a problem, especially for Americans, due to the assumption that English is a global language.

[63] On MLV see: Ginette Vincendeau. “Hollywood Babel: The Coming of Sound and the Multiple Language Version.” Screen 29.2 (1988): 24-39. On FLV see: Natasa Durovicová. “Translating America: The Hollywood Multilinguals 1929-1933.” In Sound Theory: Sound Practice, edited by Rick Altman, 138-53. New York: Routledge, 1992. Also, see: Nornes, Cinema Babel.

[64] See: Chon Noriega. “Godzilla and the Japanese Nightmare: When “Them!” is U.S.” Cinema Journal 27.1 (Autumn 1987): 63-77.

[65] These are visible in the United States, to which I largely refer, but there is another history within India’s Bollywood (often illegal/unofficial) remake practices.

[66] Ironically, the actual words she uses, ホスト, ホステス and キャバレー, are all foreign loan words in katakana. Thus, even her word choice is based in an awkward schizophrenia between local and foreign.

[67] Abé Mark Nornes. “For an Abusive Subtitling.” Film Quarterly 52, no. 3 (1999): 17-34.

[68] L10n is the industry shorthand for localization: there are 10 letters between the L and the n. In addition to localization, the industry uses i18n as shorthand for internationalization and g11n for globalization.
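The formation rule behind this shorthand (first letter, count of elided letters, last letter) is simple enough to state as a one-line function; this is an illustrative sketch, not an industry tool:

```python
def numeronym(word):
    """Form the industry shorthand: first letter, number of letters
    elided between first and last, then last letter."""
    return word[0] + str(len(word) - 2) + word[-1]

# localization -> l10n, internationalization -> i18n, globalization -> g11n
```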

[69] For a discussion on the demonstration and visibility of these early games, see: Van Burnham. Supercade: A Visual History of the Videogame Age 1971-1984. Cambridge: MIT Press, 2003.

[70] In particular see Michel Foucault on the new regime of power/knowledge through a new way of seeing, and Lisa Cartwright on the problems of medical imaging technologies and truth. See: Lisa Cartwright. Screening the Body: Tracing Medicine’s Visual Culture. Minneapolis: University of Minnesota Press, 1995. Michel Foucault. The Birth of the Clinic: An Archaeology of Medical Perception. New York: Vintage Books, 1975. Marita Sturken and Lisa Cartwright. Practices of Looking: An Introduction to Visual Culture. Oxford; New York: Oxford University Press, 2001.

[71] Mary Flanagan. “Locating Play and Politics: Real World Games & Activism.” Paper presented at the Digital Arts and Culture, Perth, Australia 2007: p. 3.

[72] See: Gérard Genette. Palimpsests: Literature in the Second Degree. Lincoln: University of Nebraska Press, 1997; Gérard Genette. Paratexts: Thresholds of Interpretation, Literature, Culture, Theory. Cambridge; New York, NY: Cambridge University Press, 1997.

[73] LISA is “An organization which was founded in 1990 and is made up mostly software publishers and localization service providers. LISA organizes forums, publishes a newsletter, conducts surveys, and has initiated several special-interest groups focusing on specific issues in localization.” Bert Esselink. A Practical Guide to Localization. Rev. ed. Amsterdam; Philadelphia: John Benjamins Pub. Co., 2000: p. 471.

[74] LISA quoted in Esselink, A Practical Guide to Localization, p. 3.

[75] Lev Manovich. The Language of New Media. Cambridge: MIT Press, 2001.

[76] On experience as the core equivalence see the work of Carmen Mangiron and Minako O’Hagan: Carmen Mangiron. “Video Games Localisation: Posing New Challenges to the Translator.” Perspectives: Studies in Translatology 14, no. 4 (2006): 306-23; Mangiron and O’Hagan, “Game Localization;” O’Hagan, Minako. “Conceptualizing the Future of Translation with Localization.” The International Journal of Localization (2004): 15-22; Minako O’Hagan. “Towards a Cross-Cultural Game Design: An Explorative Study in Understanding the Player Experience of a Localised Japanese Video Game.” The Journal of Specialized Translation, no. 11 (2009): 211-33; O’Hagan and Mangiron, “Games Localization.”

[77] Esselink, A Practical Guide to Localization, p. 46.

[78] Frank Dietz. “Issues in Localizing Computer Games.” In Perspectives on Localization, edited by Kieran Dunne. Amsterdam; Philidelphia: John Benjamins Publishing, 2006. Also, Mangiron and O’Hagan, “Game Localization.”

[79] The move to CG from live action might also be a contributing factor to the rise of domesticating, replacement localization. Technically, gaming started with live action cut-scenes with big budgets and famous actors in the 1990s (Wing Commander III (1994); Star Wars: Jedi Knight: Dark Forces II (1997)), but it moved to CG cut-scenes using the game engine by the late 1990s and early 2000s (Half-Life (1998), Star Wars: Jedi Knight II: Jedi Outcast (2002)). In part this could be seen as a budget issue, but in part it is an immersion issue, as live action cut-scenes could be considered more jarring due to their difference from the regular game.

[80] This is, of course, ironic as cinema often overdubs the dialogue into the film due to the difficulties of recording clear dialogue when filming.

[81] This is an incredibly rough definition especially due to how ‘piracy’ relates to fan production, modding and copyright.

[82] Piracy is rampant with PC games, due to the ease of duplicating CDs and DVDs, and only slightly better with console games where cartridges are harder to duplicate. For various views on game piracy see: Ernesto. “Modern Warfare 2 Most Pirated Game of 2009.” TorrentFreak. Posted: December 27, 2009. Accessed: June 6, 2010. <http://torrentfreak.com/the-most-pirated-games-of-2009-091227/>. David Rosen. “Another View of Video Game Piracy.” Kotaku. Posted: May 7, 2010. Accessed: June 6, 2010. <http://kotaku.com/5533615/another-view-of-video-game-piracy>. In general, also see the blog Play No Evil: Game Security, IT Security, and Secure Game Design Services, particularly the “DRM, Game Piracy & Used Games” category: <http://playnoevil.com/serendipity/index.php?/categories/7-DRM,-Game-Piracy-Used-Games>.

[83] Mangiron and O’Hagan, “Game Localization.”

[84] That the equivalent experience comes from, and aims toward, generic cultural attributes of a presumed group, and not a complex, real group, is another problem entirely.

[85] Esselink, A Practical Guide to Localization, p. 4.

[86] Appadurai, Modernity at Large. Toby Miller, Nitin Govil, John McMurria, Richard Maxwell, and Ting Wang. Global Hollywood 2. London: BFI Publishing, 2005. John Tomlinson. Cultural Imperialism: A Critical Introduction. Baltimore: Johns Hopkins University Press, 1991.

[87] Harumi Befu. Hegemony of Homogeneity: An Anthropological Analysis of “Nihonjinron.” Melbourne: Trans Pacific Press, 2001. Stephen Vlastos. Mirror of Modernity: Invented Traditions of Modern Japan. Berkeley: University of California Press, 1998. Tomiko Yoda and Harry D. Harootunian. Japan after Japan: Social and Cultural Life from the Recessionary 1990s to the Present. Durham: Duke University Press, 2006.

[88] I have written about both the politics of Square-Enix as a Japanese company and the International Edition as a political force elsewhere. See: William Huber and Stephen Mandiberg. “Kingdom Hearts, Territoriality and Flow.” Paper presentation at the 4th Digital Games Research Association Conference. Breaking New Ground: Innovation in Games, Play, Practice and Theory. Brunel University, West London, United Kingdom. September, 2009; Stephen Mandiberg. “The International Edition and National Exoticism.” Paper presentation at Meaningful Play. Michigan State University, East Lansing. October, 2008.

[89] There are serious labor issues involved in these two trends of translation. The first is the labor of fans who create translations. This could be alleviated through micro-payments for the additional localization packages; fans must receive some compensation for their labor, as the situation is otherwise dangerously close to exploitation. The second is the possible de-skilling of professional translators and localizers should their work migrate to fans. Here, micro-payments and companies’ continued need to pay localizers for primary localizations should alleviate the de-skilling somewhat. These matters demand more attention than I can give them in the present paper.

[90] See: Joseph Reagle. Good Faith Collaboration: The Culture of Wikipedia. Cambridge: MIT Press, 2010.

[91] Rocketboom Know Your Meme. <http://knowyourmeme.com/memes/lolcats>; I Can Has Cheezburger. <http://icanhascheezburger.com/>. Hobotopia. <http://apelad.blogspot.com/>.

[92] LOLCat Bible Translation Project. <http://www.lolcatbible.com/index.php?title=Genesis_1>.

[93] A slightly different translation project that utilized the masses is Fred Benenson’s Kickstarter project Emoji Dick. Benenson used Kickstarter, an online funding platform, to fund a translation of Moby Dick into Emoticons using Google’s Mechanical Turk. Thousands of individual Mechanical Turk users were paid pennies to translate individual sentences into emoticons and the results were published. See: <http://www.kickstarter.com/projects/fred/emoji-dick>.

[94] FLOSS Manuals. <http://en.flossmanuals.net/>.

[95] Yochai Benkler. The Wealth of Networks: How Social Production Transforms Markets and Freedom. New Haven: Yale University Press, 2006.

[96] RiffTrax. <http://www.rifftrax.com/>

[97] The Leaky Cauldron. <http://www.the-leaky-cauldron.org/features/dvdcommentaries>.

[98] Fan translations and retranslations have both existed over the past decades. For instance, see the ChronoTrigger retranslation <http://www.chronocompendium.com/Term/Retranslation.html>, the Mother 3 fan translation <http://mother3.fobby.net/>, and the Seiken Densetsu 3 fan translation <http://www.neillcorlett.com/sd3/>.

[99] There are innumerable examples of each type. I am simply listing ones that come to mind.

[100] The Xbox 360 information comes from Rolf Klischewski. IGDA LocSIG mailing list. May 31, 2010.

[101] While Dyer-Witheford and De Peuter would likely relegate this industry-integrated solution to a form of apologia for Empire, I prefer to think of it as a dialogic solution. See: Nick Dyer-Witheford and Greig De Peuter. Games of Empire: Global Capitalism and Video Games. Minneapolis: University of Minnesota Press, 2009. Mikhail Bakhtin, The Dialogic Imagination: Four Essays. Austin: University of Texas Press, 1981.

On Translation and/as Interface

I. Windows, Mirrors and Translations

In their book Windows and Mirrors, J. David Bolter and Diane Gromala discuss the interfaces of digital artifacts (primarily artistic ones, as their sample set is drawn largely from SIGGRAPH 2000) as having two tendencies. The first is the invisible window, where we see through the interface to the content; the other is the seemingly maligned (at least in much recent design writing) reflective mirror, which reflects how the interface works with, and on, us as users.

This is similar in some ways to Ian Bogost and Nick Montfort’s Platform Studies initiative, where the interface exists between the game’s form/function and its reception/operation, and can do many things depending on its contextual and material particulars. We need only look at the difference between Myst, with its clean screen, and Quake, with its HUD, or between Halo, with its standard gamepad, and Wario Ware: Smooth Moves, with its wiimote, to see the range.

However, another thing that the discussion of windows and mirrors, immediacy and hypermediacy, seeing through and looking at brings up when paired with interface is translation. A translation is also an interface. It can be a window or a mirror, transparent or layered: one can see through it to some content, or one can be forced to look at the form of the translation itself.

But thinking of translation as an interface in Bolter and Gromala’s sense, or as Bogost and Montfort’s interface layer, is unusual. The usual move is to place translation outside the game as a post-production necessity that enables the global spread of the product, or, at best, as an integrated element of the production side that minimally alters the text so that it can be accepted in the target locale. Even researchers within the field of game studies generally ignore the language of the game: nobody asks what version the researcher played, because we all recognize that we play different versions; what matters is that the researcher played at all.

So translation’s place is in question. Is it production? Post-production? Important? Negligible? And how does one study it? We can barely agree upon how to study play and games themselves, so surely this is putting the carriage before the horse (or maybe putting some nails on the carriage before either exists). But I still wish to follow through with this discussion, as I believe it can be productive. My question is how translation relates to games, and I hope to arrive at a few thoughts and answers, if not a single ‘truth.’

II. Translation and Localization

As Heather Chandler has so wonderfully documented, the translation of games has had a variable relationship to the production cycle. At one point it was completely post-productive and barely involved the original production and development teams. At its earliest it was simply the inclusion of a translated sheet of instructions to aid the user in deciphering a game in a completely foreign language. This still exists in certain locations, especially those with weaker linguistic and monetary Empires (obviously not English, but ironically including China, where games are often gray- or black-market Japanese imports). This type of translation, called a non-localization, has slowly given way to more complete localizations, including “partial” and “full” localizations. Partial localizations maintain many in-game features: menus and titles switch language, and while audio may remain as is, subtitles will be included. In contrast, a full localization tends toward altering everything to a target preference, including voices, images, dialogue, background music, and even game elements such as diegetic locations. As the extent of localization increased, the position of translation in the production cycle changed, both temporally and in importance. It moved forward and required pre-planning for nested file structures, and it grew in importance, so that more money might be spent to ensure a better product.
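The continuum from non- to full localization described above can be pictured as an asset checklist. The following is a minimal sketch: the tier names follow the essay, while the specific asset categories and the checklist structure are my own illustrative assumptions, not an industry standard.

```python
# Hypothetical sketch of localization tiers as an asset checklist.
# Tier names follow the essay; asset categories are illustrative assumptions.
ASSETS = ["manual", "menus", "subtitles", "voice_audio", "textures", "music", "locations"]

TIERS = {
    # Only a translated instruction sheet ships with the foreign-language game.
    "non-localization": {"manual"},
    # Menus and subtitles switch language; audio and art stay as-is.
    "partial": {"manual", "menus", "subtitles"},
    # Everything may be altered for the target locale, even diegetic locations.
    "full": set(ASSETS),
}

def is_translated(tier: str, asset: str) -> bool:
    """Return True if the given asset is translated at this localization tier."""
    return asset in TIERS[tier]
```

Under this toy model, `is_translated("partial", "voice_audio")` is False: a partial localization subtitles the original audio rather than re-recording it.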

However, other than a few gaffes like “all your base” and other poor translations from the early years, game translation has become increasingly invisible. This invisibility, or transparency, has been written about extensively by Lawrence Venuti regarding literary translation, the status of the translator, and the relationship of global to national cultural production. For my purposes here I will simply say that he considers fluent translation a problem (in the context of American Empire), and that current game localization practices (which are multi/international, but in many ways American-centric) do exactly what he claims is harmful. We need not accept his arguments regarding empire and discursive regimes of translation (although I do), but we should be aware of the parallels between what he describes through literary analysis and translation reviews, and the way nobody even talks about a game’s translation.

So the industry hides translation. But why does the academic community ignore it? Is it not a part of games? Maybe. But is it a part of play?

III. Ontology

Ontologies of play typically exclude translation. This is most obviously demonstrated in Jesper Juul’s summary of common definitions of games and play, which he uses to form his own classic game model. Rules are all well and good, but all games have a context, and it is this context that Juul misses when he dismisses the idea of “social groupings” (Juul 34). Juul pulls this from Huizinga, and it is key because it relates to Huizinga’s primary contribution of the magic circle and the “ins” and “ofs” of play and culture.

I would argue that games promote social groups, but they also form within social groups, and language is crucial to this as an important (perhaps primary) marker of a social group. However, in Juul’s final analysis “the rest of the world” has almost entirely been removed as an “optional” element (41). It is one thing to say that the outcome might affect the world; it is another to say a game can only be created through that world and that its mere playing affects the world. Juul even acknowledges this in the conclusion to the chapter, where he notes that pervasive and locative games break the rule. However, I would argue that even the classic model does not obey the “bounded in space and time” principle.

The former can be demonstrated through Scrabble: a game created in English with strict rules, negotiable outcomes, player effort, attachment, valorization of winning, and many ways to achieve it. But the game is completely attached to English. The letters have point values based on ease of use, and the quantity of each letter tile is based on how commonly the letter occurs. The game is designed around English, and one cannot play it with other languages. Take Japanese: even if one were to Romanize the characters one wouldn’t have nearly enough vowels, and if one replaced all of the letters with hiragana there would still be far too many homophones to make a meaningful, difficult game. Japanese Scrabble might be possible, but it would need to be created by changing a great deal of the game. Scrabble is bounded in space and time, but contextually so.
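The dependence on English letter statistics can be made concrete. The following toy sketch assigns point values from letter frequency; the frequency figures are rough English percentages and the scoring thresholds are my own assumptions, not the actual Scrabble rules.

```python
# Toy sketch: Scrabble-style letter values roughly track inverse letter frequency.
# Frequencies are rough English percentages (illustrative, not exact).
ENGLISH_FREQ = {"e": 12.7, "t": 9.1, "a": 8.2, "n": 6.7, "j": 0.15, "q": 0.10, "z": 0.07}

def rough_value(letter: str) -> int:
    """Assign a higher point value to rarer letters (an invented scoring rule)."""
    f = ENGLISH_FREQ[letter.lower()]
    if f > 6:
        return 1    # common letters (E, T, A, N...) are cheap
    if f > 1:
        return 3
    return 10       # rare letters (J, Q, Z) score high, as in English Scrabble

# Swap in another language's frequency table and every value, and the game's
# whole balance, changes: the design is bound to English.
```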

For the latter we can return to Huizinga and Caillois, who both locate play and games within a relationship to culture. Their teleological and Structuralist issues aside, it is important not to simply separate games (the text) from culture, time, and place (the context) in a reductively formal analysis. Huizinga links play to culture as a functional element; play serves a purpose even if that purpose has changed. Caillois notes a key association between types of play and particular societies. Games may be a separate place, but they affect the real world, and vice versa.

IV. Platform Studies

So context is important. Essential even. Let’s tack it on and see what happens. Or better yet, let’s say it’s pervasive and inseparable, but also difficult to distinguish. This is much like Bogost and Montfort’s Platform Studies model, so let’s see how translation could be integrated into that model.

Here I will primarily use Montfort’s earlier conceptualization of platform studies from his essay “Combat in Context.” Montfort simplifies Lars Konzack’s seven-layer model into five layers by moving cultural and social context from a layer to a surrounding element. It is interesting, however, that while context becomes a surrounding element, it is the platform that is key: everything in the model relies on the platform.

At the base level, the platform enables what can be created upon it. This involves whether the game appears on a screen; whether the system plays DVDs, cartridges, or downloaded files; how large those are; and what size of game they allow. It is the capabilities of the system and what those capabilities enable. However, the platform layer exists in a context that is both technological and socio-cultural. The processor chip of the platform sits in a particular context and limits the platform, but the existence of a living room with enough space to move can also limit the platform.

Second is the game code. The switch from assembly to higher-level programming was enabled by platform advancements, but it also enabled great differences in the further layers. The way the code exists is also integrally related to language. Translating assembly code is painstaking and almost always avoided; the era of assembly code was also the era of in-house translations and non- or partial localizations. In contrast, C and its derivatives enable greater linguistic integration, and as long as programs in higher-level code are written intelligibly, translating them is possible. Context at the game code level involves language. This much is obvious, as code is language. But I mean something further: there is a shift in allowances along the way that reveals how real-world “natural/national” languages become integrated, but always subsumed under machine languages.
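The point about higher-level languages can be made concrete: once strings live in data rather than interleaved with code, translation becomes an editing task. A minimal sketch in Python; the locale codes, keys, and strings are illustrative, not drawn from any actual game.

```python
# Sketch of why higher-level code eases translation: dialogue lives in a
# lookup table keyed by locale instead of being hardcoded in the program.
# All locale codes and strings here are illustrative assumptions.
STRINGS = {
    "en": {"greet": "Welcome, traveler.", "confirm": "Yes"},
    "ja": {"greet": "ようこそ、旅人よ。", "confirm": "はい"},
}

def t(locale: str, key: str) -> str:
    """Fetch a translated string, falling back to English if missing."""
    return STRINGS.get(locale, STRINGS["en"]).get(key, STRINGS["en"][key])

# A translator can add a locale by editing data, never touching game logic,
# whereas assembly-era text was woven into the code and painful to extract.
```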

Third is the game form: the narrative and rules. What we see, hear and play (if not ‘how’ we see, hear and play). This is the non-phenomenological game. The text, as it is. Of course, if it is the text then what is the surrounding context other than everything?

As we’ve seen from Juul, the rules suggest languagelessness: we enter a world with a set of rules separate from life, and this prevents one from linking the game to life. But the narrative, if one does not consider it an inconsequential thing tacked onto the essential rules, is related to contextually relevant things and presented in linguistically particular ways. Language is here as well, and translation bears an important role. In many ways this is the main place in which one might locate translation, but only if one is a narratologist: if the story is of prime importance, form is where translation exists.

The fourth level is the interface. Not quite the interface that I began with, or not yet, but the link between the player and the game: the “how” of seeing, hearing and playing the game. To Bogost and Montfort this is the control scheme, the wiimote and its phenomenological appeal compared to the gamepad or joystick, but it is also the way the game layers the information it must communicate to the user. The form of the game leads toward certain options of interface: a PvP FPS must have easily read information that allows quick decisions in real game time, but a slow RPG can have dense, opaque layers of interface that force the user to spend hours making decisions outside of game time.

The interface also enables certain things. A complicated interface is hard to pick up and understand, but a simple one is easy. This is a design principle that Bolter and Gromala contest, but it contains some truth. A new audience is not likely to pick up the obscenely difficult interface layering of an RPG or turn-based strategy game, but a casual point-and-click may be easily picked up and learned (if just as easily put down and forgotten).

In some ways this is also where translation exists, and in some ways it is not. Certainly the GUI’s linguistic elements can be translated, but more often they are designed in a supposedly non-linguistic and universal manner: a heart symbol stands for life and a lightning bolt stands for magic or energy, or life is red and energy/magic is blue. Similarly, the audio cues are often untranslated, and controls mainly stay the same. Perhaps one of the few control changes of interface is the PlayStation’s alteration of O, or ‘maru,’ for ‘yes’ and X, or ‘batsu,’ for ‘no’ in Japanese to X, or check, for ‘yes’ and O for ‘no’ in English.
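The maru/batsu swap can be pictured as a thin, locale-keyed mapping layer between physical buttons and game actions. A hypothetical sketch; actual console firmware and per-title implementations differ.

```python
# Sketch of the PlayStation confirm/cancel swap as a locale-keyed input map.
# The mapping layer itself is an invented illustration.
BUTTON_MAP = {
    "ja": {"circle": "confirm", "cross": "cancel"},  # maru = yes, batsu = no
    "en": {"cross": "confirm", "circle": "cancel"},  # X = yes in Western releases
}

def action_for(locale: str, button: str) -> str:
    """Resolve a physical button press to a game action for the given locale."""
    return BUTTON_MAP[locale][button]
```

The same physical press thus yields opposite meanings depending on the locale layer, which is precisely what makes it one of the few *translated* pieces of interface.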

The fifth level is reception and operation: how the user and society receive the game, how it has come from prequels and gone to sequels, its transmedial or generic reverberations, and even the lawsuits and news surrounding it. All of these point outside of the game, but how does one then separate context? Is the nation the receiver or the context? Is the national language or dominant dialect part of the level or part of the surrounding context? Is it affected by the game, or can it then affect the game? And even if it affects the game from the top layer, is it negligible in its importance? Is this a material versus ideological Marxist fight for a new generation?

A short answer is that Bogost and Montfort answer all of this by making context a surrounding element, but they fail to highlight its importance. By pushing context out to the surroundings, they essentialize the core and sanction an analysis that does not include the periphery. The core can be enumerated; the periphery can never be fully labeled or contained.

Elements of importance are too destabilized to be meaningful when analyzed according to the platform studies model. Translation is a prime example, but race and sexuality are equally problematic. Their agenda is not contextual but formal; mine is contextual and cultural.

V. Translation as Interface

The goal of localization is to translate a game so that a user in the target locale can have the same experience as a user in the source locale. For localization, then, translation is about providing a similar fifth-level reception and operation experience. However, to provide this experience the localizers must alter the game form level by physically manipulating the game code level. The interface, beyond minor linguistic alteration, is not physically altered, and yet it is the metaphor for what is being done to the game itself. The translation of a game, like the interface-as-window of Bolter and Gromala’s critique, attempts to transparently allow the user to look into a presumed originary text, or in the case of games, into an originary experience. It reduces the text to a singular experience/text. However, the experience and text were never singular to begin with. In translations, too, we need mirrors as well as windows. How, then, can we make a translation that reads like a mirror by reflecting the users and their own experience?

First, all of Bolter and Gromala’s claims against design’s obsession with windows and transparency are completely transferable to games as digital artifacts and to the localization industry’s professed agendas. Thus, the primary necessity is to acknowledge the benefit of a non-window translation. Second, the translation must be presented as a visible, reflective interface that shows the user’s playing particulars, the original’s playing particulars, and the way the game’s form and code have been changed in the process. This could be enabled by a more layered, visible, foreignizing translational style. Instead of automatically loading one version of the game, the user should be required to pick a translation and be notified that they can pick another. Different localizations should be visibly provided on a single medium, and alternative fan-produced translation modifications should be enabled. If an uncomplicated translation-interface is an invisible and unproductive interface, then a complicated translation-interface is a visible and productive one. Make the translational interface visible.
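A proposal of this kind might look something like the following. This is a speculative sketch of a “mirror”-style loader that surfaces every available translation, official and fan-made, instead of silently loading one; all translation names, sources, and the mechanism itself are invented for illustration.

```python
# Speculative sketch of a visible translation-interface: the game lists its
# translations and flags their origin rather than auto-loading one locale.
# Every name and source label here is hypothetical.
TRANSLATIONS = {
    "ja-original": {"source": "publisher"},
    "en-official": {"source": "publisher"},
    "en-fan-retranslation": {"source": "fan patch"},
}

def list_translations() -> list:
    """Expose every translation so the player must choose, and can re-choose."""
    return sorted(TRANSLATIONS)

def load(choice: str) -> str:
    """Load a translation, displaying its origin instead of hiding it."""
    if choice not in TRANSLATIONS:
        raise ValueError(f"unknown translation: {choice}")
    return f"Loaded {choice} ({TRANSLATIONS[choice]['source']})"
```

The design choice is the point: making the player pick, and labeling fan patches alongside official localizations, turns the translation from an invisible window into a visible mirror.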

VI. References

  • Bolter, J. David, and Diane Gromala. Windows and Mirrors: Interaction Design, Digital Art, and the Myth of Transparency. Cambridge: MIT Press, 2003.
  • Chandler, Heather Maxwell. The Game Localization Handbook. Hingham: Charles River Media, 2005.
  • Chandler, Heather Maxwell. The Game Production Handbook. 2nd ed. Hingham: Infinity Science Press, 2009.
  • Juul, Jesper. Half-Real: Video Games between Real Rules and Fictional Worlds. Cambridge: MIT Press, 2005.
  • Montfort, Nick. “Combat in Context.” Game Studies 6, no. 1 (2006).
  • Montfort, Nick, and Ian Bogost. Racing the Beam: The Atari Video Computer System. Cambridge: MIT Press, 2009.
  • Venuti, Lawrence. The Translator’s Invisibility: A History of Translation. 2nd ed. New York: Routledge, 2008 [1995].

Remakes and Demakes: Logics of Repetition in Gaming

Note: The following is a work in progress on remakes and demakes, repetition, and remediation. I post now seeking comments and responses in order to help in the rewrite process.

Abstract: Remakes are a form of repetition well known, and increasingly engaged with, in cinema and literary studies. Recently remakes have spread to gaming, and alongside them has come a string of games with opposing tendencies, called demakes. This essay explores the oppositional logics of repetition at play in gaming remakes and demakes in terms of technological, representational and historical modes of knowledge production. Whereas remakes follow dominant cultural trends and help whitewash the past, demakes are oppositional, playing with technology, the past and nostalgia in a different, if not better, way.

Remakes and Demakes: Logics of Repetition in Gaming

Written: March 21, 2009

Both remaking and demaking have a particular relationship to repetition and time. While their obvious relationship is with the present and the past, they also have a stake in the relationship between present and future. Remaking renews the past; demaking returns to the past. Both are crucially involved with concepts of history, memory and nostalgia. However, these aspects of the remake and the demake seem to be elided by the fetishization of realistic representation, technology and economics. In the following pages I will map out the techno-economic, representational/simulational and historico-nostalgic logics involved in the current repetition of gaming texts.

One relatively recent gamic remake is Tomb Raider: Anniversary. Produced by Eidos, the same company that released Tomb Raider in 1996, the remake marks the 10-year anniversary of the original game, which began both the genre and the property, the latter of which has spread to multiple media (game, film, novelization, et cetera) over the past decade.

Tomb Raider is a 3rd person action adventure game. The player controls the now famous character Lara Croft as she explores various environments (Peru, Greece, Egypt and Atlantis) searching for the keys to unlock, and eventually explore, the lost city of Atlantis. The Anniversary remake uses the same 3rd person genre, narrative and particular locales as the original. The graphics are updated from clunky early polygonal representation to high-polygon-count graphics that produce a more “naturalistic” or “realistic” representation. [1] The remake could be considered a “faithful” translation of the original, as it reproduces the flow and specific scenes/levels of the original, but it works to erase the dated aspect: the graphics. [2] By this logic the important, reproduced aspects are the play style and the property itself.

The Anniversary remake, as a highly commercial endeavor, reproduces the “good” elements of the original for simple economic reasons: it brings back, with extras, what the audience has paid for time and again. The generic mode of the 3rd person 3D action/adventure, itself an update of 2D platformers such as Super Mario Bros., was popularized in the original Tomb Raider. It has been revived as a genre in each Tomb Raider sequel, and the Anniversary remake flogs the tired generic horse enough to make a (significant) profit. The economic logics of “better-faster-more” are of central importance to the remake. Perhaps the only remakes that do not follow this threefold increase are those that attempt to take the game as it is visually and transfer it from a previous platform, now increasingly difficult to obtain or simply obsolete, to a modern one. Examples are the Square-Enix Final Fantasy games remade from the 1990s Super Nintendo hardware for the 2000s Game Boy Advance and Nintendo DS hardware. In these remakes additional elements (“more”) are added, but the other two aspects remain as they were in the original (faithfully reproduced graphics and speed).

The remake is thus involved in a process of renewal where “old” is turned into “new” in a strictly linear fashion that posits less < more, slow < fast, abstract < naturalistic, and so forth. This techno-economic logic of “better-faster-more” dominates gaming on the top commercial layer, but it is directly opposed within the discourse surrounding the demake.

Demaking is a recent phenomenon where a game is translated in the opposite direction compared to the standard remaking. The term “de-make” was coined by Phil Fish on the TIGForums in August of 2007. In response to recent remakes, Fish writes:

what about the opposite? relatively new 3d games being remade for lesser platforms. like that guy who ported ocarina of time to SNES, or turning doom into a cellphone RPG… i fint [sic] it highly interesting to see what happens when that happens. see how far you can push a game backwards, and see what gameplay elements remain intact. what got cut, what got added? does it play better? can anybody think of other downgrades/de-makes? can anybody think of a better name for those games? (Fish 2007).

Instead of taking an old game and making it new, the demake takes a new game and makes it old, through either genre or graphics. Fish uses the term “downgrade” in opposition to the normative “upgrade” assumed in the remake’s advancement along the previously mentioned techno-economic logic of “better-faster-more,” and then asks if anybody can think of another name. A year and a half later the hyphen has been removed, and numerous other demakes have been made, both independently by interested parties and within The Independent Gaming Source’s Bootleg Demakes Competition.

The demake that I will primarily consider here, D+Pad Hero, was created by Kent Hansen and Andreas Pederson and is a demake of Guitar Hero, a popular music simulation game. Guitar Hero is one of many music simulation games (others include Rock Band, SingStar, GuitarFreaks and DrumMania) where the user “plays” an instrument (drum set, guitar) or sings in rhythm to music and beats displayed on the screen. With Guitar Hero the guitar is unplayable as a guitar and simply consists of five input buttons and a “strum” bar that the player uses in order to “play” the song. The player must press the buttons and use the strum bar in accordance with the cues on the screen, which are in tune and rhythm with the song being played, and a score is given at the end of the song based on accuracy. This logic appeared in older games such as BeatMania (1997), Pop’n Music (1998) and Dance Dance Revolution (1998), but it has increasingly been combined with simulated musical production and commercial music. While Pop’n Music had the player press five large, colored buttons arranged on an arcade frame in accordance with the cues and a song, the later games put the buttons on a faux instrument. Guitar Hero is in its third main release, the other games are at similar sequel numbers, and all have had dozens of expansions and sequels. The sequels follow a techno-economic logic similar to the remake’s in that they have more realistic controllers (“guitars” where you strum while holding the proper buttons down) and more popular music (the Beatles, Metallica, et cetera); the expansions simply have more songs.
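The accuracy scoring shared by Guitar Hero and its relatives can be sketched as timing windows around each on-screen cue. The window sizes and point values below are my own assumptions, not taken from any shipped game.

```python
# Toy sketch of rhythm-game accuracy scoring: each input is compared with its
# cue time, and tighter timing earns more points. Windows/values are invented.
def judge(cue_ms, press_ms):
    """Score a single press against its on-screen cue (times in milliseconds)."""
    error = abs(press_ms - cue_ms)
    if error <= 30:
        return 100   # "perfect"
    if error <= 80:
        return 50    # "good"
    return 0         # miss

def song_accuracy(cues, presses):
    """Overall accuracy: points earned over the maximum possible."""
    earned = sum(judge(c, p) for c, p in zip(cues, presses))
    return earned / (100 * len(cues))
```

The same formula applies whether the inputs come from a faux guitar or, as in D+Pad Hero, an NES d-pad: the controller changes, the accuracy logic does not.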

D+Pad Hero takes the basic rhythm game formula and reproduces it with 8-bit graphics and 8-bit music for the Nintendo Entertainment System, a system first released in 1983. [3] The player uses the archetypal NES controller instead of the faux guitar, pressing the arrow keys and the A and B buttons in tune to the music. The concepts of accuracy and score remain the same. Beyond simply reproducing the game visually on the NES, the songs themselves have been rendered compatible with the hardware: the authors took various popular songs and converted/translated them to 8-bit midi music. [4] The techno-economic logic of the sequel and remake is reversed on both sides: the game has old graphics, old music and old hardware, which reverses the technological logic; and the game is unsellable due to copyright, existing as an economic product solely through donations. The demake is thus a work of love, fun, or fan/nerd culture.

The second logic is that of representation/simulation. The idea of naturalistic graphics, or perceptual realism, is related to the concept of “better-faster-more” within the techno-economic logic, but it extends beyond a simple technological fetish for photorealistic graphics. Both the remake and the demake are crucially related to re-presentation and the link (or lack thereof) between production and reproduction, image and meaning, and original and derivative. True meaning, lack of meaning and drawn meaning are in conflict within the original, the remake and the demake. One line of thinking extends from the Marxist desire to lift the veil of ideology and thereby reveal truth, through Althusser and the inability to ever escape ideology, to Baudrillard, the hyperreal, simulation and the inability to ever return to any ultimate truth. True meaning ends within the concept of simulation, but this path can also be linked with a separate one that goes through semiotics and Barthes’ differentiation of the work and the text, Foucault’s destabilization of the author, and Manovich’s concept of transcoding. These two lines then culminate in Jenkins and the postmodern celebration of fan culture and the user’s ability to make his or her own meaning. Meaningless pluralism is reached through this path. [5]

A third logic at play in the remake is historico-nostalgic. The past, unusable due to technical incompatibilities (systemic differences in coding and hardware), is bracketed off and, depending on your outlook, either erased or simply put into the past. A few choice bits are taken out of this graveyard of the past and become “history.” While Doom is remade time and again, Pathways into Darkness (a contemporaneous 1st person shooter) is written out of the revived history: it remains in the past, while Doom is remade as Doom III, taken out of the past and redeployed in the present as historical.

This third logic is one of history and memory, but the nostalgia differs between the remake and the demake. Key to understanding the difference between the two forms of repetition is Svetlana Boym’s conceptualization of restorative and reflective nostalgic tendencies. While restorative nostalgia attempts to fill in the holes of the past to produce a utopian present, reflective nostalgia dwells in the signs of the past itself. This distinction is crucial to finding some sort of useful meaning beyond the reductive celebration of postmodern repetition and pluralism.

Techno-Economic Logic

The initial logic within the remake is one of technology and economics. The remake is made in a particular way because it sells, and the particular way in which it is made reproduces the dominant trend of technological advancement. [6] Since the 1950s computers have grown exponentially more powerful. The course has generally followed Moore’s law, which posits that the number of transistors on a chip (and thus, roughly, processing power) doubles about every two years. While the technology has indeed historically followed this logic, whether this is a natural development or a self-fulfilling prophecy driven by the industry’s desire to maintain the trend is unknowable. Similarly problematic is the chicken-and-egg effect of hardware being forced into obsolescence by Moore’s law and the public’s desire for faster computers. The result, however, is that computers either needed to, or simply could, run more complex applications.

The general increase in computer speed needs to be understood in terms of the computer’s ability to process minute tasks faster, which translates into completing more tasks in the same amount of time, which can be further translated into the simple idea of complexity of tasks and possibilities. The computer went from being able to calculate very simple algorithms to running through detailed, complex ones. One way to understand this logic is through gaming capabilities. Within the initial era of gaming there were text adventures like Zork, where input was limited and graphics were non-existent, and games like Pong, which had highly pixelated graphics and simple logic. There were limiting factors on both the storage and retrieval sides of computer applications: limited memory to store data and limited processor power to retrieve and use the information. As both increased, programs could become more complex, and it is this complexity that can be translated into more detailed graphics and games. A rather jagged progression of gaming consoles is the Nintendo Entertainment System (1983) -> Super Nintendo (1990) -> PlayStation (1994) -> PlayStation 2 (2000) -> PlayStation 3 (2006); such a progression can be understood slightly better knowing that each step brought a more powerful processor (8-bit, 16-bit, 32-bit, et cetera) and increased storage (ROM cartridges, memory cards, CDs, DVDs and Blu-ray disks). [7] The 8-bit graphics of the Nintendo were pixel dominated, with limited color and music. The Super Nintendo allowed far more detailed graphics due to the increase in processor speed, and the music and sound changed from a very limited set of beeps and boops to a vast supply of beeps and boops. With the PlayStation, graphics progressed beyond combinations of pixels into polygons and the production of 3D representations instead of flat, pixelated scenarios.
The PlayStation 2 and 3 utilized CD-quality sound, aided by the move to DVD and Blu-ray disks with greatly expanded storage, and increased the number of polygons involved in any graphical representation so that a tube-legged, square-shoed, triangle-breasted, ovoid-headed polygonal Lara Croft eventually turned into the far more “naturalistic,” if not realistic, character of the Anniversary remake. [8] The graphics of the Anniversary remake are far from photo-realistic, but they are certainly an upgrade brought about by a decade of technological development.

The increase in calculation power has brought with it the possibility of increasingly realistic representation. Whether by chance or by nature, there has been a parallel development between realistic representation and processor speed: games that increased their naturalistic representation as processing power increased have also sold better. This logic is best seen in the dominance of first-person shooter (FPS) games such as Doom over visual adventure games such as Myst (Hutchison 2008). One aspect of the techno-economic logic within the remake follows this assumed parallel between realisticness and economic well-being: making a remake with better graphics will make it sell even better than the original game.

A second aspect of the techno-economic logic within the remake involves the cost of producing a game. Remaking something that already exists invariably costs less than making something new, as long as all other aspects of the production remain the same. This is the reasoning behind Hollywood’s remaking industry: it is cheaper to remake an old movie than to write a new script, figure out how to enact it, and then hope that the ideas behind it were not poor in the first place (Forrest and Koos 2002). The remake attempts to remove that uncertainty by simply remaking a well-worn, surefire idea/script. While it is often problematic to map a concept developed for one medium onto another, the logic should hold here: the costs of remaking an older game are lower than those of making a new game from scratch as long as at least some aspects of the storyboards, narrative, character ideas or even code itself are reused. Only certain (popular) games are chosen for remaking, and both the following of technological evolution and the low(er) production cost ensure that they will be a better economic success than otherwise. [9]

While the remake follows the linear techno-economic logic, the demake exhibits an opposing logic that breaks with both technological and economic expectations. Unlike the parallel logic of upgrading computing power and realisticness, the demake’s “downgrade” simulates a previous level of computing power. This forces the creator of the demake to creatively reproduce tropes of the present in alternate ways; as Fish writes, coding a demake involves “see[ing] how far you can push a game backwards, and see[ing] what gameplay elements remain intact.” In the case of D+Pad Hero, the demake fundamentally questions the benefits of increased graphics, as every element of the actual game is reproduced within the demake at a different level of perceptual representation. Gang Garrison 2, a demake of the popular Team Fortress 2, follows a similar logic by taking away the advanced graphics but maintaining the cooperative team play. The case is somewhat different when the demake actually changes the genre. An example of the demake forcing a generic shift due to lack of processing speed is Portal, an FPS game, which was demade first as an Internet browser game and then as an Atari 2600 game. The generic formula both demakes enacted was that of a 2D puzzle game, which, unlike the first-person shooter genre, is reproducible in the current generation of web browsers and on a twenty-year-old console. This alteration questions the currently naturalized economic dominance of the FPS game.

The second aspect of the techno-economic logic is also problematized within the demake, as all demakes are fan-produced programs that are not designed to be (and in fact cannot be) sold. As the Independent Gaming Source proudly proclaims, the competition is one of bootleg demakes. Thus, the monetary gain that might come from repetition followed by sale is stymied because the sale cannot happen. Because the demake breaks with the techno-economic logic, there must be some other logic that drives people to produce such programs. One answer is the old hacker love of taking something apart, figuring it out and putting it together again differently/better (Galloway 2004). A second answer is that such subcultural proclamations do not prevent cooptation: by riding the popularity of a current game, demake programmers might obtain enough popular support to break into the official industry. Bootleg subcultural pride often ends with selling out. A third reasoning, the logic that I believe the demake follows and to which I will return later, is that of memory, nostalgia and pleasure. For now I will expound on the logic of representation, which connects most closely with the techno-economic logic of the remake, even if it does not quite meld with the demake.

Representation/Simulational Logic

The techno-economic logic that flows in a single direction in the remake and is disrupted into unlinked stops and starts in the demake is paralleled by the second logic of representation and simulation. Representation is to re-present something from a different time/space, to bring something from a previous time/space into the here and now.

Roland Barthes writes that “the photograph profess[es] to be a mechanical analogue of reality, its first-order message in some sort completely fills its substance and leaves no place for the development of a second order message” (Barthes 1977, 18). As representation, the photograph claims perfect correspondence, but Barthes points to the different orders of meaning within the image that necessarily block any type of objective analogousness. The image has three levels of meaning: the linguistic message that relays a particular meaning, the non-coded iconic, denoted message that attempts to claim correspondence and objective innocence, and the many coded iconic, connotative meanings that disrupt any possibility of perfect representation (Barthes 1977). Thus, representation claims to simply represent what was with a clear, singular meaning, but in fact it carries numerous meanings and never brings back the entirety of what was. Re-presentation is never one-to-one repetition even though it claims to be.

D. N. Rodowick notes that representation is often “defined as spatial correspondence” (Rodowick 2007, 102), but I would add that the concept of temporal correspondence (present, presence) is just as important, if slightly more obviously impossible to achieve. Rodowick himself protests the image’s (analog and digital) link to both representation and perceptual realism, claiming that photography does not provide spatial semblance but in fact corresponds to “our perceptual and cognitive norms for apprehending a represented space” (Rodowick 2007, 103). Thus, that which is re-presented is not a physical reality but a mental and psychological one obtained through perception (Rodowick 2007, 105). Obviously, this argument holds that representation is not reality, but that does not answer what it is, nor why it is both produced and consumed. One way of getting at the questions of ‘what’ and ‘why’ is through Marxist analysis, another through psychoanalysis; both include a dose of semiotics.

Following Marx’s conceptualization of the proletariat’s existence within the capitalist mode of production as false consciousness, the Frankfurt school theorists extend from the economic base to also include superstructural false consciousness. Horkheimer and Adorno write, “Capitalist production so confines [the workers and employees, the farmers and the lower class], body and soul, that they fall helpless victims to what is offered them… the deceived masses are today captivated by the myth of success even more than the successful are” (Horkheimer and Adorno 1972, 133-4). The captivating myth in question is the culture industry’s representation of everyday life.

The whole world is made to pass through the filter of the culture industry. The old experience of the movie-goer, who sees the world outside as an extension of the film he has just left (because the latter is intent upon reproducing the world of everyday perceptions), is now the producer’s guideline. The more intensely and flawlessly his techniques duplicate empirical objects, the easier it is today for the illusion to prevail that the outside world is the straightforward continuation of that presented on the screen. (Horkheimer and Adorno 1972, 126)

The process they describe is one in which the culture industries create a representational system where the world of leisure and entertainment is inseparable from the real world, molding people into unquestioning consumer citizens who believe the represented world is just out of their grasp but still obtainable. For Horkheimer and Adorno, the culture industry and the progression of representational technologies lead to increased mass deception, which it is the duty of critical theory to oppose and of Marxist theorists to denaturalize. The intent, as with Marx, is to raise the veil and thereby enable the teleological dialectic of progress to lead toward some (better) existence. While (justifiably) doom and gloom, the Frankfurt school hopes to open people’s eyes to the ideological brainwashing of the culture industry’s representational system.

Henri Lefebvre follows a similar Marxist methodology in his critiques of everyday life. Lefebvre argues that the abstract capitalist conceptualization has worked on and produced the lived, concrete space of life and experience. This follows the same Marxist idea of production even if he has moved beyond the early Marxist declarations of false consciousness and mass deception: we are not being tricked, but we are living in a constructed world/consciousness that he believes needs to be protested. Thus in his multi-volume Critique of Everyday Life he proposes that everyday life is fleeting and sought after: utopian (Lefebvre 1991). Unlike de Certeau’s practical tactics of dealing with everyday life as it is, Lefebvre is unsatisfied with Situationist practicality and wants to point toward the utopian variation of everyday life, the variation that is just out of reach. Thus in his late work he proposes a new science of studying the rhythms of life. By looking at the rhythms we can see the disjunctions and recognize both the (Capitalist) system and the parts that fall outside it (Lefebvre 2004). For Lefebvre representation is not the real, but there remains the possibility of an out, a denaturalization of the constructed space of Capitalism. Althusser’s work on ideology is one of the first steps toward the removal of that out (even if it leaves the possibility of understanding).

Writing at approximately the same time as, and intertwined with, Lefebvre, Louis Althusser theorizes ideology as an always-already constituted element in which there ceases to be access to any untouched origin: the veil might be raised, but nothing except another veil will be seen; one might be outside of an ideology, but never outside ideology (Althusser 1986). Althusser seeks to answer the question of why the proletariat revolution never happened by drawing on Gramscian notions of hegemony and formulating a dual imposition of Repressive State Apparatuses (RSAs) and Ideological State Apparatuses. While ultimately side-stepping the ire of orthodox Marxists by giving more importance to RSAs and the economic base in the last instance, Althusser’s analysis identifies the superstructural process of interpellation through which the person is made one with society so that he or she does not in fact want to rebel. He argues that turning around to a police officer’s hailing shows a person’s interpellation within an ideological system; another example would be identifying (or seeking to identify) with an advertisement. Even if one rejects the hailing of one particular ideology, one cannot escape: such a rejection indicates a separate subjective ideology, not an existence outside ideology, and even then one can still intersubjectively imagine a subject that would be interpellated and thereby remain within the system. The second part of Althusser’s theory holds that because one’s identity is formed within an all-inclusive ideology, one can never get back to some untouched, uninfluenced origin. We are always-already within ideology and we are always-already constituted as particular subjects. Because of this formulation of the always-already there is in fact no false consciousness from which we can escape, as ideology is all we have and all we can ever have.

Althusser’s crucial break from the hopes of denaturalization was enabled in part by Jacques Lacan’s psychoanalytic work on reality, language and the three orders: imaginary, symbolic and real. The real is impossible to interact with/see/witness as it resides outside of language; any attempts to get back to a real (through representation) are necessarily fractured through the very structure of language. Language then is part of the symbolic order as it structures and stands between the real and the imaginary (experienced) world. Finally, Lacan’s third order, the imaginary, is the world that we inhabit, our subjective experiences. Althusser’s always-already corresponds to Lacan’s imaginary: it is the represented world. Thus, the problem posed by Althusser is not of getting back to the origin, but understanding and problematizing the (grammatical and material) means and conditions of production of desire, the imaginary, ideological world. He has taken away the Marxist out (the veil), but he has left the possibility of understanding the world.

The line running through Lacan, Lefebvre and Althusser ends up with Jean Baudrillard who, in his later work, radically broke with his Lefebvre-informed Marxist background and its possibility of an out by claiming, like Althusser and Lacan, that there is no recourse to the real. Baudrillard claims:

abstraction is no longer that of the map, the double, the mirror, or the concept. Simulation is no longer that of a territory, a referential being, or a substance. It is the generation by models of a real without origin or reality: a hyperreal. The territory no longer precedes the map, nor does it survive it. It is nevertheless the map that precedes the territory – precession of simulacra – that engenders the territory. (Baudrillard 1994, 1).

Taking a step beyond Lefebvre, Baudrillard takes away the possibility of return, of an out from a life in a produced world. The territory does not survive the map because the hyperreal/simulation “threatens the difference between ‘true’ and ‘false,’ ‘real’ and the ‘imaginary’” (Baudrillard 1994, 3). Baudrillard identifies a very particular moment when wars were being enacted in front of cameras, re-presented to the world, and reacted to as ‘real’ events: his argument thus holds that there is no difference between real and fake events. [10] The representation “has no relation to any reality whatsoever: it is its own pure simulacrum” (Baudrillard 1994, 6), which leads to deterrence and to simply dealing with the hyperreal as the only reality around. Baudrillard takes Lacan’s imaginary order and runs with it as the only reality to which we have access. While Baudrillard was far from celebratory of the hyperreal world, a few years later Žižek tells us to love our symptom: we have no recourse to the real to denaturalize, so we might as well do the best we can. Unfortunately, this logic combines in an uncomfortable way with the destabilization of meaning within postmodern semiotic theory.

Linked in some ways to his work on meaning in photography, Barthes’ differentiation between a work and a text is important because it locates the site of meaning making and what that means for the text itself (Barthes 2007). The work is singular, has a proper meaning that “closes upon a signified,” has “filiations” to field and author, and is ultimately an “object of consumption.” In contrast, the text “is experienced only as an activity in production,” resists classification as a “paradoxical” thing, is “radically symbolic,” plural and understood within a network, and is not linked to a father/author. Michel Foucault further questions the text’s link to an authorial function and claims that it is imperative to reverse the author function in order to change the discourse of research from subject-based to discourse-based knowledge (Foucault 2003, 390-1). Opposed to the modern work, the text is ultimately a postmodern, intertextual production. Both of these claims are important, but they become problematic when eventually linked up to the remake.

Lev Manovich does not explicitly bring up the concept of a work or text in his discussion of new media, but his five aspects of new media resemble Barthes’ tropes of a text. Manovich claims that there are five relevant principles within new media: numerical representation (digitization), modularity (workable chunks of data), automation (high and low forms of computer automation), variability (which is linked to modularity and postfordism as well as translation through the concept of the “base object”), and transcoding/programmability. The principles obviously move from old to new media in the same way that Barthes moves from work to text, but the ideas of variability and transcoding hold further interest. Manovich’s concept of variability implies a sort of base object that can be altered into varied new forms. However, unlike translation where there is an original and a translation, a first and a second, the singular base object exists as an incomplete entity and therefore does not actually exist in a power relationship to the varied finished objects. Similarly, “to ‘transcode’ something is to translate it into another format” (Manovich 2001, 47). Manovich’s language of new media renders horizontal the last shred of authorial/original meaning and continues with Barthes and Foucault’s intertextualization of the text so that by the time we get to the original, remake and demake they are all simply transcodings of some other base object of simulation.

It is at this point that I can relate the Representational Logic to the remake and demake. The remake follows the simulational logic that stems from Baudrillard’s discussion of hypersimulation (Baudrillard 1994, 19-20). The remake seeks greater perceptual realism so that one cannot tell the difference between a game and life, and in fact the difference ceases to matter. The simulated bank robbery becomes/is a real bank robbery; the person pushing the button to drop the bomb is the same whether he or she knows it or not. In contrast, the demake D+Pad Hero problematizes the simulational logic by moving from the simulated experience of playing a (fake) guitar to simply inputting the buttons on a controller. There is no difference within the coded level of the game itself, but the personal experience is completely different. The progression of music simulation games has increasingly tried to reproduce the experience of playing an instrument. While the movement from buttons to a fake guitar is still far from “real,” the movement from buttons to fake drums is slightly less problematic, as the action of drumming is within the fake drumming. In contrast, D+Pad Hero removes the instrument and brings back the four-direction game pad from deep in the gamer’s memory. Similarly, the return to 8-bit music, away from the CD-quality reproduction of the original Guitar Hero, disrupts the standard representational experience by forcing the player to recognize that he or she is gaming and not actually playing music on an instrument.

If we follow Baudrillard’s simulation and the lack of recourse to any real and any meaning, we are abandoned in our attempt to differentiate between an original, a remake and a demake. As all three are transcodings, there is no real difference at the level of simulation: game pad or instrument, photo-realism or 8-bit, it is all hyperreal, postmodern pastiche and pluralism all the way down. This is the logic that Constantine Verevis follows in his analysis of cinematic remakes in the postmodern era. By understanding remakes as intertextuality he leads directly toward the removal of difference through an understanding of the remake as New Hollywood citationality. However, in doing so he limits the possibility of seeing the neutralization of foreign elements and the harsh economic realities of remaking, precisely because the intertextual view emphasizes laissez-faire economics and global cinematic modernity (Verevis 2006). Such details are important in the unequal, disjuncture-filled world that we make sense of through personal experience. Like Vivian Sobchack’s critique of Baudrillard, which resorted to the phenomenological experience of a person who has lost a leg and remembers/knows the difference, the place I seek to reclaim meaning and differentiate between the remake and demake is in history and in the personal experiences of memory and nostalgia.

Historico-Nostalgic Logic

The third logic related to the remake and the demake is what I am calling historico-nostalgic. This logic concerns the relations of the game and player to the past and to the future. In order to tease out the relationships between remake, demake, player and meaning I will explore ideas of time, history and nostalgia. On nostalgia, Svetlana Boym writes:

At first glance, nostalgia is a longing for place, but actually it is a yearning for a different time – the time of our childhood, the slower rhythms of our dreams. In a broader sense, nostalgia is rebellion against the modern idea of time, the time of history and progress. The nostalgic desires to obliterate history and turn it into private or collective mythology, to revisit time like space, refusing to surrender to the irreversibility of time that plagues the human condition. (Boym 2001, xv)

Similar to Boym’s obliteration of history, Reinhart Koselleck writes of the constant need to rewrite history so that it aligns with the concept of the modern present (Koselleck 1985, 250). History is never simply dates in the past, one after another, but a specifically aligned genealogy that culminates in the present and leads to the future. It is this politics of alignment between past, present and future that can be seen in the crisis of memory of which the cinematic remakes of the end of the 20th century are a part, as are the gaming remakes and demakes of the beginning of the 21st century. [11]

Especially since the late 20th century there has been a crisis of memory. While some theorists write of the entanglement of memory and history in the production of national or cultural identity (Sturken 1997), others have written specifically of the rise of nostalgic, retro styles such as black-and-white film in the 1990s (Grainge 2002), or of the nostalgic consumption of mid-20th-century film and television classics in the home (Klinger 2006). All of these engagements with memory and the past are slightly different: some focus on cultural/national memory and the construction of cultural/national identity, others on personal forms of nostalgia and the individual’s own interaction with his or her past. My contention is that the difference between remakes and demakes lies on the line between memory and history, and that to conflate the two forms of repetition leads toward the representational logic but away from any ability to make useful meaning out of the texts: while memory and history might be tangled, they are not the same thing.

Boym writes of two tendencies of nostalgia that help differentiate between the two types of gaming repetition. She explores the concept of nostalgia related to place and time in her native Russia. Having left, as she thought, for good, Boym returns after the fall of the USSR and explores both her own and the nation’s interaction with nostalgia. Her framework establishes two general tendencies of nostalgia, restorative and reflective. “Restorative nostalgia puts emphasis on nostos and proposes to rebuild the lost home and patch up the memory gaps. Reflective nostalgia dwells in algia, in longing and loss, the imperfect process of remembrance” (Boym 2001, 41). The two tendencies do not map perfectly onto any single example, as they exist in relative amounts, but it is possible to use them to talk about the remake and demake.

The restorative tendency aims toward a unified truth, generally understood as the national project. It “manifests itself in total reconstructions of monuments of the past” (Boym 2001, 41). Restorative nostalgia can be seen in the active use of the past to form a particular history. As noted above, Tomb Raider: Anniversary reconstructs the entirety of Tomb Raider, and what it leaves out is in fact excised from history: the reconstruction becomes history, not the past/original itself. As Constantin Fasolt writes of the historians’ rule that forbids mixing the immutable past with the occurring present:

History is constitutive of modern politics, constitutive of the kind of modern state that claims sovereignty for itself and the autonomy of individuals subject to nothing except their conscience and the laws of the physical universe. The prohibition on anachronism? It merely seems to be a principle of method by which historians secure the adequacy of their interpretation. In truth the prohibition on anachronism defines the purpose for which the discipline of history exists: to divide the reality of time into past and present. History enlists the desire for knowledge about the past to meet a deeper need: the need for power and independence, the need to have done with the past and to be rid of things that cannot be forgotten. (Fasolt 2004, 13)

Similarly, the remake has unifying, restorative elements within it. Through remaking an old game the producer and industry create a specific history that highlights very specific aspects. Certain games (Doom and Tomb Raider) or genres (the FPS and the third-person action/adventure) are identified as important, and a unified gaming history is created that further mirrors the techno-economic logic that both supports and is supported by the Capitalist mode of production. If, as Thomas Kuhn states, “history…disguises the nature of the work that produced it” (quoted in Fasolt 2004, 39), then the remake disguises the nature and meaning of its reproduction. Instead of questioning the logic that revels in increased realisticness instead of realism, the remake highlights the logic of realisticness and simulation, and justifies itself through the basic concept of economics.

In contrast, the reflective tendency can be seen in the demake, which dwells in the ruins, patina and dreams of the old genres, sights, sounds and experiences that the creators of demakes themselves witnessed, remember(ed) and attempt to reflect on. The demake’s drive shuns any form of linear progress by flipping back and forth between present and past modes, and in fact brings the singular primacy of natural progression into question. Unlike the remake’s justification of certain games and genres, the demake highlights those genres that have been abandoned in the past (the text adventure, the side-scroller and 8-bit sound) and those games that have been ignored due to their lack of economic appeal (artistic, serious and cult-classic games such as Portal and Shadow of the Colossus). While there is the very real possibility of such reflective nostalgia being co-opted back into the dominant mode, it is important to note the entire chain of meaning making, including the differences in the loop of production, dissemination, consumption and alteration (Du Gay 1997). The production side is still important even if we have abandoned the reductive injection model of media effects.

It is the personal interaction with one’s own past, remembering the games and genres of childhood, that drives demakes and their engagement with reflective nostalgia. Unlike the ultimate reduction through destabilized postmodern meaning on the consumption end and through the simulational equality of forms of repetition, the impetus behind both making and playing demakes questions the dominant system. One of the rhythms in life is that of repetition, and the liminal moment of reflectivity, before it is whitewashed over by a restorative move, reveals the cracks in the dominant system. While nostalgia, restorative or reflective, is never literal in that it never actually brings back the past, such protest is important if one still believes in some sort of rendition of a dialectic, of progress, or even of simply understanding reality, be it imagined or Real.

Travelling in Place

Perfect memory is impossible, but it is also undesirable: there is a need on both the cultural and personal levels to forget in order to heal and be made whole as both a nation and an individual (Ricoeur 2004). This impossibility of perfection extends to all levels of repetition, including translation, memory, archiving, history, and representation. Further, just as representation is never simply a re-presenting of something of the there and then in the here and now, repetition is never simply a repeating of something. Repetition always repeats some things but leaves others out. The need to forget, to not repeat, implies the selection of particular pasts through a filtering of history, but this does not necessarily lead to the conclusion of postmodern pluralism, where there is no difference between what is remembered, translated, archived, re-membered, and written into history, or what is remade or demade. There are important differences that cannot and should not be ignored.

In his reconsideration of theory travelling between contexts, Edward Said wrote that “The point of theory… is to travel, always to move beyond its confinements, to emigrate, to remain in a sense in exile” (Said 2007, 252). While Said writes of an affiliation to Lukács beyond mere borrowing or adaptation, he also writes that conflating Adorno’s Viennese twelve-tone music with Fanon’s Algerian resistance and French colonialism would be grotesque. Similarly, the demake and remake must be understood as tangled with repetition, but not as inextricably tied to it, an understanding that is impossible to reach by simply following concepts such as simulation, remediation and transcoding. Change is a type of production of knowledge, and knowledge is never innocent. As technological, representational and historical modes of knowledge production, the remake and demake are neither objective nor innocent and must be understood as such.


[1]  Such realisticness is of course separate from concepts of social realism (Galloway 2006).
[2]  Within translation theory the trope of faithfulness is opposed by an assumed impossibility of perfect translation (also seen in the Italian adage traduttore/traditore), and both are bracketed by sub-methods such as source and target orientation, literary and literal styles, fidelity to meaning or word, and finally the opposition of domestication and foreignization. In my thinking, translation is an unspoken/unacknowledged trope within the remake and the demake (See: Bassnett and Lefevere 1990, Venuti 1994 and 1998, and Berman 1992).
[3]  The interaction with 8bit culture is not limited to games and music, but extends into the art realm. Cory Arcangel’s Super Mario Clouds and the recent exhibit Ich Bin 8-Bit are examples of artistic engagement with 8bit culture (Arcangel 2002 and Ablan et al. 2009).
[4]  So far there are four playable songs: Guns N’ Roses, “Sweet Child o’ Mine”; Michael Jackson, “The Way You Make Me Feel”; Daft Punk, “Harder, Better, Faster, Stronger”; A-Ha, “The Swing of Things.” Chicane, “Low Sun” and Daft Punk, “Aerodynamic” are used in the program, but are unplayable as songs.
[5]  The dead end of Baudrillard’s simulation can also be sidestepped by the logic of “social realism” and the phenomenological link between the simulation within the game world and in the player’s lived environment (Galloway 2006).
[6]  While I refer specifically to the computer’s development, it is more useful to link this technological trend to the teleological view of history and civilization that is dominant in modernity.
[7]  This periodization is jagged, as I am ignoring the personal computer’s processors, Nintendo’s later consoles and Microsoft’s consoles. I use these particular consoles because they are the ones I know best at the moment.
[8]  It should also be noted that, due to the “uncanny valley,” programmers have in many instances attempted more stylized representation in lieu of unsettlingly real and yet not real CGI and polygonal characters (See: Mori 1970).
[9]  As with cinematic remakes, gaming remakes can flop (Psycho 1998 is a good example), but the logic remains that a remake is safer (with emphasis on the relative aspect) than an entirely new game.
[10]  In fact, the term “event,” something real, becomes a rare entity within Baudrillard’s work (Galloway 2007).
[11]  This could also be related to Derrida’s discussion of archives as dealing with the future through the sur-vival of the event, as opposed to forgetting (superrepression/the anarchive), which deals with the past (Derrida 1996).
[12]  Portal was demade twice as Super 3D Portals 6 and Portal: The Flash Version; Shadow of the Colossus was demade twice as Hold Me Closer, Giant Dancer and Shadow of the Bossus.


Althusser, Louis. “Ideology and Ideological State Apparatuses (Notes Towards an Investigation).” In Video Culture: A Critical Investigation, edited by John C. Hanhardt. Salt Lake City: G.M. Smith in association with Visual Studies Workshop Press, 1986.
Appadurai, Arjun. Modernity at Large: Cultural Dimensions of Globalization, Public Worlds. Minneapolis, Minn.: University of Minnesota Press, 1996.
Arcangel, Cory. Super Mario Clouds. 2002. <http://www.beigerecords.com/cory/Things_I_Made/SuperMarioClouds>.
Barta, Tony, ed. Screening the Past: Film and the Representation of History. Westport; London: Praeger, 1998.
Barthes, Roland. “From Work to Text.” In The Cultural Studies Reader, edited by Simon During, Donna Jeanne Haraway and Teresa De Lauretis. London: Routledge, 2007.
—. Image, Music, Text. New York: Hill and Wang, 1977.
Bassnett, Susan and Andre Lefevere eds. Translation, History and Culture. London: Pinter Publishers, 1990.
Baudrillard, Jean. “The Precession of the Simulacra.” In Simulacra and Simulation. Ann Arbor: University of Michigan Press, 1994.
Berman, Antoine. The Experience of the Foreign: Culture and Translation in Romantic Germany. Albany: State University of New York Press, 1992.
Bolter, J. David and Richard Grusin. Remediation: Understanding New Media. Cambridge, Mass.: MIT Press, 1999.
Boym, Svetlana. The Future of Nostalgia. New York: Basic Books, 2001.
Cook, Pam. Screening the Past: Memory and Nostalgia in Cinema. London ; New York: Routledge, 2005.
Deleuze, Gilles, and Claire Parnet. “Politics.” In The Cultural Studies Reader, edited by Simon During, Donna Jeanne Haraway and Teresa De Lauretis. London: Routledge, 2007.
Derrida, Jacques. Archive Fever: A Freudian Impression. Chicago; London: University of Chicago Press, 1996.
Du Gay, Paul. Doing Cultural Studies: The Story of the Sony Walkman. London: Sage, in association with The Open University, 1997.
Fasolt, Constantin. “A Dangerous Form of Knowledge.” In The Limits of History. Chicago: University of Chicago Press, 2004.
Fish, Phil. “de-makes.” TIGForums Independent Gaming Discussion. The Independent Gaming Source. Written: August 20, 2007. Accessed: March 12, 2009 <http://forums.tigsource.com/index.php?topic=448.0>.
Forrest, Jennifer and Leonard R. Koos eds. Dead Ringers: The Remake in Theory and Practice. Albany: State University of New York Press, 2002.
Foucault, Michel. “What Is an Author?” In The Essential Foucault: Selections from Essential Works of Foucault, 1954-1984, edited by Paul Rabinow and Nikolas S. Rose. New York: New Press, 2003.
Galloway, Alexander R. Gaming: Essays on Algorithmic Culture. Minneapolis: University of Minnesota Press, 2006.
—. Protocol: How Control Exists after Decentralization. Cambridge, Mass.: MIT Press, 2004.
—. “Radical Illusion (a Game against).” Games and Culture 2, no. 4 (2007).
Grainge, Paul. Monochrome Memories: Nostalgia and Style in Retro America. Westport, Conn.: Praeger, 2002.
Harvey, David. “Space, as a Key Word.” In Spaces of Global Capitalism: Towards a Theory of Uneven Geographical Development. London; New York, NY: Verso, 2006.
Hebdige, Dick. Subculture: The Meaning of Style. London: Methuen, 1979.
Horkheimer, Max, and Theodor W. Adorno. Dialectic of Enlightenment. New York: Seabury Press, 1972.
Hutchison, Andrew. “Making the Water Move: Techno-Historic Limits in The Game Aesthetics of Myst and Doom.” Game Studies 8, no. 1 (2008).
Ablan, Love, Jon M. Gibson and Derek Puleston (curators). Ich Bin 8-Bit. Neurotitan Gallery for the Pictoplasma Character Walk. Berlin. March 17, 2009 – April 4, 2009. <http://loveablan.com/exhibitions/IchBin8Bit/>.
Klinger, Barbara. Beyond the Multiplex: Cinema, New Technologies, and the Home. Berkeley: University of California Press, 2006.
Koselleck, Reinhart. “‘Neuzeit’: Remarks on the Semantics of the Modern Concepts of Movement.” In Futures Past: On the Semantics of Historical Time. Cambridge, Mass.: MIT Press, 1985.
Lefebvre, Henri. Critique of Everyday Life. Translated by Michel Trebitsch. London ; New York: Verso, 1991.
—. Rhythmanalysis: Space, Time and Everyday Life. Translated by Stuart Elden. Athlone Contemporary European Thinkers. New York: Continuum, 2004.
Manovich, Lev. The Language of New Media. Cambridge, Mass.: MIT Press, 2001.
Mori, Masahiro. “The Uncanny Valley.” Energy 7, no. 4 (1970): 33-35.
Ricoeur, Paul. Memory, History, Forgetting. Chicago: University of Chicago Press, 2004.
Rodowick, D. N. The Virtual Life of Film. Cambridge, Mass.: Harvard University Press, 2007.
Said, Edward W. “Traveling Theory Reconsidered.” In The Cultural Studies Reader, edited by Simon During, Donna Jeanne Haraway and Teresa De Lauretis. London: Routledge, 2007.
Sturken, Marita. Tangled Memories: The Vietnam War, the Aids Epidemic, and the Politics of Remembering. Berkeley: University of California Press, 1997.
Tagg, John. The Burden of Representation: Essays on Photographies and Histories. Minneapolis, Minn.: University of Minnesota Press, 1993.
Venuti, Lawrence. The Translator’s Invisibility. New York: Routledge, 1994.
—. The Scandals of Translation: Towards an Ethics of Difference. London; New York, NY: Routledge, 1998.
Verevis, Constantine. Film Remakes. Edinburgh: Edinburgh University Press, 2006.
Wardrip-Fruin, Noah, and Pat Harrigan. First Person: New Media as Story, Performance, and Game. Cambridge, Mass.: MIT Press, 2004.
Whalen, Zach, and Laurie N. Taylor. Playing the Past: History and Nostalgia in Video Games. Nashville: Vanderbilt University Press, 2008.
Yu, Derek. “Bootleg Demakes Competition.” The Independent Gaming Source. Accessed: March 12, 2009 <http://www.tigsource.com/features/demakes/>.


Bigpants. Hold Me Closer, Giant Dancer. The Independent Gaming Source. Accessed: March 16, 2009 <http://forums.tigsource.com/index.php?topic=2817.0>.
Bungie Software. Pathways Into Darkness. Bungie Software. 1993.
Core Design Ltd. Tomb Raider. Eidos Interactive. 1996.
Crystal Dynamics. Tomb Raider: Anniversary. Eidos Interactive. 2007.
Hansen, Kent and Andreas Pederson. D+Pad Hero. 2009. Accessed: March 11, 2009 <http://dpadhero.com/Home.html>.
Harmonix Music Systems. Guitar Hero. RedOctane. 2005.
Hinchy. Super 3D Portals 6. The Independent Gaming Source. Accessed: March 15, 2009 <http://forums.tigsource.com/index.php?topic=2391.0>.
Id Software. Doom. Id Software. 1993.
—. Doom III. Activision. 2004.
mrfredman and MedO. Gang Garrison 2. Gang Garrison. Accessed: March 20, 2009 <http://ganggarrison.com/>.
Saint. Shadow of the Bossus. The Independent Gaming Source. Accessed: March 16, 2009 <http://forums.tigsource.com/index.php?topic=2402.0>.
SCEI. Shadow of the Colossus. SCEI. 2005.
Tal, Ido (Dragy2005) and Hen Mazolski (Hen7). Portal: The Flash Version. Newgrounds. Accessed: March 15, 2009 <http://www.newgrounds.com/portal/view/404612>.
Valve Software. Portal. Valve Software. 2007.
—. Team Fortress 2. Valve Software. 2007.