Multilingualism and Japan (in translation)

As an American living abroad (again), it continually strikes me that one of the essences of living within this world, and being cosmopolitan in that particular sense that means you don’t run from others or hate them, is that you deal with multilingualism on a daily basis. You might not be multilingual, but you deal with it. You live with it. You do not simply have a “coexist” bumper sticker, but you really do exist in tandem with otherness on a daily basis. Again, as an American who was brought up simultaneously hearing about how multiculturalism is good, but living in one of the whitest cities around (Portland, OR [1]), such linguistic mixture, which seems to be at the heart of living with and between multiple cultures, is both a good and necessary thing. Why is it, then, that this mixture is the first thing to go when texts are adapted into American contexts? Suddenly, everybody speaks English (albeit with some sort of disparaging accent) and all of the signage is in English. This happens in American remakes of both movies and games, and doesn’t even begin to touch on the oddity of people mysteriously speaking English in books. This mixture might not be the be-all and end-all of existence in the world, but it is certainly important in certain places. And yes, Japan is one of those places.

Japan is a strange place. Not in the “Oh, Japan” sort of dismissal, but in the, wow, almost every sign around is in both English and Japanese, yet the majority of people cannot for the life of them respond to a simple question in English despite national training at (minimum) the middle school level, sort of way. Why is this? Granted, I can’t say that my French would allow me to respond despite 1-11 “training” that I’ve forgotten abysmally, but then again, I don’t live in Quebec. At various points in Japan’s history English has almost become a national language, and as stated above, the Ministry of Education (MEXT) has made English a mandatory subject in middle school (and recently this has expanded down to 5th graders despite teachers’ inability to properly speak/teach the language) [2]. There is a LOT of English here. As long as they don’t really speak to anybody, most foreigners can get around without any trouble (as long as they speak English, and not, say, Polish). However, I need to reiterate that it’s not simply that signs are translated. That’s happened (often with expectedly poor results that end up on webpages thanks to botched machine translations [3]), but such simple sign translation is not the point. The point is actually that signs are mixed. Many businesses and buildings are signed only in roman characters that are often arranged to make English words. I’m sitting in the basement of OICITY in Ueno. Technically, OICITY is pronounced ‘maruishiti’ [マルイシティ] in the crib underneath the sign in front of me, but EPOS CARD and GAP, both in the nearby visual space, are not given similar translations. The only sign that is only Japanese in front of me is 無印良品 (which, as an aside, is simply localized in the U.S. as MUJI — the brandless company itself becomes a brand). However, the truly common one is not these that politely keep the English and Japanese separate, but ATMコーナー (ATM kōnā, where kōnā is the loanword for ‘corner’), the sign right below the one for the EPOS CARDs.
Here we see the mixture that is present and implied with all of the signage. It’s like a bilingually raised child who was never told they were speaking two different languages as they grew up code-switching, and who as an adult now expects everybody to follow both of their languages.

This is everywhere. I just came from the Ueno Zoo, where all standardized signs gave the animal’s name in katakana (regardless of the animal’s nativeness) with English underneath, then the Latin name, and finally standard usage mixing kanji and hiragana to describe the animal’s eating habits and place of origin. And any argument that katakana might be easier for children to read does not hold, as the nearby signs saying not to feed the animals (arguably what children must read and understand) are written in hiragana, not katakana. The animal, as an essentially other creature, is unknowable/not human, and it is marked as such through katakana. Just like you can never get into the cages, the animals can only be known from afar. Similarly, foreignness is held at bay linguistically through a simultaneous embrace and rejection: constant utilization, but never full incorporation. Such mixture is a major part of Japan. Not the most important part, but definitely an important element. So, why then, if translation is supposed to bring understanding, does translation never deal with this? And, no, I’m not arguing that Japan is lovey-dovey and that translations must represent this. Rather, I’m arguing that it is the ethical responsibility of translations not simply to be enjoyable, but to bring understanding of what it is like to consume that text in its place of origin. A text does not travel as some unmarked pleasure, ready for easy consumption, but necessarily loaded down with context.

Take Murakami Haruki’s (relatively) recent 1Q84 [4]. Even the title has wordplay, as the Japanese pronunciation of the number 9 (kyū) and the letter Q are the same, but it goes further than that. Murakami, like many postmodern Japanese writers (including Yoshimoto Banana), deliberately flirts with the West both textually and thematically, with resultant negative reviews from the more traditional Japanese literary circle. It is this flirtation, often in the form of mixture, that makes Japanese multilingualism (seen everywhere, including in postmodern Japanese writing)… not unique, but at least interesting. But why does none of this flirtation, this mixture, come across in translation? Obviously, the simple answer is that adaptation to local tastes sells well, but that is neither helpful nor, when you get down to it, a good translation. Adaptation to local tastes is simply translation that sells well.

And the merry-go-round comes back again and I’m on ethics.


  • [1] “Racial/Ethnic Segregation.” Greater Portland Pulse.
  • [2] “Teachers worried about new English classes.” The Japan Times Online.
  • [3]
  • [4] Murakami, Haruki. 1Q84.

Toward a Multi-Layered Digital Translation Methodology (Qualifying Paper #1)

In this paper, I approach new ways of translating digital media texts — from digital books to software applications, but particularly my own focus on video games — by mixing traditional translation theory and new media theory. There are similarities between these two fields, but they do not refer to each other. Translation theory rarely looks to films and television, let alone websites, software and games; new media theory fetishizes the ‘new’ and rarely considers that it’s all been done before.[1] I cross the fields because there are mutual benefits to be had by doing so: translation can get new material practices; new media can get more history. I also cross the fields because that is what I see as the work of Communication. Finally, I cross the fields because my own work on video game translation can emerge from their crossing.

Introduction: ‘From Translation to Traduction’ to Localization

While I have already started this paper with confusion (complexity and fusing togetherness), the word ‘translation’ itself has a confused (or perhaps, defused) past. As Antoine Berman notes, it is only in the modern period (post-1500) that the word (renamed ‘traduction’ in Romance languages other than English) has taken on its present meaning.[2] Previously, the word (‘translation’) had an unstable meaning because writing itself was never considered the originary act of an author. Instead, all writing, from musing, to marginal notation, to transcription, to commentary, to linguistic alteration, was considered translation. We are in the process of discursively moving back to the earlier understanding of the word.

The earlier understanding, ‘translation,’ comes from the Latin translatio, which can include the transportation of objects or people between places, transfer of jurisdiction, idea transfer, and linguistic alteration.[3] As Berman stresses, the premodern understanding of translation is as an “anonymous vectorial movement.”[4] In contrast, the post-1500 term, ‘traduction,’ signifies the “active energy that superintends this transport – precisely because the term etymologically reaches back to ductio and ducere. Traduction is an activity governed by an agent.”[5] For Modernity and its lauded author this “active movement” through a subjective traducer makes sense, as it distances the iterations by emphasizing a particular hierarchy of original over derivative. However, in a Postmodern culture where global flows and exchanges have moved well away from the author function and the primacy of the work, it is helpful to understand the elements of translation that were lost “vectors” in the move to traduction.[6]

For Romance languages where ‘translation’ became ‘traduction,’ certain formal and temporal vectors have been lost and taken up by other concepts such as adaptation, repetition, convergence, and intertextuality. While all of these terms have their own particulars, intertextuality is a useful example due to its link with postmodernity and the move away from grand theories.[7] With postmodern intertextuality there is no singularity of a work. Rather, all works are texts with borrowed themes, images, and sections. Intertextuality follows the formal vector of transformation, which has left translation, but it does not consider power and difference. In the early 21st century United States context, both power and difference are increasingly important and yet elided.

Some vectors were never actually lost in English, as it never switched over to the word traduction. As Berman notes, “English does not ‘traduce,’ it ‘translates,’ that is, it sets into motion the circulation of ‘contents’ which are, by their very nature, translinguistic.”[8] As the problematically designated world language, English sets itself up as a translinguistic universal, but it does so in opposition to a host of other languages that have switched over to thinking about translation as the necessary and active linguistic alteration that moves a text from one place to another. Similarly, while there is an underlying energy that fuels the translational movement of a modern video game over space, there is a simultaneous understanding that nothing the game translator does can change the game, as they are not changing the play level. Just like English, play is translinguistic and universal. Current forms of game translation, then, have retained a link to some of the anonymous vectors of translation.

I define translation as the ‘carrying over’ of a text from one context to another, where context can be understood as spatial, formal, or temporal. This broad definition begins to reclaim previously lost vectors, particularly a criticality necessary for the analysis of video games, which are currently exempt as they reside in an area of pure entertainment. This broad definition allows me to consider other forms of textual manipulation including video game localization — the process of translating games for new cultural contexts, which includes linguistic, audio, visual and ludic [play/action] alterations — that has theoretically and practically separated itself from simultaneous interpretation and literary translation. By doing so I wish to force open the definition to include what is already happening, localization, where much of the text is changed for the purpose of a “better” user experience. Moreover, this move opens a space for what might happen, such as new forms of translation that use unofficial production to destabilize the meaning of the text by building it up.

I link traditional foci of literary translation theory with some of Jacques Derrida’s theories of deconstruction (particularly of ‘trace,’ ‘living-on,’ and ‘relevance’), and J. David Bolter and Richard Grusin’s concept of remediation, in order to reconnect ‘translation’ with its (not quite) lost vectors.[9] I begin with the standard tropes of translation theory — sense and word, source and target, domestication and foreignization — as they do well to show the different possibilities at play with translation. However, disciplinarily bound theories are never complete, as they ignore extradisciplinary connections. One such connection is remediation. While the concept comes from a literary origin, remediation exists between literary and new media theories; I believe it can help connect translation across the two areas and move understandings of translation toward new alternatives.

I argue that current practices of translation focus on only one side of the literary theories, thereby turning them into mutually exclusive binaries (sense or word, foreignization or domestication, immediacy or hypermediacy). However, Bolter and Grusin show that remediation is not a binary between hypermediacy and immediacy; rather, remediation utilizes both sides of the equation. Essential to new media is the simultaneous existence of both hypermediacy and immediacy. Current translations espouse only one of these sides, and ignore the benefits of the other. Translation can learn from this simultaneity in new media theory. This paper argues toward a material instantiation of new media translation that takes into consideration both sides of these pairings.

In the second section I show how the dominant practice of translation at present utilizes a domesticating, immediate strategy that overwrites (and thereby renders falsely singular) texts, whether they are literary, filmic, or ludic. In contrast, I argue that a foreignizing, hypermediate strategy that layers texts, which has always existed despite its current lack of presence, can facilitate an alternate, much needed ethics of both translation and cultural interaction. I am not arguing for a simplistic multiculturalism where difference can be subsumed under mere celebration, but for a difficult, abusive, and often painful form of interaction with difference that can reveal the actual ways in which culture functions. As Derrida argues, there is violence and pain that comes with eating the other, but there is also a necessity to eat. One must thus eat [ethically] (bien manger).[10] The same holds for translating.


Tenets of Translation

In the following sections I will review the key principles that have been the focus of translators throughout Western translation history. These examples are primarily from a European/English perspective, although I try to use alternative examples where available, applicable and known. I will begin with the impossibility of a perfect translation. Second, I will elaborate on the ways of escaping this core dilemma, beginning with the argument between sense-for-sense and word-for-word, and ending with the concept of equivalence. Third, I will review the opposing tendencies of domestication and foreignization as an alternate focus on the author and user instead of equivalence’s focus on the text itself. Finally, I will bring up remediation as a concept that helps bridge literary translation with new media and video game translation and transformation. By linking translation with remediation I can, in the latter half of the paper, re-approach Berman’s ‘lost vectors’ of translation, recombine translation and localization, and point out alternate possibilities that are currently unconsidered due to the discursive dominance of fluent translations.


(Im)possibility of Translation

In an almost fetishistic move, translation is known for its parts in lieu of its whole. The whole in this case is a holistic notion of perfect translation that completely reproduces a text in a secondary context. As George Steiner notes:

A ‘perfect’ act of translation would be one of total synonymity. It would presume an interpretation so precisely exhaustive as to leave no single unit in the source text —phonetic, grammatical, semantic, contextual — out of complete account, and yet so calibrated as to have added nothing in the way of paraphrase, explication or variant.[11]

Steiner rightly notes such a task is impossible for both an original interpretation and a translational restatement. In fact, the sole example ever given for a perfect translation is the mythical Biblical Septuagint translation where 72 individually cloistered translators made 72 simultaneous translations of the Torah from old Hebrew to Greek over 72 days. As the story goes, their translations were exactly the same indicating divine intervention. However, if one considers the logic of the translation it was the absence of any particular tenet, or focus, that enabled the translation to be considered perfect. God’s weight, on some tenet or another, was imperceptible, so it is the absence of a particular reference that marks the example of perfection. It is the unmarked translation that can be considered perfect, but this does not help with real translations. The practical lesson from the Septuagint is thus that perfect translation is impossible.

The impossibility of a perfect translation has forced all practical translation to focus on certain elements. These elements—sense, rhythm, original meaning, feel, length, and experience—are routinely marked as essential and elevated to primacy. The elements that are considered non-essential are then justifiably negated. One is hard pressed to find some moment, including the present, where this fetishization of certain tenets does not happen.

In contrast to such a partial focus with translation I hope to encourage a use of materiality, which can lead to a fragmented, built translation: imperfect and incomplete, but hopefully leading to a partial picture of what could be. The result is a postmodern translation that is hardly ‘perfect,’ but, in contrast to other forms of translation, it does not assume the justifiable negligibility of unconsidered elements.

I argue that digital new media in particular can enable this form of translation. However, this new method is anything but new, just as new media is anything but new. Rather, it borrows from, and builds upon, both Jacques Derrida’s and Walter Benjamin’s theorizations of translation. Derrida, in strict opposition to the dream of perfect translation and meaning, argues for the slippery sliding of signifiers as a way to point back, but never get back, to an originary moment, text, or meaning. In contrast, Benjamin understands the failures of translation as a necessary part of the dream of messianic return in that they build up to perfection. These two provide theoretical groundwork for what can be made possible by the impossibility of translation.

Derrida’s concept of deconstruction is based in Ferdinand de Saussure’s semiotics taken up to postmodern instability instead of the Formalist dream of an ultimately stable meaning. In the Course in General Linguistics Saussure argues that the linguistic sign is arbitrary in that there is no natural relationship between signifier and signified;[12] it is both variable and invariable in that it changes, but nobody controls the change;[13] it exists as a system (la langue) and individual instances (parole), and this duality makes it both synchronic in its permanence related to langue and diachronic in its relation to parole.[14] As Jonathan Culler argues, what is interesting in Saussure’s linguistics is the relational nature of signs, and therefore how “[l]anguage is a network of traces, with signifiers spilling over into one another.”[15] Words do not equal each other. Rather, they stand in positions of relationality that depend on time and space.

While Saussure focused on both the synchronic and the diachronic, stable and unstable, system and individual, ways that language exists, the Russian Formalists after him dreamed of a study of stable signs, a Science. Formalists such as Shklovsky and Jakobson (against which Mikhail Bakhtin later wrote) dreamed of an ultimate equality between signified and signifier, of a way that language made Scientific sense. This impetus toward stability and reason drives a great deal of language usage, and it informs practical translation. However, Derrida takes the instability of language, the ‘traces’ that Culler mentions, and runs with it.[16] There is no formal structure to language; there is no deep structure; there is simply the sliding of signifieds on signifiers as words change meaning over time and between utterances. Derrida represents this by the trace, the word under erasure (‘sous rature’). The word is unstable, but this does not indicate that it is free; rather, the word is loaded down with all of the past meanings, the traces of history (whether we recognize those past meanings or not). For Derrida, as with Saussure, meaning can never be pinned down, which means that words are never singular and always slide back along different signifiers; however, for Derrida, this instability means that a translation is twice as meaningful as the original text itself. It is an added sense above; it is an after erasure, a meaning after the original. In light of such polysemy, translation ultimately does something different than simply move a text between form, time and space: it helps the text “live on.”

In “Des Tours de Babel” Derrida argues that the proper name (Babel, but all names) is the ultimate example of translation’s impossibility. Coming from the Biblical story, Babel is the tower, it is ‘chaos’ (the multiplicity of tongues), but it is also God, the Father.[17] Names remain as they are in translations; they are untranslatable. This is all the more the case with God’s name, and the tower itself, both of which cannot be translated/written/completed. Ultimately, Derrida argues that translation is the ‘survie,’ the ‘living on’ and ‘afterlife’ of the original text through the translation, but not of the dead, original author, whose sole means of immortality is through ever transforming literary texts.[18] As he summarizes in his discussion of a ‘relevant’ [meaningful and raising] translation of Merchant of Venice’s Shylock:

It would thus guarantee the survival of the body of the original… Isn’t that what a translation does? Doesn’t it guarantee these two survivals by losing the flesh during a process of conversion [change]? By elevating the signifier to its meaning or value, all the while preserving the mournful and debt-laden memory of the singular body, the first body, the unique body that the translation thus elevates, preserves, and negates [relève]?[19]

Translation allows a text/body/father, to live on, to survive, but in so doing the original is necessarily changed.

The lesson from Derrida with regard to translation is that it is impossible. This much is obvious. However, impossibility does not mean that it should not be done. Translation is a necessary act despite its flaws: a text would not ‘live on’ without translation, just as we cannot ‘live on’ without eating, consuming, translating the other into sustenance.[20] We can learn two things from Derrida: the first is that deconstruction is about the psychoanalytic working through of the trauma, the historical weight embedded in the word due to the impossible overload of meanings. The second, the lesson that I take, is that the failure of translation must be flaunted, highlighted. The Derridean methodology (not deconstruction, per se, but the productive theory we may take from deconstruction) is about showing how language and texts have multiple meanings and in fact can never be pinned down to any single meaning. Translation, just like language and original texts, must show this built-in instability. As all language is sliding along unstable signifiers, and all texts float along the backs of others, translation too must show its layeredness, its historicity. However, the instability is not flexibility and freedom, but a painfully historical burden (a ‘haunting,’ even[21]), and Derrida shows this uncomfortable instability by writing with asides, marginal notes and what Philip Lewis has called abusive translation.[22] Because this abusive, Derridean style of translation is painful and difficult to read, it is not often considered useful to translation practice, which focuses on clarity, consumption and entertainment.[23] However, the build-up of meaning through layering is a key method to bring together the various modes of translation that I will return to throughout this paper.

Like Derrida, Benjamin argues that perfect translation is impossible, but he does so toward a completely different end. In “The Task of the Translator,” Benjamin argues that the ‘Aufgabe’ [task, give up, failure] of the translator is impossible, but such failures add up to something more.[24] A translation must not reproduce the original, but must be combined with the original to approach something more. His master metaphor is of an amphora, representing language, which has been shattered into innumerable pieces:

[A] translation, instead of resembling the meaning of the original, must lovingly and in detail incorporate the original’s mode of signification, thus making both the original and the translation recognizable as fragments of a greater language, just as fragments are part of a vessel.[25]

The amphora is language, and in order to reassemble it, individual, failed translations (and the original) must be undertaken piece by piece, gradually fitting the ‘reine Sprache’ [pure language] together. Finally, translations are not necessarily possible in any given time; there is a timeliness, or “translatability,” that allows or prevents certain translations.[26] For Benjamin, no translation is necessarily possible and no translation does everything, but translations must be undertaken both for Messianic (it facilitates the return to a pure language) and logistic (it enables the spread of ideas and texts) reasons.

Individual translations do not do everything, but as particular translations in particular contexts they give a glimpse of the pure language. From Benjamin I take the notion of seeing something more even if the singular is not perfect, and I take the idea that particular translations are better in particular contexts. Both of these oppose the idea of a singular, perfect translation, which, like Derrida’s insistence on abuse, is little desired by practitioners of popular translation. However, it is something that has great importance in a world where the difference between believing in a perfect translation and understanding the problems of translation can be the difference between fun and boredom, but also between death and life.[27]

While I do not believe in a Messianic return of an Adamic language, I do agree with Benjamin’s insistence on the unequal benefit of different translations. Certain languages at certain times translate better than others due to contextual issues. This is not to say that translation at any given point is fundamentally impossible, but rather that translations are unequal. While Benjamin might hold that this renders useless certain translations at certain times, I believe that it is possible to use the materiality of new media to combine Derrida’s abusive slipperiness of language with Benjamin’s building-up of languages to create a more complete translation. Such a new form is where this paper will ultimately conclude.


Word, Sense, and Equivalence(s)

While Benjamin, Derrida, and a large number of other theoreticians of translation confront (and embrace) the impossibility of translation, practitioners of translation routinely deny the impossibility by necessity. Translation must (and does) happen, so instead of a holistic notion of perfection, individual elements are highlighted. Historically, the two primary tenets of translation have been the oppositional mandates of translating word-for-word, and translating sense-for-sense. However, theorists in the 20th century expanded the either/or of word vs. sense to include a host of other correspondences and equivalences. In the following section I will go over these different forms of practical translation, but I will conclude by pointing out that at issue with all of them is that they naturalize a single element, which blocks off the possibilities of any other options.

The oppositional mandate between word and sense has been a major focus in Western translation since the Greeks, in part because of the importance of the Bible in Western translatology. The conundrum posed within the oppositional mandate is simple: does the translator translate the words in front of him/her [word-for-word], or the meaning of those words as a larger whole [sense-for-sense]? However, because this debate has been contextualized historically within the realm of Bible translations it has never been a simple question between sense and word, but between worldly sense and divine word.[28]

The ‘first’ Bible translation was the previously discussed Septuagint translation from Hebrew to Greek, which was done ‘by the hand of God,’ but manifested through the separate acts of 72 individual translators. In this instance, the translators create what is known thereafter as a perfect translation. The words are God’s words and can neither be altered nor denied. It is the perfect translation, as there was unified meaning between original and translation in word and sense. Such claims for perfect word-for-word and sense-for-sense translation are quite problematic, but they go unquestioned until St. Jerome again translates the Bible, this time from Greek to Latin. The problem (or so it is claimed) is that he refers back to the old, pre-Septuagint Hebrew version of the Torah, and in so doing denies the primacy of God’s perfectly translated words. How can the Greek version be perfect, with all of the sense of the original in the new words, if Jerome must go back to the Hebrew?

While St. Jerome argues for sense-for-sense translation, he does so in an interesting bind having translated the Septuagint Bible while referring back to the older version and highlighting the importance of particular words. He thus pays very close attention to word-for-word ideals, noting the importance of word order with mysteries, but ultimately argues, “in Scripture one must consider not the words, but the sense.”[29]

Word-for-word translation schemes never work, as there are never equivalent words. To show how this works I’ll take the word ‘wine’ between English and Japanese: wine is not blood; wine is not saké; saké is definitely not blood. Wine rhymes with dine and whine, but it is also either white or red and can be related to both debauchery and blood, and even metonymically to Christ’s blood. Of course, wine is the fermented liquid from grapes, but also just the general fermentation process itself, so that “rice wine” is fermented rice starch, and “plum wine” is fermented plum liquid, but “grape wine” would be considered redundant. On the other side, saké, the Japanese word that is often translated as “rice wine,” stands as a general word for all alcohol, but nihonshu, or Japanese alcohol, which is the more explanatory Japanese word for saké, is unused in English. Finally, there is no link between saké and blood in color, rhyme, or any other mode of meaning. If one single word can cause this (and more) trouble, it should come as no surprise that a word-for-word translational scheme must fail.

From Jerome through to the modern period there is a fixation upon sense-for-sense translation, and by the time of John Dryden sense-for-sense translation (except when dealing with mysteries of the divine word) is cemented. While metaphrase, word-for-word translation, is one of Dryden’s three types of translation, it is reserved for extreme cases. The main debate is between paraphrase, sense-for-sense translation with fidelity to the author, and imitation, a type of adaptation that partially betrays the original author.[30] Dryden’s third form, imitation, is the divergence point toward what I note as adaptation: a carrying over of form where the translator hints at the style, form or sense of an author, but not the content. Between Dryden and the present this form has completely diverged into adaptation and intertextuality, which are considered entirely separate from translation. This is the final splitting point between translation’s original vectors and traduction’s linguistic and authorial focus in the modern period. Finally, Dryden’s second form of translation, paraphrase, is the most general concept of sense-for-sense translation, as its goal is to say in one language what the author said in another.

Paraphrase translation has enjoyed the primary role in translation from the time of Dryden to the present, and has only faced significant opposition during the 20th century from semiotics, formalism, and postmodern ideas of language. All three of these provided different oppositions, but all significantly affected the word/sense divide.

While Jakobson is mainly known within translation studies for his three types of translation (intralingual, interlingual, and intersemiotic), as a formalist he can be understood as one looking at the formal qualities of language, and therefore at what happens to those essential elements in the process of translation within and between languages and forms. Moving from a semiotic understanding of language where “the meaning of any linguistic sign is its translation into some further, alternative sign,” Jakobson argues that there is never complete synonymy, as “synonymy, as a rule, is not complete equivalence.”[31] A translation, regardless of word or sense, cannot fully encapsulate the source text. As Jakobson claims, “only creative transposition is possible,” where this creative transposition focuses on something but loses some other specificity.[32] While Derrida and Benjamin represent two possibilities arising from this failure of translation, a more common response is to focus on the creative transposition of one particular element of the text while ignoring the rest. This is most visible in Nida’s ideas of correspondence in Bible translation, Popovič’s four equivalences in literary translation, and finally the current style of game localization.

Eugene Nida is best known for his principles of correspondence, formal and dynamic (or functional) equivalence, which he primarily enacted in Bible translations. As a translator closely linked to the American Bible Society, most of Nida’s work is also linked to principles of missionary work and the spread of Christianity through rendering the Bible understandable and close to a target audience. His two sides of translational equivalence, formal and dynamic/functional, are quite similar to Dryden’s metaphrase and paraphrase. Formal equivalence focuses on fidelity to the source text’s grammar and formal structure. In contrast, dynamic equivalence seeks to make the text more readable to a target audience by adapting it to a target context. Nida’s scale of equivalence is similar both to the word and sense debate and to the domestication and foreignization debate, which I will elaborate below; what is important for the current discussion, however, is that he uses the idea of equivalence in the singular and deliberately notes that one must sacrifice one side or the other.

In a slightly more expanded sense, Anton Popovič writes of four types of equivalence within a text: Linguistic, Paradigmatic, Stylistic (Translational) and Textual (Syntagmatic).[33] The first, linguistic equivalence, is the goal of replacing a word in the source language with another, equivalent word in the target language; it differs from the word and sense debate in that it simply indicates that the translator must pay attention to the phonetic, morphological and syntactic levels of the text, which is to say the words as written. The following three expand on the idea of equivalence in that a translation may focus on the grammar, the style, or the expressive feeling of the text.

Popovič’s focus is on a very literary understanding of the text. These four methods are for understanding the formal qualities of the written word, and therefore how to translate literary texts. Obviously, these four equivalences do not cover the entire realm of human experience. Other media involve different essential qualities, which have become the focus of their respective types of translation.

While any medium can offer an example of a different essence, I draw from my own focus on game translation. Game translation highlights experience. Games, as mass-produced commodities, are considered interactive entertainment, and the core of the game is the active, fun experience.[34] In light of this gaming essence, the equivalence sought by game translators is that of the experience of the player in the source culture. As Minako O’Hagan and Carmen Mangiron, two of the few theorists of game translation, write:

[T]he skopos of game localization is to produce a target version that keeps the ‘look and feel’ of the original… the feeling of the original ‘gameplay experience’ needs to be preserved in the localized version so that all players share the same enjoyment regardless of their language of choice.[35]

Because the optimal experience when playing a game is entertainment, a good game translation is one that entertains and nothing more.

While Popovič believes there is an “invariant core [meaning]”[36] that remains regardless of any translational variations, one may translate with the goal of rendering equivalent only one of the elements, and in so doing the other three are sacrificed. Such a sacrifice works directly off of the understanding that perfect translation is impossible. Choosing one equivalence over another does not elevate it in importance over the others. However, in the practical integration of translation and reception only one rendering of one equivalence is ever seen, and it is thus retrospectively elevated to the status of the true equivalence. The equivalence highlighted becomes the essence of the text, regardless of its being only one of many, and any other types of translation that highlight the other elements of the text are rendered useless. In the case of video games the fetishistic focus on the experience of the player renders invisible and invalid all other levels of the game. As a result, games become pure entertainment and all artistic, political, or cultural levels are ignored.

A text does not have a single essence; it has many different sites of differing importance to different people. The author might intend to highlight one thing; the reader takes another; one cultural context focuses on one element, while another focuses on something else. While the essence of a text is spread across innumerable sites (rhyme, look, site, context, etc.), equivalence seeks to focus on one and sacrifices the rest. This sacrifice is naturalized, and the equivalent element is constructed (after the fact) as the ultimate/important thing to be translated. As Lawrence Venuti notes regarding Jerome’s Bible translation, “Jerome’s examples from the gospels include renderings of the Old Testament that do not merely express the ‘sense’ but rather fix it by imposing a Christian interpretation.”[37] Translation does not just move a text from one language, time or place to another; rather, it imposes particular meanings on that text and, through the text, on both the source and target cultures. Translational regimes and translations themselves exist within a political world. Translation is inseparable from power.


Domestication and Foreignization

While equivalence flows logically from the debate between sense-for-sense and word-for-word translation, it also arises from the other primary concern in translation: the debate between domestication and foreignization.

In an attempt to move beyond the debate between paraphrase (sense) and imitation (adaptation),[38] Friedrich Schleiermacher argued that there were two main ways of translating: either the translator renders the text in the style of the foreign original and forces the reader to move toward that source text and context [foreignization, or Source Text (ST) orientation], or the translator relocates the text into the target culture, pushing the text into the local context and making it easier for a reader to understand [domestication, or Target Text (TT) orientation].[39] Schleiermacher argued that the debate between sense and word was defunct, as both fail to bring together the writer and reader. Instead, he contended that the translator needed to decide between foreignization and domestication, as the act of translation was necessarily related not to texts, but to cultures.

Schleiermacher argues that different types of translation are necessary to provoke different reactions in different audiences. Imitation and paraphrase must come first to prepare readers for the higher phases of true translational style: foreignization and domestication. He then argues that writers would be different people were they to write in, or be positioned as if they were writing in, foreign languages, as domestication claims to do, and that such a repositioning would strip the best elements from the writers.[40] Thus, his argument ultimately supports foreignizing translation.

Antoine Berman understands Schleiermacher’s call for foreignization as a particular moment where an ethics of translation is visible. This ethics relates to the formation of a German language and culture. To Berman, domestication denies the importance of a mother tongue itself, and foreignization has the possibility that the mother tongue is “broadened, fertilized, transformed by the ‘foreign.’”[41] However, he also notes there are extreme risks to such nation building:

inauthentic translation [domestication] does not carry any risk for the national language and culture, except that of missing any relation with the foreign. But it only reflects or infinitely repeats the bad relation with the foreign that already exists. Authentic translation [foreignization], on the other hand, obviously carries risks. The confrontation of these risks presupposes a culture that is already confident of itself and of its capacity of assimilation.[42]

The prime assumption here is that Germany exists on the cusp of the ability to incorporate the foreign tongue in order to grow, but more importantly it also exists in a situation of being dominated by the French. In order to negate the French dominance over German culture and tongue (a dominance extended through domesticating translations and bilingualism), it becomes necessary to take the dangerous plunge and move toward a foreignizing form of translation.

Texts do not exist outside of contexts, so any choice is necessarily related to political interests. In the case of 19th century Germany, this was the relationship of a developing Germany to a dominant France. As Lawrence Venuti notes about Berman and Schleiermacher, “The ‘foreign’ in foreignizing translation is not a transparent representation of an essence that resides in the foreign text and is valuable in itself, but a strategic construction whose value is contingent on the current situation in the receiving culture.”[43] In the case of 19th century Germany, Venuti argues that “Schleiermacher was enlisting his privileged translation practice in a cultural political agenda: an educated elite controls the formation of national culture by refining its language through foreignizing translations.”[44] Venuti’s argument requires jettisoning the nationally chauvinistic quality of Schleiermacher’s call for foreignization while maintaining foreignization’s oppositional quality. To Venuti such a foreignization is necessary to oppose the discursive regime of transparency that is dominant within the 20th and 21st century United States.

Venuti argues that the dominant discourse of translation within the United States is transparency: the translation must read as if it were written in the local language. This is a modern rendition of Schleiermacher’s domesticating translation, normalized to the extent that foreignization is no longer an alternative or different choice, but an awkward oddity.[45] As his subtitle “A History of Translation” indicates, Venuti lays out a genealogy that shows the rise of fluent translations in Europe between the early modern period and the late 19th century, and how during this period the translator’s status dropped. By pointing out the constructed nature of the ‘fluency is good’ discourse, Venuti argues for a move away from such fluency. He does so both to raise the status of the translator in relation to the author and originality, and to problematize the relationship of the United States and English to other countries and languages. As he writes in his conclusion:

A change in contemporary thinking about translation finally requires a change in the practice of reading, reviewing, and teaching translations. Because translation is a double writing, a rewriting of the foreign text according to values in the receiving culture, any translation requires a double reading… Reading a translation as a translation means not just processing its meaning but reflecting on its conditions – formal features like the dialects and registers, styles and discourse in which it is written, but also seemingly external factors like the cultural situation in which it is read but which had a decisive (even if unwitting) influence on the translator’s choices. This reading is historicizing: it draws a distinction between the (foreign) past and the (receiving) present. Evaluating a translation as a translation means assessing it as an intervention into a present situation.[46]

Writing, translating and reading are contextually contingent acts, and one must be aware of the contexts from which and to which such texts move. Crucially, the discursive regime of domesticating/fluent translation does not allow such historicizing or cultural understanding, as the foreign is simply rendered invisible.

The current regime of translation is one in which the translator has become invisible, and this has negative effects regarding the translator’s status, but also in regard to couching the United States’ translational imperialism. Venuti argues, “Schleiermacher’s theory anticipates these observations. He was keenly aware that translation strategies are situated in specific cultural formations where discourses are canonized or marginalized, circulating in relations of domination and exclusion.”[47] The results of this naturalized, extreme form of domestication are transparent cultural ethnocentrism and domination. These are, as Venuti argues, “scandals” of translation.[48] In opposition to these scandals, a foreignizing translational regime can link up to an “ethics of difference” that “deviate[s] from domestic norms to signal the foreignness of the foreign text and create a readership that is more open to linguistic and cultural differences.”[49] It is Venuti’s argument that acknowledgement and accommodation of difference are sorely lacking in the late 20th and early 21st century United States context, thus requiring the switch to foreignizing translation. However, as previously stated, such a foreignizing method runs completely counter to the dominant trend of the present.

Venuti argues for a switch to foreignization and away from the domestication that has been naturalized. He argues that “invisibility” refers both to the status of the translator, negated under the writer economically and functionally, and to the requirement that translations be presented so fluently, as if they were made in the local language and culture, that the translator is rendered invisible. The invisibility of domestication overlaps in instructive ways with Bolter and Grusin’s concept of immediacy, the transparent side of remediation. Ultimately, remediation offers a way out of the problematic discursive regime of translation that Venuti locates.


Remediation: Immediacy and Hypermediacy

In their seminal new media text, Jay David Bolter and Richard Grusin coined the term remediation in response to what they saw happening with new media at the time, but also to how all media had been changing over the twentieth century.[50] For Bolter and Grusin all media are remediated: a medium remediates other media. Web pages have text, icons that tell people to ‘turn to the next page,’ and embedded movies with standard filmic controls; Microsoft Word has a ‘page’ as it remediates writing on paper. This remediation has two qualities, or sides. The first, immediacy, is where the fact of remediation is cut away, or rendered invisible. The HUD (heads up display) of a game is lessened, removed, or rendered diegetically relevant. From a literary standpoint the content and diegesis are all that matter, and the user need not leave this place of immediate access to the text. As Bolter and SIGGRAPH director Diane Gromala write a few years later, “we…have lost our imagination and insist on treating the book as transparent….  We have learned to look through the text rather than at it. We have learned to regard the page as a window, presenting us with the content, the story (if it’s a novel), or the argument (if it’s nonfiction).”[51] The second, hypermediacy, can be seen in TV phenomena such as a miniature window in one corner of the screen and the scrolling information bar at the bottom of the screen, but it is also the footnotes, side notes and commentary of books. For Bolter and Grusin remediation is simply something that happens with all media and has happened since writing remediated speech, much to Plato’s chagrin. However, it has interesting links with translation, particularly in how immediacy can link up with Venuti’s fluency, and how hypermediacy can link up with the possibilities of layered translation, which come from Derrida and Benjamin.

Venuti claims that the current regime of domesticating translation within the United States leads toward a fluency that renders invisible both the translator and the fact of translation. According to the majority of American readers, who enjoy this type of translation and experience, such a goal is admirable. According to Venuti, fluency is quite problematic due to the translational ethics of difference involved. Within the logics of remediation, rendering the translation invisible makes the text an immediate fact for the reader even though it is not the original text, but the translated version. This type of immediacy materializes in particular ways with particular media: for books it is a one-to-one fluent translational strategy, with film it is dubbing and remaking, and with video games it is localization. While these fluent/immediate strategies are dominant at present, there are alternatives.

For Venuti, the opposite of translational fluency is a foreignization that highlights the ethics of difference. As cited above, most important in this is creating a new style of “double reading” that requires the reader to read the text as a translation. However, if we take Bolter and Grusin’s oppositional strategy of remediation, hypermediacy, we can see alternative methods of highlighting an ethics of difference. Translational hypermediation would entail highlighting the fact of translation; it could be abusive, Derridean translation; it could be Jerome McGann’s hypermedia work; it could be cinematic subtitles and metatitles; it could be game mods.[52] All of these interact with the medium in a way that utilizes its particular form.

Hypermediated translations of new media could easily exist because of the particularities of digital alterability, but they do not. In the following section I will elaborate on the particular ways that translation happens materially with books, film and games. These current ways are primarily domesticating, fluent and immediate. Then, I will explain how translation could instead bring out a foreignizing, layered and hypermediate relationship with the text.


Specific Iterations in Media

While the above section summarized tenets of translation coming primarily from literary studies, the following will elaborate how these different trends intersect with three particular media: books, film and games. These three media are chosen very deliberately. Gaming is my main focus, in part because of industrial and theoretical denial of its translated nature, and in part due to its ability to lead to new translational possibilities. However, books and film are necessary predecessor forms on the route to games. Books are important as the primary textual form in current Western literary culture. While poetry, newspapers, magazines and other printed forms are also relevant, I limit my analysis to the modern novel, both for reasons of space and for the novel’s focus on, and obsession with, the author. Secondly, film is important because games have been created in the wake of the 20th century’s cinematic revolution, and the language of games comes in part from the language of cinema: cut scenes, first person perspective, and an increasing obsession with realism.[53] While the link between gaming and cinema has been critiqued on the grounds of gaming’s material and experiential differences from cinema, such critiques do not deny the historical and stylistic links, however unwieldy cinematic conventions may be when applied to games.


Books, Supplementarity, and Digital Culture

Books in the modern period are singular objects created by singular authors. An author has an idea, struggles to bring this (original) idea to paper, and over time eventually uses his or her singular language to write the work. While books are made at one point in time, there is a belief in their timelessness: they are able to stand up to decades, centuries, and millennia (although such durability is also a test of worth) due to their original language (or rather, despite their original language, as it is translation that allows the text to ‘live on’). There is an essential link between author, nation and language, which is brought out in the book, and readers partake in this art when they read the book.

A translation is something that comes chronologically after the book. It is the result of taking the words and sentences (the content) and changing them into another language in order to facilitate the book’s movement across spatial-linguistic borders. The translation’s hierarchical relationship to the original book is derivative, but its material relationship has changed over time. Whereas translations are now a material replacement that comes chronologically after the original, they were at times both simultaneous and supplementary to an original work.

Certain texts needed to be written in certain languages (Latin for religious, philosophic and scientific texts; literary genres in Galician or Arabo-Hebrew; and travel accounts such as Marco Polo’s and Christopher Columbus’ in a hodgepodge), and the idea of deliberately altering a text from one language to another was not a high priority, or even acceptable in some instances.[54] At one point, in lieu of translation there was commentary, or Midrash in the case of the Torah. Such commentary was necessarily displayed alongside the original as a supplement. It complicated, but did not replace, the original.

This older form of supplementarity can be linked to the current, but uncommon, practice of side-by-side translations, where the original resides on one page and the translation on the facing page, enabling comparison. Biblical and philosophical material is often granted side-by-side translation due to the importance of both individual words and overall sense, or because the question of just what is important is either undecided or unknowable. In the case of popular (low) cultural novels there is less reason to consider the original, and so there is little reason to print it. Cost and size are further reasons why side-by-side translations of important biblical, philosophical, and literary texts still exist while popular novels are almost never given such treatment: halving the pages printed significantly reduces the cost and size of the book. Only important texts, or political and religious ones where price is not an issue, can justify the additional cost of the doubled pages, and popular, semi-disposable entertainment texts are less entertaining as enormous, bulky tomes. What was a complementary relation between original and translation becomes a matter of replacing one with the other.

The shift from supplementary translation to replacement translation, where the translation stands on its own as a complete text, happens within modernity at the same time as the rise of translational equivalences. However, as discussed previously, it is impossible to conduct a perfect translation that conveys word, sense and all equivalences, so one element becomes the focus, and under that equivalence the translation replaces the original book. In the case of the 20th to 21st century United States this equivalence is, roughly, what the author would have written had he or she been from the United States and writing in English. Because the industry follows a replacement strategy that supports fluency and immediacy, books can only follow a single equivalence. However, the materiality of the book can support multiple equivalences through a translational supplementarity that supports an ethics of difference and hypermediacy.

Page-to-page translation and the works of Derrida are obvious examples of how books can support this form of hypermediated translation.[55] The reader can be shown the different words that could have been used throughout the translation. While there are many possibilities for hypermediated translation, there have been few opportunities for it throughout Western translation history. However, this hypermediated style might be coming back into fashion with the advent of new technologies, including the digital book. Digital books also solve the cost and size issues that partially argued against side-by-side complementary translations.

While the digital book holds much potential, proprietary design, nation-based sales of content, and Digital Rights Management (DRM) issues plague current eReaders. They are simply an alternate way to read a book, which one must buy from a massive chain store in one language, and nothing more; they are monolingual devices that extend the same trend of immediacy that I described above. However, the digital book could be programmed to show a multiplicity of versions, iterations, and translations. It could be programmed to be a truly hypermediating experience, if only by linking different translations of a text. I will return to this in the final section of my paper, but a hint at this possibility lies in Bible applications. YouVersion’s digital Bible application[56] has 49 translations in 21 languages, and this number increases as new versions are added. The Bible itself is not under copyright, but it would be possible to use a micropayment system that would allow interested patrons to buy linked versions of different book translations in a similar manner. By integrating the different variations a hypermediated experience would be created.
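The linking of translations described above can be sketched as a simple data structure: each segment of a text carries parallel renderings keyed by version, which a reader could toggle or view side by side. The sketch below is purely illustrative; the class, method, and version names are my assumptions, not YouVersion’s actual implementation or API.

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    """One unit of text (e.g. a verse) with parallel renderings by version.

    This is a hypothetical model of a hypermediated digital text: rather
    than one translation replacing the original, several coexist and can
    be displayed together.
    """
    ref: str                                   # e.g. "Genesis 1:1"
    renderings: dict = field(default_factory=dict)

    def add(self, version: str, text: str) -> None:
        """Attach one more rendering of this segment."""
        self.renderings[version] = text

    def compare(self, *versions: str) -> dict:
        """Return the requested renderings for side-by-side display,
        silently skipping versions this segment does not have."""
        return {v: self.renderings[v] for v in versions if v in self.renderings}

seg = Segment("Genesis 1:1")
seg.add("KJV", "In the beginning God created the heaven and the earth.")
seg.add("WEB", "In the beginning, God created the heavens and the earth.")
print(seg.compare("KJV", "WEB"))
```

A reader interface built on such a structure would present the supplementary, rather than replacement, relation between versions: the hypermediated display is just `compare()` over whichever translations the patron has linked.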


Film, Dubs, Subs, Remakes and Metatitles

The contentious relationship between immediacy and hypermediacy is highly visible in film translation.[57] On the one hand there is a long history of replacement/transparency with multi-language versions (MLVs), dubbing and remaking, but on the other hand there is an equally long history of subtitles. While the debate between subtitles and dubbing is really only solvable by referring to local preference, I argue that the rise of remakes of foreign films, especially in the United States, is a sign of the dominance of replacement and immediacy strategies. In the following section I will outline the history of language in film, then how that history intersects with remediation, and finally the ways that the lesser-used hypermediacy might bring out alternate forms of film translation.

When cinema was first exhibited there was no call for translation. There was no attached sound and there was no dialogue. The original ‘films,’ like the Lumière Brothers’ La Sortie des Usines Lumière (1895), which depicts the workers leaving the Lumière factory, and L’Arrivée d’un train à La Ciotat (1896), which shows a train arriving at the station and people beginning to get off, are good examples of the limited structure and general ‘universality’ of the earliest films. Because there were no complicated plots or multiple scenes, it was believed at the turn of the 20th century that cinema, like photography, was merely the “reproduc[tion of] external reality.”[58] At the beginning of the 20th century, cinema was considered outside of language and universal.[59] This understanding was first troubled with the inclusion of intertitles, as they required translation to move the film from one place, and one language, to another. However, the rest remained ‘universal.’

The late 1920s brought embedded sound to cinema, and with it came the talkies. These talkies necessitated a new level of translation, and both immediate and hypermediate translation styles were available: dubbing and subtitling respectively. Subtitling is both hypermediating and foreignizing. It is hypermediating in that it accentuates the fact of translation by putting the translated dialogue on top of the film. It is foreignizing because of the constant, visible disjoint between the words of the actors and the subtitles at the bottom of the screen.[60] The viewer constantly hears the foreign other, and this brings to the forefront the issue of trusting a translator to have translated properly.

In contrast, dubbing is immediate in that it erases the voices of the visible actors and replaces them with other voices in the target language. However, dubbing is not perfectly domesticating as there is a discrepancy between the bodies on screen and the dialogue. This discrepancy is partially the result of lip-syncing issues, and partially the result of differently signified bodies and voices. One of the tasks of dubbers is to forcefully make the dialogue match the lips by altering the linguistic utterances, often quite significantly.

While dubbing can alter the words and voice coming out of the body, it cannot change the bodies themselves. In a realm of racialized nationalism, or as Appadurai writes, when the hyphen between the nation and state is strong,[61] this discrepancy between racially different body and local language is a problem. Because it is assumed that only those with specific bodies speak specific languages, such discrepancy is highlighted.[62] Dubbing thus still has a hypermediated quality to it. A further step toward immediacy is changing the body. There have been two methods of making films more immediate by changing the bodies: the first was the early 20th century multi- and foreign-language version, and the second was the much longer-lasting remake.

The understanding of film as universal was further challenged in the 1929–33 period, which saw the introduction of multi- and foreign-language versions. Foreign language versions (FLVs) were recreations of a film made after the fact in a different studio, while multi language versions (MLVs) were recreated in the same studio on the same set with different actors, but later in the same day.[63] The M/FLV highlights that there were people who understood that culturally specific elements are writ large on the body. National culture was inscribed not only in language, but in bodies, clothing, and even story. It was believed that by replacing the body, remaking the film in both the ‘local’ language and the ‘local’ body, the film would be made less foreign. This effort reveals the dominant trends of immediacy and domestication: by replacing both language and body the text is made even more transparent for the audience. However, the M/FLV did not last long, largely due to the high costs involved. Then as now, business and the bottom line took priority, and the cost of making multiple movies simultaneously was not economically justifiable, especially when a movie could flop.

While intertitles and the MLV incorporate linguistic and human alteration, they do not consider cultural specifics. The content level was not translated or adapted; the stories were not altered. Enormous numbers of stories were adapted and remade again and again, but not out of any concern for cultural relativity. This oversight was rectified three decades later with films like Gojira (1954), which in its American remake was reconceptualized away from the original's atomic bomb logics. The remake, Godzilla, King of the Monsters! (1956), was reshot and reedited to feature an American journalist narrator and to highlight the monster genre.[64] Following Godzilla, but primarily at the end of the 20th century, there was a resurgence of remakes linked to cultural translation.[65]

With remaking, not only do the bodies in the film change to locally recognizable ones with their own voices, but the setting of the film can be changed from foreign lands to local ones. An example is Shall We ダンス (1996), a Japanese movie about a salaryman going through a midlife crisis and learning to dance in an anti-dancing Japanese society, which was remade as Shall We Dance? (2004) with Richard Gere, Susan Sarandon and Jennifer Lopez in a Chicago context.

In one of the most important scenes in the original, Mai is lectured by a possible new dance partner, Kimoto. He proposes they give a demonstration at a local dance hall (night club), but she refuses to dance with "hosts and hostesses," claiming it isn't dancing, but cabaret.[66] Mai is obsessed with the foreign, European Blackpool competition and dance floor, which is opposed to the native dance hall with its shorter history and lower cultural standing. Kimoto claims not only that enjoying dance is of primary importance, but that the lowly Japanese dance hall has a history just as important as Blackpool's. The opposition of high to low (hierarchical) and native to foreign (spatial) is stressed in this interchange. When Mai finally holds a party that signals the restart of her career, it is on the lowly dance hall's floor, indicating the primacy (or at least equality, as she plans on returning to Europe) of the native over the foreign, and stressing the equality of high and low. In contrast, the remake opposes Miss Mitzi's relatively unpopular dance studio with the hip Doctor Dance studio and club. The opposition is both temporal and hierarchical: Miss Mitzi is middle-aged and teaches various forms of professional dance, whereas the scenes in Doctor Dance are almost all depicted as club/entertainment moments. And when Paulina, Lopez's adaptation of the Mai character, decides to go study in England (a rather meaningless decision in the context of the remake), her going-away party takes place in an unrecognizable locale. In the original, the Japanese spirit and history are implied to be just as important and meaningful as the European ones; the film is highly nationalist in its context. The remake works to erase such nationalism by placing the theme of global/universal work and the international family man/nuclear family over that of foreign and native. Such movement complies with a universalization of remaking as domestication. 
The foreignness of the Japanese original is rendered domestic and immediate with the remake.

A domesticating translation takes the foreign text and moves it into the native context, making the reader's job easier by forcing the text to speak in a manner the reader is used to. In Hollywood's domesticating remake of Shall We ダンス, Japan's troubled interaction with modernity and globalization is removed. The local socio-political particulars of the original film are erased in the service of "universal" generic narratives that satisfy an American audience that rarely interacts with foreign others. Hollywood's remake process is a systematic erasure of difference and of the foreign other, one that has been naturalized under the theory of the remake as cinematic translation, which need only render equivalent one essential element at the expense of all others.

So far I have discussed the current domesticating and immediate strategies of film translation. Even though I have claimed that subtitling is both foreignizing and hypermediating, it does not use the materiality of the filmic medium to bring out the full possibilities of hypermediation. No such further creations yet exist, but it is not hard to imagine a type of "metatitles" that would use the capacities of the digital cinematic medium to layer translations on the screen in a hypermediating translational style.

In the last few pages of "For An Abusive Subtitling," Nornes refers to the fan subtitling of Japanese animation that took place largely between the late 1980s and early 1990s in the United States.[67] For difficult-to-translate terms, the fan subtitlers gave extended definitions that covered the screen with words. This effort goes well beyond standard translation: it starts with a foreignizing pidgin, but also provides an incredible amount of information that works to bridge viewer and source. While such abusive subtitling is hypermediating in that it layers text over the image, it could be extended to use the medium further by layering the text using DVD layers. These layers could move from the main textual layer (the visual film) and the verbal audible signs (dialogue and its subtitles) to hypermediated translational layers: the visual verbal signs (text on screen), the non-verbal audible signs (background noises that need explanation), the non-verbal visual signs (culturally derived, metaphoric camera usage), and any other semiotic layer possible.

Through such a layered commentary on the different signs, the screen would quickly fill and overwhelm the viewer as a form of abusive translation. While there is something admirable in completely disrupting visual pleasure, such disruption would never be taken up by the industry unless all film layers could be made visible either alternately or simultaneously, under the control of the viewer. As home video watching generally happens at the command of a single user or a small number of viewers, the DVD format is uniquely suited to enact metatitles. With the increased capacity to store information on DVD, Blu-ray and future technology, there is effectively no limit to the possibilities of layering.
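
Such a viewer-controlled system of toggleable commentary layers can be modeled in a few lines. The sketch below is purely speculative, since no metatitle format actually exists; the class, layer names and timing window are all illustrative inventions:

```python
# A speculative model of "metatitles": translation commentary stored as
# named layers that the viewer can switch on and off at will.
class MetatitleTrack:
    def __init__(self):
        self.layers = {}     # layer name -> list of (timecode, note) pairs
        self.enabled = set() # layers currently toggled on by the viewer

    def add(self, layer: str, timecode: float, note: str) -> None:
        """Attach a translational note to a layer at a given timecode."""
        self.layers.setdefault(layer, []).append((timecode, note))

    def toggle(self, layer: str) -> None:
        """Flip a layer's visibility (symmetric difference flips membership)."""
        self.enabled ^= {layer}

    def visible_at(self, t: float, window: float = 4.0) -> list:
        """Return all notes from enabled layers active at playback time t."""
        return [note for layer in self.enabled
                for (tc, note) in self.layers.get(layer, [])
                if tc <= t < tc + window]
```

A viewer might enable only the dialogue subtitles on a first viewing, then add the cultural-commentary and camera-usage layers on a second pass, choosing between immediacy and hypermediation at will.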

A layered translation uses the capacities of current technology by hovering over the text. But just as a translation can never fully encapsulate the original, metatitling could never fully acknowledge every aspect of the original text. It is a failed translation, just as all translation fails by being incomplete, but it fails in a foreignizing and hypermediating style that acknowledges its own failings and builds toward some ethical 'more.'


Games and L10n[68]

While film translation retained a complex but present relationship to translation theories and literary translation, the move to new media forms has created a chasm between theory and practice, which has resulted in new methods and industries of translating. Both translation theory and localization practice could benefit from cross-pollination, and that is the heart of my work. The shift to digital software has been accompanied by the rise of a software localization industry (of which game localization is an independent but related branch) with its own tools, standards committees and rhetoric. The following section begins by looking at how language intersects with games. I then consider what game localization is and how it succeeds in translating games, but also how it fails to address certain possibilities. One major failure is that localization does not utilize the possibilities of the digital medium to bring about a hypermediated translation, despite the immense amount of hypermediation within the medium itself.

Like films, games have an interesting relationship with the idea of universality. The first computer/digital games such as Tennis for Two (1958) and Spacewar (1962), and even early arcade cabinet games like Pong (1972), Space Invaders (1978) and Donkey Kong (1981), were 'language' free. Just as early films were largely visual amazements, these games were computer-programming amazements meant to show off the technology.[69] However, the programming was difficult and took up all or most of the available processing power and programming energy, which meant that early games had little of either to spare for story. Many held (and still hold) to a universal accessibility and understanding of these games, a belief grounded in these technological and programming limitations coupled with a belief in the universality of play as a social phenomenon. Even now the belief in ludic universality holds, despite theorists problematizing it in much the way that a previous generation of visual culture theorists problematized the universality of vision.[70] For instance, Mary Flanagan has argued, "while the phenomenon of play is universal, the experience of play is intrinsically tied to location and culture."[71] While she is largely discussing the spatial politics of games existing in certain spaces, the theory can be expanded to indicate that any game, or instance of play, is tied to a cultural context, be it Tennis for Two and the atomic-age weapons research lab in which it was created, Spacewar and masculine science fiction fantasies, Donkey Kong and the origins of the side-scroller as linked to a Japanese aesthetic, or any other game and context. Games are developed, produced and distributed in specific socio-political, temporal and spatial locations and are thus not universal.

However, this believed universality is only now coming into question; it went completely unquestioned from the 1960s to the early 1980s, during the 1st and 2nd generations of computer games. There were no 'words' in the early computer games, just crude iconic representations, which meant that within the games themselves there was no 'language' needing 'translation.' What did need translation were the external titles and instructions. Titles were kept or changed at the discretion of the producers and distributors. Pakkuman (1980) became Pac-Man instead of Puck-Man for fear of malicious pranksters changing the P to an F, but other titles were kept as is or were already programmed in roman characters. Instructions for arcades and manuals for home consoles needed more extensive translation, but it was a very limited, technical form of translation. The first generation of computer game translation was thus both limited and little different from the roughest of technical translations, neither 'literary' nor 'political.'

The second generation of game translation came about when games utilized greater processing power and storage capabilities to tell extensive stories. These were early adventure games like Colossal Cave Adventure (1976) and Zork (1977-80), which told second-person adventure narratives, and their more graphical descendants of the 1980s such as Final Fantasy (1987) and King's Quest (1987). These broke ground by normalizing narrative alongside play, and they necessitated a new type of game translation that could address more than just the paratextual elements of title and manual.[72] This generation of game translation led to the creation of an industry for game translation.

The rise of linguistic material (stories in and surrounding the games) led to an acknowledged need for translation and the beginnings of the localization industry. Originally, the primary method was what is now called partial localization, where certain things were localized but most others were not. Thus, the manual, title, dialogue, and menus might be translated, but the HUD might remain in the original language due to the difficulty of graphical alterations. The localization industry evolved in the 1990s to match the growing game industry, and localized elements expanded from menus and manuals to graphics, voices and eventually even story and play elements.

While the current form of game localization is much expanded from early game translation, the basics are the same. According to the Localization Industry Standards Association (LISA[73]), "Localization involves taking a product and making it linguistically and culturally appropriate to the target locale (country/region and language) where it will be used and sold."[74] Localization is like translation in that it facilitates the movement of software between places, but it is different in that it also allows significant changes in the visual, iconographic and audio registers in addition to the linguistic alteration.

Regardless of how much is translated, game translation involves the replacement of certain strings of code with other strings. These strings are usually linguistic: the title The Hyrule Fantasy: Zeruda no densetsu (The Hyrule Fantasy: ゼルダの伝説) becomes The Legend of Zelda, and within the game the line "ヒトリデハキケンジャ コレヲ サズケヨウ" [it's dangerous by yourself, receive this] becomes the meme-worthy "It's dangerous to go alone! Take this." But alterations are also graphical: a Nazi swastika is changed into a blank armband for games sold in Germany. The first is a title, the second a linguistic asset, and the third a graphical asset. All assets exist as strings in the application code, and by altering the programmed code, each can be changed in the effort to move the game from one context to another. This ability to alter assets is an essential quality of new media.
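
The asset replacement described above can be sketched as a locale-keyed string table. This is a minimal illustration of the principle, not actual game code; the keys, file paths and fallback order are hypothetical:

```python
# Localization as string replacement: each asset (title, dialogue line,
# graphic reference) is a key whose value is swapped per target locale.
# All identifiers and paths here are illustrative stand-ins.
ASSETS = {
    "title": {
        "ja": "The Hyrule Fantasy: ゼルダの伝説",
        "en": "The Legend of Zelda",
    },
    "old_man_line": {
        "ja": "ヒトリデハキケンジャ コレヲ サズケヨウ",
        "en": "It's dangerous to go alone! Take this.",
    },
    "armband_texture": {
        "default": "textures/armband_swastika.png",
        "de": "textures/armband_blank.png",  # graphical asset swapped for Germany
    },
}

def localize(asset_id: str, locale: str) -> str:
    """Return the locale's variant of an asset, falling back to a
    'default' variant and finally to the Japanese source string."""
    variants = ASSETS[asset_id]
    return variants.get(locale) or variants.get("default") or variants["ja"]
```

Note that the linguistic and the graphical asset are handled identically: to the localizer both are simply strings to be replaced, which is precisely why the alteration of a swastika and the alteration of a line of dialogue fall under the same industrial practice.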

Along with numerical representation, modularity, automation and transcoding, Lev Manovich argues that one of the primary elements of new media is their variability.[75] Variability exists because new media are tied to digital code, which is adaptable, translatable and transmediatable through the alteration of specific strings. Because the strings, especially linguistic strings, are modular, there is no fixed specificity to games. With digital games this variability is combined with the discourse of play as universally understandable. Because play is considered universal, the trappings of games (form, content and culture) are considered inconsequential and variable, to be localized to fit a target context in a way that does not change the game's ludic [play] essence. Thus, any level of alteration in the localization process is fully sanctioned in order to provide the equivalent "experience" to the user.[76]

While asset alteration is possible as an essential quality of digital media, it is not simple: a hard-coded application can only be changed by painstakingly altering strings throughout the program. In contrast, an application that calls up assets can swap individual assets among multiple variations and then choose which to call. This practice has been enabled in part by the game production industry embracing internationalization (i18n) as a necessary and regular practice.

Internationalization is the practice of keeping as many game assets as possible untied to, and unmarked by, cultural elements. In his guide to localization, Bert Esselink provides the example of an image of a baby covered in blankets with a separate layer of undefined, localizable text.[77] Unlike pre-internationalization methods, the image and text are not compressed together, which makes it possible and easy to switch the text. While the words are changeable, the image remains the same, as there is an assumption that a smiling child is universal. That such elements are not, in fact, universal is an issue. Games move beyond this by retaining almost all elements as changeable assets, whether they are dialogue, images, Nazi armbands, or realistic representations of military flight simulators, but this changeability brings out other problems.[78] It does not address the elements assumed to be universal that are not, and it positions internationalization as a lead-in to domestication. Within the ideal of internationalization, the practice becomes domesticating translation by material and practical necessity: no matter what happens there will be an immediate, replacing, domesticating translation.
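
The contrast between hard-coded text and internationalized asset calls can be sketched as follows. This is a simplified illustration of the general gettext-style catalog pattern, not the API of any particular engine or library; the catalog contents and key names are invented:

```python
# Internationalization (i18n) in miniature: the program never embeds
# display text directly. It calls up assets by identifier, so localization
# becomes a matter of supplying a new catalog rather than editing code.
CATALOGS = {
    "en": {"greeting": "Hello", "quit_prompt": "Are you sure you want to quit?"},
    "ja": {"greeting": "こんにちは", "quit_prompt": "本当に終了しますか？"},
}

class I18n:
    def __init__(self, locale: str, default: str = "en"):
        self.locale, self.default = locale, default

    def t(self, key: str) -> str:
        # Fall back to the default locale for untranslated strings,
        # and to the bare key if the string is missing entirely.
        catalog = CATALOGS.get(self.locale, {})
        return catalog.get(key, CATALOGS[self.default].get(key, key))
```

The design choice matters for the argument above: because the fallback chain always terminates in a default locale, internationalization structurally guarantees that *some* replacing, domesticating string is shown; the architecture itself presumes substitution rather than layering.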

If expansive narratives opened games up to larger amounts of translation, it was a confluence of factors that led to the third generation of game translation and the eventual rise of the game localization industry: the rise of the software localization industry with its i18n standards, the understanding of variability and the ability to change the games, the creation of CD technology with larger storage capacity, and finally the use of that storage capacity for voice acting to highlight the narratives.

While compact disc technology was created in the 1970s and has been a means of distributing music since the early 1980s, it took until the 1990s for games to be distributed on CDs. Beginning in the early 1990s, CD-ROM drives were attached to computers and the PlayStation gaming device, and games began to be distributed on CDs. This move from floppy disks to CDs greatly expanded the size of games, and with it came the inclusion of both cinematics and digitized voices; one famous early example is Myst (1993). Both cinematics and recorded vocals take a large amount of storage capacity, which the CD provides. However, the CD does not provide enough space for multiple languages of vocal dialogue, so there was a justified necessity to limit the languages included with a game. Even when games moved to multiple disks, providing multiple audio tracks would have significantly increased the number of disks required.

The lack of space for multiple languages forced game translators to decide between subtitling the audio and dubbing over it. While this might have led to a debate between dubbing and subtitling equal to the one in film translation, the dominance of computer generated (CG) video over live action, full motion video within games actually led to the naturalized dominance of dubbing and replacing.[79]

As CG requires that voices be added, there is little sense that localization replaces anything. There is no 'natural' link between the visible body and the audible voice in CG, so dubbing causes fewer problems in gaming than it does in cinema.[80] However, storage constraints meant there was not enough space to provide multiple languages on a single CD, so the majority of games carry only one language. Certain European regions provide multiple languages by necessity, but this is far from the norm. Even when the storage and distribution method changed from CD to DVD, there was little movement toward the inclusion of multiple languages. This lack of included languages is also partially due to the business practice of region encoding.

Linguistic multiplicity within games has also been stymied by the practices of video encoding for TV and region encoding for DVD disks. CDs and DVDs are region encoded in order to protect business interests by opposing 'piracy,' defined here as the unsanctioned copying, spread and use of software applications.[81] There are two general eras of this encoding. The first was the separation between NTSC (National Television System Committee) and PAL (Phase Alternating Line). These two standards were linked to the televisions distributed in different regions; gaming systems and disks needed to operate in the same encoded manner as the televisions. This made it impossible to play European games (PAL) on an American system (NTSC), but it did not necessarily block out Japanese games (NTSC). This initial form of encoding had less to do with piracy protection than with policing national airwaves. DVDs use a slightly different method, dividing the world among eight encodings, roughly as follows: US/Canada (1), Europe/Middle East/Japan (2), Southeast Asia (3), Central/South America/Oceania (4), Russia/Africa (5), China (6), undefined (7), international venues such as airports (8). For video games these region encodings work with and against the standard PAL/NTSC distinction: while Europe and Japan are both region 2, the PAL/NTSC difference still blocks playback between them; conversely, while NTSC disks work easily in both Japan and the United States, the region encoding limits the ability to play each other's disks. Both the PAL/NTSC distinction and region encoding have multiple purposes, including software piracy prevention, but in terms of translation they legitimize not translating for multiple regions.

As piracy is a problem for the game industry[82] and large amounts of piracy happen in certain regions (Asian regions especially, due to economic disparity, gray markets and governmental bans on consoles), there is a general belief that not supporting multiple languages will put a block on game piracy: if the gray market version is unintelligible because it is in another language, a user may still buy the version in their own language. In other words, limiting the number of languages available limits the geographical range of a particular version of a game, which works against black markets and for the game industry. Thus, there is an interesting convergence between business interests, the available technology, developing programming techniques, and the general trend toward translational domestication and immediacy. The storage capacity limitations, coupled with the use of cinematics and voices and the standardized practice of dubbing and replacing, dovetail perfectly with the industry practice of localization as domesticating, immediate translation.

The goal of localization is to make the product 'appropriate,' a goal heavily influenced by the business elements of the localization industry. Localization is about profit, the bottom line, so the goal is to fit user desires. Game localizers identify user desire solely with entertainment.[83] Entertainment, and thus appropriate translation, is here identified with helping the target player have the same experience that the source player had in the source context. Such a singular drive is quite different from literary translations that aim to abuse the user, or from linguistic interpretation and political translations that deal with the problems of modern political interaction. However, at base localization is still a matter of equivalence: the equivalent experience/feeling/affect.[84]

Insofar as the localization industry is a business, there is little one can say against the practices enacted. Only popular games are localized, so translating them with the same money-making "experience" is simply better business practice. However, when one attempts to move beyond such market logics it is hard not to see the problems. Just as translation needs to be understood as important, powerful and dangerous, so too must localization be understood as a weighty practice. An industry that has globalization (g11n) as one of its prime terms must be aware that there is more to globalization than "the business issues associated with taking a product global."[85] Just as globalization is a fraught term in the world, it must be problematized beyond its purely business definition in localization.[86] Said simply, there is more to a game than the immediate localization of the foreign user's experience.

One way in which localization has recently pointed toward both hypermediation and alternate forms of translation is the creation of multilingual editions of games. With the switch from CDs to DVDs and the move to downloadable software, there has been some movement toward including multiple languages: DVDs have enough storage capacity to house multiple audio tracks, and downloadable software is limited (if time consuming to acquire) only by the system's hard drive capacity. One particularly interesting case is Square Enix's "international editions." Notably, these started with only one language, Japanese, but included a few additional features (Final Fantasy VII: International Edition). They then turned into games that mixed English and Japanese but were released solely in Japan: the audio tracks were English and there were Japanese subtitles, but the rest of the game was in Japanese (Final Fantasy X: International Edition, Kingdom Hearts: Final Mix). Part of the difference between the early and later international editions is the move from CD to DVD; there was little spoken dialogue in the early version, and even in the DVD versions only the audio track was replaced (the Japanese with 'international' English). A third movement made both English and Japanese audio tracks available, but only after finishing the game once: the initial playthrough necessitated a mixed English/Japanese experience with Japanese menus, written dialogue and subtitles, but English audio (Kingdom Hearts II: Final Mix+). Finally, a fourth movement offers full availability of both English and Japanese with various subtitle languages (Star Ocean: The Last Hope International). 
This progression of styles suggests that what was originally a gimmick has become a marketing decision based on the knowledge that there is an audience for such mixed-language editions, and that this audience has spread outside of Japan.

These international editions have a tangled relationship to the concept of kokusaika [internationalization, or 'international-transformation'] within Japan. Kokusaika itself is tied to ideas of westernization in the late Tokugawa and Meiji periods, and of Americanization in the post-World War II period. Kokusaika was seen as an important step of modernization in much of the discourse of the 19th and 20th centuries, but it is troubled in nationalist and essentialist discourses in particular.[87] The Square Enix games both support and trouble this kokusaika discourse: they embrace internationalization, but they maintain the importance of Japanese within the games. While the international edition allows multiple languages, it does so from a Japanese expansionist perspective. Language is never neutral, and by putting the lingua franca and Japanese forward as the only choices (with the other standard gaming languages such as French, German, Spanish and Italian as subtitle options) there is a definite movement to raise the importance and reach of Japanese as a language. Kokusaika is thus maintained, but with the caveat of a continued presence (and even dominance) of Japanese. While I believe the international editions are on the right track toward a layered, foreignizing style of translation, they still exist in the context of Japanese politics.[88] This is similar to Venuti's claim that Schleiermacher's work offers a helpful corrective despite the German author's 19th century chauvinism.

While the past thirty years have led to increased immediacy and region protections, new forms such as DRM routines and online portals such as Steam indicate a general belief that such region separations have ultimately failed to protect against piracy. Because the region encoding tactics have failed, it is possible that a new era of localization is coming, but so far change has been relatively limited. Hopefully this is only momentary, and the same hypermediacy that has been blocked out since the beginning of gaming will become visible, along with the existence of difference that becomes visible with translations and layers. I will discuss some of these possibilities in the final section of this paper.


Possible Futures

I would like to conclude this paper with a discussion of two new trends in translation. Both are postmodern, intentionally unstable, and utilize digital materiality. One trend destabilizes the translator, and the other destabilizes the translation. However, both trends can heighten the feeling of hypermediation and foreignization, which (according to Venuti) is helpful in the current translational climate.[89]


Destabilization of the Translator

The destabilization of the translator involves multiple translators but a single translation. It has its history in the Septuagint, but its present locus is the division of tasks and the post-Fordist assembly line form of production. Like the Septuagint, where 72 imprisoned scholar-translators translated the Torah identically through the hand of God, the new trend relies on the multiplicity of translators to confirm the validity of the produced translation. The difference is that while the Septuagint produced 72 results that were the same, the new form of translation produces one result that, arguably, combines the knowledge of all translators involved. This trend can be seen in various new media forms and translation schemes such as wikis, the Lolcat Bible, Facebook, and FLOSS Manuals.

Wikis (from the Hawaiian word for "fast") are a form of distributed authorship. They exist through the effort of a user base that adds and subtracts small sections on individual pages. One user might create a page and add a sentence, another might write three more paragraphs, a third may edit all of the above and subtract one of the paragraphs, and so on. No single author exists, but the belief is that "truth" will come out of the distributed authority of the wiki. It is a democratic form of knowledge production and authorship that certainly has issues (among them the question of whether wikis are actually democratic and neutral), but for translation it enables new possibilities.[90] While wikis are generally produced in a certain language and rarely translated (as a translation would not be able to keep pace with the constant changes), the chunk-by-chunk form of translation has been used in various places.

One form of wiki translation is the Lolcat Bible translation project, a web-based effort to translate the King James Bible into the meme language used to caption lolcats (amusing cat images). The "language" itself is a form of pidgin English in which nonstandard tenses and misspellings are highlighted for humorous effect. Examples are "I made you a cookie… but I eated it," "I'z on da tbl tastn ur flarz," and "I can haz cheeseburger?"[91] The Lolcat Bible project facilitates the translation from King James verse to lolcat meme. For example, Genesis 1:1 is translated as follows:

KING JAMES: In the beginning God created the heaven and the earth

LOLCAT: Oh hai. In teh beginnin Ceiling Cat Maded teh skiez An da Urfs, but he did not eated dem.[92]

While the effort to render the Bible in lolspeak is either amusing or appalling depending on one's outlook, what is important is the translation method itself. The King James Bible exists on one section of the website, and in the beginning the lolcat side was blank. Slowly, individual users took individual sections and verses and translated them according to their interpretation of lolspeak, thereby filling the lolcat side. These translated sections could then be changed and adapted as users altered words and ideas. No single user could control the translation, and any individual act could be opposed by another translation. According to the homepage, the Lolcat Bible project began online in July of 2007, and a paper version was published through Ulysses Press in 2010. The belief is that if 72 translators and the hand of God can produce an authoritative Bible, surely 72 thousand translators and the paw of Ceiling Cat can produce an authoritative Bible.[93]

FLOSS (Free Libre Open Source Software) Manuals and their translations are a slightly more organized version of this distributed trend.[94] FLOSS is theoretically linked to Yochai Benkler's "peer production," where people do things for different reasons (pride, cultural interaction, economic advancement, etc.), and both the manuals and translations capitalize on this distribution of personal drives.[95] Manuals are created for free and open source software through both intensive drives, where multiple people congregate in a single place and hammer out the particulars of the manual, and follow-up wiki-based adaptations. The translations of these manuals are then enacted as a secondary practice in a similar manner. Key to the open translation process are the distribution of work and the translation memory tools (available databases of used terms and words) that enable such distribution; also important is the initial belief that machine translations are currently unusable. It is the problems of machine translation that cause the need for human intervention in translation, be it professional or open.

Finally, Facebook turned translation into a game by creating an applet that allowed users to voluntarily translate the individual strings of linguistic code that they used on a daily basis in English. Phrases such as “[user] has accepted your friend request” or “Are you sure you want to delete [object]?” were translated dozens to hundreds of times, and the most recurring variations were implemented in the translated version. The translation was then subject to further adaptation and modification as “native” users joined the fray when Facebook officially expanded into alternate languages. In Japanese <LIKE> would have become <好き>, but was transformed to <いいね!> [good!]. Not only did this process produce “real” languages, such as Japanese, but it also enabled user-defined “languages” such as English (Pirate), with plenty of “arrrs” and “mateys.” The open process created ‘usable’ material, such as Facebook in Japanese, but also things that would never happen due to bottom-line considerations, such as pirate, Indian, UK, and upside down ‘translations’ of English.
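The “most recurring variation wins” mechanism can be sketched in a few lines. This is a toy reconstruction of the idea, not Facebook’s actual code; the function name and the sample votes are hypothetical:

```python
from collections import Counter

def pick_translation(submissions):
    """Return the most frequently submitted rendering of one UI string.
    Every volunteer submits a candidate; the plurality choice is the one
    implemented in the translated interface."""
    winner, _count = Counter(submissions).most_common(1)[0]
    return winner

# Hypothetical volunteer submissions for the <LIKE> string in Japanese:
votes = ["好き", "いいね!", "いいね!", "いいね!", "好き"]
print(pick_translation(votes))  # → いいね!
```

The design choice worth noting is that the arbiter is frequency, not authority: no single translator decides, and the “correct” rendering is simply whichever variant the crowd converges on.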

Wikis, FLOSS, and Facebook are translations with differing levels of user authority, but they all work on the premise that multiple translators can produce a singular, functioning translation. In the case of Facebook, functionality and user empowerment are highlighted, but profitability is always in the background; for FLOSS, user empowerment through translation and publishing is one focus, but a second focus is the movement away from machine translation; in all cases, but particularly wikis, the core belief is that truth will emerge out of the cacophony of multiple voices, and this is the key tenet of the destabilization of the translator.


Destabilization of the Translation

The other trend is the destabilization of the translation. This form of translation has roots in the post-divine Septuagint, where all translation is necessarily flawed or partial. Instead of the truth emerging from the average of the sum of voices, truth is the build-up, the mass turned back into a literal Tower of Babel: it is footnotes, marginal writing, and multiple layers. Truth here is the cacophony itself. The ultimate text is forever displaced, but the mass implies the whole. The translation is destabilized by using new media’s digital essence to bring out a hypermediating translational style.

This style of translation is not new; it consists of the hypermediated translations that I discussed previously. It is side-by-side pages with marginal notes; it is Derridian translations; it is NINES and other multilayered digital scholarship; it is fan translations and metatitles; it is multilingual editions of games; it is modding. All of these exist, but not as a new methodology. The destabilization of the translation is a term for grounding these different styles as a new methodology that utilizes forms of peer production (similar to the destabilization of the translator), but fully layers things so that what is visible to the user is not the average, but a mountain of possibilities to delve into or climb up. All of these types of translation exist, and the willing translators mentioned above are available, so the difficulty is not in making the many translations happen. Rather, the difficult task is in rendering the multiplicity visible.

The main difficulty of the destabilization of the translation is the problem of exhibiting multiple iterations at one time in a meaningful way. How can a reader read, watch, or play two things at once? Books, films, and games provide multiple examples of how to deal with such an attention issue, but in a limited way. Footnotes, side-by-side pages, and subtitles are all hypermediating layers. However, the digital form presents new possibilities in that there is no space constraint, and things may be revealed and hidden at the user’s command. There are interesting possibilities in how games can use their digital, programmed form and user/peer production to bring out new levels of the application and the experience. I will review the digital book and the metatitle here, but I will focus on what I see as a new form of game translation that not only uses, but truly thrives off of, fan production.

Books are rather conservative. While they are in many ways open due to lapsed copyright, there is little invention happening to bridge different versions. While resources such as Project Gutenberg have opened thousands of texts to digital reader devices, these exist as simple text forms, just as other purchasable books exist as simple, immediate remediations of the original book form. However, a hypermediating variation would link these different versions and translations. At a click the reader could switch between Homer’s Odyssey in Greek and every single translation into English made in the 20th century. Of course, French, Japanese, German, and various other translations would also be available, and the screen could be split to compare any of the above. With a slightly different (slightly less academic) mentality, the reader could peruse Jane Austen’s Pride and Prejudice on the left-hand side of the screen and the recent zombie rewrite Pride and Prejudice and Zombies on the right-hand side. This does not advance the technology particularly; it simply has a different relationship with the text, the author, and the translator. The key is to link the texts and make them available, even if through small micropayments for each edition.

Films are interesting as there are already possibilities in play: multiple subtitle and audio tracks, and commentary tracks by stars, directors, and others. Subtitles are a simple layer that has existed for almost a century. However, with the advent of digital discs the subtitle has been separated from the print itself, allowing the user to choose whether to hide the subtitles or which subtitles to view. Shortly after the introduction of DVD technology, better compression algorithms enabled multiple audio tracks, including commentary tracks. We are now in an era of Blu-ray discs with more storage capacity, and of downloadable movie sites that allow the user access on demand. These already exist. What would be a step forward is the linking of fan translation and commentary tracks to the digital artifact itself. Files that are in sync with the film but must be started independently exist now. Three examples are the abusive subtitling that I discussed earlier through Nornes; RiffTrax, from the creators of Mystery Science Theater 3000,[96] which overdubs commentary onto various films, creating a sort of meta-humor; and fan commentary from the Leaky Cauldron,[97] one of many prolific Harry Potter fan sites on the Internet. All three of these are independent fan productions that are partially sanctioned by business. It would be highly beneficial to producers, prosumers, and consumers to enable the direct inclusion of these modifications into the DVDs themselves. It would also enable a new understanding of the film where the meaning is not the surface, but the build-up of meaning provided by both the original creators and all others who play with and add to it.

Finally, we arrive at digital games, where some of the most interesting fan work has been done and partially integrated. This means that the way has been opened for a hypermediated translation, but it has, so far, remained unpaved. The destabilization of the video game translation would combine the burgeoning practice of multilingual editions, where there is a visible choice for the user between one language version and another, with the practice of allowing and integrating fan mods. Mods are game modifications, which can be additional maps, different physics protocols, alternate graphics, or a host of other types. Some of these, such as Team Fortress, have been wildly popular. However, ‘mods’ could be expanded to include alternate translations and dialogue tracks. The workers are there and available,[98] but so far these fan productions have faced nothing but cease and desist letters, virtual takedowns, and lawsuits.

With digital games the localization process has traditionally replaced one language, and its library of accompanying files, with another. However, as computer memory increases, the choice between languages becomes less of an issue, and certain platforms, such as the Xbox and the online portal Steam, provide multiple languages with the core software. This gives rise to the language option, where the game can be flipped from one language to another. Some games put this choice in the options menu at the title screen. Examples[99] are Gameloft’s iPhone games (almost all of them, including Block Breaker Deluxe, Hero of Sparta, and Dungeon Hunter) and Ubisoft’s Nintendo DS game Might and Magic: Clash of Heroes. Others have a hard switch that makes the language of the game correspond to the language of the computer’s system software, so that a computer running in English would have only English visible in the game, but if that computer’s OS were switched to Japanese the game would boot with the Japanese language enabled. Square-Enix’s Song Summoner: Encore, Final Fantasy, and Final Fantasy II iPhone releases automatically switch between English and Japanese depending on which language the iPhone is set to. The Xbox 360 has a similar switch mechanism that requires the system to be set to the desired language.[100] Between these two types are games played on the Steam system, such as Valve’s Portal and Half-Life 2, which allow the user to launch the game in a chosen language but do not require a system-wide switch. Finally, a few games allow the user to switch back and forth between languages. Square-Enix’s iPhone game Chaos Rings allows the user to switch between English and Japanese in the in-game menu, enabling a rapid switch at any time not currently in conversation or battle. This last example is the closest to a destabilization of the translation, as it allows the near simultaneous visibility of multiple languages.
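The hard switch and the soft switch can be contrasted in a short sketch. Everything here (the catalog dictionaries, the `GameText` class, the string keys) is hypothetical scaffolding for illustration, not any engine’s real localization API; the hard switch follows the OS environment, while the soft switch flips languages at runtime in the manner of Chaos Rings:

```python
import os

# Hypothetical string tables; real games would load these from files.
CATALOGS = {
    "en": {"new_game": "New Game", "quit": "Quit"},
    "ja": {"new_game": "ニューゲーム", "quit": "終了"},
}
DEFAULT = "en"

def detect_language():
    """'Hard switch': derive the game language from the OS environment,
    as the Square-Enix iPhone ports do with the system language setting.
    (Reading LANG is a Unix-flavored simplification of locale detection.)"""
    lang = os.environ.get("LANG", "")      # e.g. "ja_JP.UTF-8"
    code = lang.split("_")[0].split(".")[0]
    return code if code in CATALOGS else DEFAULT

class GameText:
    """'Soft switch': the language can also be flipped at runtime,
    without touching the system settings."""
    def __init__(self, language=None):
        self.language = language or detect_language()

    def switch(self, language):
        if language in CATALOGS:           # unknown languages are ignored
            self.language = language

    def get(self, key):
        return CATALOGS[self.language].get(key, CATALOGS[DEFAULT][key])

text = GameText("en")
print(text.get("new_game"))  # New Game
text.switch("ja")            # runtime flip; no system-wide change required
print(text.get("new_game"))  # ニューゲーム
```

Once both catalogs ship with the software, as on Steam or the Xbox, the difference between the two switches is purely a design decision about where the choice is surfaced, which is why the runtime flip comes closest to making multiple languages simultaneously visible.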

Integrating fan-created translational mods into the software itself would further destabilize the already unstable base of multiple visible languages. This integrated form would allow the user to switch from official localization to fan translation to fan mod at their whim. The official version ceases to exist, and the user is allowed both to interact with other types of users and to create fully sanctioned alternative semiotic domains. The eventual ability to mix and match a HUD in English, subtitles in Japanese, and a fan translation in Polish would be a true destabilization.[101]

Both the destabilization of the translator and the destabilization of the translation use new forms of fan and peer production to create a foreignizing, hypermediated translation. Both could be a good in the current political moment, which equates difference with terrorism and thereby necessitates the translational replacement of all forms of difference with local variations. However, key to both destabilizations is that they are not simply utopian fantasies, but legitimately productive and ready to enact. It is my intent to build, and build upon, these possibilities for opening up new forms of translation in digital media in my dissertation project on games and localization.

[1] For an example of the lack of integration of alternate media in translation studies, see: Lawrence Venuti. The Translation Studies Reader. 2nd ed. New York: Routledge, 2004. On a particular attempt to integrate it, see: Anthony Pym. The Moving Text: Localization, Translation, and Distribution. Amsterdam; Philadelphia: John Benjamins Pub. Co., 2004. On the distinct effort to consider ‘old’ media as ‘new’ see: Lisa Gitelman and Geoffrey B. Pingree, eds. New Media, 1740-1915. Cambridge: MIT Press, 2003.

[2] Antoine Berman. “From Translation to Traduction.” Richard Sieburth trans. (unpublished): p. 11.

[3] Serge Lusignan. Parler Vulgairement. Paris/Montreal: Vrin-Presses de l’Université de Montréal, 1986: pp. 158-9. Quoted in Berman. “From Translation,” p. 9.

[4] Berman, “From Translation,” p. 11.

[5] Berman, “From Translation,” p. 11.

[6] Roland Barthes. “From Work to Text.” In The Cultural Studies Reader, edited by Simon During, Donna Jeanne Haraway and Teresa De Lauretis. London: Routledge, 2007. Rosemary J. Coombe. The Cultural Life of Intellectual Properties: Authorship, Appropriation, and the Law. Durham: Duke University Press, 1998. Néstor García Canclini. Hybrid Cultures: Strategies for Entering and Leaving Modernity. Minneapolis: University of Minnesota Press, 2005. Koichi Iwabuchi. Recentering Globalization: Popular Culture and Japanese Transnationalism. Durham: Duke University Press, 2002. Koichi Iwabuchi, Stephen Muecke, and Mandy Thomas. Rogue Flows: Trans-Asian Cultural Traffic. Aberdeen, Hong Kong: Hong Kong University Press, 2004.

[7] See: Barthes, “From Work to Text.” Michel Foucault. “What Is an Author?” In The Essential Foucault: Selections from Essential Works of Foucault, 1954-1984, edited by Paul Rabinow and Nikolas S. Rose. New York: New Press, 2003. Lesley Stern. The Scorsese Connection. Bloomington; London: Indiana University Press; British Film Institute, 1995. Mikhail Iampolski. The Memory of Tiresias: Intertextuality and Film. Berkeley: University of California Press, 1998.

[8] Berman, “From Translation,” p. 14

[9] I use literary theories due to their prevalence within academia, but also because of their political nature. While other conceptualizations of translation avoid politics and ethics (particularly practical understandings of translation) comparative literary theories of translation highlight them: my underlying belief is that translation is both politically and culturally important.

[10] Jacques Derrida. “‘Eating Well,’ or the Calculation of the Subject: An Interview with Jacques Derrida.” In Who Comes after the Subject?, edited by Eduardo Cadava, Peter Connor and Jean-Luc Nancy, 96-119. New York: Routledge, 1991.

[11] George Steiner. After Babel: Aspects of Language and Translation. 3rd ed. Oxford; New York: Oxford University Press, 1998: p. 428.

[12] Ferdinand de Saussure, Charles Bally, Albert Sechehaye, and Albert Riedlinger. Course in General Linguistics. Translated by Roy Harris. LaSalle: Open Court, 1983 [1972]: p. 67.

[13] Saussure, Course, pp. 71-78.

[14] Saussure, Course, pp. 79-98.

[15] Jonathan D. Culler. Ferdinand De Saussure. Rev. ed. Ithaca, N.Y.: Cornell University Press, 1986: p. 132.

[16] Jacques Derrida. Of Grammatology. 1st American ed. Baltimore: Johns Hopkins University Press, 1976.

[17] Jacques Derrida. “Des Tours De Babel.” In Difference in Translation, edited by Joseph F. Graham. Ithaca: Cornell University Press, 1985: pp. 165-7.

[18] Jacques Derrida. “Living On. Border Lines.” In Deconstruction and Criticism, edited by Harold Bloom, Paul De Man, Jacques Derrida, Geoffrey H. Hartman and J. Hillis Miller. New York: Seabury Press, 1979.

[19] Jacques Derrida. “What Is a ‘Relevant’ Translation?” In The Translation Studies Reader: p. 443. (italics and brackets in text)

[20] Derrida, “‘Eating Well.’”

[21] Jacques Derrida. Specters of Marx: The State of the Debt, the Work of Mourning, and the New International. New York: Routledge, 1994.

[22] Philip E. Lewis. “The Measure of Translation Effects.” In Difference in Translation.

[23] Ironically, Spivak’s Derridian translation of Derrida’s Of Grammatology was successful in its abuse, but unsuccessful in getting her further translation jobs of Derrida’s works. Derridian translations are successful when they are unsuccessful.

[24] On the relationship between task, giving up and failure see: Paul De Man. “Conclusions: Walter Benjamin’s ‘the Task of the Translator’.” In The Resistance to Theory. Minneapolis: University of Minnesota Press, 1986: p. 80. For more on Derrida, Benjamin and De Man see: Tejaswini Niranjana. Siting Translation: History, Post-Structuralism, and the Colonial Context. Berkeley: University of California Press, 1992.

[25] Walter Benjamin. “The Task of the Translator: An Introduction to the Translation of Baudelaire’s Tableaux Parisiens.” In The Translation Studies Reader: p. 81.

[26] Benjamin. “The Task of the Translator,” p. 76.

[27] Emily Apter brings this out well in her work on translation and politics. Emily S. Apter. The Translation Zone: A New Comparative Literature. Princeton: Princeton University Press, 2006.

[28] Specifically, Robinson argues for the long lasting presence of Christian asceticism (both eremitic and cenobitic) coming from religious dogma, but leading into the word/sense debate. See: Douglas Robinson. “The Ascetic Foundations of Western Translatology: Jerome and Augustine.” Translation and Literature 1 (1992): 3-25.

[29] Jerome. “Letter to Pammachius.” Kathleen Davis trans. In The Translation Studies Reader: p. 28.

[30] John Dryden. “From the Preface to Ovid’s Epistles.” In The Translation Studies Reader, pp. 38-42.

[31] Roman Jakobson, Krystyna Pomorska, and Stephen Rudy, Language in Literature. Cambridge: Belknap Press, 1987: p. 429.

[32] Jakobson, Language in Literature, p. 434. There are interesting connections between formalism and Laura Marks’ work on digital translation. Marks argues that digitization necessarily robs things of certain qualities and this means they can be translated in interesting, new ways, but that they are forever robbed of originary elements. The digital becomes a universal language. See: Laura U. Marks. “The Task of the Digital Translator.” Journal of Neuro-Aesthetic Theory 2 (2000-02).

[33] Anton Popovič. Dictionary for the Analysis of Literary Translation. Edmonton: Department of Comparative Literature, University of Alberta, 1975: p. 6. Also see Niranjana’s discussion in Siting Translation, p. 57.

[34] I am skipping over large debates within game studies involving the question of the core of gaming: ludology and narratology. Roughly, whether the core of gaming is the ‘play’ or the ‘story.’ I skip this to save space, because it is a dead end that has been generally concluded with the answer of ‘both,’ because ludologists and narratologists are academics, but finally because ‘experience’ encapsulates both play and story.

[35] Carmen Mangiron and Minako O’Hagan. “Game Localization: Unleashing Imagination with ‘Restricted’ Translation.” Journal of Specialized Translation, no. 6 (2006): 10-21. Also see, Minako O’Hagan and Carmen Mangiron. “Games Localization: When Arigato Gets Lost in Translation.” Paper presented at the New Zealand Game Developers Conference, Otago 2004.

[36] Popovič, Dictionary, p. 11.

[37] Lawrence Venuti. “Foundational Statements.” In The Translation Studies Reader: p. 15.

[38] Schleiermacher is working with Dryden’s tripartite: metaphrase, paraphrase and imitation. In his understanding, then, word-for-word has been subsumed (since Jerome) for sense-for-sense, but imitation has been opened up as a larger (maligned) possibility.

[39] Friedrich Schleiermacher. “On the Different Methods of Translating.” In The Translation Studies Reader: p. 49.

[40] Schleiermacher. “On the Different Methods of Translating,” pp. 60-61.

[41] Antoine Berman. The Experience of the Foreign: Culture and Translation in Romantic Germany. Albany: State University of New York Press, 1992: p. 150.

[42] Berman, The Experience of the Foreign, p. 149.

[43] Lawrence Venuti. The Translator’s Invisibility: A History of Translation. 2nd ed. New York: Routledge, 2008 [1994]: p. 15.

[44] Venuti, Translator’s Invisibility, p. 86.

[45] Venuti, Translator’s Invisibility, p. 98.

[46] Venuti, Translator’s Invisibility, p. 276.

[47] Venuti, Translator’s Invisibility, p. 85.

[48] Lawrence Venuti. The Scandals of Translation: Towards an Ethics of Difference. London; New York, NY: Routledge, 1998.

[49] Venuti, Scandals of Translation, p. 87.

[50] J. David Bolter and Richard Grusin. Remediation: Understanding New Media. Cambridge: MIT Press, 1999.

[51] In this later work the metaphor has shifted to interfaces being both windows with immediacy and mirrors with reflection, but it is still connected to remediation with both immediacy and hypermediacy. Jay David Bolter and Diane Gromala. Windows and Mirrors: Interaction Design, Digital Art, and the Myth of Transparency. Cambridge: MIT Press, 2003: p. 82.

[52] Metatitles are an extended form of subtitles that I first discussed in my Master’s thesis; Jerome McGann’s work, including IVANHOE and his Rossetti work, can be found through his website <>; mods are fan/user created game modifications.

[53] Alexander R. Galloway. Gaming: Essays on Algorithmic Culture. Minneapolis: University of Minnesota Press, 2006: pp. 70-84.

[54] Berman, “From Translation,” p. 6.

[55] Jacques Derrida. Glas. Lincoln: University of Nebraska Press, 1986 [1974].

[56] This application is for various ‘smart’ phones and the iPad, but the technology is still not utilized for eReaders. My point is that this lack is not for technological reasons, but for ways that the eReader is both imagined and actualized.

[57] For a general, early look at film translation see: Dirk Delabastita. “Translation and the Mass Media.” in Susan Bassnett and Andre Lefevere eds. Translation, History and Culture. London: Pinter Publishers, 1990.

[58] Lawrence W. Levine. Highbrow/Lowbrow: The Emergence of Cultural Hierarchy in America. Cambridge: Harvard UP, 1988. Referenced in Jennifer Forrest “The ‘Personal’ Touch: The Original, the Remake, and the Dupe in Early Cinema,” In Jennifer Forrest and Leonard R. Koos eds. Dead Ringers: The Remake in Theory and Practice. Albany: State University of New York Press, 2002: p. 102.

[59] As has been stated by many people in the 20th century, there is nothing objective, or reflective, about representation, and there never was for early cinema, however, this belief has never really gone away. See: Ella Shohat and Robert Stam. “The Cinema after Babel: Language, Difference, Power.” Screen 26.3-4, 1985: 35-58.

[60] This is regardless of corruption of subtitles per Abé Mark Nornes. Cinema Babel: Translating Global Cinema. Minneapolis: University of Minnesota Press, 2007.

[61] Arjun Appadurai. Modernity at Large: Cultural Dimensions of Globalization. Minneapolis: University of Minnesota Press, 1996: particularly p. 39.

[62] For Japanese this is particularly a problem; for English this is less of a problem, especially for Americans, due to the assumption that English is a global language.

[63] On MLV see: Ginette Vincendeau. “Hollywood Babel: The Coming of Sound and the Multiple Language Version.” Screen 29.2 (1988): 24-39. On FLV see: Natasa Durovicová. “Translating America: The Hollywood Multilinguals 1929-1933.” In Sound Theory: Sound Practice, edited by Rick Altman, 138-53. New York: Routledge, 1992. Also, see: Nornes, Cinema Babel.

[64] See: Chon Noriega. “Godzilla and the Japanese Nightmare: When “Them!” is U.S.” Cinema Journal 27.1 (Autumn 1987): 63-77.

[65] These are visible in the United States, to which I largely refer, but there is another history within India’s Bollywood (often illegal/unofficial) remake practices.

[66] Ironically, the actual words she uses, ホスト, ホステス and キャバレー, are all foreign loan words in katakana. Thus, even her word choice is based in an awkward schizophrenia between local and foreign.

[67] Abé Mark Nornes. “For an Abusive Subtitling.” Film Quarterly 52, no. 3 (1999): 17-34.

[68] L10n is the industry shorthand for localization. There are 10 letters between the L and the n. In addition to localization, the industry uses i18n as shorthand for internationalization and g11n for globalization.

[69] For a discussion on the demonstration and visibility of these early games, see: Van Burnham. Supercade: A Visual History of the Videogame Age 1971-1984. Cambridge: MIT Press, 2003.

[70] In particular see Michel Foucault on the new regime of power/knowledge through a new way of seeing, and Lisa Cartwright on the problems of medical imaging technologies and truth. See: Lisa Cartwright. Screening the Body: Tracing Medicine’s Visual Culture. Minneapolis: University of Minnesota Press, 1995. Michel Foucault. The Birth of the Clinic: An Archaeology of Medical Perception. New York: Vintage Books, 1975. Marita Sturken and Lisa Cartwright. Practices of Looking: An Introduction to Visual Culture. Oxford; New York: Oxford University Press, 2001.

[71] Mary Flanagan. “Locating Play and Politics: Real World Games & Activism.” Paper presented at the Digital Arts and Culture, Perth, Australia 2007: p. 3.

[72] See: Gérard Genette. Palimpsests: Literature in the Second Degree. Lincoln: University of Nebraska Press, 1997; Gérard Genette. Paratexts: Thresholds of Interpretation, Literature, Culture, Theory. Cambridge; New York, NY: Cambridge University Press, 1997.

[73] LISA is “An organization which was founded in 1990 and is made up mostly of software publishers and localization service providers. LISA organizes forums, publishes a newsletter, conducts surveys, and has initiated several special-interest groups focusing on specific issues in localization.” Bert Esselink. A Practical Guide to Localization. Rev. ed. Amsterdam; Philadelphia: John Benjamins Pub. Co., 2000: p. 471.

[74] LISA quoted in Esselink, A Practical Guide to Localization, p. 3.

[75] Lev Manovich. The Language of New Media. Cambridge: MIT Press, 2001.

[76] On experience as the core equivalence see the work of Carmen Mangiron and Minako O’Hagan: Carmen Mangiron. “Video Games Localisation: Posing New Challenges to the Translator.” Perspectives: Studies in Translatology 14, no. 4 (2006): 306-23; Mangiron and O’Hagan, “Game Localization;” O’Hagan, Minako. “Conceptualizing the Future of Translation with Localization.” The International Journal of Localization (2004): 15-22; Minako O’Hagan. “Towards a Cross-Cultural Game Design: An Explorative Study in Understanding the Player Experience of a Localised Japanese Video Game.” The Journal of Specialized Translation, no. 11 (2009): 211-33; O’Hagan and Mangiron, “Games Localization.”

[77] Esselink, A Practical Guide to Localization, p. 46.

[78] Frank Dietz. “Issues in Localizing Computer Games.” In Perspectives on Localization, edited by Kieran Dunne. Amsterdam; Philadelphia: John Benjamins Publishing, 2006. Also, Mangiron and O’Hagan, “Game Localization.”

[79] The move to CG from live action might also be a contributing factor to the rise of domesticating, replacement localization. Technically, gaming started with live action cut-scenes with big budgets and famous actors in the 1990s (Wing Commander III (1994); Star Wars: Jedi Knight: Dark Forces II (1997)), but it moved to CG cut-scenes using the game engine by the late 1990s and early 2000s (Half-Life (1998), Star Wars: Jedi Knight II: Jedi Outcast (2002)). In part this could be seen as a budget issue, but in part it is an immersion issue, as live action cut-scenes could be considered more jarring due to their difference from the regular game.

[80] This is, of course, ironic as cinema often overdubs the dialogue into the film due to the difficulties of recording clear dialogue when filming.

[81] This is an incredibly rough definition especially due to how ‘piracy’ relates to fan production, modding and copyright.

[82] Piracy is rampant with PC games, due to the ease of duplicating CDs and DVDs, and only slightly better with console games where cartridges are harder to duplicate. For various views on game piracy see: Ernesto. “Modern Warfare 2 Most Pirated Game of 2009.” TorrentFreak. Posted: December 27, 2009. Accessed: June 6, 2010. <>. David Rosen. “Another View of Video Game Piracy.” Kotaku. Posted: May 7, 2010. Accessed: June 6, 2010. <>. In general, also see the blog Play No Evil: Game Security, IT Security, and Secure Game Design Services, particularly the “DRM, Game Piracy & Used Games” category: <,-Game-Piracy-Used-Games>.

[83] Mangiron and O’Hagan, “Game Localization.”

[84] That the equivalent experience comes from, and aims toward, generic cultural attributes of a presumed group, and not a complex, real group, is another problem entirely.

[85] Esselink, A Practical Guide to Localization, p. 4.

[86] Appadurai, Modernity at Large. Toby Miller, Nitin Govil, John McMurria, Richard Maxwell, and Ting Wang. Global Hollywood 2. London: BFI Publishing, 2005. John Tomlinson. Cultural Imperialism: A Critical Introduction. Baltimore: Johns Hopkins University Press, 1991.

[87] Harumi Befu. Hegemony of Homogeneity: An Anthropological Analysis Of “Nihonjinron. Melbourne: Trans Pacific Press, 2001. Stephen Vlastos. Mirror of Modernity: Invented Traditions of Modern Japan. Berkeley: University of California Press, 1998. Tomiko Yoda and Harry D. Harootunian. Japan after Japan: Social and Cultural Life from the Recessionary 1990s to the Present. Durham: Duke University Press, 2006.

[88] I have written about both the politics of Square-Enix as a Japanese company and the International Edition as a political force elsewhere. See: William Huber and Stephen Mandiberg. “Kingdom Hearts, Territoriality and Flow.” Paper presentation at the 4th Digital Games Research Association Conference. Breaking New Ground: Innovation in Games, Play, Practice and Theory. Brunel University, West London, United Kingdom. September, 2009; Stephen Mandiberg. “The International Edition and National Exoticism.” Paper presentation at Meaningful Play. Michigan State University, East Lansing. October, 2008.

[89] There are serious issues regarding labor and these two trends of translation. One is the labor of fans in creating translations. This could be alleviated through micro-payments for the additional localization packages; fans must receive some amount of compensation for their labor, as this situation is dangerously close to exploitation. The second issue is the de-skilling of professional translators and localizers due to the possible disappearance of their work to the fans. This is a real concern, but micro-payments and the necessity for companies to pay localizers for the primary localizations should alleviate this possible de-skilling somewhat. These matters demand more attention than I can give them in the present paper.

[90] See: Joseph Reagle. Good Faith Collaboration: The Culture of Wikipedia. Cambridge: MIT Press, 2010.

[91] Rocketboom Know Your Meme. <>; I Can Has Cheezburger. <>. Hobotopia. <>.

[92] LOLCat Bible Translation Project. <>.

[93] A slightly different translation project that utilized the masses is Fred Benenson’s Kickstarter project Emoji Dick. Benenson used Kickstarter, an online funding platform, to fund a translation of Moby Dick into emoticons using Amazon’s Mechanical Turk. Thousands of individual Mechanical Turk users were paid pennies to translate individual sentences into emoticons, and the results were published. See: <>.

[94] FLOSS Manuals. <>.

[95] Yochai Benkler. The Wealth of Networks: How Social Production Transforms Markets and Freedom. New Haven: Yale University Press, 2006.

[96] RiffTrax. <>

[97] The Leaky Cauldron. <>.

[98] Fan translations and retranslations have both existed over the past decades. For instance, see the ChronoTrigger retranslation <>, the Mother 3 fan translation <>, and the Seiken Densetsu 3 fan translation <>.

[99] There are innumerable examples of each type. I am simply listing ones that come to mind.

[100] The Xbox 360 information comes from Rolf Klischewski. IGDA LocSIG mailing list. May 31, 2010.

[101] While Dyer-Witheford and De Peuter would likely dismiss this industry-integrated solution as an apology for Empire, I prefer to think of it as a dialogic solution. See: Nick Dyer-Witheford and Greig De Peuter. Games of Empire: Global Capitalism and Video Games. Minneapolis: University of Minnesota Press, 2009. Mikhail Bakhtin, The Dialogic Imagination: Four Essays. Austin: University of Texas Press, 1981.

Utopian Thought Experiment #7

1. A multilingual or omnilingual linguistic setup for the game environment [1]. The more languages the better; the more subtitles the better.
2. Individual characters are tied to their various languages and subtitles.
3. Current statistics of national and spoken languages within any given nation are tied to the user-determined ‘locale.’

1. So here I’m going to sprinkle a bit of abuse (Derrida -> Lewis -> Nornes) on top of the utopia [2]. Not just the languages, but what one could do with languages to rob people of their safe homeness: their belief that they are alone with their friends and family and don’t need to deal with the world.
2. The game reads the locale, as usual, and loads the appropriate localization. I’m in “United States” and my language is English. It loads appropriately. Or does it?
3. The United States of America has one [ed: de facto, and this is problematic, I know] official language: English. Language chauvinism is rife and often linked to nationalistic/anti-foreign fervor. As a result, the fact that ~25% of all people in the United States speak a language other than English at home goes unmentioned, or at least ignored [3].
4. The game reads the current statistics of the determined locale and finds that 75% of the populace speaks English, 12% speaks Spanish, and then there is a massive host of other native, exilic, diasporic, and immigrant languages. The game allocates these percentages by rounding up.
5. The player must then interact with their locale not as a safe environment, but as an unhappily statistical environment (I am loath to say ‘real’).
6. This could work in the US as above, but it could also work elsewhere. Japanese in Japan is not as homogeneous as the nation would like to believe, nor is Mandarin in China, or Hebrew in Israel.

[1] This refers to an accessible and user-expandable plethora of languages as opposed to the standard variation of one language per locale, or one language loaded as determined by the OS.
[2] Lewis, Philip E. “The Measure of Translation Effects.” In Difference in Translation, edited by Joseph F. Graham. Ithaca: Cornell University Press, 1985; Nornes, Markus. Cinema Babel: Translating Global Cinema. Minneapolis: University of Minnesota Press, 2007.
[3] These figures are from the 2000 census, as the 2010 data have not yet been uploaded to the web.

The First ‘Actual’ [International Edition]

At the Tokyo Game Show, Square-Enix informed the public about the release of Kingdom Hearts: Birth By Sleep Final Mix. Like the rest of the International Editions, this will include English voices; unlike Kingdom Hearts II: Final Mix+, it will likely not include the theater mode with both English and Japanese cinematics; and unlike all previous International Editions, this one will be playable in other regions, which is to say, internationally.

What is known so far is that it will be released with the North American edition’s (English) voice acting, and will have a sticker system, a new boss and new enemies, and possibly a secret ending. This mostly comes from the unrecordable video in the Square-Enix booth at the Tokyo Game Show and the Famitsu page [1], both of which have been blogged across the net. Beyond these details most is unknown, but a few things can be deduced or guessed.

Because Birth By Sleep is a PlayStation Portable game, a few interesting things can happen. The first is that the PSP’s UMD holds less data than a DVD. Therefore, the direct implementation of both voice tracks is unlikely (or impossible). This means that the theater mode from KH 2:FM+ will likely not happen, and it also means that there will not be multiple selectable vocal tracks, which only Star Ocean: The Last Hope International (for PS3) has had in the past. The most common thread across the English blogs following this line of thinking is that the game has no release date in the US and will most likely not be brought over, like the other Final Mixes. However, what they’re missing is that because Birth By Sleep is on the PSP it becomes easily playable internationally, and the recent Sony announcement of cross-region sales on the PlayStation Store [2, 3] makes this even more interesting.

Unlike the PS, PS2, and PS3, the PSP does not use region-encoded data disks, which means that a player has almost no restrictions on what s/he can play. What remains a restriction is availability. However, with Sony’s cross-country sales implementation this, too, will be less of an issue. Less, because what is put up on the store is a limited selection of what has actually been released on disk. The fact that all of two games were uploaded to the store in the first update shows the problem here.

However, regardless of the PlayStation Store’s implementation, people around the world will be able to play the new “International Edition,” Kingdom Hearts: Birth By Sleep Final Mix, and likely be upset with its naturalized global English. Of course, such availability/downloadability could force Square-Enix to make available truly International Editions that fully support multiple languages through downloading (after all, there is no size limit to a memory stick). This is, of course, an unlikely eventuality, but I can only hope…


  • [1] ファミ通.com [Famitsu.com]. “東京ゲームショウ特集：始まりへとつながる眠りの物語が再び紡がれる『キングダム ハーツ バース バイ スリープ ファイナル ミックス』” [Tokyo Game Show Special: The tale of the sleep that leads to the beginning is spun once more: Kingdom Hearts Birth by Sleep Final Mix]. Accessed: September 25, 2010.
  • [2] Chen, Grace. Playstation.Blog. “PlayStation Store Update.” Posted: September 20, 2010. Accessed: September 25, 2010.
  • [3] Kotaku. “The PlayStation Store to Start Selling Japanese Imports This Month.” Posted: September 16, 2010. Accessed: September 25, 2010.

Heavy Rain in Japan

Heavy Rain has recently been lauded for its adult nature and its story/narrative. What hasn’t been noted in the US game press is that the characters are very much Western. Such an element in a game released in the US is unremarkable, and as such it goes unmarked.

I have not yet looked into the press reaction in Japan, but the game itself has had little localization from what I can see. Or rather, the characters, vehicles, and setting are all the original, which is to say not Japan. Further, the language the characters speak and think in is still English. Essentially, it’s a very foreignizing translation/minimal localization.

According to the industry and most localization experts who write in English about Western localizations, such a foreignizing translation is bad and will be bad for the eventual take. According to the random Japanese teenager playing the demo in Tsutaya, it’s a resigned fact of life: いや、外国のゲームだから別に… (“Well, it’s a foreign game, so it doesn’t really matter…”). And when asked if he’d rather the voices be in Japanese, he didn’t have an opinion.

Obviously, a single player is hardly a good sample for anything other than a musing blog entry, but there’s something interesting about the lack of care. The blunt knowledge of, and lack of care about, the fact that it’s a foreign game is very different from localization’s drive to hide a game’s production home.

Do we really want games that just attempt to represent our locale? Is that good for us?

Censorship vs. Localization

There has been varied but relatively constant noise from the World of Warcraft community about the Chinese release of the Wrath of the Lich King expansion. Said one way, it is simply a year late. This is normal practice for some operating systems or languages, but for an MMO expansion pack it is a bit more visible, and with angry, waiting fans it’s even more visible.

The thing about WotLK is that it has been ready for release for a year, but has gotten hung up in requirements put forth by the Chinese government regarding its release. These requirements have been dubbed censorship by the fanbase (particularly those on Kotaku and MMO-Champion), but the interesting element is that they are simply localization [L10n] issues from a different angle.

The main points of contention are skeletons: skeletons under cauldrons and against walls, skulls on spikes, skulls on weapons, skeletal knees poking out of zombie bodies, giant bone animals, and I’m not sure about skeletons in armor. The claimed ideological basis for, and defense of, the censorship is that ancestor veneration, signified by being good to the bones of ancestors, is difficult when you’re going around destroying those bones/skeletons/zombies or putting them on weapons or spikes. Of course, there’s a slight problem when the majority of the expac deals with necromancy and its problems (via the Lich King). In short, the narrative of WoW: WotLK is hard to localize to China.

And yet, it has been done. Skulls are removed, zombies have no bones, and bone dragons and bone griffons are transformed into flashy ghost dragons and griffons. Is this a sign that, indeed, narrative does not matter? Or is it a sign that millions of ravenous players will force certain hands, and this is the best the Chinese government (particularly the ministry in charge of publications and press (GAPP) and the ministry of culture (MOC)) is going to get (the fact that other, more local MMOs such as Perfect World were not put through such direct censorship, but multinational Blizzard’s MMO was, is, perhaps, telling)? Or is it just a sign that L10n really is the way things work now, and that, like translation only becoming visible through its mistakes, L10n is only visible when it doesn’t happen ‘properly,’ which is to say when a game isn’t localized enough and is thus put through additional censorship? Games that are localized enough (self-censored in both the production and L10n phases) do not need censorship; games that are not localized enough get censored before release.

This logic seems to be mirrored in calls to limit indigenous exclamations in Final Fantasy XIII (Koncewicz), which would make L10n easier, or at least possible, given the extensiveness of these noises (one of many places where you can see such unlocalized noises is The Legend of Zelda: Spirit Tracks). But what such calls ask for goes part and parcel with the L10n process as internationalization [i18n], the production-level planning for L10n. Both Koncewicz and guides to L10n indicate that making assets easily changeable is best practice for i18n, as L10n can then more easily push the product into some particular locale. However, while Koncewicz indicates this was the intention of FFXIII as an internationally aimed game, it seems to be opposed by the very embeddedness of certain games in certain cultures (Subarashiki kono sekai, which is subtitled It’s A Wonderful World in Japanese but localized as The World Ends With You in English, is an interesting example). Thus, the complaints about FFXIII are less against L10n than against Square-Enix’s i18n process and the idiosyncrasies that they do not want to delete from FFXIII and other games.

However, in the case of WotLK, the company releasing WoW in China wants to censor, but did a poor job self-censoring in the L10n process, and Blizzard in fact did not i18n ‘enough’ in the development process. One might also extend this claim by saying their recent, much-lauded Starcraft II L10n is a direct step up from the failure of localizing WotLK for China. The ‘enough’ here is actually problematic for two reasons. One is that they are being forced to change the narrative level significantly, and if such alterations are in fact part of the L10n, can one even call the game a translation? If you don’t fight a Death Knight, a Lich, and a Bone Dragon, are you really playing Wrath of the Lich King? Are WoW: WotLK US/EU and WoW: WotLK China the same game? The second is that while WotLK was hounded by the Chinese government, the locally developed (multinational, but of Chinese origin) Perfect World Online was released with skeletons available for slaying. So how much of i18n and L10n is being enforced where it should not be? How much cultural particularity or universality is being reinforced by political clout or business acquiescence where it is actually a nonexistent thing?


  • Koncewicz, Radek. “Localizing Exclamations in Final Fantasy XIII.”
  • Yang, Mickey. “Pics: What’s Changed in Chinese Version Wrath of the Lich King.” Posted: August 16, 2010.

On Localization

After reading Heather Chandler’s Game Localization Handbook I’ve come to realize that what I am suggesting is not impossible and, despite the LocSIG response, not particularly problematic. It is, however, an as-yet-unset standard, especially in the US, but also in other, smaller linguistic locales and among smaller companies. However, I also cannot emphasize enough that it is not economic suicide.

Essentially, the suggestion is to enable multilingual applications in an open way. Such multilingual versions are becoming more reasonable as the international market is further acknowledged. It is not unreasonably expensive for the large American/English-based developers, for whom i18n/L10n is a viable/necessary strategy. It simply requires an extra step of planning not only for L10n-friendliness, but for integration. As the companies controlling releases, Sony, Nintendo, and Microsoft can control standards in certain ways. One way would be to require i18n as a standard. Such a standard would be beneficial for larger companies, as it would entail the greater possibility of foreign releases, even as gray market releases.

Further, if integrated in a patchable model, the gray market becomes less sensible, as games can be sold ‘language-bare’ and localized assets can then be purchased in micro-payments. This allows the fanatics to get what they want and the companies to monitor things.

In the case of smaller companies it could be seen as problematic, as they must also do more work, but as things become more international, fan-based L10n might happen more. An example of this is Basilisk Games’ ‘language packs’ for Eschalon Book II. Such language packs are partial localizations (if that), but they might be extended to fuller localizations by changing non-linguistic elements in the future. For postcolonial/minority languages, forcing internationalization is a problem in that it forces less defensible positions. However, in order to force the dominant sides to be slightly more international, the international standard must be made on all sides.

The trick is in asset integration. As long as the asset schema allows an open-ended number of language slots, there should be no problem. Additional languages simply extend the list, in the same way that OS language integration shows the installed options. Other, uninstalled languages are a grayed-out option: neither out of sight, nor out of mind.
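A minimal sketch of such a slot schema follows, assuming a hypothetical `LanguagePack`/`LanguageRegistry` design rather than any actual engine’s API; the pack codes and names are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class LanguagePack:
    code: str        # e.g. "pl-PL" (hypothetical naming scheme)
    name: str
    installed: bool = False

@dataclass
class LanguageRegistry:
    """Open-ended list of language slots: installed packs are selectable,
    while known-but-uninstalled packs stay visible as grayed-out entries."""
    packs: dict = field(default_factory=dict)

    def register(self, code, name, installed=False):
        self.packs[code] = LanguagePack(code, name, installed)

    def install(self, code):
        self.packs[code].installed = True  # e.g. after a micro-payment download

    def menu(self):
        # Every registered language appears; uninstalled ones are flagged
        # so the UI can gray them out instead of hiding them.
        return [(p.name, "available" if p.installed else "grayed-out")
                for p in self.packs.values()]

registry = LanguageRegistry()
registry.register("en-US", "English", installed=True)
registry.register("pl-PL", "Polski")
registry.register("ko-KR", "한국어")
```

The design choice worth noting is that the registry never deletes an entry: an uninstalled language remains enumerable, which is exactly what keeps it “neither out of sight, nor out of mind.”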

The available spread of Loc Kits would also allow further translations for political and/or alternate linguistic efforts.

The fact of play is universal, but different people get their jollies in different places. As I said a few months ago, some people like masocore. Well, some people like Polish audio with German subtitles, or Korean audio and English subtitles, or English subtitles and no audio. Having the option is beneficial for making money in international markets. Who knows what people really want, what they’ll use if they have it, and what is best?

And of course, further important is the belief that there are long-term benefits to players being acculturated to non-locales. That is not happening for some (the US), but it is for others. Such an imbalance has global/political ramifications beyond fun.

If global culture is really supposed to bring us together, it should not be in a way determined by businesses deciding what becomes a locale and forever separating groups based on those locales. Industry determinations are not simply natural: they affect the groups as well.

A lot of this is discussed in Anthony Pym’s The Moving Text, but it is rarely taken up in other translation or localization writing. It is important to discuss these things, especially before they are standardized.

Referenced Books:

  • Chandler, Heather Maxwell. The Game Localization Handbook. Hingham, Mass.: Charles River Media, 2005.
  • Pym, Anthony. The Moving Text: Localization, Translation, and Distribution. Amsterdam; Philadelphia: John Benjamins Pub. Co., 2004.

“Space Invaders”

To whom or what does “Space Invaders” appeal? It’s a simple question, yet also completely unanswerable. First, one must ask: which space invaders? Are the capitals important? Do I refer to the 1978 arcade box? The individual sprites coming down eternally? The nihilistic fight that is playing a game that cannot be won? Or perhaps it’s one of the related texts/objects? Perhaps it’s the Retro Sabotage flash game that shows this impossibility? Or one of the many web or portable remakes, perhaps Taito’s 2009 Infinity Gene? Or might I be referring to the street/game artist of the same name who places the pixelated characters in city spaces around the world? In a simple answer to what should be a simple question, I’ll simply say I refer to all at once, because that’s how such intertextuality works. There is an original, but it may not be the important point. They all, after a certain point, refer to each other.

This meandering began when a friend mentioned photographing invaders. As she studies street art, my first guess was that she was talking about the artist and said artist’s creations, but when I went to find some sort of image to confirm this (searching for invaders without effort; catching aliens by picture), I opened an entirely different can of worms, or, to follow what soon will be an unwieldy metaphor, a new wave lining up at the top of the screen. However, at the end of this meandering I realized that it’s all the same interwoven meaning.

Invader’s website has a global listing of invaded cities. They are places where works exist, but San Diego, the city in which I live and in which my friend was catching aliens, is not there. One answer is that the site has not been updated, but it will be soon. Following this meaning, the list becomes a sort of status: which cities are good enough to be graced by the artist’s work?

People on the Yelp forum discussing the artwork certainly point to this: whether the work is fake or not (another answer to San Diego’s absence from the list), how San Diego has gained this honor (the street art exhibit at the San Diego Museum of Contemporary Art), and that the city has become “bona fide, betches” (Yelp user). Status is certainly tied up in the meaning of Invader’s space invaders. However, there are other meanings in the work: the game, nostalgia, migration, and aliens. All of these are tied into the work and the general resurfacing and re-imagining of meaning.

Space Invaders holds a special place for the 20-40 year old generations as one of the early cabinet games of the golden age of gaming. Like most golden age games, such as Donkey Kong, it has memorable characters, but unlike Donkey Kong‘s Jumpman, who was reborn as Mario, Space Invaders‘ player character is rather unmemorable. While Space Invaders had sequels, they are barely remembered. It’s hard to start a franchise when the plot and player are destined for death. However, Space Invaders did start a genre. Hundreds of shooter games followed with equally unmemorable player characters, but ironically these generally had forgettable enemies as well. What Space Invaders did was create a long chain of names, signifiers (1942, R-Type, Gradius, etc.), that all pointed back to the original signified, Space Invaders, and its memorable, invading army.

The game has thus remained in cultural memory, sparked anew with each further generic horse-beating, as the eternal good fight against an unnamed (but memorable) enemy. However, the past few years have brought a different resurgence: from genre and allusion back to direct reference. The retro/nostalgic trend of the 2000s has brought with it hosts of remakes and demakes, remixes and repositionings. André the Giant becomes a poster-boy for frat boys, Obama spells hope for the masses, beautifully relaxing Mario Clouds float by on a hacked ROM, and Space Invaders goes contemporary political commentary with its pixelated enemy sprites.

Invader’s invaders work on multiple levels. They refer back to the nostalgia of the 1970s and its memorable characters, but they also tie into fears of global migration/movement (invasion if you will) prevalent at the current moment. The invaders are aliens, the same as the “illegals” in the U.S. news and political media. They come in, attack, kill, take over the planet, and of course steal jobs, but they’re so memorable, bordering on cute. Wait, that didn’t come out right, or did it?

In the late 20th and early 21st centuries, human movement over borders has reached an unprecedented high, if only because the borders have become more pronounced. An equal amount of movement has always existed (small distances, and long distances when borders were less national and less guarded), but not as it is now: pronounced, fenced, and racial/nationalistic. What might have been normal movement has become illegal border crossing, and those who cross become illegals. Aliens. Invaders.

Invader’s work is about merging the current fear of the illegal (in play with the original game, all of its generic followers, and almost all games in general; the link of Arabs to aliens in most modern FPS games is particularly troubling and obvious) with the loving nostalgia of the past. People like these invaders, but it goes a step further. As the Yelpers demonstrated, invaders make a city. Where at one point it was a skyscraper, a sports team, or a museum, now it is an invader. A city has made it when it has been invaded.

But am I talking about space invaders or illegals right now? Are they?

  • Invader. Space Invaders. Accessed online June 17, 2010. <>
  • Museum of Contemporary Art San Diego. Viva la Revolucion: A Dialogue with Urban Landscape. Accessed online June 17, 2010. <>
  • Retro Sabotage: A Strange Kind of Love. Target: Space Invaders: Invasion. <>
  • Yelp. “Space Invader San Diego.” Accessed online June 17, 2010. <>

A Note/Warning on My Position

While I advocate for particular strategies and theories of translation, I do so in the historical context of 21st century US sociopolitical irresponsibility and dominance.

I do not speak as a minority, nor as a reader of a language fighting for survival and self-determination. Rather, I write as an early 21st century US citizen who has seen ‘his’ country at war for a decade. A decade in which significant backlash has resulted against people who look or act different, regardless of their relationship to the ‘enemy.’

The US has fought the wars in Afghanistan and Iraq against an undefined terrorist that can best be summed up as ‘different.’ America is at war with difference: “those who oppose our way of life.” And one of the (many) ways this insane fear of, and aggression against, the cultural other has been reproduced to massive levels has been in the systematic representation of the other through and in translation.

A simple result of the discursive regime of domesticating translation (Venuti) is that everybody else – the foreign in books and other media – looks like us. As all translation, all media made by anybody else, is made to look as if it were made by us, we never see difference. All that is good looks like us. All it then takes is the mass display not only of difference, but of difference that “hates us,” to spark 10 years of war.

I believe I do not overemphasize the importance of changing the way translation happens in the US.

  • Venuti, Lawrence. The Translator’s Invisibility: A History of Translation. 2nd ed. New York: Routledge, 2008 [1994].
  • —. The Scandals of Translation: Towards an Ethics of Difference. New York: Routledge, 1998.

Destabilization of the Translator | Destabilization of the Translation

There are two new trends in translation that I would like to discuss. Both are postmodern and intentionally unstable, but they have opposite instabilities. One trend destabilizes the translator, and the other destabilizes the translation.

The destabilization of the translator involves multiple translators but a single translation. It has its history in the Septuagint, but its present locus is the division of tasks and the post-Fordist, assembly-line form of production. Like the Septuagint, where 72 imprisoned scholar-translators translated the Torah identically through the hand of God, the new trend relies on the multiplicity of translators to confirm the validity of the produced translation. The difference is that while the Septuagint produced 72 results that were the same, the new form of translation produces one result that arguably combines the knowledge of all translators involved. This trend can be seen in various new media forms and translation schemes such as wikis, the Lolcat Bibul, Facebook, and FLOSS Manuals.

Wikis (from the Hawaiian word for “fast”) are a form of distributed authorship. They exist through the effort of a user base that adds and subtracts small sections on individual pages. One user might create a page and add a sentence, another might write three more paragraphs, a third may edit all of the above and subtract one of the paragraphs, and so on. No single author exists, but the belief is that the “truth” will come out of the distributed authority of the wiki. It’s a very democratic form of knowledge production and authorship that certainly has issues, but for translation it enables new possibilities. While wikis are generally produced in a certain language and rarely translated (as the translation would not be able to keep track of the track changes), the chunk-by-chunk form of translation has been used in various places.

The Lolcat Bibul translation project is a web-based effort to translate the King James Bible into the meme language used to caption lolcats (amusing cat images). The “language” of the meme is a form of pidgin English in which present tense and misspellings are highlighted for humorous effect. Examples are “I made you a cookie… but I eated it,” “I’z on da tbl tastn ur flarz,” and “I can has cheezburger?” [1] The Lolcat Bibul project facilitates the translation from King James verse to lolcat meme. For example, Genesis 1:1 is translated as follows:

KING JAMES: In the beginning God created the heaven and the earth
LOLCAT: Oh hai. In teh beginnin Ceiling Cat Maded teh skiez An da Urfs, but he did not eated dem. [2]

While the effort to render the Bible in lolspeak is either amusing or appalling depending on your personal outlook, what is important is the translation method itself. The King James Bible exists on one section of the website, and in the beginning the lolcat side was blank. Slowly, individual users took individual sections and verses and translated them according to their interpretation of lolspeak, thereby filling the lolcat side. These translated sections could also be changed and adapted as users altered words and ideas. No single user could control the translation, and any individual act could be opposed by another translation. The belief is that if 72 translators and the hand of God can produce an authoritative Bible, surely 72 thousand translators and the paw of Ceiling Cat can produce an authoritative Bibul.

FLOSS (Free Libre Open Source Software) Manuals and translations are a slightly more organized version of this distributed trend [3]. FLOSS is theoretically linked to Yochai Benkler’s “peer production,” where people do things for different reasons (pride, cultural interaction, economic advancement, etc.), and both the manuals and translations capitalize on this distribution of personal drives. Manuals are created for free and open source software through both intensive drives, where multiple people congregate in a single place and hammer out the particulars of the manual, and follow-up wiki-based adaptations. The translations of these manuals are then enacted as a secondary practice in a similar manner. Key to this open translation process are the distribution of work and the translation memory tools (available databases of used terms and words) that enable such distribution, but also important is the initial belief that machine translations are currently unusable, which makes such open translations necessary.

Finally, Facebook turned translation into a game by creating an applet that allowed users to voluntarily translate the individual strings of linguistic code that they used on a daily basis in English. Any particular phrase, such as “[user] has accepted your friend request” or “Are you sure you want to delete [object]?”, was translated dozens to hundreds of times, and the most recurring variations were implemented in the translated version. The translation was then subject to further adaptation and modification as “native” users joined the fray once Facebook officially expanded into alternate languages. Thus, <LIKE> would have become <好き>, but was transformed into <いいね!> (good!). Not only did this process produce “real” languages, such as Japanese, but it also enabled user-defined “languages” such as English (Pirate), with plenty of “arrrs” and “mateys.”
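The voting mechanism described above can be sketched as follows. This is only the counting logic, not Facebook’s actual implementation, and `CrowdTranslator` is a hypothetical name; it uses the <好き>/<いいね!> example from the text as its data.

```python
from collections import Counter, defaultdict

class CrowdTranslator:
    """Each user submits a candidate translation for a source string;
    the most recurring variant becomes the implemented one."""
    def __init__(self):
        self.votes = defaultdict(Counter)

    def submit(self, source, translation):
        self.votes[source][translation] += 1

    def consensus(self, source):
        # most_common(1) returns [(winning_variant, count)]
        return self.votes[source].most_common(1)[0][0]

ct = CrowdTranslator()
ct.submit("Like", "好き")       # a literal rendering
ct.submit("Like", "いいね!")    # the variant users actually preferred
ct.submit("Like", "いいね!")
```

The key property is that no single submission is authoritative: the “translation” is simply whichever variant accumulates the most votes, which is the destabilization of the translator in miniature.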

Wikis, FLOSS, and Facebook are translations with differing levels of user authority, but they all work on the premise that multiple translators can produce a singular, functioning translation. In the case of Facebook this functionality and user empowerment is highlighted; for FLOSS, user empowerment through translation and publishing are one focus, but a second focus is the movement away from machine translation; in all cases, but wikis particularly, the core belief is that truth will emerge out of the cacophony of multiple voices, and this is the key tenet of the destabilization of the translator [4].

The other trend is the destabilization of the translation. This form of translation has roots in the post-divine Septuagint, where all translation is necessarily flawed or partial. Instead of the truth emerging from the average of the sum of voices, truth is the build-up: it is footnotes, marginal writing, and multiple layers. Truth here is the cacophony itself. The ultimate text is forever displaced, but the mass intends to eventually lead to the whole (whether it gets there or not is a separate matter for Benjamin, Derrida, and the like).

While this style of translation is less enacted at present, it is not completely new. Side-by-side pages with notes about choices are one centuries-old variation (Tyndale’s Biblical notations, Midrash, and side-by-side poetry translations), the DVD language menu with its multiple subtitle tracks is another, and these lead to new possibilities for multi-language software translations.

While the Septuagint myth leads to the creation of a single text, 72 translators translating a single text would in reality produce 72 different translations. The attempt to stabilize this inherent failure of translation argues that one of those translations is better and should be used, though it can be replaced if a better translation comes around. The Bible translation is always singular, but it changes. Similarly, the Odyssey is translated quite often, but the translations are always presented alone. They are authoritative. In contrast, Roland Barthes’s comparison of modern works and postmodern texts and Foucault’s discussion of the authorial function both lead toward a destabilization of the author [5]. This discussion can be linked to translation studies’ discussions of author and translator intellectual production. The destabilizations of translators and of translations build off of both of these postmodern traditions, but the latter trend attempts to avoid weighing in on the issue by simultaneously exhibiting the conflicting iterations.

The main difficulty of the destabilization of the translation is the problem of exhibiting multiple iterations at one time in a meaningful way. How can a reader read two things at once, or, with film, how can a viewer understand two soundtracks at once? Books and films provide multiple examples of how to deal with such an attention issue. In literary works, endnotes are a minimal example of such attention divergence. Endnotes do not immediately compete for the reader’s attention, but the note markers indicate the possibility of voluntary switching. Footnotes are a slightly more aggressive form of attention management, as they tell the reader to switch focus to the bottom of the page, a smaller jump that is more likely to happen.

For film, subtitles, which layer the filmic text with both the original dialogue and the authorial translation, are a close equivalent to endnotes: they split the viewer's attention but do not force it toward a particular place. It is entirely possible to ignore subtitles regardless of complaints against them (intertitles filling the screen would be much harder to ignore). Finally, the benshi, a live simultaneous translator/explainer, is an early-to-mid-twentieth-century Japanese movie theater tradition that most resembles the more aggressive footnote, as the benshi's explanatory voice competes with the film's soundtrack for the audience's aural attention [6].

Unlike websites such as Amazon, which maintain language-dedicated national pages (.com and its regional counterparts) and block orders from addresses outside their national coverage, or services such as the Sony PSPGo Store, which disallows the purchase of alternate-region software, some sites offer pull-down language options that change the language while remaining on the same page, or provide multiple linguistic versions for purchase.

With digital games, the localization process has traditionally replaced one language, with its library of accompanying files, with another. However, as computer memory increases, the choice of one language or another becomes less of an issue, and multiple languages are shipped with the core software. This gives rise to the language option, where the game can be flipped from one language to another through a menu. Most games put this choice in the options menu at the title screen, but a few allow the user to switch back and forth mid-game. The simultaneous visibility of multiple languages, or a language switch button, would be further advancements toward the destabilization of translations.
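The shift described above, from swapping out a language library at localization time to shipping every language with the core software, can be sketched in a few lines. The sketch below is a minimal, hypothetical string-table approach (the dictionary, class, and key names are all assumptions, not any particular game's implementation): every language table resides in memory, so a mid-game switch is just a change of the active table.

```python
# Hypothetical string tables shipped together with the core software.
STRINGS = {
    "en": {"start": "Start Game", "quit": "Quit"},
    "ja": {"start": "ゲームスタート", "quit": "終了"},
}

class Localizer:
    """Holds the active language and resolves display text by key."""

    def __init__(self, language="en"):
        self.language = language

    def set_language(self, language):
        # A mid-game switch only flips the active table; nothing is
        # reloaded, since all languages already ship with the game.
        if language not in STRINGS:
            raise ValueError(f"no string table for {language!r}")
        self.language = language

    def text(self, key):
        # Fall back to English when a translation is missing.
        return STRINGS[self.language].get(key, STRINGS["en"][key])

loc = Localizer("en")
print(loc.text("start"))  # Start Game
loc.set_language("ja")
print(loc.text("start"))  # ゲームスタート
```

A title-screen-only language option would simply hide `set_language` behind the options menu; exposing it during play is what makes the back-and-forth switching, and thus the simultaneous availability of conflicting translations, visible to the player.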


[1] Rocketboom Know Your Meme; I Can Has Cheezburger; Hobotopia.

[2] LOLCat Bible Translation Project.

[3] FLOSS Manuals.

[4] This conceptualization relates to Bolter and Grusin’s hypermediacy. Bolter, J. David, and Richard Grusin. Remediation: Understanding New Media. Cambridge, Mass.: MIT Press, 1999.

[5] Barthes, Roland. “From Work to Text.” In The Cultural Studies Reader, edited by Simon During, Donna Jeanne Haraway and Teresa De Lauretis. London: Routledge, 2007; Foucault, Michel. “What Is an Author?” In The Essential Foucault: Selections from Essential Works of Foucault, 1954-1984, edited by Paul Rabinow and Nikolas S. Rose. New York: New Press, 2003.

[6] Nornes, Markus. Cinema Babel: Translating Global Cinema. Minneapolis: University of Minnesota Press, 2007.