A Note/Warning on My Position

While I advocate for particular strategies and theories of translation, I do so in the historical context of 21st century US sociopolitical irresponsibility and dominance.

I do not speak as a minority, nor as a reader of a language fighting for survival and self-determination. Rather, I write as an early 21st-century US citizen who has seen ‘his’ country at war for a decade, a decade in which significant backlash has resulted against people who look or act different, regardless of their relationship to the ‘enemy.’

The US has fought the wars in Afghanistan and Iraq against an undefined terrorist that can best be summed up as ‘different.’ America is at war with difference: “those who oppose our way of life.” And one of the (many) ways this insane fear of, and aggression against, the cultural other has been reproduced on a massive scale is the systematic representation of the other through and in translation.

A simple result of the discursive regime of domesticating translation (Venuti) is that everybody else – the foreign in books and other media – looks like us. As all translation, all media made by anybody else, is made to look as if it were made by us, we never see difference. All that is good looks like us. All it then takes is the mass display not only of difference, but difference that “hates us,” to spark 10 years of war.

I do not believe I overemphasize the importance of changing the way translation happens in the US.

  • Venuti, Lawrence. The Translator’s Invisibility: A History of Translation. 2nd ed. New York: Routledge, 2008 [1995].
  • —. The Scandals of Translation: Towards an Ethics of Difference. New York: Routledge, 1998.

Destabilization of the Translator | Destabilization of the Translation

There are two new trends in translation that I would like to discuss. Both are postmodern and intentionally unstable, but they have opposite instabilities. One trend destabilizes the translator, and the other destabilizes the translation.

The destabilization of the translator involves multiple translators but a single translation. It has its history in the Septuagint, but its present locus is the division of tasks and the post-Fordist, assembly-line form of production. Like the Septuagint, where 72 imprisoned scholar-translators translated the Torah identically through the hand of God, the new trend relies on the multiplicity of translators to confirm the validity of the produced translation. The difference, however, is that while the Septuagint produced 72 results that were the same, the new form of translation produces one result that arguably combines the knowledge of all translators involved. This trend can be seen in various new media forms and translation schemes such as wikis, the Lolcat Bibul, Facebook, and FLOSS Manuals.

Wikis (from the Hawaiian word for “fast”) are a form of distributed authorship. They exist due to the effort of their user base, which adds and subtracts small sections on individual pages. One user might create a page and add a sentence, another might write three more paragraphs, a third may edit all of the above and subtract one of the paragraphs, and so on. No single author exists, but the belief is that the “truth” will come out of the distributed authority of the wiki. It is a very democratic form of knowledge production and authorship that certainly has issues, but for translation it enables new possibilities. While wikis are generally produced in a certain language and rarely translated (as a translation would not be able to keep up with the constant changes), the chunk-by-chunk form of translation has been used in various places.

The Lolcat Bibul translation project is a web-based effort to translate the King James Bible into the meme language used to caption lolcats (amusing cat images). The “language” itself is a form of pidgin English in which mangled tense and misspellings are highlighted for humorous effect. Examples are “I made you a cookie… but I eated it,” “I’z on da tbl tastn ur flarz,” and “I can has cheezburger?”[1] The Lolcat Bibul project facilitates the translation from King James verse to lolcat meme. For example, Genesis 1:1 is translated as follows:

KING JAMES: In the beginning God created the heaven and the earth
LOLCAT: Oh hai. In teh beginnin Ceiling Cat Maded teh skiez An da Urfs, but he did not eated dem. [2]

While the effort to render the Bible in lolspeak is either amusing or appalling depending on your personal outlook, what is important is the translation method itself. The King James Bible exists on one section of the website, and in the beginning the lolcat side was blank. Slowly, individual users took individual sections and verses and translated them according to their interpretation of lolspeak, thereby filling the lolcat side. These translated sections could also be changed and adapted as users altered words and ideas. No single user could control the translation, and any individual act could be opposed by another translation. The belief is that if 72 translators and the hand of God can produce an authoritative Bible, surely 72 thousand translators and the paw of Ceiling Cat can produce an authoritative Bibul.

FLOSS (Free Libre Open Source Software) Manuals and their translations are a slightly more organized version of this distributed trend [3]. FLOSS is theoretically linked to Yochai Benkler’s “peer production,” where people do things for different reasons (pride, cultural interaction, economic advancement, etc.), and both the manuals and translations capitalize on this distribution of personal drives. Manuals are created for free and open source software through both intensive drives, where multiple people congregate in a single place and hammer out the particulars of the manual, and follow-up wiki-based adaptations. The translations of these manuals are then enacted as a secondary practice in a similar manner. Key to this open translation process are the distribution of work and the translation memory tools (available databases of used terms and words) that enable such distribution; also important is the initial belief that machine translations are currently unusable, which makes such open translation necessary.
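A translation memory of the kind mentioned above is, at its core, a lookup table of previously approved segment translations, consulted by fuzzy match so that distributed translators stay consistent. The following is a toy sketch of that idea; the class, the matching strategy, and the sample strings are illustrative assumptions, not FLOSS Manuals’ actual tooling.

```python
# Toy sketch of a translation-memory (TM) lookup used to keep distributed
# translators consistent. All names and the fuzzy-matching strategy are
# hypothetical illustrations of the general technique.
import difflib

class TranslationMemory:
    def __init__(self):
        self._entries = {}  # source segment -> approved translation

    def add(self, source, translation):
        self._entries[source] = translation

    def lookup(self, source, threshold=0.8):
        """Return (matched_source, translation, score) for the closest
        previously translated segment, or None if nothing is close enough."""
        best = None
        for seen, translated in self._entries.items():
            score = difflib.SequenceMatcher(None, source, seen).ratio()
            if best is None or score > best[2]:
                best = (seen, translated, score)
        if best is not None and best[2] >= threshold:
            return best
        return None

tm = TranslationMemory()
tm.add("Click the Save button.", "Cliquez sur le bouton Enregistrer.")
# A near-identical segment (missing the final period) still matches.
match = tm.lookup("Click the Save button")
```

A second translator encountering a nearly identical sentence is shown the earlier rendering, which is how shared terminology emerges without central control.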

Finally, Facebook turned translation into a game by creating an applet that allowed users to voluntarily translate the individual strings of linguistic code that they used on a daily basis in English. Phrases such as “[user] has accepted your friend request” or “Are you sure you want to delete [object]?” were translated dozens to hundreds of times, and the most recurring variations were implemented in the translated version. The translation was then subject to further adaptation and modification as “native” users joined the fray when Facebook officially expanded into alternate languages. Thus, <LIKE> would have become <好き>, but was transformed to <いいね!> (good!). Not only did this process produce “real” languages, such as Japanese, but it also enabled user-defined “languages” such as English (Pirate), with plenty of “arrrs” and “mateys.”
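The voting mechanism described above can be sketched in a few lines: collect every candidate submitted for a given string key and keep the most frequent one. The data and function names below are invented for illustration; Facebook’s actual pipeline is not public in this form.

```python
# Minimal sketch of crowd-voted string translation in the style of
# Facebook's translation applet: many users submit candidates per UI
# string, and the most recurring candidate wins. Hypothetical data.
from collections import Counter

def winning_translations(submissions):
    """submissions: {string_key: [candidate, candidate, ...]}.
    Returns the most common candidate for each key."""
    return {key: Counter(candidates).most_common(1)[0][0]
            for key, candidates in submissions.items()}

votes = {
    "like": ["好き", "いいね!", "いいね!", "いいね!"],
    "friend_request_accepted": ["[user]がリクエストを承認しました"],
}
chosen = winning_translations(votes)  # "いいね!" outvotes the literal "好き"
```

The point of the sketch is that the “correct” translation is not decreed but emerges statistically, which is exactly why native users joining later could shift <好き> to <いいね!>.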

Wikis, FLOSS, and Facebook are translations with differing levels of user authority, but they all work on the premise that multiple translators can produce a singular, functioning translation. In the case of Facebook this functionality and user empowerment is highlighted; for FLOSS, user empowerment through translation and publishing are one focus, but a second focus is the movement away from machine translation; in all cases, but wikis particularly, the core belief is that truth will emerge out of the cacophony of multiple voices, and this is the key tenet of the destabilization of the translator [4].

The other trend is the destabilization of the translation. This form of translation has roots in the post-divine Septuagint, where all translation is necessarily flawed or partial. Instead of the truth emerging from the average of the sum of voices, truth is the build-up: it is footnotes, marginal writing, and multiple layers. Truth here is the cacophony itself. The ultimate text is forever displaced, but the mass intends to eventually lead to the whole (whether it gets there or not is a separate matter for Benjamin, Derrida, and the like).
While this style of translation is less enacted at present, it is not completely new. Side-by-side pages with notes about choices are one centuries-old variation (Tyndale’s Biblical notations, Midrash, and side-by-side poetry translations), the DVD language menu arising from multiple subtitle tracks is another, and, finally, this leads to new possibilities for multi-language software translations.

While the Septuagint myth leads to the creation of a single text, 72 translators translating a single text would in reality produce 72 different translations. The attempt to stabilize this inherent failure of translation holds that one of those translations is better and is used, though it can be altered if a better translation comes around. The Bible translation is always singular, but it changes. Similarly, the Odyssey is translated quite often, but the translations are always presented alone. They are authoritative. In contrast, Roland Barthes’s comparison of modern works and postmodern texts and Foucault’s discussion of the authorial function both lead toward this destabilization of the author [5]. This discussion can be linked to translation studies’ discussions of author and translator intellectual production. The destabilizations of translator and translation build off both of these postmodern traditions, but the latter trend attempts to avoid weighing in on the issue by simultaneously exhibiting the conflicting iterations.

The main difficulty of the destabilization of the translation is the problem of exhibiting multiple iterations at one time in a meaningful way. How can a reader read two things at once, or, with film, how can a viewer understand two soundtracks at once? Books and films provide multiple examples of how to deal with such an attention issue. With literary works, endnotes are a minimal example of such attention divergence. Endnotes do not immediately compete for the reader’s attention, but the note markers indicate the possibility of voluntary switching. Footnotes are a slightly more aggressive form of attention management, as they tell the reader to switch focus to the bottom of the page, a smaller distance that makes the switch more likely to happen.

For film, subtitles, which layer the filmic text with both original dialogue and the authorial translation, are a close equivalent to endnotes as they split the viewer’s attention, but do not force the attention toward a particular place. It is entirely possible to ignore subtitles regardless of complaints against them (much harder to ignore would be intertitles filling the screen). Finally, the benshi, a simultaneous live translator/explainer, is an early to mid 20th century Japanese movie theater tradition that most resembles the more aggressive footnotes as the benshi’s explanative voice competes with the film’s soundtrack for the audience’s aural attention [6].

Unlike websites such as Amazon, which have language-dedicated pages (.com, .co.jp, .de) and block orders from addresses outside of their national coverage, or services such as the Sony PSP Go store, which disallows the purchase of alternate-region software, some sites utilize pull-down language options that change the language while remaining on the same page, or provide multiple linguistic versions for purchase.

With digital games, the localization process has traditionally replaced one language, along with its library of accompanying files, with another. However, as computer memory increases, the choice of one language or another becomes less of an issue, and multiple languages are provided with the core software. This gives rise to the language option, where the game can be flipped from one language to another through an options menu. Most games put this choice in the options menu at the title screen, but a few allow the user to switch back and forth mid-game. The simultaneous visibility of multiple languages or a language-switch button would be further advancements toward the destabilization of translations.
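The string-table pattern behind such a language option can be sketched simply: every language ships in memory, and switching is just repointing a lookup. The table contents and class below are invented for illustration; no specific game’s code is implied.

```python
# Sketch of the string-table pattern behind an in-game language option:
# all languages ship with the core software, and switching languages at
# runtime is just repointing the lookup. Keys and strings are invented.

STRINGS = {
    "en": {"start": "Start Game", "quit": "Quit"},
    "ja": {"start": "ゲームスタート", "quit": "終了"},
}

class Game:
    def __init__(self, language="en"):
        self.language = language

    def text(self, key):
        return STRINGS[self.language][key]

    def switch_language(self, language):
        # Every table is already loaded, so no reinstall or restart is
        # needed; this is what makes mid-game switching possible.
        self.language = language

game = Game("en")
label_before = game.text("start")
game.switch_language("ja")
label_after = game.text("start")
```

Because nothing is replaced on disk, both languages remain co-present in the software, which is precisely the condition the destabilization of the translation exploits.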

Notes:

[1] Rocketboom Know Your Meme. <http://knowyourmeme.com/memes/lolcats>; I Can Has Cheezburger. <http://icanhascheezburger.com/>; Hobotopia. <http://apelad.blogspot.com/>.

[2] LOLCat Bible Translation Project. <http://www.lolcatbible.com/index.php?title=Genesis_1>.

[3] FLOSS Manuals. <http://en.flossmanuals.net/>.

[4] This conceptualization relates to Bolter and Grusin’s hypermediacy. Bolter, J. David, and Richard Grusin. Remediation: Understanding New Media. Cambridge, Mass.: MIT Press, 1999.

[5] Barthes, Roland. “From Work to Text.” In The Cultural Studies Reader, edited by Simon During, Donna Jeanne Haraway and Teresa De Lauretis. London: Routledge, 2007; Foucault, Michel. “What Is an Author?” In The Essential Foucault: Selections from Essential Works of Foucault, 1954-1984, edited by Paul Rabinow and Nikolas S. Rose. New York: New Press, 2003.

[6] Nornes, Markus. Cinema Babel: Translating Global Cinema. Minneapolis: University of Minnesota Press, 2007.

Masochistic Translation

Painful Differance

I recently had a taste of a truly alienating translation: a translation that made me cry from lack of comprehension, and that incomprehension was intentional in the author’s method and theory as well as the translator’s. This text, if you haven’t guessed, is Jacques Derrida’s Of Grammatology, translated by Gayatri Chakravorty Spivak.

I am told that Of Grammatology is forever deferred both in fact and meaning. Nobody gets it enough to fully summarize, but individual chunks might be worked through, as can be terms such as ‘trace,’ ‘sous rature,’ ‘differance,’ et cetera. Writing exists in a particular relationship to language and to speech, and this relationship is opposite to that believed by the formalists, structuralists, and logocentrists. We cannot get to meaning and the signified; we can only slide around in trace relationships between various signifiers in one time, place, language: one moment. What can be made present is only a partial presence, the trace; what is lost, the arche-trace, can be slid back and around, but never regained.

Spivak furthers this theoretical endeavor by sliding around in her translation, by making a 90-page translator’s preface that forces particular readings of the following 300 pages and challenges the relationship of original and translation through such placement. The preface, which comes after Derrida’s De la grammatologie, is placed before Of Grammatology and thereby becomes first. Derrida’s text is not the signified to her translation’s signifier; rather, there are only signifiers of signifiers, translations of translations, versions of versions. Spivak notes how related all of this is to translation, in passing implications on lxxvii and then straight out on lxxxv–lxxxvii.

All of this taken as is, reading Of Grammatology is a painful experience of slippery wordplay and neverending deferral of understanding. Reading Spivak’s translation is just that much more painful.

The Derridian (and de Man and Spivak) translational project would lead to very unpleasant translations: Spivak’s case is a prime example. However, she got away with it because she was not writing for entertainment and pleasure. Only for the masochistically inclined is Derrida fun.

Masochism

Speaking of masochism, there are such things as masocore games (a term coming from Anna Anthropy’s blog entry on Auntie Pixelante). Not everybody likes or plays them, but they do exist. Said simply, masocore games are games that revel in mistreating the player.

Giant Bomb notes that masocore is “a postmodern indie game genre in which the designer intentionally frustrates the player. This frustration is typically accomplished by restructuring a preexisting game genre to place it in one of three categories of frustration.”

“Trial and Error” is the necessity of following an exact path and figuring out that path. This is easily seen in platformers that necessitate exact jumps, or adventure games that require an exact path, where deviation leads to the inability to complete the game (such as an item that you needed to pick up in the opening scenes, without which the game cannot be completed).

“Confusion” is where generic conventions are broken (often resulting in the player having to relearn generic boundaries through Trial and Error). An example of this from Auntie Pixelante is “you jump over the apple, and the apple falls up and kills you. the apple falls up and kills you.” Auntie Pixelante goes on to reject the “merely super-hard” moniker and sides with the belief that masocore games are those that “[play] with the player’s expectations, the conventions of the genre that the player thinks she knows. they’re mindfucks.”

“Play,” Giantbomb’s third category, is the removal of play motivation (end, death, etc) in order to force the player to focus on (uncomfortable) play mechanics.

As Anna Anthropy states in the conclusion of her piece, masocore is visible now because of the intersections of independent gaming and free and easy distribution methods. She writes: “most of these games are simply unmarketable. which is why the masocore game, twenty years later, is starting to come into its own: now there are avenues for freeware games to reach wide audiences. these games have no need to sell themselves to the player, which allows them to be among the most interesting game experiences being crafted right now.”

Key to her statement, in my mind, is how the gaming aesthetic of masochism has been enabled by an early 21st-century game industry that has expanded beyond the generic-as-marketable to the niche-as-marketable.

Difficult(ies)

Masocore is certainly a recently dubbed generic name, but it has persistent links to forms from previous decades. While the third form of masocore frustration (Play) might be unique, the other two forms can be seen in earlier methods of differentiated difficulties (and, in general, difficulty can be traced back much further, to such “games” as gladiatorial combat, martial arts, war, et cetera).

Game difficulty exists for multiple reasons, only one of which is enjoyment. (The relationship between difficulty and profit, whereby arcade games necessitated difficulty to garner maximal profit while home video/computer games necessitated ease to enable completion and the purchase of another game, is ignored here.)

Due to the belief that difficulty is good for some reason (Flow, or any other theory), games have had various levels of difficulty and different methods of implementing said difficulty. Some games were simply really, really hard, such as Donkey Kong and Ghosts ’n Goblins; some included the use of continues to enable the completion of a game (Teenage Mutant Ninja Turtles, Street Fighter); some offered different difficulty levels (Atari’s difficulty switch; the standardized Easy, Normal, Hard; Doom’s I’m Too Young to Die, Hey, Not Too Rough, Hurt Me Plenty, Ultra-Violence, Nightmare!; Marathon’s Kindergarten, Easy, Normal, Major Damage, Total Carnage; Halo’s Easy, Normal, Heroic, Legendary; etc.); and some went in the full opposite direction and made it impossible to lose by re-spawning the player at one point or another through some diegetic method (Prey, BioShock). All of these are based around the idea that there is some benefit in difficulty, but just what that benefit is, and what level of difficulty is good, remains unclear.

One new variation is the use of achievements to create a masocore element in an otherwise reasonable game. For instance, one of Mega Man 10’s 12 achievements is Mr. Perfect, which requires the player to “Clear the game without getting damaged.” In a Mega Man-style platformer this is nearly impossible, and it is both a new proof of hardcore-ness and an implementation of masocore-ness.
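Mechanically, a no-damage achievement of this kind reduces to a single flag: cleared on any hit, checked when the credits roll. The sketch below is an illustrative reconstruction of that pattern, not Capcom’s implementation; all names are invented.

```python
# Hedged sketch of how a "no damage" achievement like Mr. Perfect could
# be tracked: one flag set on any hit, checked at game completion.
# Illustrative reconstruction only; not any actual game's code.

class PlaythroughTracker:
    def __init__(self):
        self.damage_taken = False

    def on_player_hit(self, amount):
        # Any positive damage permanently disqualifies the run.
        if amount > 0:
            self.damage_taken = True

    def on_game_cleared(self):
        # Award the achievement only for a zero-damage run.
        return [] if self.damage_taken else ["Mr. Perfect"]

flawless = PlaythroughTracker()
awards_flawless = flawless.on_game_cleared()

normal = PlaythroughTracker()
normal.on_player_hit(3)
awards_normal = normal.on_game_cleared()
```

The triviality of the mechanism is the point: the masocore element lies entirely in the demand placed on the player, not in any technical complexity.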

Difficulty changes (as do implementations), but the tendency is to bow down neither to the masocore crowd nor to the casuals. Instead, the game industry has increasingly attempted to provide access to both. Difficulty, even masochistic pleasure in the extremely difficult, is increasingly deemed acceptable. The inclusion of the masochistic Mr. Perfect achievement between Mega Man 9 (2008) and Mega Man 10 (2010), and its correspondence to Anna Anthropy’s post in 2008 and the present 2010, point to this process of incorporation. Translation should learn a lesson from this, especially when localization’s main defense of its problematic translational method is that games need to be fun, to be entertainment. Some people like masocore games; some people like Derridian translations. Let’s start having masochistic translations.

Sources:

Anthropy, Anna. “Masocore Games.” Auntie Pixelante. Posted: April 6, 2008. Accessed: February 14, 2010. <http://www.auntiepixelante.com/?p=11>

Derrida, Jacques. Of Grammatology. Gayatri Chakravorty Spivak trans. Baltimore: Johns Hopkins University Press, 1976.

Mega Man 10 Achievement List. X-Box 360 Achievements. Accessed: February 14, 2010. <http://www.xbox360achievements.org/game/mega-man-10/achievements/>

TheDustin. “Masocore: Mr. Gimmick: The Best NES Platformer You Haven’t Heard Of (and Sadly Haven’t Played).” Play This Thing. Posted: Thursday, January 28, 2010. Accessed: Sunday, February 14, 2010. <http://playthisthing.com/game-taxonomy/masocore>

Various Authors. “Masocore (video game concept).” Giant Bomb. Accessed: February 14, 2010. <http://www.giantbomb.com/masocore/92-1165/>.

The Task of the Translator; The Location of Localization

I’ve been reading a lot of Walter Benjamin’s “Die Aufgabe des Übersetzers” lately in reference. So much so that I also went back and (re)read the original. The question, of course, for everybody, or at least as I understand it decades later and after Paul de Man, is whether the focus is on the ‘failure’ or the ‘task’ of the translator, both of which are built into the German ‘Aufgabe.’ This comes down to whether the translator tries to translate the ‘what’ or the ‘why’ of the original, the idea of touching and either deflecting or reforming the ‘vessel,’ et cetera. The voice in my head then asks what the relationship to localization is.

There’s an interesting thing that happens when I read translation work: I don’t feel like I’m barking up a crazy tree. This is nice. However, the other thing that happens is that I wonder exactly how I’m trying to tie things together, which doesn’t exactly work. Too many partial overlaps at once.

Things that are important here are, of course, the failure of the translation process, but also some of the other basics, such as translation being not just ontological and spatial, and not just historical and temporal (which Bermann and Wood try to point to, rightfully, in Nation, Language, and the Ethics of Translation), but also specifically NOT temporal or spatial for localization.

Or rather, that is of course what is the intent with localization.

Translation is a post-production effect: a text is written and then it is translated. Even if I’m going to be difficult, or pomo, and say that repetition, adaptation, and the like are also forms of translation, there is still a key difference, and that is the temporal aspect. However, localization specifically abuses that location of the translation. Game translation (localization) is increasingly moved from the post-production stage to the central production point. This follows through with the central claims of games as new media: that they have no original and are variable. This moving (temporal) position of localization also justifies the claim that games are not actually translated, as the text was never officially in one place or another. What is more, in the case of simultaneous releases (and, better yet, releases with multiple languages), they are able to claim a full disabling of the temporal element of translation.

What Difference an M Makes

One of my pleasures is reading. It is also one of my guilty pleasures, as I tend to read books of a speculative nature. My thoughts have always dwelled near the question: why would I want to read about the world I live in? Where’s the fun in that? Where’s the escape? Yes, I’m an escapist, and that has included worlds of alternative reality, fantastic worlds, futuristic worlds, and even alternatively represented worlds such as animation. With that (probably unsurprising) admission out of the way, I can get to a topic that has bothered me for quite a while, and which has had a new development (new if only in that I recently noticed it).

Authors, genres, sorting and status.

An author I’m rather fond of is Iain Banks. He writes fiction. Most of it could be set in this world, although some of it is a bit iffy, or at least somewhat psychotic. Okay, that describes most fiction: how “real” are the Illuminati in comparison to Area 51 and extraterrestrials? I first read Banks’ Dead Air, which I borrowed from a friend in 2004. I loved it, but I couldn’t remember who had written it after I gave it back, and I didn’t read anything else of his for half a decade. When I finally did figure out who that Scottish writer my Scottish friend loaned me was, I was confronted with two things. The first was Iain Banks. I proceeded to read The Steep Approach to Garbadale, The Business, and Whit. The second thing I found was Iain M. Banks, the author of the Culture series of science fiction and various one-offs. Those of you who might have bothered to guess will probably realize that Iain Banks and Iain M. Banks are the same person.

Average logic seems to hold that people cannot write for multiple genres at once. Or that audiences don’t shop for multiple genres.

But maybe logic should think of all the pseudonyms out there, and then maybe question the purpose of those alternate pen names: Banks wrote three books as Banks, then got his publishers to publish a science fiction book. It came out under Iain M. Banks so as not to confuse audiences (or so holds the Wikipedia entry). Maybe it’s for the readers. It’s definitely not because Banks cannot write for both genres, as he does well, and has done well, for over two decades and twenty books.

So why is it that the United States publisher (Orbit) has chosen to publish Banks’ latest novel, Transition, as by Iain M. Banks? It was published in the United Kingdom as a book by Iain Banks, and the two covers, with their different names, sit unproblematically side by side on Banks’ website.

Banks has no problem with his name separation (and integration). So why do I care? What is it that I see as troubling and annoying about both the separation and integration of a science fiction identity and a fiction identity? Mainly status.
Salman Rushdie is a good, similar example. Rushdie’s works are fantastic. They question reality. But they’re “Fiction.” Even one of his earliest works, Grimus, a very “science fiction and fantasy” novel if ever there were one, is happily labeled “Fiction” and sorted alongside Rushdie’s other, “serious” books. While it is labeled “Fantasy novel/Science Fiction” on Wikipedia the Amazon entry (as well as most other booksellers) has ignored this and simply lists it as “Literature & Fiction.”

In bookstores’ sorting systems, especially those of 20 years ago, when both the M and Rushdie’s singular straying happened, Fiction was the high genre, and anything more “generic,” anything that needed a modifier, be it fantasy, science fiction, thriller, or romance, was the low move toward rubbish, or at least toward special audiences (where special has all of its connotations, good and bad).

Rushdie rode his barely (and yet very) “Fiction” style out to become one of the most influential writers of the late 20th century. This has much to do with his status as a postcolonial, and yet British, subject, as well as the politico-religious issues surrounding The Satanic Verses. However, as his work was “serious,” it brought the very non-serious early novel along with it. This preserved the singular location of an author within a store, and essentially, within the analogue archive.

Neal Stephenson offers a second, contrasting example: his early work was in fiction (The Big U, Zodiac, and a few disavowed co-written works) before he smashed onto the scene with Snow Crash and The Diamond Age, two cyberpunk highlights. Stephenson is located in the science fiction section. Again, this is in contrast to his incredibly popular (alternative) historical fiction, Cryptonomicon and the Baroque Cycle. Because his original hits were in science fiction, he has remained in that area. This has not prevented him from garnering support and sales, but it has prevented him from winning awards other than those in science fiction, which his popular historical fiction novels do not fit. It has placed him, marked him, classified him, as a science fiction author.

The placement within the archive, one’s labeling/identifying, denotes the status of the author. Rushdie is respected as he is in Fiction. Stephenson is less respected as he is in Science Fiction. Banks avoided this very possibility with the little M., which separated identities and forced his presence into both places of the archive (and store). With the doubled name Banks broke the status game.

But that is exactly where I see the problem now. My guess is that within the United States, where sci-fi is low but popular, M. Banks and the Culture novels sell better. This might be switched in the UK, where Banks is known as a Scottish author and gets additional sales because of that and the brogue of his Fiction novels.

The collapse of Banks into M. Banks within the US does a few things. It attempts to ride M. Banks’ greater popularity so as to increase the Fiction sales. This is fine as far as anything capitalistic goes. However, it will also problematize the location, and therefore the status, of Banks in the Fiction section. His previous Fiction books stand to be reissued as M. Banks and relocated to the sci-fi section. In some ways this makes no sense, in others it’s good business, but I see it simply as the denigration and codification of generic borders.

(New Media) Translation After Pound

The 20th-century turn toward domestication essentially stems from Ezra Pound’s translations, but impurely, through the modern emphasis on the author mixed with the business of selling books.

According to Ronnie Apter in Digging for the Treasure: Translation After Pound, Pound influenced translation theory and practice in three major ways. First was the move from “Victorian pseudo-archaic translation diction” to modern style. Second was the argument for a criticism of the original in some form: not simply the objective transfer (an acknowledged impossibility for the Victorians as well), but a focus on some particular element that thereby “criticizes.” Third was the creation of a new poem: not just something derivative.

These three were essential breaks with Victorian practice, which focused on three criteria: paraphrase with no additions (subtractions were inevitable, but additions were taboo), the reproduction of the author’s traits (just what the traits were was, however, up for grabs), and the reproduction of the overall effect of the text (whether the “effect” was that of the original on the original’s original audience, or of the original on the modern audience who can read the original text, was unknown). It was also an adaptation of the contemporaneous translation theory professed by Matthew Arnold and F. W. Newman.

However, while Pound was translating against the Victorian grain, we have come full circle to a new norm. The fashion of the times has changed to one that embraces Pound’s basics, but not their depths. If “great translators transcend the fashion of their times [and] minor ones merely manipulate it,” Pound was a great translator; many minor figures have since manipulated his transcendence, and Pound himself would now be simply one of many in the current fashion. As Lawrence Venuti has argued, the times and the dominant style have changed, and another transcendental shift is called for.

What I want to argue is that this shift is called for by the media itself. The move from literary page translation to multimedia and digital forms opens up new possibilities for, and understandings of, translation. In an interesting way, however, it is Pound’s logopoeia, his style of meta-translation, that can still lead the way. Whereas Pound focused on the meaning of words to bring into focus both the older era and the present, a type of dialectical juxtaposition, the move toward searchable, digital data, as opposed to static, analogue data, allows the simultaneous existence of both data sets and a new type of logopoeia. This new form of meta-translation involves the layering of translational tracks. Instead of juxtaposition, there is the coexistence of both tracks/languages/cultures.

This is similar to the possibilities evoked by subtitles and abuse (Nornes), but it considers the issue in relation to digital new media, not simply film considered in an analogue manner. Instead of the ability simply to choose one track/language or another, it gives all of them, or switches between languages. It raises the possibility of putting three real languages into a game such as Command & Conquer: Red Alert 3 (English, Russian, and Japanese, following its fictive world), and, more meaningfully (and less deliberately/offensively stereotypically), of switching them on the fly: one game has the US speaking English, the Russians Russian, and the Japanese Japanese, but another switches so that the US speaks Japanese, the Russians English, and the Japanese Russian. The media uses its ability to draw from swappable data files not simply to replace one with another, thereby changing one representation into another, but to abuse the user with a constantly active experience that questions the submerged normativity of language in translated entertainment products (games in particular) at present.
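The on-the-fly switching described above can be sketched in a few lines. This is a purely hypothetical illustration: the faction names, language tracks, and function are my own assumptions, not drawn from Red Alert 3’s actual data files or any real localization system.

```python
import random

# Hypothetical factions and voice-track languages; illustrative only.
FACTIONS = ["US", "Russia", "Japan"]
LANGUAGES = ["English", "Russian", "Japanese"]

def assign_tracks(shuffle=False):
    """Map each faction to a voice-track language.

    With shuffle=False this reproduces the 'normative' pairing
    (US -> English, Russia -> Russian, Japan -> Japanese); with
    shuffle=True the same swappable tracks are permuted on the fly,
    so one session might have the US speaking Japanese and the
    Russians speaking English."""
    tracks = LANGUAGES[:]
    if shuffle:
        random.shuffle(tracks)
    return dict(zip(FACTIONS, tracks))

# The default pairing preserves the expected language for each faction.
print(assign_tracks())
# -> {'US': 'English', 'Russia': 'Russian', 'Japan': 'Japanese'}

# A shuffled pairing unsettles that expectation each time it runs.
print(assign_tracks(shuffle=True))
```

The point of the sketch is that nothing about the data requires the normative pairing: the tracks are interchangeable files, and the “abusive” version is a one-line permutation away.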

Somebody out there must like Alice…

I’ve been doing work with William Huber recently on Kingdom Hearts, transmediation and adaptation. An amusing example related to Alice in Wonderland:

The claim that unbirth from the upcoming PSP version is a mistranslation is, I believe, rather false. Why? Because somebody out there likes Alice rather a lot.

Editors and Translators – On Saussure

As I read Jonathan Culler’s Ferdinand de Saussure, the difference between editors and translators is striking. Or rather, their similarity and yet complete perceived difference is striking.

In chapter one Culler notes, “Most teachers would shudder at the thought of having their views handed on in this way, and it is indeed extraordinary that this unpromising procedure, fraught with possibilities of misunderstanding and compromise, should have produced a major work” (Culler p. 25).  He then ends the chapter with the claim that he “shall not hesitate to rectify the original editors’ occasional lapses” (Culler p. 26).

Saussure is the origination point of the Course in General Linguistics. Nobody questions this. However, what of the status of Charles Bally and Albert Sechehaye, the “editors” to which Culler refers? Saussure’s colleagues, Bally and Sechehaye, gathered three terms’ worth of students’ notes, combined them, ordered them, limited them and in so doing made the Course. Culler calls them editors, and yet could he not also call them translators? What really is the difference?

Bally and Sechehaye are forgiven for their mistakes and counted as a problematic (but necessary) medium from Saussure to air to students to notes to editors. As such, Culler has no qualms about correcting their “lapses.” But how is this any different in the case of Wade Baskin, the English translator of the French original, or any other translator?

An editor is somebody who changes something for the benefit of general understanding. Editors are resented, partially antagonistic to the idea of the author, but also considered a necessary part of the process. A translator is somebody who changes a text into a different form: not for general benefit, but for the specific benefit of a particular audience, whether temporally or spatially separated from the origin. In this case, Saussure in French… and in his classrooms between 1907 and 1911.

So, the Course is a translation of Saussure’s lectures, just as the English translation of the Course in General Linguistics is a translation of a translation. And yet they aren’t considered as such; there is an assumed difference between the editors and the translators. Editing is different from translating.

But this makes little sense. They all move the text on, allow it to live, but necessarily alter it.

Protest + Game = ?

At base, protest and play are in opposition. Play is interaction with rules: there are rules, people break them, and emergent properties form that become the new rules and the new game. Play is working with rules. A protest, in opposition, works directly against the rules: it means to destroy the rules or the system, not to adapt them (despite the possibility that adaptation is all that might eventually happen). Protest and game run against each other, but are they combinable?

State of Emergency is about rampaging. It’s about things related to protests, but while it has tie-ins with the Seattle WTO protests, it isn’t really about protest. There are also various Sim/Civ-esque games based around revolution, but again these are not exactly about protest so much as the reinstitution of order.

Would it be possible to create a game that systematically broke the idea of rules by causing the constant creation of new rules through their breaking? It would maintain entertainment/pleasure by giving out points, awards, or achievements for disruption and for changing the system itself. Instead of bringing down the ban hammer on players for cracking the code, breaking the rules, or finding interesting uses of mechanics, it would reward them. I suppose this is really called “life” and “hacking,” but what if, what if…
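The loop imagined above, where breaking a rule is rewarded and regenerates the rule it violates, can be sketched minimally. Everything here (the Rule class, the scoring formula) is a hypothetical illustration of the design, not any existing game’s mechanics.

```python
# A minimal sketch of a 'rule-breaking rewarded' system: transgression
# earns points and spawns a successor rule, so the ruleset is constantly
# remade by the very act of violating it.

class Rule:
    def __init__(self, name, generation=0):
        self.name = name
        self.generation = generation  # how many times this rule has been remade

def break_rule(rules, index, score):
    """Break the rule at `index`: award points scaled to how emergent the
    rule already is, then replace it with a new, emergent successor."""
    broken = rules[index]
    score += 10 * (broken.generation + 1)  # deeper disruption pays more
    rules[index] = Rule(f"emergent form of '{broken.name}'",
                        generation=broken.generation + 1)
    return score

rules = [Rule("stay on the path"), Rule("no exploits")]
score = 0
score = break_rule(rules, 0, score)  # score is now 10
score = break_rule(rules, 0, score)  # breaking the emergent rule pays 20 more: 30
```

The design choice worth noting is the escalating reward: players are incentivized not just to break the starting rules but to keep disrupting the rules their own disruption produced.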

Thoughts on DAC – Faux 8 Bit

I got a pleasant surprise on the first day of the recent Digital Arts and Culture conference at Irvine when I attended Brett Camper’s paper talk, “Fake Bit: Imitation and Limitation.” It was the first time I’ve encountered somebody else dealing with these new/old games.

What I’ve been discussing as remakes and demakes, framed around repetition, nostalgia, and history, he discussed in terms of camp and revivalism, focusing on faux 8-bit game production and, in particular, La Mulana, which began as an imitation MSX game for the PC and is now being remade for WiiWare.