
Reading the Transborder Immigrant Tool (MLA '11) [cross-post]

[re-posted from the Critical Code Studies blog for easier inclusion in the Critical Code Studies HASTACscholars forum]


Critical Code Studies made its return to MLA 2011 on a tidal wave of Digital Humanities panels. But it was "Close Reading the Digital" that offered the most explicit connections. Organized by Jeremy Douglass and Matt Kirschenbaum, who served as respondent, the panel featured Jim Brown, Mark Sample, and myself. We will post related materials here. Here are the slides in .pdf form: Transborder Immigrant Tool Talk MLA. Below I offer a few notions from the presentation and an extension of the conversation. I welcome suggestions, particularly on characterizing the Java.

You Say T.B.T., I say T.I.T.

Critics of CCS (not the practitioners but the skeptical) argue that CCS deals with the "arbitrary" or insignificant aspects of code, such as variable names or even comments. This point is nicely media-specific, attentive to the difference between source code and other sign systems; however, it misses a few key points. First, these aspects are only two of the elements CCS interrogates. Second, this notion of "arbitrary," while accurately characterizing the nature of the signs in one dimension of their existence, neglects a very important human dimension, one that gave rise to the MLA itself.

This sense of these choices as "arbitrary" comes from a very literal application of that term: "arbitrary" from the POV of the computer, a POV programmers perhaps try to take on as they imagine how their code is being processed. Not to acknowledge it would be to ignore a core tenet of programming. It is important to acknowledge that a variable called ofGrammatology could just as easily be called sZ. However, the difference between these two names speaks, well, volumes. This dismissal of the "arbitrary" short-circuits interpretation by denying discussion of the natural-language affinities of the code for the programmer and anyone else who reads it (see Jeremy Douglass's Week 2 of the CCSWG -- forthcoming in electronic book review).

During our panel, the three presenters and two chairs started calling each other (and ourselves) by different names. Some laughed at this play; others groaned. (See the Twitter chatter.) Another cohort was just confused because they neither knew us by sight nor knew the owners of the names we were invoking. As with many (every?) instance of naming, those who knew the names' relevance experienced the interplay completely differently from those who did not. Nonetheless, just because one part of the audience did not know the significance of the names did not make their value arbitrary -- arbitrary in that other sense, implying that the nature of their selection was without (intended) significance or purpose. The panelists referenced the names "Ian" and "Nick." While an outsider reporting on the scene (or someone who was not aware of the meaning of these names) might waste a little time arguing that we meant "Ian McKellen" or "Ian Somerhalder," he or she could still, without knowing their intended reference, correctly note the predominance of male names (on the panel and in the play) and might have begun to question the gender dynamics/imbalance/inequity of either the game or this particular group of presenters. Of course, that doesn't mean the person shouldn't ask, "Hey, who's Ian?"

As a scholar who examines language and semiotics, I deal in the analysis of arbitrary choices from a paradigm within the context of a syntagm adhering to a langue, to borrow some terms from semiotics. As a scholar of code, code that is processed by computers but read by humans, I see significance in the way the signs appear. However, since higher-level computer languages have so many natural-language affinities, the significance reverberates across multiple registers. When I read code, my goal is first to find out what the code does. My favorite way to read it, though, is with someone intimately connected with the project, though unaffiliated programmers offer interesting insights. As we trace through the code, I listen for reactions that are unconnected to the function of the code. If the unaffiliated programmer laughs at a line of code, like a foreigner I ask them to explain what they were laughing at. In their laughter, I find something else that is being communicated through that code. That is just one example.

At MLA 2011, I applied CCS to the Transborder Immigrant Tool. Interestingly, many news accounts of the Transborder Immigrant Tool refer to it by the initials TBT. I find this acronym highly unlikely; its use, like much of the project, calls attention (with ironic force) to one of the themes of the project as it attempts to sustain life at the most fundamental level, as it reaches toward material, not digital, connection: not hegemonic power to subjugated, but mother and infant; not satellite and receiver, but knowing branch and hidden water.


T.I.T. is a mobile-phone application that helps border crossers who are in danger of dehydration find caches of water as they cross from Mexico to the United States. Along their journey it also provides them with poetry that contains within it additional hints for survival, along with the sustaining force of its verse. The tool has been the subject of media coverage that stirred up a bit of reader fury, leading ultimately to a UC investigation of the project's director, Ricardo Dominguez, though the investigation targeted his previous work with the Electronic Disturbance Theater, work that earned him tenure in the same system, btw.

T.I.T. team

The team includes Micha Cárdenas, Amy Sara Carroll, Elle Mehrmand, and Brett Stalbaum (who wrote and has openly posted the code), an eclectic group of artists and activists who collaborate in the collectives b.a.n.g. lab and Electronic Disturbance Theater.

What do you get when you look at the code?

My own path through the code narrates the potential experiences of border crossers. Walking through it with others helps me imagine the border issue from a different perspective. This has pedagogical implications but also theoretical ones for how we make sense of code -- how we narrate the effects of the code as we read through it. As Brett Stalbaum walked me through the code, I had to construct a mental model of the potential user: their dehydration, their response to the tool and its signals, their search for water.


The code is Java, drawing on Java ME (J2ME) libraries, built to run on a Nokia iDEN platform. Java's open-source life serves the project well. In my presentation, I mentioned that I was given pause thinking of Java, developed by Sun, now a part of Oracle. This led me to one of my more contentious, though at MLA fairly conventional, moves in CCS, where I pause to reflect on the allusions within those names -- thinking about framing metaphors, George Lakoff, et cetera. The team's choice of Java stems from the mobile-phone platform they chose in order to keep the cost of T.I.T. as low as possible. Java relies on a particular naming convention tied to the folder structure of a project: when importing classes, the convention is to set up the folders so that the package namespace reverses the developer's domain name. As a result, the domains get written into the code, and those are no arbitrary signifiers.

import javax.microedition.midlet.*;
import javax.microedition.lcdui.*;
import javax.microedition.location.*;
import java.util.*;
import net.walkingtools.javame.util.AudioArrayPlayer;

import edu.ucsd.calit2.TransborderTool;
The institutions are thus literally written into the code: UCSD, Java, WalkingTools. UCSD, the institution that both tenured and investigated Ricardo, is here. Sun is also here in the Java library, though Java's open-source life is perhaps more relevant to the ethos of the project. The WalkingTools import, however, points to a separate library, also developed by Stalbaum and others. It does not live under the flag of UCSD or Calit2. Importing from this source marks the conscious choice of the designers to build and implement a separate library outside the UC institutional infrastructure.
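To make the reverse-domain convention concrete, here is a small illustrative sketch, not part of the tool's source: the ReverseDomain class and its domainFor helper are my own, showing how a package name like edu.ucsd.calit2 encodes (in reverse) the institutional domain calit2.ucsd.edu, which is also the folder path edu/ucsd/calit2/ on disk.

```java
// Sketch (hypothetical helper): Java's reverse-domain package convention
// means the import "edu.ucsd.calit2.TransborderTool" mirrors the domain
// calit2.ucsd.edu and lives in the folder path edu/ucsd/calit2/.
public class ReverseDomain {
    static String domainFor(String packageName) {
        String[] parts = packageName.split("\\.");
        StringBuilder domain = new StringBuilder();
        // Walk the package segments backwards to recover the domain.
        for (int i = parts.length - 1; i >= 0; i--) {
            domain.append(parts[i]);
            if (i > 0) domain.append('.');
        }
        return domain.toString();
    }

    public static void main(String[] args) {
        System.out.println(domainFor("edu.ucsd.calit2")); // prints calit2.ucsd.edu
    }
}
```

The convention is why the institutions surface in the import statements at all: the namespace is, by design, an inverted domain name.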


Dowsing & Witching

Here is a 21st-century phone application. It uses GPS and modern, though inexpensive, hardware. It calculates distances based on maps downloaded onto the phones before deployment. And yet the code frames itself not in this GPS narrative but within another paradigm: dowsing. That metaphor is written into the code -- or perhaps the source code is encoded with the metaphor. When the widget discovers a water cache within range, it makes the following call: dowsingListener.witchingEvent(mc);


It calls the witchingEvent function on the dowsingListener. (Although this gets a bit more complicated through the use of an interface I will discuss later.) That central function plays the alerts and presents visuals on the display to lead the dehydrated traveler to water. So here, at its core, the discovery of nearby water is framed as "witching," a witching the code has been listening for. Dowsing and witching appear throughout the lines of code: in the names of files, variables, and interfaces. Nonetheless, the programmers had to work to make this line read as it does. One small example: the variable dowsingListener is assigned the dowsingCompassListener, shortening the name and removing the compass metaphor from this expression in the code.
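Since the original excerpt is not reproduced here, a minimal sketch may help readers picture the listener pattern being described. Only dowsingListener and witchingEvent echo the actual source; every other name below (DowsingCompassListener, WaterCache, DowsingSketch, the coordinates) is my own illustrative assumption, not the tool's code.

```java
// Hypothetical sketch of the listener/interface pattern described above.

// A listener interface carrying the dowsing metaphor: implementers are
// notified via a "witching" event when water is found within range.
interface DowsingCompassListener {
    void witchingEvent(WaterCache cache);
}

// Stand-in for whatever object describes a cache's location.
class WaterCache {
    final double lat, lon;
    WaterCache(double lat, double lon) { this.lat = lat; this.lon = lon; }
}

public class DowsingSketch {
    public static void main(String[] args) {
        // The shortened variable name drops "compass" from the metaphor:
        DowsingCompassListener dowsingListener = cache -> {
            System.out.println("witching: water near " + cache.lat + "," + cache.lon);
        };

        // When the tool detects a cache in range, it notifies the listener:
        dowsingListener.witchingEvent(new WaterCache(32.6, -116.2));
    }
}
```

The interface decouples the detection logic from whatever plays the alerts and draws the display, which is presumably why the actual code routes the event through a listener rather than calling the display directly.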


Dowsing and witching have been written into the code. "Witching" is dowsing, or divining water using a stick of witch hazel. So those who are interacting with these lines of code, as they work their way through the desert of the iDEN platform, are thinking about map coordinates through the metaphor of dowsing, a practice that puts nature in quest of nature: the stick, the wood, the plant that knows where the water is, that feels it in its bones. A return to the paradigm brings up all the other metaphors that could have been used in the code here: detection, identification, discovery, completion, mining, scoping, harvesting, pinpointing... not to mention the (infinite) range of nonsense or completely unrelated character combinations that could have been used.

Nonetheless, this code has been written with reference to a conceptual framework of a magical folk practice, where nature is called to nature, the dead stick is drawn to the water that would have saved it, depending, of course, on one's view of dowsing.... In many cases, we have the software without access to the code, but since this software is installed only on the phones of those who are building the software or making this perilous trek, reading the code is a primary way of giving people the experience of the software.

Q & A

Matt Kirschenbaum served as the respondent to the panel and he raised a few points that I'd like to address here.


  • CCS should not take code as the ultimate end. This point came up in the CCSWG and is best addressed, IMHO, in Wendy Chun's forthcoming book. There are other layers to the code; the code is compiled into something else; the code is just an abstraction. All of these concerns should be taken into consideration when discussing code.
  • CCS should not take code separate from the system, the hardware. This tied in well with a concern from Richard Grusin that we not talk about the code separate from its effects. Certainly, we do not want to consider the code as a standalone object, yet that does not mean we should not look very closely at the code itself. Keep in mind that CCS picks up where discussions of processes and systems leave off.

For my part, I may seem to be focusing exclusively, but really primarily, on the code because I know there are other scholars working on ways of reading hardware and software, though in my longer explorations I try not to omit these aspects. I am in search of ways of reading source code because I have not found many models. Fortunately, the CCS Working Group and the forthcoming HASTAC Scholars forum have brought together incredible talents to tackle these questions, and their code critiques are generating more ways of reading and still more code critiques.

Richard Grusin also expressed concern that my reading was "New Critical." I replied that by taking into account the social context in which the code was created, distributed, and functioned, my reading could not be aligned with the closed or blindered approaches of the New Critics. However, as you can see above, one of my goals is to see the meaning that is layered onto code, and we can learn a lot about "close reading" from the New Critics, which to me is intimately tied to the act of deconstruction as well. So my readings rarely stop at what the thing does but instead ask how the way this code is written -- how these paradigmatic choices within the langue of the language, operating system, hardware, and other constraints -- layers meaning upon the code, meaning that resonates with or is perhaps in tension with the way that it operates. (We discussed this inside/outside or means/ends, form/structure tension in the CCSWG.) In that way, I do spend a good deal of time looking closely at the signs, not to be a Scuttle parodied by a Sokal, but to look closely for resonance, a resonance I have been trained to find when I close read sign systems of all sorts, be they textual, visual, haptic, ludic, legal...


[p.s.] During Q&A I mentioned that CCS is often like photography. The photographer might capture the chance pairing, the wrinkled hand on the arch, the sleeves of the homeless indigenous woman on the steps of the embassy. Photography notices; it finds, as Mark Sample mentioned (referencing Barthes), a punctum even when the standard way of framing that vista might have missed it. In CCS we do a lot of reframing, and that often involves putting the code in the context of its circulation and creation. (I have to stop here as this is rapidly expanding into a much longer project. This should be sufficient content for discussion here.)


1 comment

Hey Mark,

I find your investigation of the Transborder Immigrant Tool quite fascinating. Yet, let me play the devil's advocate a little bit. Although I did not see the entire presentation and there might be quite a bit more analysis that I am missing, it seems that the primary outcome of your reading is to detect the notions of witching and dowsing within the code—intriguing ideas which might not be readily apparent to the person using the tool without looking at the source code. (Although, this is not to say that one could not come up with these framing metaphors simply using the tool without seeing the code; that is, if the user is holding out the cell phone as it directs them to a source of water, well, the idea of dowsing is not a huge leap). In terms of CCS, the argument seems to be that there are "arbitrary" moments in written code that are actually quite significant (variable names, method names, comments, etc.) and we can detect these and use them in our interpretation of the digital object. But, from here, I am left wondering if you are making a more robust argument that I am missing.

As I see it, there are two primary moves one could then make. First, one can exit the code into other layers of significance, thus into the experience of using the software (you do mention that you talk about the interface at some point), into a wider interpretation of witching and dowsing, the history of the folk practice, its relationship to the artwork, its relationship to the contemporary situation of finding water during transborder crossings, etc. This might ultimately lead to an argument about the Transborder Immigrant Tool itself (instead of an argument about CCS). Off the top of my head, one might imagine someone actually critiquing these word choices in the tool, since these "folk practices" (which might stand in for folk knowledge of reading natural signs from the landscape to find water, more traditional knowledge of mapping water locations, etc.) are displaced by the power of science and technological sophistication. Or perhaps someone might do a "deconstruction" and say, well, this helps reveal that technology itself is a form of "magical thinking," something we put our faith in which can also fail us... (I don't know, wherever you want to take it.) In any event, looking at the code would simply be a preliminary, "less-significant" step to a larger, more significant cultural critique. So, just to be clear, I do think you are always trying to make this larger move. As you say above, you "rarely stop" just at the code...but move out into the social, historical, etc.

Nevertheless, I think you are trying to make another argument that is more CCS-specific, and which is still vague to me. So the second potential move one could make would be to apply this meaningful discovery of witching and dowsing to make an argument about the code itself. That is, the "resonances" of which you often speak would not be meaningful ripples that move out into other "extra-code" layers (the interface, the social, the historical, etc.) but would be found within other parts of the source code, its structure, its methods, etc. The aim of your presentation would not only be an argument about the tool, but to teach us something new about code (something that would even go beyond the basic notion that variable names, method names, etc., harbor meaning). Maybe I am just asking if you could clarify this...that is, if it is one of your goals. Personally I am led to believe that this might have something to do with the part of the excerpt that discusses importing libraries? I am a little confused why that is there and what its relationship to witching and dowsing would be. Are you making a play on the notion of finding "sources" (like discovering sources of water), maybe claiming that code itself always draws on many sources for its functionality and survival? Perhaps there is something of "magical thinking" here as well, perhaps dovetailing with Wendy Chun's notion of sourcery?

At the end of your excerpt you make the claim that "reading the code is a primary way of giving people the experience of the software." Yes, I agree, and I think most folks interested in CCS would agree. But (playing the devil's advocate), doesn't this just boil down to saying that spending time looking at the code can be helpful for understanding a piece of software, an artwork, etc.? Yes, witching and dowsing are an interesting frame through which to further investigate the tool, but does not the argument need to be larger than this? Again, my apologies if a lot (or all) of these questions would be answered by seeing the entire presentation and not just reading an excerpt. Honestly I find your work super-fascinating, always generating a ton of thoughts and ideas, hermeneutic strands of possibility. Thanks for sharing with us!