Critical Code Studies

At the recent MLA 2011, one of the largest humanities conferences in the US, the question arose: 

Do Digital Humanities scholars need to know how to code? (Sample MLA11, Brown MLA11, Ramsay MLA11)

While that question raises anxieties in many humanities scholars, it is not an overstatement to argue that computer source code presents a sign system, a discourse environment, that holds tremendous influence over our daily lives -- and that for the humanities not to be able to address it, not to be able to use their methodologies to critique this cultural milieu, is the equivalent of unplugging from the Internet permanently or, as has been tweeted, of living in the Roman Empire without knowing how to speak Latin. While perhaps not every DH practitioner needs to code or know how to code, if we cannot collaborate with our colleagues in computer science to apply our methodologies to the study of source code (and hardware and software), we will be confined to cultural critique of the surface effects of a digital culture that functions within a black box.

What does it mean to look at code not just from the perspective of what it "does" computationally, but of how it works as a semiotic system, a cultural object, a medium for communication? How does it organize itself, understand itself, think about its own representations, its own capacities and workarounds? Critical Code Studies is the practice of looking at the code that produces and imagines those digital realities from a humanistic perspective. Or more formally:

Critical Code Studies applies hermeneutics to the interpretation of the extra-functional significance of computer source code, "extra" not in the sense of "outside of" but "growing from" the functionality.

Code offers an important arena of discourse with its own particular affordances and affinities, full of nuance and rhetoric, circulated, extended, and re-purposed, forming and shaping communities, built of programming paradigms and predilections, political divisions and institutional actors. In short, code offers quite a bit for the humanities to talk about.

Critical Code Studies began with trying to solve a problem: now that N. Katherine Hayles had directed attention to Media Specific Analysis and Lev Manovich and Matthew Fuller had called for software studies, how could humanities scholars analyze the digital objects we work with, play with, think with, and create social networks with? While some took on the challenge of looking at software processes and hardware, there were few models for the analysis of computer source code.

With momentum from the Spring 2010 online Working Group, the Critical Code Studies forum at HASTAC attempts to develop and practice the reading methods and interpretive moves that can be used to read code.

The forum consists of two separate but inextricably linked elements: a theoretical discussion (below) and a space for performing group analyses of user-submitted code snippets. The theoretical component surveys general issues of how CCS practitioners can approach code and produce meaningful readings of digital objects, while the code critique section is available for scholars to engage collectively in CCS. Investigating the theory and application of CCS:

  • What frameworks can the Humanities disciplines and their theoretical approaches provide for reading and interpreting code?  
  • What insights does the code offer the cultural critique of a digital object and what are our tools and methods for doing so? 
  • How does the programming paradigm affect the interpretive approach? Travis Brown has pointed toward the "hegemony" of imperative languages in current CCS examples. 
  • How can we use critical code studies in the classroom? What exercises have proven fruitful? What texts can we add to the bibliography? 
  • How do issues of race, class, gender and sexuality emerge in the study of source code? 
  • What is the relationship between Critical Code Studies and the disciplines that center on building code-based objects -- for example, computer science and programming, other sciences, digital art, et cetera?

We have also created a sub-forum to discuss actual Code Critiques.

    Please join us! All guests are invited to register at HASTAC and join the conversation.

    Hosted by HASTAC Scholars:

    Max Feinstein (University of Southern California) @crimestein 
    Clarissa Lee (Duke University) @normasalim
    Jarah Moesch (University of Maryland) @jarahmoesch
    Jean Bauer (University of Virginia) @jean_bauer
    Peter Likarish (University of Iowa)
    Richard Mehlinger (UC Riverside)

    Thanks to these guests who will be joining us:

    Stephanie August (Loyola Marymount University)
    David M. Berry (Swansea University)
    Wendy Chun (Brown University)
    Matthew Kirschenbaum (University of Maryland)
    Mark C. Marino (University of Southern California)
    Tara McPherson (University of Southern California)
    Todd Millstein (University of California, Los Angeles)
    Mark Sample (George Mason University)
    Jeremy Douglass (UCSD)
    Nick Montfort (MIT and Electronic Literature Organization)

    Twitter hashtag: #critcode. Follow us @HASTACScholars!

     

    Richard Mehlinger

    I approach Critical Code

    I approach Critical Code Studies from two directions. First, as someone who majored in computer science as an undergraduate and has read and written a good deal of code, I have an obvious technical interest. In this sense, I'm mostly interested in how we judge the aesthetics of code, how we understand code as we read it, and how it is written--in short, the more technical aspects of programming. These are extremely important questions given the increasing role of code in our everyday lives, and I believe that CCS holds out the possibility of being genuinely helpful to programmers. This is, to be sure, a fairly narrow, technical perspective, but it is one which I believe is important.

    I also approach CCS as an historian. I believe that methodological developments in the field of history since the so-called cultural or linguistic turn offer extremely valuable frameworks for understanding CCS. There is by now a well-established tradition within the field of history of treating non-traditional sources as texts--a practice which is itself drawn from cultural anthropology. This is exemplified in works like anthropologist Clifford Geertz's "Deep Play: Notes on the Balinese Cockfight" and historian Robert Darnton's The Great Cat Massacre, in which the authors unpack the cultural symbolism inherent in a local tradition and an unusual event, respectively, using what Geertz termed "thick description". I believe that this sort of hermeneutic analysis is highly relevant to Critical Code Studies. To understand the code, we must analyze the culture, traditions, practices, and discourses surrounding it. By the same token, CCS may allow us to reach new insights about the broader culture, not only that of programmers but of society itself.

    Finally, I am interested in CCS because it potentially offers fruitful methodologies for my own historical research. CCS is about analyzing texts and discourses which are constrained by rigid rulesets (the rulesets themselves, of course, are also texts). Code is not the only discourse where this is the case. Games and mathematical proofs both come to mind, as, to a somewhat lesser extent, does science in general. As an historian focused on games, the most pressing question I have is whether CCS can be generalized to other constrained discourses, and if so, how?

    jayyc24

    also interested in ccs

    Also interested in CCS, mainly for the reasons above. A strong background in computer science will do that to you, I suppose.


    Jarah

    embodiment

    My interest in CCS is at a meta level - particularly in the programmer's embodiment of the code. How does the person writing the code bring notions of the world into the code itself? How does a team of programmers impact everyday life? What happens to the body once the code is written? Once it is executed?

    Code for everyday digital technologies is (usually) written by a human, or a group of humans, who have preconceived notions about the world, their own common senses and knowledges, their own understandings of categories and values that enable them to do their work according to best practices and web standards. The categories, the code, and the programmer together function as surveillance, displacing those on the margins, rendering them invisible. 

    I would like to begin a conversation around these questions: how does embedded normativity within the code itself render the content inaccessible, or doesn't it? How are our physical bodies rendered in code? How do raced and gendered and sexualized inequities get carried over wholesale to digital spaces? What do we learn when we read the code that 'represents' us?

    What does it mean to read the code for the Facebook user profile? And what part do you read - the original code, the executed code, or the CSS? For example:  

    < body class="ego_page home hasLeftCol fbx safari4 mac Locale_en_US" >

    The Facebook user profile has a body class of 'ego_page', which describes the essence of the page's meaning: the profile you create is all about your own ego. You decide what information goes on there to describe your identity/ies; you portray yourself the way you would like to be seen. Or so it seems.

    BUT… the developers trust the user to write in their current city and hometown, political views, religion and even a bio into a blank text box. Sex is an option, but male and female are the only (problematic) drop-down choices. There is no gender option. Additionally, a user can only be interested in women and/or men, and can be ‘looking for’ friendship, dating, a relationship or networking. There are no choices for one-night stands or S&M, no choice of being gendered queer, or sexed as intersex.

    Facebook also gives users interesting choices for relationship status: standard ones such as married, single, in a relationship, engaged, widowed, separated, and divorced, plus ‘it’s complicated’ and ‘in an open relationship.’ These last two, presumably, are to allow people to define themselves outside the mainstream of sexual relationships.

    What I find interesting is that the ‘open’ relationship is a values-based relationship; there are many people in this country who would find it reprehensible, yet it exists. So why then is the sex category locked down as a binary, and why aren’t there any gender choices? Additionally, why is the primary relationship status based only on sexual relationships? While there is a ‘family’ category, it is not given the same visual opportunities for placement in the user profile, thereby reducing other non-sexual relationships to a lesser status. The contradictions between implied primary sexuality and enforced heteronormativity reduce the Facebook user profile to the historically representational instead of lived, actual embodiments.

    Would it matter if the body tag was called something else? What meanings are gained and lost through the word choice in the body class?

    Facebook developers also make choices about what code elements to include and what to exclude, in this case choosing to code drop-down menus instead of text boxes, thereby making choices about how information is collected (see my post here).

    The new open source social networking site Diaspora has chosen to use a text box instead of a drop-down menu to collect information about gender. By simply changing the input for gender into a text box, the individual developer made a space for those who are not normatively gendered. Her reasoning and the arguments against it are at the level of the code, through the common senses of the programmers themselves.
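    The difference is small in code terms but large in its effects. A purely illustrative sketch (in Python, and not Facebook's or Diaspora's actual code) of the same decision made two ways:

    SEX_CHOICES = ("male", "female")   # drop-down style: the world is enumerated in advance

    def sex_field(value):
        # anything outside the programmer's categories simply cannot be recorded
        if value not in SEX_CHOICES:
            raise ValueError("choose one of: " + ", ".join(SEX_CHOICES))
        return value

    def gender_field(value):
        # text-box style: stored as written, interpreted only by the humans who read it
        return value.strip()

    To the computer both are just strings passing through; only the first forecloses in advance what a person may say about themselves.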

    Interestingly, the arguments against gender as a text box are directly related to Mark Marino's explanation of operational code vs. data in Hello World:

    "the computer does not understand what it says. Literally speaking, the computer does not even interpret that code. When the function is called, the computer will print (output) the list of the two atoms (as symbolic units are called in Lisp) "Hello" and "World." The single quotation marks tell the computer not to interpret the words "Hello" and "World" (as the double quotation marks do in this sentence). With this distinction, language becomes divided between the operational code and data. The computer here merely shuffles the words as so many strings of data. It does not interpret, only uses those strings. However, those words in quotation marks are significant to us, the humans who read the code. "Hello" and "World" have significance, just as the function name "print" has a significance that goes far beyond its instructions to the computer and gestures toward a material culture of ink and writing surfaces." 

    In the case of the Diaspora gender text box and the Facebook drop-down sex menu, the information collected is in quotes, and only interpreted by the humans who read it, not the code that collects it. The code itself is always already socially interpreted by the humans who write it. What agency does this give the reader of the original code? the reader of the page after code execution? How are other 'common senses' and knowledges carried over into the code via the programmers and developers? How does this inform the actions and inactions of people using the site?

    plikarish

    obfuscation and elegance

    My academic background more or less precludes my participation in the theoretical side of things, so I'll delve right into my primary interests: obfuscated code and the production of pseudo-texts. Usually, obfuscated code does something incredibly simple, like redirecting from one website to another, but takes an incredibly roundabout way to do so. The "obfuscated" nature of the code is obvious to any half-decent programmer: it lacks any sort of "elegance" (believe it or not, this is an important concern for many computer scientists!). But how do you sum up elegance? You can only *really* tell by truly understanding how well the code accomplishes its task.

    So if elegance is all about how the code accomplishes the task, what do we make of code designed to do something really irritating? While brainstorming for the forum, I happened to mention that a lot of content-based spam filters can be avoided by creating "pseudo-texts" that resemble English but make little or no sense (I'm sure everyone's seen plenty of this in their inbox). What one can do is take a whole bunch of examples of "good" text (text that passes through the spam filter) and use a statistical model to string words together in a way that looks legitimate but is almost pure nonsense. This text is used to disguise the spam payload. Below is some text generated by a simple trigram-based language model trained on the text generated by the co-hosts of this forum and me (plus some well known CCS texts):

    "instead of elegance. lev manovich in the conversations in order to do the conversion. been re-reading old programming primers to get started on your hastac blog discussing your interest in critical code scholars. matthew fuller generously shared software studies collection forthcoming. code can never be executed. within ccs, and extending them. cultivating procedural literacy the obama campaign‚ who do modelling should be included. here is an understanding that the lisp list processing languages developed for artificial intelligence software does and how it's a new kind of contextual."

    What do you think? To me, it's almost a simulacrum of the actual conversation... Hopefully I've thrown enough out there to garner some interest. I think it's essential to remember that code is process not product... Can you analyze it in isolation from its product? Does it matter?

    As the forum progresses, I can go into some detail about how I constructed the language model, how it can be improved and I'll also provide some obfuscated code examples in the code analysis sub-forum.

    markcmarino

    collaborative dialogue

    Thank you for putting this wonderful forum together. Special thanks to Max, Clarissa, and all the co-hosts and guests and Fiona for your dedication.

    This is a wonderful opportunity to discuss Critical Code Studies within the community of HASTAC. CCS' goal of building collaborative dialogue between the humanities and computer science fits well within HASTAC's mission, and I am very impressed by the work of those in the scholars program, particularly in these forums. The CCS Working Group showed me the power of these venues and I am excited to see the conversation conducted even more in the open!

    To start the discussion, I submit an extension of the reading of the code of the Transborder Immigrant Tool, which I presented at MLA 2011 on a panel with Mark Sample and Jim Brown, called together by Jeremy Douglass and Matt Kirschenbaum. While working on the post, I found myself reflecting on the "arbitrary" in code studies.

    The "arbitrary" aspects of the code, from the perspective of the computer, include elements like variable names, function names, and in some cases even the order in which the code is presented.  People say, the computer doesn't care what those elements are called, and so raise an objection to critical code readings that overemphasis on these "arbitrary" elements can create readings that are purely subjective and not connected enough to the nature and context of the code.  As the introduction mentions, this is an area derived from Media Specific Analysis.

    These elements constitute just one aspect of what we talk about when we talk about code, but I suspect that this objection, at its core, is positioned against the act of interpretation itself. It grows out of a fear that the code is being treated like poetry (which we try not to do, unless it is poetry, as in the case of codework). But it also comes out of a fear that unrelated or perhaps irrelevant meaning systems are being applied to the code. 

    While I understand and appreciate the thinking behind this fear, I do not share it, because I see the work as developing tools for semiotic analysis that are attentive to difference while seeking out resonance, those meanings that are "encoded" in sign systems. This resonance exists in the gap between code as functioning units and code as an abstracted communication medium that accrues additional significance through its use and circulation, for example coding styles, beauty, even humor. So while I hear and heed the "don't treat my code as poetry or even text" comment, that does not keep me from examining the many meanings of the signs. 

    But, if it's all right, I'll let my post on the Transborder Immigrant Tool develop the rest and we can discuss here.

    Cathy Davidson

    Proud and happy . . .

    HASTAC was founded for many, many reasons but, I swear, one of the big ones was to help the field(s) get to a place where such a forum could happen. Thank you, everyone, for your hard work, even in getting around the problem of writing about code without sending our rickety website into hysterics. I think this is going to be a spectacular Forum and want to thank you all early on for the hard work that went into planning it.  

     

    Finally, a warning: as everyone in HASTAC knows, we are very unhappy with this site. The backend is a disaster. It's a very complicated site in any event, with lots of content (5000 content pages), and lots of interactivity was planned---but our developer didn't realize those potentials and, worse, the back end is deficient. We're building a new site. So we went into a Critical Code forum (as with the music forum too) with some trepidation, knowing there would be problems. And there have been. Special thanks to Fiona Barnett for her patience, to Ruby Sinreich for her wisdom and counsel, and to all of the Forum participants for their patience with a lot of wonky stuff. I promise you, it will continue to happen, no matter what kinds of workarounds (code JPEGs so we don't perturb the system??) we come up with. Thanks to all for your spirit of adventure against odds. 

    Michael Widner

    Perl Poetry

    I'm excited by this forum, in particular because I had never even heard of critical code studies until the idea for it was proposed... and I'm a programmer. One object of study that I think might be particularly interesting is the phenomenon of code poetry. Perl has a particularly rich history, with annual competitions, of people either translating famous poems into working code or writing poetry in code.

    The Perl Poetry Contest

    Perl Poetry on Perlmonks

    Zen and the Art of Perl Poetry

    On that last page, Zen and the Art of Perl Poetry, the author provides 7 or 8 different ways to write the "same" thing in Perl, that is, to express to the computer the same operation. It's in praise of the variability of Perl's syntax, the multiple ways of "saying the same thing" that, to our human eyes, will read far differently. This view makes clearer, to me, the possibilities of reading code not as simply functional commands for the computer to process, but as signifying language, albeit one with some stricter constraints than languages for communication among humans.

    For the Perl Poetry Contest, one entrant translated Yeats's "The Coming of Wisdom with Time" into valid Perl (apologies for the formatting that follows):

    "Though leaves are many, the root is one;


    Through all the lying days of my youth


    I swayed my leaves and flowers in the sun;


    Now I may wither into the truth"

    Becomes

    while ($leaves > 1) {
          $root = 1;
    }
    foreach($lyingdays{'myyouth'}) {
          sway($leaves, $flowers);
    }
    while ($i > $truth) {
          $i--;
    }
    sub sway {
          my ($leaves, $flowers) = @_;
          die unless $^O =~ /sun/i;
    }

    Perhaps this code snippet might be better in the sub-forum, but I think having the original text makes this legible. The main thing to know is that "sub" declares a subroutine, which is called from within the "foreach($lyingdays{'myyouth'})" loop, which itself implies a sequential array of numbered days (coders: I'm assuming it's an array reference with a named key in the hash). It also, oddly enough, adds another meaning. Whereas Yeats's words make "lying days" and "days of my youth" a compound that implies equivalence, the fact that here "myyouth" references but one type of "%lyingdays" suggests there could be many others. The code, then, adds an interesting ambiguity to the line absent in Yeats's original. Of course, when we think of translation, this sort of subtle shift in meaning is one we can predict will occur.

    Yet we're still left with a question: why write poetry in code at all? Why do coders think this is fun to write and fun to read? Hackers like to make things do what they weren't designed to do in the first place. Coding languages were designed to communicate with machines, not with other humans (that's what code comments and other forms of documentation are for, supposedly). But, as the existence of this forum makes clear, code *does* communicate among humans, both surreptitiously as it structures our interactions and explicitly among the coding communities. Perl poetry, then, takes this communicative act and seeks to make art of it, just as with any other language. After all, Robert Frost famously wrote, "I'd as soon write free verse as play tennis with the net down." The rules, to him, made much of the beauty; code has its rules, so why not its own beauty?

    Which leads to the idea of "elegant code", code so concisely, ingeniously, and beautifully written that once its purpose is understood, the reader-coder receives a shock of pleasure at the discovery. But this is a difficult and rare beauty to find, as the xkcd cartoon in the forum prompt suggests. There is something pleasurable about taking coding languages, designed for purely functional purposes, and subverting and redirecting them to a different primary audience. The computer doesn't care what it does (and often Perl poetry does nothing, outputs nothing, just spins cycles in the CPU), but for the coders who read it, that purposeful lack of functionality is what makes it beautiful. Perhaps, then, rather than only reading code critically, we can also use coding to help us read language in different ways, too.

    marksample

    Remembrance of Code Past

    @Michael - Thanks for bringing up the examples of Perl Poetry. As my critical code talk at the MLA showed, I'm committed to exploring what code does but does not say, and what it says but does not do. But I'm also fascinated by the less tangible, affective reactions code can produce. Yes, code provoking emotions. I'm not talking about the output of the program---say Janet Murray's famous example of the death of Floyd in Infocom's Planetfall, which made her cry. I'm talking about the code itself. I'm talking about lines of code as a kind of Proustian madeleine. Snippets of code that send us spiraling back into the past, fragments of code etched in our memory, a single token that we never can forget. If there is any doubt remaining that humanists should study code, let the subjective power of half-remembered residual code clear it away.

    In my own case, I'm thinking of a Perl Haiku generator that a dear, dear friend of mine wrote in 1997. I knew even less about code back then, but it was thrilling to watch as the program developed before my eyes. I'd sit around and brainstorm vocabulary words to add to the program's database, and then we'd stay up past dawn, hitting reload as the CGI script called up haiku after haiku, each one seemingly more absurd, beautiful, or perfectly sensible than the ones before. For a while I had both the Perl script and the word bank data file on my Windows 95 Pentium computer, and though I could never reproduce the word bank even if I tried for years, I would recognize it in a moment if I saw it again.

    1997 was years ago, though, and many computers ago as well. At some point I lost the program. It's gone. At some point I lost the friend too, who's around, but not around here. But my friend is forever linked in my mind to Perl, to Perl poetry, to that specific piece of code. Maybe, like my dear friend, the code is around somewhere, just not here. I tremble to think that on some abandoned VAX server there's a Perl program that, if I could ever find it again, would both ease my sorrow and break my spirit.

    maxf

    Mark, you said, "..I'm

    Mark, you said, "..I'm committed to exploring what code does but does not say, and what it says but does not do."

    I cannot help but think of Netwurker Mez, whose primary coding language, Mezangelle, is used to create code-poetry that typically does not compile. The plethora of literary techniques invoked in Mezangelle makes her work particularly accessible to people whose backgrounds tend to be less on the programming side of CCS. I like to think of her coding practices as closely related to obfuscation, though more in the literary sense than the functional sense, if you subscribe to that dichotomy. 

    For non-coders who are intimidated by the second C in CCS, or anyone unfamiliar with Mez's work, I suggest testing the waters with some examples of interpretive readings that have been performed on Mez's code-poetry. Rita Raley has performed a very insightful reading of Mez's work. Another interesting reading of Mezangelle comes from Talan Memmott, and can be found here. Lastly, Mez is the primary researcher and contributor to augmentology.com

    clarissal

    Code generating texts generating code

    Michael's comment here, and also one later by Mark Sample, got me thinking about a project I did recently and also of what Peter Likarish tried to do as a showcase for our forum here: the creation of a random text generator. But what I am even more interested in is understanding how code can be made to generate independent, elegant and meaningful phrases following a particular (diachronic) order that do not read like 'spam' sentences. I've experimented with that to a minimal extent when I tried to build the storyboard for a graphic novel that I am continuing to work on (see www.duke.edu/~cal33), and when I tried to envision, in a very limited way, how and what a machinic consciousness would articulate (if you visit that site, you will see a lot of problems with my way of demonstration, but think of it as a work in progress and continuous critique). I used javascript (that I patched from other sites) to generate random phrases that still shape into a narrative flow. I picked sentences that are synchronically rather than diachronically related. But if we are interested in enlarging the machinic role of 'digital' story-telling, what sort of algorithm can one then create to enable the construction of narratives that could be both diachronic and synchronic without needing micro-managing controls inserted into the code all the way? Is it even logical to think about an algorithm that can create an 'original' narrative through the use of training networks, or am I actually stretching beyond the limits of what code can do? My caveat is that I have a very limited knowledge of forms of programming languages (I particularly haven't used later generation languages, especially those that deal with textual lexicons), so I will probably resituate my thoughts after I've done so. But I would love to hear from digital artists and writers who have tried to use code to showcase their creative work, while also envisioning providing a degree of 'freedom' for their work to self-generate concepts by learning from the original material inputted by its human creator, and how code helps one do that.
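    For what it's worth, the mechanism I patched together was very simple. A rough sketch of the idea (rewritten here in Python rather than the JavaScript I actually used, and with made-up stand-in phrases): keep a pool of hand-written fragments that are synchronically interchangeable, and draw from it at random.

    import random

    # hypothetical stand-ins for the storyboard fragments written by hand
    PHRASES = [
        "the machine remembers a sky it has never seen",
        "a signal folds back into its own echo",
        "somewhere a counter increments and calls it longing",
    ]

    def narrate(n=3):
        # any random ordering still reads as a plausible narrative moment,
        # because the fragments were chosen to be order-independent
        return ". ".join(random.choice(PHRASES) for _ in range(n)) + "."

    print(narrate())

    The open question above is whether anything like this can be pushed past synchronic interchangeability toward a genuinely diachronic narrative without hard-coding the sequence.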

    Going off cadence a little, I would like to bring "A Thousand Plateaus" into the picture, especially in terms of territoriality, deterritorialization, the rhizome and faciality. I think about faciality as described by Deleuze and Guattari and how code is built and can be built to perform the differential recognition of bodies and entities (after all, we do write code to identify something, pull it out of a register or array, and then use or manipulate that data), of the codification and decodification of organic bodies. Codes can be said to fall under the category of bodies without organs - the BwO as a field of immanence of desire, where desire is a process of production without reference to any exterior agency. Can we consider the code to be a "formal multiplicity of substantial attributes that...constitutes the ontological unity of substance" (Deleuze and Guattari, 1987, A Thousand Plateaus 170)? I am sure there are others more expert in this area of philosophy who may have much to say about how code can reterritorialize a terrain (the simplest example being switching between a 'visual' or 'html' look in a blog entry).

    plikarish

    Code generating code, exactly

    Clarissa,

    I'm with you, this is a big interest of mine as well. I think the question you've asked, "Is it even logical to think about an algorithm that can create an 'original' narrative through the use of training networks, or am I actually stretching beyond the limits of what code can do?" is key.

    This may seem like a non-sequitur, but I think Google is actually all about exploring this question. Interestingly, they're going about it in a very different way. Traditionally, computational linguistics was about building a better model of language and the way it works. My limited historical knowledge of computational linguistics suggests that people felt that if they could create a sufficiently advanced algorithm, they could achieve originality. Google seems to be taking another approach, throwing what would've been an absurd number of resources at the problem (not that they aren't making use of the absurd number of PhDs they've hired!).

    It remains to be seen if this is the way forward, but their predictive search technology is downright eerie and their speech recognition is state of the art. Could they produce an original text? I doubt it, at this point. But I think Google has commercialized limited forms of AI in a way the field struggled to do for nearly half a century, so there's something to be said for it!

     

    Peter

    clarissal

    coding search engines

    Now that you mention Google: yesterday, I decided to do a comparative search on Yahoo and Google of my own name to see the sort of results that each of these search engines comes up with. Yahoo, I believe, has improved the way their search engine works, because of its ability to begin to search recursively (but I haven't tried investigating Yahoo's architecture further). I think that studying the code of search engines can help us understand meta-narratives through the form of meta-data they each aggregate, while also deconstructing the grand narratives of individual searches provided by these search engines. I probably need, like it or not, to pick up Perl proper sometime soon, in order to sift through text-oriented code such as that of these open source search engines: http://www.searchtools.com/tools/tools-opensource.html. However, I suspect that in the meantime, I can use my existing knowledge of other programming languages to help me. This links closely to the idea of the rhizome, faciality and territorialization in D&G. Now, this brings us back to the question of faciality. I'll try looking through some interesting snippets and paste them on the code critique section as soon as I am able. In the same way that literary scholars write papers sifting through the multiplicity provided by a 'literary' text, I think we can possibly advance new philosophical arguments through the comparative analyses of different codes. :) This forum is going to open up a real Pandora's Box!

    steveklabnik

    Perl monks refer to this

    Perl monks refer to this principle as TMTOWTDI, or "There's more than one way to do it."

    Interestingly enough, this principle has become one of the larger points of conflict between one of Perl's children, Ruby, and its competitor Python.

    Ruby embraces this concept of TMTOWTDI, and Python rejects it. These two languages are almost identical to each other in many, many ways, yet culturally the two groups do not get along, and this divergence in opinion over what makes for 'good code' is one reason why. From 'the Zen of Python':

    There should be one-- and preferably only one --obvious way to do it.

    A small disagreement ends up being a large divergence.
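    Even inside Python the tension is easy to demonstrate. A toy illustration (mine, not from either community's documentation): three functionally identical ways to build the squares of 0 through 9, only one of which the Python community would call the obvious way.

    squares_loop = []
    for n in range(10):
        squares_loop.append(n * n)                       # the explicit loop

    squares_comp = [n * n for n in range(10)]            # the "one obvious way"

    squares_map = list(map(lambda n: n * n, range(10)))  # the functional detour

    assert squares_loop == squares_comp == squares_map

    A Perl or Ruby programmer might read the three as legitimate stylistic choices; a Python programmer is taught to see two of them as noise.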

    maxf

    Zen of Python

    Thanks for that link, Steve. Here are some snippets from the manifesto that I find particularly interesting:


    Beautiful is better than ugly.

    Readability counts.

    In the face of ambiguity, refuse the temptation to guess.


    I think the question of "what code is good code" plays a potentially important role in performing code readings. For example, if we're examining a snippet of Python code and we notice a particularly disorderly couple of lines, we might speculate as to why the programmer(s) struggled at this particular place, but not others. On the other hand, a close reading of TMTOWTDI-influenced code might reveal a programmer's ingenuity, artistry, or other characteristics that may pique the reader's interest.

    The Zen of Python, IMO, boils the language down to a science, which is of course unsurprising since it's computer science anyway. What I find interesting is the contrast here between the attitude in the manifesto and TMTOWTDI. That open-ended principle gestures towards an artistic approach to coding, a paradigm that begs for a refinement of our definition of good code. So I'm wondering, can Steve or anyone else enlighten us as to what makes code 'good' in the face of TMTOWTDI?
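    (Incidentally, for anyone who wants the full manifesto rather than my excerpts: it ships with the language itself, and running the line below in any Python interpreter prints the whole thing.)

    import this   # prints "The Zen of Python" aphorisms to standard output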

     

    steveklabnik

    I'd say that 'good' code is

    I'd say that 'good' code is code that communicates effectively. The message and intent of the code should be clear. TMTOWTDI allows for the freedom to express exactly what you wish, to give code connotations and subtle meanings.

    plikarish

    TMTOWTDI

    I've been fascinated by this schism for some time.

    I have to admit, I'm firmly on Python's side. It's more a practical matter for me... I can read and understand Python almost intuitively. I have the hardest time understanding even the simplest Perl script. That said, I don't think of myself as a devout adherent to the Pythonic way of doing things...

    davidberry

    A Provocation

     

    There are immediate problems posed by an object, computer code, that is at once both *literary* and *machinic*. The first difficulty of understanding code, then, is in the interpretation of code as a textual artefact. It forms the first part of the development process: it is written on the computer and details the functions and processes that a computer is to follow in order to achieve a particular computational goal. This is then compiled to produce executable code that the computer can understand and run. The second difficulty is the problem of studying this *something* in process, as it executes or ‘runs’ on a computer, and so code has a second articulation as a running program distinct from the textual form. 

    The textual is the *literary* side of computer source code, and connects to the importance of *reading* as part of the practices of understanding source code. Indeed, it is important to note that programmers have very specific and formal syntactical rules that guide the layout, that is the writing, of code, a style that was noted in the memorable phrase ‘literate programming’ (Black 2002: 131-137). As Donald Knuth explained in his book *Literate Programming*, published in 1992:

    The practitioner of literate programming can be regarded as an essayist, whose main concern is with exposition and excellence of style. Such an author, with thesaurus in hand, chooses the names of variables carefully and explains what each variable means. He or she strives for a program that is comprehensible because its concepts have been introduced in an order that is best for human understanding, using a mixture of formal and informal methods that nicely reinforce each other (Knuth, quoted in Black 2002: 131).

    Knuth is also pointing towards the aesthetic dimension of coding that strives for elegance and readability of the final code – ‘good’ code is meant to be read. Good code, of course, does not imply good software, nor does bad code imply bad software. For example, we might speculate on the extent to which there is an implied reader of 'bad' code. Indeed, in some cases, the reader of bad code is only intended to be the computer (which will have a generally neutral position on style) and the writer/programmer herself - which may be linked to job security (good code means the programmer is easily replaced by another!). Further, the question of 'good' or 'bad' code is one that is interesting to deconstruct in its relationship both with a disciplinary ethic of the 'good' programmer, and with the assumption that good code *might* lead to good software. 

    The second side, the *machinic*, connects to *using* or *doing*, and to study it requires the ability to capture and examine a time-based medium, something that is processual, agentic and constantly in a mode of flow. This is the world opened up by breakpoints, probes, debuggers, injection tests, sandboxes, and alpha and beta testing. It is also the world of the hackers who are able to transform software back into code in order to subvert the function of the software in particular ways. 

    I therefore want to suggest, by way of a provocation, that it is useful to analytically distinguish between 'software' and 'code', such that we can think of code as the ‘internal’ form and software as the ‘external’ form of applications and software systems (for now I am bracketing out hybrid forms such as interpreted code, etc.). Or to put it slightly differently, code implies a close reading of technical systems and software implies a form of distant reading. Perhaps the most important point of this distinction is to highlight that code and software are two sides of the same coin: code is the static textual form of software, and software is the processual operating form of code. Of course, these are ideal types, which I suggest may be useful tools to think with, but I also want to argue that they capture something distinctive about code/software that treating them as synonyms fails to do. I also wonder if they point to interesting differences between critical code studies (close readings) and software studies (distant readings). Code and software are, perhaps, two *modes of existence*, in the language of Étienne Souriau, and perhaps digital humanists will use the appropriate mode depending on their research object and method?

     

     

    Michael Widner

    Fun with Breakpoints

    This is a good distinction, as anyone who's ever tried to code, only to have the computer do what it was told instead of what you thought you told it, will agree. The distinction makes more visible issues of open source vs. proprietary software, intellectual property, and hacking. The constant war between companies like Apple, which is notoriously secretive and closed, and the users who assume that, since they paid hundreds of dollars for a gadget, they actually own it and its contents, is waged by the hackers adept at reading hex, tinkering, and reverse engineering software until they can force open what Apple et al. want to remain closed and under their control: the code.

    Further, when we think of software, we cannot neglect the hardware aspect, the computer's embodiment of the code that becomes open to inspection for those with the knowledge to probe and understand what they find. I like the idea of thinking of computers as embodied; they obviously are, just not in ways immediately recognizable as embodiment to us.

    A lot of reverse engineering, moreover, derives from a trial and error process of probing motherboards and input/output ports to find out what the responses will be to different commands. The hardware, then, becomes a way to uncover the code, the leak through which the software, in its necessity of interacting with us in a physical way, makes itself available. (And this is why copyright protection, encryption, etc. will NEVER stop piracy, but only stop ordinary users from fully exploiting the abilities of their devices; if you eventually have to show an image/play a sound that humans can interpret, dedicated crackers can get at it, too, and disseminate the information.)

    markcmarino

    Code/Software distinction

    David,

    There's something very appealing about this dichotomy and, of course, like all dichotomies it deconstructs itself ultimately.

    I'd like to append a bit to this notion of software because there are two versions of the code in action: the moment-to-moment effects on the state of the machine and the collection of effects that the user experiences. How does situating software as a series of effects match with other theories of process expressed in software studies?

    Also, I'm guessing that some of the folks here would want to frame code and software in additional layers.  What are other reactions?

    steveklabnik

    Knuth's concept of literate

    Knuth's concept of literate programming has started becoming popular again, in a new form. This is largely due to the announcement of "Docco," which has sparked a revival of interest in hacker circles about literate programming and its worth.

    Interestingly, Docco is not actually a proper literate programming system, it's missing a proper system of 'macros' that Knuth considered vital. Macros allow for the re-ordering of the code so that the person reading a literate program can be exposed to the code in the order the writer felt was best, and not the order the compiler dictates. I wrote an article about Docco, literate programming, and this missing feature. In the ensuing discussion on Hacker News, Jeremy (Docco's author) stated that he believes that this is no longer a necessary feature, given that modern languages tend to be more relaxed about the ordering of code, and that the minor disruption in the exposition was worth the simplicity of not having the feature. You can see the effect this has on flow in my own article, where the "Some Setup" section has to interrupt the beginning of the text, due to requirements about the underlying Ruby.
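    For anyone who hasn't seen the style, the Docco approach is roughly this: ordinary comments carry the exposition and the tool renders them alongside the code. A made-up fragment (in Python, just to show the shape, and not taken from my article or from Docco's own documentation):

    # === Counting word frequencies ===
    # We read the text once, normalising case as we go, because the exposition
    # cares about *words*, not about how they happened to be typed.
    from collections import Counter

    def word_frequencies(text):
        # Note that there is no macro system here: the code still appears in the
        # order the interpreter needs, not the order the explanation might prefer.
        return Counter(text.lower().split())

    That missing ability to reorder is exactly what Knuth's macros were for, and exactly what Jeremy decided Docco could live without.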

    mkirschenbaum

    <!--opening thoughts-->

    If, as Emily Dickinson once wrote, there is no frigate like a book, then we might do worse than to start by saying there is no submarine like source code. (Okay, we *might* do worse than that, but it would be hard.)

    But like the fear of that silent menace beneath the surface waters, code is at this moment the source of enormous anxieties within digital humanities and humanities disciplines more generally. Should humanists learn to code? Do I have to code to be a digital humanist? Can Javascript count for my graduate program's language requirement? Is my campus courseware system open source? All of these are localizations of what I take to be a great deal of free floating anxiety around the humanities' engagement with computer code, a class of textual object alluring yet so foreboding.

    I start with these wider framing issues because critical code studies, it seems to me, must be about more than close readings of source code. Put another way, close readings of source code are necessary but not sufficient for critical code studies to find a purchase in humanistic discourse. Critical code studies is also the discipline | sub-field | modality best suited to guide our engagements with code as an instrument of critical and rhetorical expression, not to mention as a professional vocation. Critical code studies is, or should be, *precisely* positioned to intervene in a debate like what it means for a Ph.D. student in English to claim proficiency in C++ as opposed to French or Latin. (Not because a CCS scholar necessarily writes code, but because she's thought about code on something akin to its own terms.)

    My own first exposure to code came in middle school, where, as I’ve written elsewhere, as a child of privilege I attended a school district that had a classroom with Apples and IBMs. I learned BASIC and PASCAL, but even then it should have been clear that code was engaging a wider spectrum of textual practices than ex cathedra composition on the command line. There was copying and transcribing, for example, whether from a popular computer magazine or a textbook or the chalkboard. There was debugging, which sometimes took the form of proofing dot matrix printouts. There were exams, where we would demonstrate our knowledge of syntax by handwriting code and submitting it for a grade. Similar practices attend code-writing today, whether in the form of pseudo-code on the back of a napkin, sketches on a whiteboard, or code cut and pasted from the Web. Critical code studies has not, in my view, yet refined a sophisticated enough model of the thick *textualities* of code, preferring instead to treat code as an immaculate object on the screen, as when we are presented with a sample or snippet, its stark and unforgiving typography a latticework for critical exegesis.

    As respondent at the MLA session earlier this month on “Close Reading the Digital” organized by Jeremy Douglass, I remarked on the ease with which the idea of close reading had transmogrified into CCS. I also remarked on the tendency to naturalize or essentialize code as *the* authentic manifestation of the digital, problematic not only because critical code studies tends to confine itself to high-level languages that read out as closer to English than binary, but also because even source code relies on a layer of often sophisticated applications for its reading and editing. When we look at code in Emacs, or Eclipse, are we doing critical code studies or are we back to software studies after all? Kittler, of course, famously tells us there *is* no software, but by that stricture there is also never, ever, any code. Only circuits. And juice.

    I prefer to say that there is never *just* software, that software (and code) always exists within a human lifeworld that encompasses everything from those coffee-stained napkins to carpal tunnel braces to real estate prices in New Delhi. Not to mention the programmer, the coder, who has a body (Ellen Ullman's non-fiction essays in Close to the Machine still hold up very well here).  Code, I suggested at the session, was not "close"--micro--so much as precisely *macro,* an actant (after Latour) that exists only amid the layers of formal materialities marking complete computational environments, von Neumann’s or otherwise.

    So critical code studies, in my view, has a number of challenges and responsibilities before it. It must engage not only with the code “itself” (an empty formation if ever there was one), but with the anxieties around code, disciplinary and institutional not the least of them. It must expose the composition of code, and make unassailable its understanding of code as *writing,* a particular kind of writing to be sure, but an embodied and material practice as much as the scribe bent double in the scriptorium. Nor should basic bibliographical necessities such as establishing conventions for formatting and citing code, for example, or notating variants in code be beyond the purview of its practitioners. Perhaps above all critical code studies must attend to the heterogeneity of code, and deflect and dispel the tendencies to speak as though an interpreted language is the same as a compiled one, or a high level language the same as machine code, or markup the same as scripting. Or any of these as if they were also any of the myriad other encodings that govern our each and every interaction with digital processes that otherwise exist beyond the capabilities of the human sensorium.

     

    Richard Mehlinger

    IDEs and "Real Programmers"

    I'd just like to expand a little on something you wrote, Matthew: "I also remarked on the tendency to naturalize or essentialize code as *the* authentic manifestation of the digital, problematic not only because critical code studies tends to confine itself to high-level languages that read out as closer to English than binary, but also because even source code relies on a layer of often sophisticated applications for its reading and editing. When we look at code in Emacs, or Eclipse, are we doing critical code studies or are we back to software studies after all?" I think this raises important questions. First and foremost of these is whether we should read code in the environment in which it was written. On the one hand, we don't insist that English students read books in manuscript form. On the other hand, code is not the same thing as prose. I think there's another, slightly more problematic aspect to this, however, which is what appears to be the underlying notion that higher-level programming languages or IDEs are not, as it were, "Real Programming" (see also The Story of Mel, below).

    Michael Widner

    IDEs, libraries, collaboration, and the human factor

    As you mentioned elsewhere, claims of "real programming" are mostly ego-games. The best programmer I know does all his work in highly sophisticated IDEs and uses complex testing procedures and design patterns to build his code. He does this because he has long since passed the point where he can hold the entire structure of his works in his mind at once, which is what "real programming" requires. As the processing, interface, and other demands on software have grown, so have the tools. We, unlike computers, have a fairly hard memory limit, necessitating our reliance on good IDEs that make the relationships among the parts clear and on high-level languages that leverage powerful libraries and syntax to do a lot with only a few lines of code. I think one interesting thing these facts point to is that the limits are ultimately based on human cognition, not the computer, so again the human enters the equation. Increasingly this young forum is making me aware of what should have been an obvious fact: code is about people more than it's about computers. This forum has really made critical code studies seem far more important and meaningful than I would have imagined.

    But to return to your question about manuscripts (to which, as a good medievalist, I have to answer "yes, we *should* read manuscripts, at least sometimes"): we don't even think about the printing press and paper as technologies these days; they've become invisible technologies that, when we do literary criticism, almost never come into play unless we're specifically doing book history or material culture. Even then, those approaches have their own labels because we need something to distinguish them.

    mkirschenbaum

    I fear my reference to IDEs

    I fear my reference to IDEs has been misunderstood. I mentioned them not as a foil for "real" programming but to underscore that there are diverse and heterogeneous environments in which code is written, and that writing code always also involves an applications layer. I'm dubious one can separate CCS from software studies, as someone suggests above, and Michael, your description of a particular programmer's work practices shows just how hard it would be. But above all, my point is to ask us to reexamine any latent assumptions we may still hold that code, whether high-level or machine, whether authored in an IDE or a shell environment, gets us "closer" to any particular digital essence or even, as Ullman has it in the title of her book, any closer to the machine.

    markcmarino

    Getting Closer

    Matt,

    I feel like I appreciate your intervention on a theoretical level but not on a practical level. I (obviously) want people to be examining digital objects from as many sides as they can and, again, I have yet to find a good set of examples of critics looking at the code, which is why we keep adding these "code critiques" forums to places like this list and CCSWG.

    All of these approaches (software, platform, code studies) are developing interconnected methodologies for critiquing these digital objects.

    For now, I'm content to replace the question, "Does looking at the code bring us closer?" with "What do we learn when we look at the code?", with the understanding that, as you and others have noted, to look at the code is to look at it as it sits in the file, compiles, circulates, acts upon the hardware, interacts with the operating system, produces the set of effects David is calling "software," etc. 

    So it's not about getting closer to essence, though I have used the "looking under the hood" metaphor, but it is about looking at another relevant component of the digital object. (I don't mean to reduce the subtlety of the point here.)

    My recent collaboration with Jessica Pressman and Jeremy Douglass reading William Poundstone's "Project for Tachistoscope: Bottomless Pit" is about reading digital objects by looking at various aspects, including the text and produced object on the screen, the code, and visuals analyzed through what Jeremy and Lev Manovich have called "cultural analytics." 

    We believe it's time for more collaborative readings of digital objects using an assemblage of approaches.

     

    mkirschenbaum

    The "intervention" is all

    The "intervention" is all about opening up the kinds of questions critical code studies addresses itself to, whereas (and I think this was R. Grusin's point at the MLA session too) it currently seems to come to rest largely on close reading modalities.

    Beyond that, I don't think we have any disagreement. Certainly not in terms of "collaborative readings of digital objects using an assemblage of approaches."

     

    markcmarino

    close reading modalities

    Yes, and as I mentioned to him and have commented here, some of my work focuses on developing very particular kinds of moves on the code itself and these do grow out of close analysis of particular signs in relation to their context. (The panel was called "Close Reading the Digital," after all.) I suppose I don't see these as limited because I understand them to be emerging methodologies that could be employed in larger readings.

    I think problems arise when those particular techniques that I experiment with in very particular sections of my critical writing come to stand in for CCS more generally.  It's in these examinations that I keep asking myself: what can be said about this line of code, a question magnified in the multi-authored 10-Print project, where we are examining a program that is one line long?

    But this is a forum for performing CCS (perhaps most clearly in the "code critiques" section) and also scoping out CCS more generally. So these comments are quite helpful in directing our focus toward the larger subject area of CCS.

    I wonder, by your estimation, does framing the analysis as "code critiques" shift the focus too much toward these kinds of close reading?  If so, how might alternative threads (that examine particular encoded objects) be framed?  Or rather, how can the act of examining particular examples of code be framed so that it doesn't fall prey to what you are characterizing as "close reading modalities"?  Are there good examples of critical works that attempt this beyond the ones we've mentioned (specifically Fuller's collection)?

     

    markcmarino

    How to read code

    I will move to other topics soon, but I keep coming back to some of these questions raised by Matt.  The theories of code and computing behind them are fascinating.

    Can we start to build/model readings that follow this call for addressing these contingencies and the situatedness and the partiality of code in the "code critiques" section?

    Patsy Baudoin

    Yes, thanks, Mark. I would

    Yes, thanks, Mark. I would love to actually see this in practice, too. I think Matt's right on about contextualizing the production of code, for instance, but I'm wondering how this would spell itself out.

    mkirschenbaum

    Just have time for a quick

    Just have time for a quick comment right now, but here's a concrete suggestion: make a Subversion archive the basis of a code critique, rather than the one-off code snippets that are typically the centerpiece of the activity.

    In literary textual editing, the analog would be dispensing with the so-called copy text, the "authoritative" text to which other versions stand in relation as mere variants. This move was accomplished theoretically by the late 1980s, and has since been practically realized in the form of digital text analysis and collation tools.

     

    davidberry

    Political Economy of Code

     

    I think that one way of doing this would be through a political economy of code. We could consider particular kinds of 'Regimes of Code' that have existed, aligned with the social and economic contexts which have acted as conditions of possibility for certain kinds of regime. For example, before IBM was forced by the US government to unbundle its hardware and software in the 1960s, there was no software industry to speak of. Equally, the same issue arises with intellectual property being applied to code, particularly copyright, and later to business processes (in the US). We might also reflect on the Microsoft unbundling action, again by the US government but also by the European Union, which forced Microsoft to move onto the back foot.

    In the interests of debate I offer these very underdeveloped 'regimes' which might be a starting point for theoretical development:


    Pre-1969 The Regime of Machine Calculation (code was subservient to hardware and large corporations) Paradigmatic Language: Assembly, COBOL

    1970-1983 The Regime of Machine Logic (Unix, VAX/VMS, servers, etc.) Paradigmatic Language: COBOL, Ada, Pascal, C++

    1984-1995 The Regime of Personal Computing (rise of PCs, the killer app, Microsoft, Apple, game consoles etc) Paradigmatic Language: BASIC, VB, Pascal

    1996-2005  The Regime of the Internet (rise of the browser, HTML, etc) Paradigmatic Language: Java, Perl, PHP, Coldfusion, Flash (ActionScript)

    2006-present The Regime of Social Computing (rise of Web 2.0, social graph, real-time streams etc) Paradigmatic Language: Asynchronous Javascript, Java, C++, Perl, Flash

     

    (this is just a starting point, but it would be useful to tie in data (XML/JSON) but also frameworks (Ruby on Rails/jQuery) etc.)

     

     

     

     

    Patsy Baudoin

    That seems like one

    That seems like one productive angle, David. Thanks. Are there any good historical treatments of any of these "regimes," as you call them, that I could delve into more deeply? Or even a history of the political economy of the computer/computer software industry? Thx.

    davidberry

    Starting point..

     

    I've borrowed the concept of Regimes from Parisian Regulation Theory, using the concept of 'Growth Regimes' and here adapting them to code. There are a few good history texts around but a good starting point is Campbell-Kelly:

     

    Campbell-Kelly, M. (1995) ‘Development and Structure of the International Software Industry, 1950–1990.’ Business and Economic History 24 (2): 73–110.

    —— (2004) From Airline Reservations to Sonic the Hedgehog: A History of the Software Industry. London: MIT Press.

    Richard Mehlinger

    Periodization

    While I find your periodization interesting, I'd be really careful about trying to assign each period paradigmatic programming languages. I think we'd be better off simply focusing on the software being written--and being written about--during those eras. In short, I really am not sure how helpful this periodization is for CCS; it seems more focused on software studies. I do think, however, that the rough divisions make sense. I'd also suggest that the first and second browser wars and the rise of Mozilla represent important milestones.

    davidberry

    Code or Software

     

    Firstly, these Regimes are ideal types that allow us to look at the structures that facilitate different kinds of writing (and running) of code. They allow us to explore the conditions of possibility for certain kinds of written code, not unlike paying some attention to the materiality of certain kinds of printed works. Writing in VI or VED and writing in an IDE like Visual Studio are dramatically different experiences. Indeed, the sheer amount and complexity of code that can be written is greatly increased by the Fordist mass-engineering support later IDEs provide.

    Secondly, I think it is clear that there are paradigmatic languages in certain periods, and these can affect other languages. For example, C++ changed many procedural languages, such as Visual Basic, which had 'object'-like features added to them. And Visual Basic, although many thought it messy and typically Microsoft, was very easy to learn and was the first language many people experienced. Indeed, often these paradigmatic languages are taught in schools and have a certain longevity in terms of how people think through code.

    It's common when drawing up ideal types to criticise them for their seeming simplification, but that abstraction is their function: to draw attention to a higher level of analysis when making comparisons. We might think of the advantages of a hermeneutic reading of code that relates the parts (close reading of fragments) to the whole (Regimes that provide the conditions of possibility and help explain why certain coding conventions are being followed). After all, we should be aware that coders are not hyper-individualists, authors creating code ex nihilo. Rather, they emerge from and follow practices within a particular historical, social and economic context. This, I would argue, is an important part of what critical code studies should be concerned with.

    Software studies, on the other hand, is much more divorced from the particularities of the code that made it. For example, comparing Photoshop and Illustrator, it doesn't matter (at least to some extent) which language they were written in, if the focus is on, say, the human user interface, or the functionality of the software. That is not to say code has no relation to understanding software; indeed, Emacs is in some senses better understood with reference to Lisp. But if we follow Manovich's notion that software studies examines the 'softwarization' of culture, and that it is his own ideal types, that is the five principles of new media, that he uses to do the analysis, then code as textual *source* is backgrounded. Again though, these are analytic distinctions to help us study code and software, not ontological categories set into the fabric of the universe :-)

     

     

    davidberry

    Code all the way down. ;-)

     

    I completely agree that we should 'replace the question, "Does looking at the code bring us closer?" with "What do we learn when we look at the code?"'.

    I think this is especially pertinent when it is emphasised that there are multiple levels to studying code, as code, whereby one might continually open the layer below in order to understand code. But that doesn't mean you are getting 'lower' or 'closer' to the metal; indeed, you may actually be moving into different levels of code abstraction or through code libraries. Rather confusingly, the way in which code is structured and can #include other code means that multiple libraries, frameworks and namespaces are often being used in a very networked/bricolage way.
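
    A trivial, hypothetical sketch of what I mean (in Python rather than C's #include, purely for illustration): "opening the layer below" usually lands you in another library or namespace, not on the metal.

        # layers.py -- a made-up illustration of layered, networked code.
        # Following the imports means moving through libraries and namespaces,
        # not descending toward hardware.
        import json             # standard-library layer, itself partly implemented in C
        import urllib.request   # pulls in http.client, socket, ssl -- a small network of modules

        def fetch_regimes(url):
            """Fetch a (hypothetical) JSON list of 'regimes of code' from a URL."""
            with urllib.request.urlopen(url) as response:   # urllib -> http -> socket -> OS
                return json.loads(response.read())          # json -> its C accelerator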

    We might say that we should not just be looking at the 'effects' of code, but also asking how we use code. Code, of course, can also be a tool for thought, as much as a functioning mechanism in its own right. Here I am thinking particularly of design patterns, and object-oriented design.

    lanxle

    Pedagogy

    This is an amazing forum, so thank you to the hosts.  I look forward to learning a lot here.  This forum feels incredibly intimidating to someone like me who does not work on code.  I hesitated to contribute at all, but realized that perhaps as a former learning game designer, I did have something to contribute as a person who once had to learn how code worked in order to do her job.

    I took a workshop course with Nick Montfort at MIT, where he attempted to get a group of students who knew almost nothing about code to a point where we could walk into a discussion like this one and follow the conversation in a meaningful way.  The moment when lightning struck for me was an exercise where we learned BASIC.  He introduced the history of BASIC as a textual program that early gamers could take home, type into their computers and play.  Then Nick passed out to each of us a unique program, had us attempt to understand what it would do, and check our hypothesis by typing in and executing the program.  I found this exercise immensely eye-opening, because the syntax(?) of BASIC allowed me to finally see that code was a very concise series of instructions to be executed in response to a variety of inputs.  For me, the language of "arrays" and "conditional loops" and "compilers" obscured the textual nature of code. More so than jumping straight into a format like Processing, the BASIC exercise made me finally confident enough to attempt writing my own code, no matter how simple.

    I would definitely recommend beginning with something like BASIC, which you first encounter on paper, as a way to introduce the concept of procedural thinking.  Fear, for me, was a bigger obstacle than anything else in this process.
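
    Something like the following tiny listing (a made-up example, and in Python rather than the BASIC Nick used) captures the spirit of the exercise: read it on paper, predict what it will do, then type it in and check your hypothesis.

        # A hypothetical "read first, run second" exercise program.
        # Try to say what it prints before executing it.
        for number in range(1, 11):
            if number % 2 == 0:
                print(number, "is even")
            else:
                print(number, "is odd")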

    jabauer

    You've written a great

    You've written a great corrective to how we often approach teaching people programming.  We really don't want this forum to be intimidating so please keep coming back!

    markcmarino

    Welcome

    Thank you, thank you, for joining the conversation.

    I hope that others who are out there feeling similarly intimidated by the long posts or the ramparts of code-speak might brave the waters.

    We have various calls to look at MORE than just the source code, but we also want to talk about how to introduce the practices of CCS into classrooms, and in this case particularly humanities classrooms of non-programmers.  Mark Sample talked about having his students explore the code of SimCity/Micropolis in his MLA 11 paper.

    Here's another exercise people can try:

    Exercise in examining a digital object with some CCS tools:

    Take a look at the source code of a web page, even just the markup (HTML, XML). See what you notice that might not have been apparent through your experience of the rendered page.  See what catches your attention.  See if there are traces of the historical, material context out of which this page was developed, such as previous versions, browser specifications, keywords, lines that have been "commented" out, and of course, any comments. Look at the meta-data.  See how the content has been distributed across various files, including Cascading Style Sheets, JavaScript files, et cetera.  What content is static and what content is dynamically generated?  Look at how the same code is rendered differently in different browsers, on different platforms.  See what happens when you swap out the content of the page for other content.  Does this page appear to be made "by hand" or with a program such as Dreamweaver?

    Although you are performing this exercise on a "mere web page," this is a good beginner exercise in looking at another dimension of a digital artifact.
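
    For those who want a slightly more systematic way to gather that raw material, here is one hypothetical starting point (the URL is a placeholder), sketched in Python, that pulls down a page's source and lists its comments and meta-data:

        # Hypothetical helper for the exercise above: fetch a page's source and
        # surface traces the rendered page hides (comments, meta tags).
        import re
        import urllib.request

        url = "https://example.org/"   # placeholder: substitute a culturally relevant page
        html = urllib.request.urlopen(url).read().decode("utf-8", errors="replace")

        print(re.findall(r"<!--.*?-->", html, re.DOTALL))       # commented-out material
        print(re.findall(r"<meta[^>]*>", html, re.IGNORECASE))  # meta-data, e.g. generator tags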

    Then, for Discussion:

    What is the relation between the form of the code and the content of the page?  What signs of the historical composition of this code appear and how does that affect our understanding of the page? What choices represent conventions of the moment when the page was made and what choices represent divergences?

    It might help to choose a web page that has obvious cultural relevance, such as a political website.  And you could pair such an exercise with complementary readings from a book such as Liz Losh's Virtualpolitik.

    I sometimes do this exercise as a comparison between early static web pages and dynamically generated web pages. Using the Internet Archive, you can also compare the pages used for the same site at different times.  Then you can begin to ask about the changes, why they were made, and how they affect the overall website.

    Again, there can be many qualifiers here for the purists that this is not really code, software, etc., but I find this exercise quite useful as a starter for non-programmers.

    Question:  What similar or other exercises have you tried in your classes?  How could such an exercise be modified or developed to include some of the concerns that David, Matt, and others are raising?

     

    SEAugust

    Do computer scientists need to know how to study the humanities?

    In response to the question “Do Digital Humanities scholars need to know how to code?” I pose two other questions as a computer scientist:

    - What do computer scientists need to know to collaborate with colleagues in the digital humanities?

    - What do digital humanists need to know about code?

    The ability to build a strong bridge between digital humanities and computer science depends upon efforts of each discipline to understand the other.  Yet conferences on critical code studies are attended primarily by large numbers of digital humanists eagerly examining software and a small number of computer scientists who often seem uncomfortable immersed in discussions of humanities.

    It is relatively straightforward to answer my second question and build the bridge from the DH end. My question echoes Mark Marino’s call to talk about how to introduce the practices of CCS into classrooms. Program design, IDEs, program structure, syntax, semantics -- the more digital humanists know, the more perspectives they can use to analyze code. The first question, however, is equally important and far more challenging.  What does it take to build the bridge from the computer science end? Why should we be interested in CCS, what can we contribute, and what tools or knowledge do we need to decode the language of humanists?   

    The idea of studying software the way I used to analyze poems by Pushkin or novels by Dostoevsky as an undergraduate major in Slavic Languages is intriguing.  Looking back on how or what data is represented in programs can provide insight into our beliefs. But without insight into the issues that concern humanists and the paradigms they use to analyze literature, much of the CCS discussion will remain abstruse.  How do we address that problem?

    Richard Mehlinger

    Obfuscation and the Story of Mel

    "The Story of Mel, a Real Programmer" remains one of my favorite short stories, and I think it's worth a read for anyone interested in CCS. I also think it raises interesting points about elegance, minimalism, and obfuscation. Mel's code is so minimalist as to be incomprehensible by mere mortals, but whereas we normally think of elegant code as that which is extraordinarily, lucidly comprehensible, Mel's code is elegant not simply in spite of but because of its painstaking, incredulity-inducing hand optimization. 

    Furthermore, I think it suggests why things like the IOCCC exist in the first place: it offers a way for programmers to show off, to demonstrate their cleverness and, by extension, their superiority. Being able to write code like this, code which is simultaneously elegant yet incomprehensible, becomes a form of peacocking. I wonder what other people's thoughts are on The Story of Mel, and the reason why some programmers seem to go to such great lengths in pursuit of elegance.
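
    For non-programmers, here is a small, purely illustrative taste of the distinction (this is not Mel's code, and it's Python rather than his hand-optimized machine code):

        # A classic "clever" move: swapping two values with XOR, no temporary variable.
        # Prized on memory-starved machines and in obfuscation contests alike.
        a, b = 6, 9
        a ^= b
        b ^= a
        a ^= b
        print(a, b)   # 9 6

        # The plain, lucidly comprehensible version.
        a, b = 6, 9
        a, b = b, a
        print(a, b)   # 9 6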

    Post-script: Amazingly, Mel was (is?) in fact a real programmer: http://en.wikipedia.org/wiki/Mel_Kaye.

    jabauer

    In Pursuit of Elegance

    Ok.  That was always going to be my title, even before I read Richard's post directly above.  I almost changed it, but what the heck.  Here we go.

    One of the older jokes about programming states that every great programmer suffers from the following three sins: laziness, impatience, and hubris.  Laziness makes you write the fewest lines of code necessary to accomplish a given task.  Impatience means that your program will run as quickly as possible.  And hubris compels you to create code that is as beautiful as you can make it.  These three criteria - length, speed, and elegance - are the benchmark for evaluating code. 

    But what makes code elegant?  One of the first things you learn in a programming class is that (in most languages) the computer will completely disregard any white space beyond the single space required to differentiate one part of the statement from another.  However, in the next breath, your instructor adjures you to follow indentation guidelines and fill the eyespace of your code with enough blank space to make a Scandinavian graphic designer drool.  So your code ends up looking rather like an e.e. cummings poem, with lots of random space, oddly placed capitalization, and sporadic punctuation.

    Of course, that is the perspective of someone who is not used to looking at code.  The indentations draw the eye to nested components (loops, subroutines, etc.), the capitalization signifies variables or other important components of the program, and the punctuation stands in for the myriad mathematical and logical operators absent from a QWERTY keyboard.
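
    A small, made-up pair makes the contrast concrete (in Python, where indentation is actually enforced, but the point about spacing and naming holds):

        # The same computation twice: first compressed, then laid out for human eyes.
        data = [3, 8, 5, 12, 7, 4]

        total = sum(x * x for x in data if x % 2 == 0)   # terse, but opaque at a glance

        even_squares = []
        for value in data:
            if value % 2 == 0:                  # indentation draws the eye to the nested test
                even_squares.append(value * value)
        total = sum(even_squares)
        print(total)                            # 224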

    I believe the fear Matt Kirschenbaum discusses above comes in part from the visual strangeness of code.  It just looks weird and impenetrable.  The mantra embraced by too many programmers, "It was hard to write, it should be hard to read," doesn't help the situation either.  Academics don't like feeling stupid (especially once they've left their graduate student days behind them), and the seeming impenetrability of programming syntax makes them feel that way.

    Of course it's not the academic who is stupid, it's the computer.  People who have little experience with how computers actually work often miss this critical distinction.  The "thinking machine" does not think.  Like Mark Sample's now lost haiku generator, the computer has no vocabulary we do not give it.  And as Mark Marino points out, as far as the computer is concerned, even those words are completely devoid of meaning.  This gives the programmer an extraordinary amount of power, but within the constraints that everything must be broken down into components so simple even a computer can work with them.

    My hope for Critical Code Studies, a field I have only just become acquainted with while helping to create this forum, is that by analyzing the thick textuality of code and the highly social, highly contingent environments in which code is generated, we can find better ways of explaining code to those who are afraid of it.

    As a historian of Early American Diplomacy who spends much of her day designing and building databases, websites, and data visualizations, I find myself constantly trying to allay the fears of my less technically trained colleagues.  Yet there are crucial connections between the work of programmers and humanists.  I think the link may lie with aesthetics.

    This brings us back to laziness, impatience, and hubris.  Speed and brevity were virtues of necessity in the early days of computer science.  Early computers had very little memory or processing power.  Even an efficient program could take hours to run, an inefficient one weeks.  Also, if the program was too long it could not be stored on the hard drive.  The vast amount of memory and processing power on even a budget home computer has made these restrictions all but obsolete except in the case of very small devices or very large data sets.  Yet these criteria continue to have great psychological power, not unlike a great professor's ability to reduce the complexity of a historical event to the essential points her students will remember, or the identification of previously unrecognized leitmotifs which draws an author's body of work into a new stylistic whole.

    The virtue of elegance comes straight from mathematics, which to me suggests that it is built into the very fabric of the universe.  We all recognize beauty in some form.  Sometimes the best way to understand a foreign culture is to determine what they value as beautiful and find in it the beauty that they perceive.  The elegance of code is bound up in structure, process, and product.  The better we can explain it, the more accessible code will become.

    jabauer

    I'm really young

    So I got a call from my father this morning after he read my post.  He lovingly reminded me that he has been building databases since before I was born and is thus in a position to remember exactly why brevity was such a virtue.  It pre-dates hard drives and goes back to the days of punch cards when you had an upper limit of 80 characters to a line.  I stand (well actually I'm sitting right now) corrected.

    Richard Mehlinger

    A Most Darke and Terrible Vision, or, What Code Should We Study?

    I think one of the questions CCS will need to grapple with is: what code is worth studying? As I pondered this problem, a nightmarish vision opened before my eyes. I saw thousands of digital humanities graduate students, wasting away in the prime of their youth as they pored, day after day, over thousands, nay, millions of lines of poorly written code for things like sound card drivers, heat sensors for toasters, and digital circuit placement algorithms in CAD software.* I saw their souls flagging as each churned out hundreds of pages that no one save their dissertation committee would ever read. Verily, I saw a Great Beast of shadow and flame rise from his pestilential abyss, and watched as it devoured their pathetic souls, sentencing them to wander the wastelands in sackcloth and ashes for all eternity, hoping against hope that someone might recognize the crucial importance of the Epson NX 410 printer drivers to understanding the postmodern condition. I heard the screams of the damned, the wailing and gnashing of teeth, saw the pitiless, eternal fire, smelled the stench of the brimstone. And then--darkness.

    I exaggerate, but only slightly. Probably trillions of lines of code have been written in human history, and most of that code is frankly not going to be that interesting. That is not to say that mundane code can't be worth studying--as any historian could tell you, "mundane" documents can be vitally important sources. But again, given the extraordinary amount of code that has been written, CCS needs to have some criteria for determining what code is worth looking at. But what? I'm very curious to hear others' thoughts on this.

    *My hardware-specialist roommate suggested this last one.

    **Boo, it won't let me post a long s. :(

    markcmarino

    Needle in a HASTAC

    Love this apocodecalypse!  Don't worry, I'll be assigned to that level of Hell.  We can hang out.

    One answer that I use, in light of these conversations: look at the code of the digital objects you already want to study.

    Another answer: Look at the code that computer scientists find worth talking about.

    The 10-Print answer is look at the code that fascinates you.

    At MLA, Jeremy likened this problem to Critical Legal Studies -- where the scholar stands before the abyss of laws.

    What are other answers?

    jabauer

    Define Interesting

    As the person heading up the Code Critique section, I particularly appreciate this concern.  I spent an incredibly long time looking for snippets that I thought might lead to interesting conversations; we'll see how well that worked . . .

    My initial thought was to look for code that programmers found beautiful.  There is even a book out called "Beautiful Code," edited by Andy Oram and Greg Wilson: http://www.amazon.com/Beautiful-Code-Leading-Programmers-Practice/dp/0596510047

    Unfortunately, the examples I looked at did not strike me as immediately, visually interesting.  I emphasize the visual component, because I wanted code that would appeal to people even if they don't know how to code (I threw out some very elegant solutions written by a friend of mine for the same reason).

    Of course, if you are writing a paper then you can unpack your code example any way you want.  But I chose my two offerings (and many thanks to Richard, Peter, and Clarissa for helping me out with such great snippets of their own) in the following way.  Perhaps this will serve as a warning for others:

    I wanted to start a conversation about code as art, which led me to Processing, a language I know and have found fascinating for a while now.  There is a website called OpenProcessing where artists post their digital sketches along with the code used to generate them.  I know from past experience that this website is shot through with tree generators.  So I looked at about 30 sketches until I found the one I wanted, one where the code was as clean and stark as the trees it created.

    Then I wanted to do something with Perl because, perhaps of all the languages I have worked with, it places the highest value on brevity over clarity.  I looked at books and websites on Perl until I came across a single line that redirects a website.  I immediately started thinking about all the little ways that mini-programs like that shape our experiences behind the scenes.  So I threw that one up as well.
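
    For the curious, here is the general shape of that gesture, sketched hypothetically (with a placeholder URL) in Python CGI terms rather than the Perl line I actually posted:

        # One common CGI idiom for a redirect: emit a Location header followed by
        # a blank line, and the server sends the visitor elsewhere.
        print("Location: https://example.org/\n")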

    This is a much longer response than I meant to write, so my apologies.  Any thoughts?

    WendyChun

    software art--perl interpretations, many questions...

    hi all,

    perl poetry was brought up earlier, so i wanted to share this wonderful example:  london.pl by mongrel

    http://www.mongrel.org.uk/?q=londonpl

    although software art precedes CCS (mez etc.), one question i have is this: to what extent does critical code studies also create its object?  does critique of code enable a space for critical interventions within code?  (this, i think, is a more productive variant of: should CCS engage working on non-working code).

    also, agreeing with matt and with mark at the same time, to what extent should the mission of CCS also be an engagement with the limits of code?  that is, with what makes code possible and with the changes in the meaning, definition, and use of codes?  should critical code studies also engage hardware algorithms and hardwired logic?

     

     

    markcmarino

    Calling forth code consciousness

    although software art precedes CCS (mez etc.), one question i have is this: to what extent does critical code studies also create its object?  does critique of code enable a space for critical interventions within code?  (this, i think, is a more productive variant of: should CCS engage working on non-working code).

    I'm not sure, but I suspect that as programmers become aware of the broader audiences for their code, audiences that Jeremy Douglass was identifying in Week 2 of CCSWG -- lawyers arguing cases about copyright, pundits discussing source code and comments in climategate, and critics examining code in CCS -- their code writing will be impacted.  Such attention is already given their code internally by project managers, other programmers, and the higher-level architects, but also in such unintended forums as The Daily WTF.  On this forum, programmers post the headaches they run into on a regular basis trying to deal with other people's code, posting with a sense of ironic, despairing disbelief. (Allow me to go grab an example.)

    In the case of the Transborder Immigrant Tool, I know that the collective is made up of artists attuned to the effects of their signs.  That their code might reflect some of this artistry is not surprising to me, and I believe those aspects were in the code even before they released it.

    So I do think, Wendy, you are right that critical interventions in reading code could ultimately have an impact.  From my conversations with computer science faculty, I have found that this self-consciousness, this sense of an audience beyond themselves, is a large part of CS curricula.

    markcmarino

    CODESOD

    Apologies for the long excerpt here.

    Again, this speaks to several questions: Who is the audience for code?  How do programmers encounter each other's code?  How does source code become an occasion for more discourse about and critiques of culture, style, methodology, the workforce?

    So, the CodeSOD section is the one you should check out on The Daily WTF. It's fascinating.  The forum features snippets submitted from allegedly real code.  The genre for the setup looks more like the Reader's Digest joke sections than a programming manual -- although those can be lighthearted, too.  In fact, the structure of reader-submitted "true life" stories from the programming trenches lines up with Reader's Digest quite nicely. There's almost always a set-up about a character who either creates or happens across the code, written in narrative form.  However, unlike Reader's Digest, the pages I've glanced at seemed to average 100-200 comments, where readers evaluate the WTF-worthiness, notice other mistakes or infelicities of the code, defend the code, relate their own stories, et cetera. Very "Inside Baseball."

    Take a look at this post entitled "Feng Shui": (I chose one that should be relatively easy to appreciate without much explanation.  It does again deal with variables but there are plenty of other code examples....)

    When Mike's manager asked if he'd like to take a stab at doing some maintenance on the Freight Calculator, naturally, he agreed.

    After all, like many manager requests, it's not  like he really had a choice in the matter, though Mike sorely wished that he did.  The Freight Calculator, a MS Access/VBA "app", was previously maintained by a former fellow developer named Trent who had moved on to supposedly greener pastures a few weeks earlier.  Like many in the department, Mike often heard Trent cursing under his breath any time he walked past so his departure was not at all unexpected.  Though Trent was the only developer who had delved into the inner workings of the Freight Calculator, it already carried a reputation throughout the department as being a nightmare application to support.  A reputation that Mike soon witnessed as being well earned. 

    Absent of any outside oversight or input, its functionality seemed to more closely resemble a cup of noodles than a "calculator".  Though knowing the history second-handed, Mike didn't necessarily blame Trent for how the logic flowed, but one detail that did stick out.  The variables were often given incoherent and inconsistent names, and in many cases, vowels were dropped from names where they should have belonged.  Maybe he felt that by not using vowels it would be a cost savings for the company Mike chuckled to himself

    Mike was all set to do a global Find-and-Replace on the scripts until he opened a module named simply "Variables".  Now, the fact that this file was overflowing with literally hundreds of global variables is a true WTF, but instead, much like finding a beautiful field of flowers growing near Chernobyl, what he saw was actually...beautiful.

    [Here's a link to the post where you can see what Mike saw.]

     

    Comments in the forum reacted with a tongue-in-cheek awe at the beauty of this file, or debated whether the code was a duck or a vase.  Just another (among hundreds more) example of the human audiences of computer source code and how they receive it, circulate it, and respond to it.  When Richard calls for anthropological work, I'd say, look here, too...

     

    marksample

    What Code Is Worth Studying?

    The question of what code is worth studying is not a very interesting one to me, except in the inverse. That is, coming up with "criteria" for what code is worthy of scholarly attention would tell us more about our own technological and ideological idiosyncrasies than anything about the code itself. Indeed, establishing criteria for determining inclusion in code studies would be anathema to both the humanistic and scientific pursuit of knowledge.

    In short, whatever code we decide is worth studying is worth studying.

    Patsy Baudoin

    The Politics of Code

    Thanks to everyone for this opportunity to learn and participate. My code-reading skills are elementary; I don't program. I come to this to understand better the sorts of variables I need to think about when I question the complexities of what goes on under the hood.

    I want to apprehend the political fabric and the layering of normative features, human factors, histories, aesthetics, humor, pleasure, etc. that all of you are discussing here. I don't think looking under the hood works very well unless you can differentiate the carburetor from the spark plugs from the dipstick and know what interdependencies exist among them (or not), and so I appreciate even the theoretical distinctions that are made to help tease things apart, even when they must be brought back together to make sense. This forum is a wonderful window on the workings, effects, and identities of code.

    To my colleagues and friends who wonder why I care, I remind them that I've learned about many cultures without knowing their languages (haven't we all?), even though, as a former language teacher, I'd certainly advocate for learning the language as a direct means of entry into any culture.

    I'm looking forward to the rest of the convo over the next weeks. Thanks, again.

     

    Richard Mehlinger

    It's the economy?

    I've already suggested cultural anthropology and thick description as one possible framework for looking at CCS. There are others, though, that come to mind. The first is Marxism. I am not myself a Marxist--the teleology is just too much--but I think a Marxist approach might provide one valuable framework for looking at code: it would attend to the economic conditions of programmers, and more specifically to the nature of code's mode of production, as well as to any conflict between that mode of production and the social superstructure. Most code, after all, is written as economic activity rather than as a hobby or for leisure--though there seems to be a curious slippage between the two.

    At the same time, any attempt to employ Marxist methodology needs to be tempered. With increasing amounts of code being written in non-European countries--most notably India and China--we should be very wary of naively accepting Marx's claims that his theory was a universal framework for all human societies. Dipesh Chakrabarty's Provincializing Europe has some excellent discussions on the problems of applying Marxism as a universal, to cultures outside of post-Enlightenment Europe, even though Chakrabarty himself remains committed to the Marxist critique of capitalism. Furthermore, we need to recognize that most Marxist thought developed in industrial societies, well before computers were either powerful or prevalent.

    Jarah

    situated knowledges?

    As I read through the comments in this forum, I am struck by the varied knowledges, viewpoints and 'thinkings' that comprise our current understanding of CCS. 

    And, I have found what seems to me, at least, like a central theme…  which 3 different thoughts from 3 different people (whose names all begin with 'M' - but this is not the theme!) will perhaps begin to help identify:

     

    Matt  Kirschenbaum says: 

    "Critical code studies has not, in my view, yet refined a sophisticated enough model of the thick *textualities* of code, preferring instead to treat code as an immaculate object on the screen, as when we are presented with a sample or snippet, its stark and unforgiving typography a latticework for critical exegesis."

     

    Michael Widner says: 

    "code is about people more than it's about computers"

     

    Mark C Marino says: 

    "For now, I'm content to replace the question, "Does looking at the code bring us closer?" with "What do we learn when we look at the code?" with the understanding that, as you and others have noted, to look at the code is to look at as it sits in the file, compiles, circulates, acts upon the hardware, interacts with the operating system, produces the set of effects David is calling "software" etc. etc. "

     

    All of these seem to point to the same place: that we cannot do a close reading of code without taking its relationship to the world into serious consideration.

     

    Which part of the world (whether it is the IDE, the actions/reactions of the software/code, or the people and histories that place that code in a 'common sense' standardization/best practice) depends on where you come from as a humanist.

     

    So (obviously) what you do outside of the code matters in terms of how you treat the code. Therefore I can bring my own tools (for me, queer theory, art and feminisms) as well as use the ones provided by CCS.  This, I think, points to, as Mark called it, an assemblage of approaches.

     

    So my question is: even if you are new to CCS, what tools can/do you bring to CCS? How can they help us to understand and read code better, or differently? How are you situated in relation to the code?

    clarissal

    A smart-ass response

    How can and do we situate code in a cyborg? If we are thinking of the bionic man or woman of the 1970s (or was it the 80s?), imagine coding malware to cause a malfunction of the inorganic parts melded to the organic flesh, or code that can communicate with biogenetic cells (and this is where Parisi's abstract sex, and the organic codeme of DNA, comes into play) to create a hybrid entity that breaks the boundary between artificial and natural. We think of code as artificially constructed, but as many science-fiction films and TV shows like to speculate, what if we could program using the language of 'Life'?

    I am thinking also of Sadie Plant's Zeroes + Ones: Digital Women + The New Technoculture

    http://vodpod.com/watch/5311167-tedtalks-amber-case-we-are-all-cyborgs-n... :)

    markcmarino

    Digital Objects of Study

    Excellent question, Jarah,

    Yes, what theoretical approaches do you use and suspect might engage well with code? And let me tack on another question:

    I'd like to open up this conversation for those who do not read computer source code or don't feel comfortable looking at code:

    Who among you critiques a cultural object that has accessible code that you haven't brought into the analysis yet but would like to? Perhaps we can take a look.

    alenda

    Hand Code Only: These Shorts are Not Machine Readable

    I hate to pull out the old etymology standby, but I think it might allow us to get at some of the complexities that get subsumed when we use the unitary term “code” or, for that matter, “procedure.” Non-programmers and programmers alike are probably too quick to lump together the hundreds, if not thousands, of programming languages in use out there today (though clearly, a handful dominate, as evidenced by these O’Reilly graphs).

    As Mark Sample suggested in his MLA presentation, depending on the language, or more accurately the level at which the language is expected to operate (closer to the machine or closer to “human legibility”), code may reveal some things and obscure others. The folks here seem content to include scripting languages like Perl or an artist-friendly language like Processing, so hypothetical purists beware (though I doubt we’d find them here).

    I imagine that, like everything else in our lives, the term “code” conjures different images and associations depending on your background. Even if we limit ourselves to those associations strictly related to computers, some of us might imagine the garbled concatenations of letters, symbols, and numbers popularized in Hollywood tech scenes—the ropy, green strings of The Matrix, cluttered, rapidly flashing screens of impressive-looking text blocks—while others might be conditioned to think in terms of their first coding languages. Berkeley computer science undergrads, raised on a diet of LISP before tackling Java and C++, inevitably moan about the horrors of parenthetical usage, and to this day, I handle semicolons somewhat gingerly, remembering the dire consequences of omitting them as line terminators in Turbo Pascal.

    But enough about code. Let’s take the term “procedure,” since I think that many of us here, myself included, have used Ian Bogost’s notion of procedural rhetoric as a platform for the study of digital objects (this is more apparent in the MLA presentations linked in the initial forum description than in the CCS discussion here). The OED gives this general definition first:

    The fact or manner of proceeding with any action, or in any circumstance or situation; the performance of particular actions, esp. considered in regard to method; practice, conduct. Also: the established or prescribed way of doing something.

    Grossly paraphrased: doing things, how we do things, or how we usually do things. Notice the relatively minimal emphasis on procedure as generating a result, something that has been key to its usage as the basis for a methodology. Then there’s the computing-specific definition later in the entry:

    A set of instructions for performing a specific task, which may be invoked in the course of a program; a subroutine.

    Again, a task may or may not be performed, and the focus is less on the task than on the performance. In this definition, we also shed the linkage with custom characteristic of the term’s long history in other disciplines, including the law, medicine, and politics. And to be unfairly nitpicky, we are usefully reminded that procedure and program are not coterminous entities; many programming languages support both procedures and functions, but “functional rhetoric” would have us bolting for the doors.

    So to belabor what may be obvious, a procedure is only a small part of an overall program. Complex programs contain numerous procedures, and the only defining features of the procedure are that it begins, ends, has parameters, and does not return a value (I leave it to better minds than mine to read into that).
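
    A throwaway illustration of that distinction (a hypothetical example, and in Python, which blurs the line, rather than in a language like Pascal, which enforces it):

        # In Pascal terms: a *procedure* performs its task and hands nothing back,
        # while a *function* returns a value for further use.
        def log_greeting(name):
            print("hello,", name)     # acts, returns None -- procedure-like

        def square(n):
            return n * n              # returns a value -- function-like

        log_greeting("alenda")
        print(square(4))              # 16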

    I grant you, procedure can refer to the entire code system as an instructional document, but as Mark Sample has pointed out, some of the most interesting features of code are not even those that are machine readable, for instance, documentation that exists outside of officially executable code that reflects the metacommentary of the programmer or programmers, often a rather human effort to render one’s code intelligible to colleagues assigned to other aspects of the overarching structure.

    So perhaps we are using the term “procedure” too loosely. Big deal, right? Procedure, for most of us, still successfully invokes the sense of repetition, recursion, and incremental progress that seems to accurately characterize our experience of the digital. But opening out to the broader senses of procedure as well as narrowing focus to the discipline-specific definition allows us to reconnect the overly tidy concept of procedurality with the mess of bodies (“She’s going in for a pulmonary procedure!”) and sociocultural institutions (“Follow these eighteen procedures if you want your tax refund.”), while returning some of the nuance to code as a multilayered, embedded, and let’s face it—extremely dependent entity. As my housemate quips, if you run lines of code through a copier, you get a piece of paper.

    Sorry to shift the discussion from “code” to “procedure”—but the latter has been something of a personal bugbear. While I find “procedural rhetoric” very useful, I’ve had my reservations about using the term to distinguish something like video games from other media, because clearly, procedurality exists all around us, and as enticing as it is to brand code as uniquely performative, code cannot be the only language that “does what it says” (if we are to believe C.S. Peirce… though I can’t remember the last time I christened a ship). At any rate, to relinquish performance and execution to code with hardly a whimper would relegate most of us humanist-types to paddling around in vapid circles on our own little representational ponds, and that’s no fun at all.

    Some very interesting case studies here. Good work on the forum!

    Patsy Baudoin

    CODE: etymology

    The etymology of code (in English) is codex (= Latin, book of laws), via the French code. That's according to the OED.

    Not a big leap at all to instructions for performance in the computer realm.

    davidberry

    Where does 'code' come from?

    Cramer argues:

    In computer programming and computer science, “code” is often understood as with a synonym of computer programming language or as a text written in such a language… The translation that occurs when a text in a programming language gets compiled into machine instructions is not an encoding… because the process is not one-to-one reversible. That is why proprietary software companies can keep their source “code” secret. It is likely that the computer cultural understanding of “code” is historically derived from the name of the first high-level computer programming language, “Short code” from 1950 (Cramer, quoted in Fuller 2008: 172).

     

    http://en.wikipedia.org/wiki/Short_Code_(computer_language)
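
    A quick, hypothetical way to see Cramer's point about non-reversibility: disassemble a trivial routine and note that the surviving instructions carry none of the source's commentary or layout.

        # Compilation is not an encoding: the translation discards comments and layout,
        # so the source cannot be recovered one-to-one from the result.
        import dis

        def vat(total):
            # this comment, and the reasoning it records, never reaches the bytecode
            return total * 0.2

        dis.dis(vat)   # prints the compiled instructions; the comment is gone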

    alenda

    code and secrecy, necessary bedfellows?

    Patsy and David, thank you for addressing the etymology of "code," which I never did get around to... I'm fascinated by the Cramer, for going against most standard notions of code, e.g. its evolution from Morse code and Braille in Charles Petzold's Code. Petzold is happy to use the American Heritage Dictionary definitions:

    3.a. A system of signals used to represent letters or numbers in transmitting messages.

       b. A system of symbols, letters, or words given certain arbitrary meanings, used for transmitting messages requiring secrecy and brevity.

    4. A system of symbols and rules used to represent instructions to a computer...

    Richard Mehlinger

    Cryptography and Computers

    If we're looking at the etymology of code, it's worth remembering that computers were developed as part of military projects during the Second World War to crack the Germans' Enigma code, a project with which Turing himself was deeply involved.

    markcmarino

    code enigmas

    Richard, how might that affect the way code gets situated? Can you trace out the implications of this connection?

    Richard Mehlinger

    Some speculation

    Well, first let me say that I only know the basic history of the Allied computer projects during and after World War II; I'm not an expert in that, and I'm certainly not an expert on the way the word "code" has been used over the past 200 years. Still, I think that the connection is certainly interesting, and I think we can at least make some constructive hypotheses.

    The first and most obvious connection is the double-meaning of the word "code". Formulated as simply as possible, a code either represents or obscures data. Generally speaking, computer code is code in the former sense.

    Second, the code developed for the first computers--code of the first variety--was developed specifically for breaking codes of the second variety, namely the allegedly unbreakable German Enigma machines.

    Third, computers and by extension computer code were first developed in an atmosphere of extreme secrecy. The Bletchley Park project was of the most vital importance to the Allied war effort, on a comparable level to the Manhattan Project.

    Fourth, the enemy secrets uncovered by these computers at Bletchley Park could only be used with the most extreme caution; had the Allies used information from Enigma too freely--by doing things like, say, sinking every one of Rommel's supply convoys in the Mediterranean--the Germans would have deduced that Enigma had been cracked, switched to a different code system, and then the Allies wouldn't have been able to read Hitler's mail. Neal Stephenson's novel Cryptonomicon gives a rather amusing albeit somewhat fictionalized discussion of this particular problem.

    Fifth, for decades after the war, the government, and in particular the national security sector, was the primary source of funding for computer research, which was primarily done by large corporations (and possibly universities; I'm not sure when they started to get in on computer research--late 60s?). The greatest example of this, of course, is ARPANET, which if I recall correctly was originally intended to be a secure, nuke-proof communications system--more secrets.

    Sixth, for decades, the code itself was obnoxiously difficult to read, write, and debug, adding yet one more layer of obscurity onto a discourse already deeply shrouded in layers of secrecy, which were only gradually pulled away.

    So, what do we have here? Computer software was developed in an atmosphere of extreme secrecy, primarily for the purposes of revealing other people's secrets, the knowledge of which itself had to remain secret, in a language which was itself incredibly obscure. 

    Now, again, I don't have the expertise to prove anything, but I can certainly suggest plenty. I think all of this secrecy may have had a major impact on programmer culture. I think the most obvious place to look for this is in obfuscated code. But there's also the old-school, macho programmer attitude (thankfully dying out) that "If it's hard to write, it should be hard to read." There's the interest in the open-source movement--which we might interpret as a reaction against the secrecy in which the computer was born. Speaking more broadly, I think the double meaning of code, as that which represents and that which hides, could well be an idea that influences programming in a diffuse, uneven manner, popping up in the oddest places.

    Again, this is basically just speculation on my part. But it feels like there may be something to it.

    PS: I just saw your second comment after I finished this one... I think there's definitely something there too, but I'll need to sleep on it.

    davidberry

    Code and Conflict

    Mehlinger wrote: I think all of this secrecy may have had a major impact on programmer culture. I think the most obvious place to look for this is in obfuscated code. But there's also the old-school, macho programmer attitude (thankfully dying out) that "If it's hard to write, it should be hard to read."

     

    It seems to me that this has less to do with 'macho attitude' and everything to do with a culture that is continually under pressure from management and the logic of profit to become deskilled. This is a common refrain among 'craft'-minded programmers and management theorists alike. The term 'code-monkey' reflects this. After all, there are both men and women who program in highly obfuscated ways, and I don't think it is something that will be going away soon.

    I think it is therefore perfectly rational for employees (Labour) to seek to obfuscate their code, remove comments and use complicated structures to safeguard their job security. It is also perfectly rational for management (i.e. Capital) to seek to unpick these by the use of formal methods (UML, Z, etc.), project planning, peer-review (e.g. agile programming) and other techniques of control and power. 

    This conflict lies at the nexus of code production in corporations. We can, of course, speculate the extent to which open-source software, which is generally complicated to work with, works from source code, and values the normative principles of freelance, high-pay, short-contract consultancy, is actually the language of an elite within the programming industry (the Uber-programmers).  Berry and Evans (2005) discuss this notion of an aristocratic class of programmers who seek to universalise their claims here

    Jarah

    and...bringing the programmer into the matrix

    Thanks for such a great historical emplacement Richard. War, military, secrecy and macho culture really help to situate code in time and space, which for me, is a necessary layer in reading the code itself. 

     

    While I agree that in this history the code is either obscuring or representing data, if we expand the emplacement, as Mark does in his comment on your post, to include the 'programmers' (Alan Turing specifically), the code becomes both imitative and tongue-in-cheek, changing not only the history, but potentially how we read code.

     

    Alan Turing played many roles in the histories of code: as a decorated war hero (for his decoding work), as a convicted 'criminal' for his 'homosexual activity,' for his government-enforced estrogen 'therapy' which caused physical changes to his body, and of course, for the Turing Tests, which have been foundational in artificial/machine intelligence.

     

    His gender test was coded code: one gender 'passing' for another (through imitation and replication), as a control for the intelligence tests, which then turns into a machine 'passing' for human.  Judith J. Halberstam (1) says it best:

     

    "Gender, we might argue, like computer intelligence, is a learned, imitative behavior that can be processed so well that it comes to look natural. Indeed, the work of culture in the former and of science in the latter is perhaps to transform the artificial into a function so smooth that it seems organic" (Halberstam,443).

     

    So, the very act of detecting authenticity in code requires an always already queering of the history, the programmers, and the concept of machine intelligence itself.

     


    Notes:

     

    (1) Halberstam, Judith. Automating Gender: Postmodern Feminism in the Age of the Intelligent Machine. Feminist Studies, Vol. 17, No. 3. (Autumn, 1991), pp. 439-460.

    changed

    @Jarah et al: I'm a little

    @Jarah et al: I'm a little out of my element here, but I want to extend the thought threads here.  I do think it really important to highlight the various "assemblages" (to borrow the running metaphor from above) that inform, deform, and conform the specific case of Turing.  I would like to take up both @richard and @davidberry in their address of "macho programmer attitude" to sift through the various cultural, political, and material conditions, for lack of a better word, in which Turing lived, worked, programmed, created code, and became 'encoded' himself.  Granted, it may not have been "macho" per se, but I cannot help but insist that technology of a certain sort, particularly engineering and all things military, has been gendered, as are those that practice it and make it.  And granted, the manipulations of code certainly had to do with 'security' or pragmatics, but these too are impinged upon by gendered (and by extension sexualized) and national logics.  I like the direction of this particular thread, though, because it does invite thinking about code as not neutral and about the many ways code assembles a person or a body or a way of knowing.  Turing's imitation game plays with cultural and philosophical codes (the Turing Test, which I have been working on, bears intimations of more than just gender and humanness but racialized and sexualized understandings of what counts as humanness, too).  Turing's sexuality and arrest point up social and legal and national codes about public/private/civic behavior, morality, and national security.  And Turing's subsequent "treatment" raises the ugly spectre of pathology as code, biology as code, and cure as "changing" the code.  Granted, these are all glosses, but it would be interesting to consider how these conditions "queer" our understanding of code (and vice versa).  Thanks for letting me splash a little in the pond.

    Richard Mehlinger

    Some Cautions, and a question

    You make excellent points, Edmond, but I'd like to posit two cautions. First, as I said, I am not an expert on Bletchley Park and the Enigma-cracking project, but I seem to recall that it was a fairly atypical military environment. Given what it was doing and who worked there, I certainly wouldn't be surprised if it was. So, without researching it further, I can't speak to exactly what that environment may have been like or exactly how it may have been gendered. Second, I've talked a bit about the "Real Programmer", machismo, &c. That is hardly representative of all programmers or all programming, though -- it's just one particularly distinctive strain. My guess is that it emerged later, perhaps as late as the 70s, as advances in technology began to make programming computers simpler.

    I also have a question for you: How would you tie Turing's work, persecution, and death to code in general and CCS?


    More on Turing

    The biographical accounts of Turing's time at Bletchley do seem to paint a generally convivial (if that's possible during war) atmosphere.  I was speaking, with risk, of a generalized attitude toward technology/engineering, then and now, that has predominantly tracked men into these domains.  Granted, a number of women worked at Bletchley (see http://findingada.com/2009/08/the-women-of-bletchley-park/), but this went against the norm.  I was also speaking about the ways even terms like "programmer" and more obviously "hacker" have been gendered as masculine.  Even the distinction of "real" versus "not real" programmer as measured by the difficulty of the programming language or technology is (potentially) a gendered metric.  Turing's legacy -- though not to overdetermine him as a figure -- does provide a way to consider the technocultural interventions we can make into domains like CCS.  In other words, computer science or "code" or technology are not separate from or immune to culture, politics, even bodies and desires (and, as I said too patly above, vice versa).

     

    markcmarino

    code enigmas

    Since this comment got posted twice, allow me to flesh out the question:

    Turing's work with the code breakers of Bletchley Park and his provocations in "Computing Machinery and Intelligence" have always struck me as having affinities. The judges in both versions of the imitation game in the Turing Test, the gender and the human imitation games, try to detect authenticity in code -- and hence (re)produce the conditions of authenticity. Decoding here reinscribes cultural norms.

    I can't help but see our readings of code as taking place behind a similar curtain.

    ebuswell

    Secrecy and representation, necessary bedfellows?

    Just a quick and completely speculative reply---I don't know this history as well as I should like.

    So it seems that the order of things is something like:

    1. An idea of a collected body of writing (a codex)

    2. Applying that idea to the transition from oral to written law (the codex of Justinian)

    3. Repeating, during the Renaissance, the idea of the codex in the idea of the French code.

    4. [Something---apparently, either we study the beginning or we study modernity]

    5. As a metaphor for the body of legal work, we have the idea of code as substitution ciphers.

    6. The idea of substitution is reapplied in information theory, where code means representation.

    7. At the same time, the idea of a body of legal work is transferred to the instructions inside a computer.

    When we take a longer view of things, I think that we'll find the idea of code as cipher is more off the main track of development than code as computer instructions.  But you can always see #4 for details on that :-).

    The idea that runs throughout all of these is the self-containment of text; text that is all comprehensive and no longer needs to reference the world outside of it.  I'm thinking of Zizek's interpretation of Hegel in the preface to *Sublime Object*: Hegel, Zizek says, far from concluding the section on Absolute Spirit with the Idea ultimately consuming and retaining its other, wants the Idea to begin to leave the other alone (I'll leave Zizek's metaphor out of this...).  The Idea finally is complete in itself, and so it no longer needs the other at all.  This development is in many ways what is common to all of these forms of "code".

    Code is something that moves away from representation proper and ends up in substitution.  I think it would be a stretch to say that language "encodes" the world---one of these sides is not a signifier, and so the substitution has to fail, while the representation succeeds.  But with code as codex, law, with all its cross citations and internal consistency, we approach the substitution of text for text.  In the cipher, we realize that substitution.  In code as information theory conceives it, there is no problem, as the world is already conceived of in terms of recognizable differences, i.e. information, that are simply altered from one form to another.  And lastly, in the case of computer code, when we try to chase it to something solid, code evaporates into assembly code, assembly code into machine code, machine code into microcode, microcode into logic gates, and logic gates back into the VERILOG code that defines how they're supposed to work.

    Michael Widner

    Comments all the way down

    This is meant as a reply to: http://www.hastac.org/node/26981/17440#comment-17440

    But I hit the wrong button. Anyway...

    It's interesting that we would even classify this as code. It's more an idea of code, imaginary code that would, if it were ever really written, use the public address system to "calculate the air displacement needed to represent the public scream... and transmit the output." But, of course, there are only 4 actual lines of code, and they wouldn't run. The rest of the "program" is entirely comments intended for human reading.

    At least, that's what the version you found is. I tracked down this more complete version, though:

    http://scotoma.org/notes/index.cgi?LondonPL

    Here's a particularly poignant bit of the code:

    # The average daily scream output of fear for the period 1792-2002 is 6.

    my $TotalDaysLived = ($DeadChildIndex->{$Index}->{Class}->{LifeExpectancy} * 365)

    # Calculate the gross $Lung Capacity For Screaming for this child

    my $LungCapacityForScreaming = &Get_VitalLungCapacity(\%{$DeadChildIndex->{$Index}}) * $TotalDaysLived;

    # asign to $DeadChildIndex->{$Index}->{ScreamInFear}

    $DeadChildIndex->{$Index}->{ScreamInFear} = $LungCapacityForScreaming;

    }

    The rest of it tries to estimate the vital lung capacity of children and figure out how long it should play a sound file to represent the centuries of screams. 

    This is really a beautiful example you've found of code mapping bodies for a purposefully political meaning. Counting dead children, trying to represent their lung capacities in numbers... it's all so chilling.

    And he has a number of other programs on that site, like this one:

    http://scotoma.org/notes/index.cgi?War

    This is code art, isn't it?

    Richard Mehlinger

    DEFCON

    This is straying a bit from CCS, but reading this I can't help but think of a haunting little game called DEFCON. For those of you unfamiliar with it, the game simulates global thermonuclear war, directed from a secure, deeply buried command bunker and played out over an old, WarGames-style vector map of the world. The game takes about 45 minutes to play. The goal, simply put, is to nuke as many enemy cities as possible while protecting one's own. The game awards two points for every million enemy citizens one kills, and subtracts one for every million of one's own that dies. At the end of the game, a score screen comes up, listing how many millions of each player's citizens were killed, and how many survived.

    I have never played a more haunting game in my entire life.

    ebuswell

    Periodization

    This is supposed to be a response to http://www.hastac.org/node/26981/17453#comment-17453 -- I thought I hit the reply button, but maybe it didn't take.

    So much to think about in this forum.  For now, just two quick things:

    1. I'm troubled by the idea of "Regimes of Code," as that seems to imply that *code itself* is a less historical and contingent object than it actually is.  Even if one limits oneself to the time period when something identifiable as code certainly existed, to properly understand the power structures involved you have to go way outside what we would usually understand as code.

    Actual computer languages---I will assert---exist as one feature of a particular attitude towards signification.  It is not a coincidence that from the 50s to the 70s, along with the widespread development of computer languages, we also saw the development of Chomskian linguistics, fiat money, and our modern insane financial system.  These are all part of the same type of developing attitude towards language, its proper form and role, and its perceived power and ambiguity/disambiguity.

    2. Maybe rather than "paradigmatic" language, we should say "hegemonic" language, as there are profound power games being played out in the creation and obsolescence of programming languages.  There have certainly been a succession of hegemonic programming languages.  But most examples have to do with power struggles mostly irrelevant to our attitudes towards code, I would say.  C is a good language, but it ruled because it was ubiquitous on Unix, and Unix was ubiquitous.  Once the Xerox PARC ideas took hold and everyone was developing GUIs using object-oriented programming, this was added on to C, and we got C++ ruling the world.  But we shouldn't make too much of the language.  If one examines the Linux kernel source, for example, though it is written in C, it is totally full of object-oriented techniques and ideas, and it would look just as foreign to a C programmer of the late 70s as Ulysses would look to a 17th century reader; that reader would understand (or be baffled by) a lot of Modernism without having to read Finnegans Wake.

    davidberry

    Rational coding?

    Mehlinger wrote: I think all of this secrecy may have had a major impact on programmer culture. I think the most obvious place to look for this is in obfuscated code. But there's also the old-school, macho programmer attitude (thankfully dying out) that "If it's hard to write, it should be hard to read."

     

    It seems to me that this has less to do with 'macho attitude' and everything to do with a culture that is continually under pressure from management and the logic of profit to become deskilled. This is a common refrain among 'craft'-like programmers and management theorists alike. The term 'code-monkey' reflects this. After all, there are both men and women who program in highly obfuscated ways, and I don't think it is something that will be going away soon.

    I think it is therefore perfectly rational for employees (Labour) to seek to obfuscate their code, remove comments and use complicated structures to safeguard their job security. It is also perfectly rational for management (i.e. Capital) to seek to unpick these by the use of formal methods (UML, Z, etc.), project planning, peer-review (e.g. agile programming) and other techniques of control and power. 

    This conflict lies at the nexus of code production in corporations. We can, of course, speculate about the extent to which open-source software -- which is generally complicated to work with, works from source code, and values the normative principles of freelance, high-pay, short-contract consultancy -- is actually the language of an elite within the programming industry (the Uber-programmers).  Berry and Evans (2005) discuss this notion of an aristocratic class of programmers who seek to universalise their claims here

    Braxton Soderman

    Formalized Methods of Code Interpretation

    For me, when approaching CCS, the recent move seems to be calling for formalizing readings of code (Mark Marino on EBR concerning CCS's last working group: “CCS needs to attend to the work of formalizing some procedures for analyzing code.”), but I don't see it yet. Maybe we need more examples of readings before theoretical formalizations can occur (but then why call for such formalizations...)? Maybe such a call for readings is what this HASTAC discussion is about? But it seems clear that some forms have appeared here, and in the recent history of these discussions that seek a disciplinary foundation for CCS:

    Algorithmic Critique: Focusing on a particular algorithm used within a program and critiquing its assumptions, limitations, political ideologies, etc. Scholars talk about this a fair amount in terms of critically analyzing simulations and their models (particularly when these models are used to make important political and policy decisions that will affect the future). Mark S.'s analysis of Micropolis could serve as an example in the context of CCS. Or, Mark M.'s analysis of the functionality of a past, historical algorithm used to identify terrorists, presented at SLSA in 2007 (“Encoding Terrorism: Applying Critical Code Studies to Command And Control Code”).

    Textual Readings: Focusing on the language of the code from a textual perspective which understands that source code circulates in different ways and becomes one layer of meaning which can be interpreted according to the “langue” from which it is formed. Such readings can give us insights into the “assemblage” of layers which compose the digital object which the critic analyzes (its code, its effects when executed, its platform, its software requirements, Genette's notion of paratext, etc.). So Mark M. looks at the code of the Transborder Immigrant Tool and detects the notions of dowsing and witching -- two concepts which might not be readily apparent to someone analyzing or using the tool without looking at its “source,” thus opening analysis into new, interpretive directions. One can work to extrapolate meaning from these textual discoveries from the code...(e.g. now the Transborder Immigrant Tool can be analyzed in terms of dowsing and witching and the historical import of such ideas...).

    Aesthetic readings: Reading code in terms of traditional aesthetic properties of efficiency, beauty, standardization, accessibility, etc. Perhaps explaining why the code is "beautiful," "artfully" executed, etc. Or such aesthetic readings might take a more avant-garde approach and analyze moments when certain coded texts become aesthetic in a more modernist sense, crossing or breaking boundaries of traditional notions of beauty to create transgressive, aesthetic meaning. Thus, Mateas and Montfort's "A Box, Darkly: Obfuscation, Weird Languages, and Code Aesthetics," a text which teaches us that code can be aesthetic from a modernist perspective (and which, in my opinion, should be a primary text for those interested in CCS). So here we might place Perl poetry, techniques of obfuscation, codework, artistic pseudo-code like mongrel's piece, etc.

    Historical readings: Analyzing code in terms of diachronic changes in functionality. Such readings might be related to technological determinism and how the underlying structure of hegemonic programming languages influences how we think about the world. Here, analysis could be focused on individual lines of code and their reiteration across time (locating differences and attributing meaning to such differences) or one could look at shifts in the logic of structuring programming in general, from procedural languages to object-oriented ones, to duck-typing, etc... (This might be related to David's notion of "Regimes of Code.")
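    To make the kind of shift such a reading might track more concrete, here is a minimal, hypothetical sketch in Python (the names and the task are invented, purely for illustration) of the "same" small operation written first in a procedural style and then in a duck-typed, object-oriented style:

    # procedural style: data and behavior kept apart
    def area_of_rectangle(width, height):
        return width * height

    # duck-typed style: any object that responds to .area() counts as a shape
    class Rectangle:
        def __init__(self, width, height):
            self.width, self.height = width, height
        def area(self):
            return self.width * self.height

    class Circle:
        def __init__(self, radius):
            self.radius = radius
        def area(self):
            return 3.14159 * self.radius ** 2

    def total_area(shapes):
        # no type checks: whatever quacks like a shape is treated as one
        return sum(shape.area() for shape in shapes)

    print(total_area([Rectangle(2, 3), Circle(1)]))

    A historical reading would ask what it means, culturally and conceptually, when the second idiom displaces the first.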

    What would other formalizations of CCS methodology look like? There are bound to be others. The thing is, when I encounter a literary text (or visual, filmic, etc.) I know many methods of interpretation: symptomatic, new critical, psychoanalytic, marxist, formal, Susan Sontag's quasi-call for affective readings/description of texts (which might be related to Mark Sample's post on memory, nostalgia, and code), Jameson's notion of meta-commentary, and the list could go on. Some of these might be applicable to the study of code and some not (for example, I would be intrigued if anyone could produce a “symptomatic reading” of a code object). But, while reading over something like the comments concerning the "annakournikova worm" on EBR is interesting, I am at a loss to find any formalized procedure of code analysis from a humanities perspective emerging from these comments. What would be great, if perhaps idealistic, would be to see not only the (successful) repurposing of traditional interpretative and hermeneutic methods of literary texts to code (maybe what I started to outline above) but the theorization of new interpretive paradigms derived from source code which would impart new understandings of contemporary cultural situations and objects.

    markcmarino

    Formulations, Methodologies, and Tools

    Welcome, Braxton,

    So back in 2007, when we started the CCS blog, I put together a list of moves or approaches that might produce CCS readings, though I think you are discussing a higher level of categorization of these approaches.  I do think these ways will emerge more effectively out of compelling interpretations.  Also, at this point, CCS may already pre-suppose a kind of methodology:  technological studies with a code emphasis – as opposed to primarily hardware or software.  Nonetheless, as we are developing it here, CCS is beginning to look not just at particular objects but more broadly at the cultural milieu growing out of this thing (these things) called code.

    One of the challenges to formalizing these procedures is that we will inevitably bring these theoretical approaches you mention to our interpretation of code:  Marxist, psycho-analytic, feminist, post-colonial theories.  I understand that during the last decade there was great concern about the dangers of colonizing new forms of media with disconnected theories or theories that primarily applied to a different media ecology.  No doubt that was a useful corrective to reading the media as a direct expression of the theory.  However, to argue that we should/can do away with those hermeneutics or philosophies altogether threatens merely to replace them (at the highest level of abstraction) with an invisible paradigm, which would often be something like neo-liberalism or technological determinism...

    As Jarah's question suggests, CCS often involves applying these hermeneutics to the study of the digital object even though, as a field of inquiry broadly described, it is not contained in any of them.  Quite possibly hermeneutics will emerge out of CCS that will represent their own paradigms to be adapted and later applied to non-code objects.

    In the case of the annakournikova worm discussion and the DAC article to which it is tied, a good deal of the methodology is inspired by the theories that informed (and are implemented by) Zach Blas' transCoder and Julie Levin Russo's SlashGoggles Algorithm. The actual interpretation, on the other hand, derives from a specific set of moves, including reading the worm against these codework art projects.

    Part of what I am looking for is something more mundane, which might make my readings seem either too partial or too myopic at this point.  Ways of reading lines of code that can be re-used like a set of tools or techniques.

    Though we are moving toward a sense of examining the code as it functions, we will still be citing specific lines.  One question I think about a lot is: which lines?  That answer will no doubt grow out of the particular reading, but there are some tendencies I’m noticing.  These include: a revealing comment, a core method, a line emblematic of the programming language or hardware constraints, a seemingly superfluous line, an elegant line, an obfuscated line, a remarkable variable name, a line revised in the versioning log. Obviously the comment might not be about any symbols in the quoted line but about the functioning of that part of the code and how it interoperates with the system, hardware, and other software.

    One of the chief operations of my part of that reading is to isolate a principal function (for me this was where the worm wrote itself into a file) and to read that against the idea of normativity in the context of a discussion on heteronormativity.   I do something similar with the Transborder Immigrant Tool, where I attempt to isolate a line in the code that seems to epitomize some core functionality.  Others in that Week 1 discussion isolated completely different lines.  Some looked at the moment when the worm takes advantage of the operating system by accessing the "Special" folders.  Others called attention to the one line that seemed to be the sole addition of the programmer (since that code was generated using a worm-generator). In other words, they isolate the aspect of the code that distinguishes it from its genre.

    Rarely will these readings rely on any one methodology.  For example, one part of my MLA talk discussed the structure of the code in relation to interfaces in Java.

    I’m interested to hear what we develop as both tools and larger theories. 

    What methodologies (at the epistemic or practical level) do others see at play?

    Braxton Soderman

    Toward the Visibility of Invisible Paradigms

    Sorry for the belated response. I definitely know about your list of moves and think it is a good start to spur the exegesis of code. But, yeah, I think my post was attempting to flesh out some of these moves and attach them to larger methodological possibilities.

    I am both optimistic and wary about applying inherited hermeneutics to code analysis, while being fascinated by the possibility that one might develop new forms of interpretation that would travel beyond the boundaries of what we know from literary interpretation, sketching out "invisible paradigms" (as you call them) that are specifically motivated by the code from which they stem. I think this is also expressed below when Nick M. says we do not yet have a powerful and productive model of exegesis that has been applied to numerous lines of different code. And then Clarissa follows up with, "I wonder if we need to develop that humanistic model by working OUTSIDE of the existing humanistic model, working from the code back rather than trying to juxtapose the code to our existing models...." I agree with these points and other similar ones made in the forum, and they indicate a general desire to envision these "invisible paradigms."

    You seem to be concerned that casting away prior interpretive paradigms leaves a void where unreflective analysis might occur (i.e. leading to neo-liberalism or technological determinism). I definitely understand this concern, although, take the case of video games: the so-called ludologists hastily jettisoned notions of textual and visual interpretation inherited from literary and media studies, but they did so in order to develop new paradigms of analysis. Thus, you get Espen Aarseth groping at a concept of simulational hermeneutics, Alexander Galloway developing a quasi-new method of interpretation based on "gamic allegory," or Ian Bogost beginning to map the contours of procedural rhetoric. These were "invisible paradigms" at one point, but now provide methodology for the fruitful interpretation of games. Yet, these paradigms built on the old (e.g. Galloway's reworking of Jameson's Marxism, Deleuze, etc.) while providing new ways to see both the object of interpretation and the method of interpretation itself. Maybe the interpretation of code proper will lead to productive reworkings of older, interpretative paradigms: the child-class will override the methods it inherits from "parents" in productive fashion. I hope so! (Hayles' reworking of semiotics in terms of code stands as a productive example...though I have seen few applications of her theoretical arguments to interpretations of code itself...).
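    To make the child-class metaphor concrete, a minimal Python sketch (with invented, purely illustrative names) might look like:

    class LiteraryHermeneutics:
        def interpret(self, text):
            return "a close reading of " + text

    class CodeHermeneutics(LiteraryHermeneutics):
        # the child class keeps the inherited interface but overrides the behavior
        def interpret(self, text):
            return "a reading of " + text + " against its execution, platform, and history"

    print(CodeHermeneutics().interpret("a code snippet"))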

    I would only add that while pursuing possibilities of applying psychoanalysis, Marxism, gender analysis, etc., to code is important (e.g. perhaps the application of the concept of "heteronormativity" to code stands as a successful example, or at least glimpses of success), it is probably equally productive and important to question how and why inherited interpretative frameworks fail, or are thwarted, when reading actual lines of code. Maybe this is similar to Wendy Chun's question, "What are the limits of code?" We should be ready to jettison paradigms that do not work. Such is science. Discovering interpretive methods that fail when faced with code (and understanding why) can also be a way to focus on the positive possibilities of success. Or maybe it is similar to Wittgenstein's notion that clearing up problems in philosophy means understanding language games that create these problems? Thus, understanding the interpretive paradigms which are not useful to explicating code might clear away some noise that masks the manifestation of the invisible paradigms that CCS seeks?

    ebuswell

    The essence of code: code is not the essence

    This is in response to a number of things above...

    I think we need to be very careful here about not essentializing code---not to say that great care isn't taken in the above threads, just that more discussion will help :-).  Though many people have said that code is performative in a sense way beyond the way that regular language is performative, I want to assert the opposite: code is hardly performative at all.  Unlike the most prototypical Austinian performatives (I christen this boat/child, I do [agree to marry]), code accomplishes nothing by itself, but only through the help of a third party that enacts it, that cleans up after it and makes sure that what it has to say is true.  Further, the obsessive illusion of universal performativity is precisely what gives code its character as code; code claims to destroy the gap between signifier and signified: what it says is supposed to be true just because it has been spoken.

    Code is a command (even if it is written in Prolog), and code is only as performative as its invoker is already in command of something outside of him/herself---this power relationship either precedes the coding or is instituted by it.  Whenever we accept code on its own terms, we are implicitly accepting the power relationship between the coder and the coded.  When the object being coded is a computer, no big deal; but code on computers has a way of leaking out and exerting a large measure of control on its human/animal participants as well.  (As I know there are a number of people here trying to push theory to action, I should mention that I think this would be one possible mode of code critique: how does a particular stanza of code command not just the bits on the machine, but also the other interested parties?)

    To summarize, to accept code on its own, essentialized terms is like accepting that the equation for continuously compounded interest is "A = P * e^(rt)".  It's not that that isn't the equation for continuously compounded interest, it's just that to call compounded interest an equation and leave it at that is somewhat misleading.  I could just as easily loan you money and compute compound interest using a different equation, and the objection to my actions couldn't exactly be that they were incorrect---actions don't rank in terms of correctness---but maybe that they were "unfair," "dishonest," "bad practice" or something else equally political.
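    To put rough numbers on the point (hypothetical figures, purely for illustration): $100 loaned at 5% for one year comes to A = 100 * e^(0.05), or about $105.13, under continuous compounding, while a simple annual convention gives $105.00. Neither figure is "incorrect"; which one governs the loan is a matter of agreement and practice.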

    Now, obviously from the get-go CCS has been working to make sure code is recognized in all its political messiness.  But I still have my worries.  Above, Mark Marino calls for code studies to look at the "arbitrary" parts of code, and this critique is vital: simply the practice of paying attention to the arbitrary is a critique of the code regime (and capitalism itself) that privileges functionality above everything else.  But when I hear about these arbitrary elements, I keep imagining this world twenty years in the future, where CCS is an established, successful practice.  CCS practitioners are maybe brought into courtrooms to establish the evil intentions of some maniacal programmer bent on threatening democracy.  CCS is part of the Computer Science curriculum, a lot like a "critical thinking"-style composition course for code.  And CS teachers tell all their students that they'll be working with programmers from India, from Israel, from Singapore, that understanding the politics of how you name your variables is important in our globalized world, and they'll tell stories about a programmer some CCS person nailed in court, and they'll recommend that in addition to their class dealing with the real and actual practice of computing, they should take this other class about how code appears to others.  CCS will have been wildly successful as a discipline, people will be forced to confront it, but all the while there will be an invisible line between CCS and CS, protecting the core from the periphery, insulating and separating from critique the power structure of code itself, and constructing a discourse of "good code" and "bad code" to go along with the discourse of "good business" and "bad business" that tends to dominate naive anti-capitalist critique.

    So *as practice*, I'm all for diving in to the "arbitrary" parts of code.  I think it will do nothing but help, especially as a way to open the door for humanists who want to perform critiques but aren't quite sure how to begin, or whether they have the right to begin (you do!).  However, *as theory*, I would be very, very hesitant to say that our object is anything but code itself, all of it, including its context, its authors, its functionality, the mathematics that goes with it, its big O notation, its provability, the semiotic practices it implies, etc., etc.

    I suppose the point of all of this is not that we shouldn't ourselves essentialize code; I don't think we're doing that.  The problem is that the culture of code does plenty of essentializing itself, and I think it should be one of our jobs in critiquing code to do what we can to make sure that culture doesn't get away with it.

     

    markcmarino

    The future

    It's interesting that the posts here, both yours and Richard's -- and I think others' -- are concerned about the imagined (dystopic) future for CCS.  I think it is a sign of your circumspection, and your structural critique is well warranted -- though I would like to hear you develop more on how that invisible line would emerge.  To follow your analogy: is there such an invisible line between composition and literature classes? (Actually, who am I kidding.) Perhaps the clearer analogy is STS and Engineering -- as I think about PSU nixing its STS program, it seems that one becomes in service of the other and hence disposable.

    I agree with you about including "all of it." The instance you mention isolates the "arbitrary" in order to take on a particularly contentious aspect of my own practice -- though I believe it is deeply tied to all of what we are doing here because the "arbitrary" is deeply tied to the notion of "extra-functional meaning."

    No matter how far we step back, no matter how much we include -- the line, the snippets, the subversion repository plus the paratexts plus the system plus the hardware -- no matter the level of detail we go into in our discussion of the code -- distinguishing it from the executed code, noting its binary state, discussing its implementation in electrical signals -- at some point we are going to make a critical gesture that can be perceived as "arbitrary," as something read onto, imposed upon, inferred from -- because otherwise, we are teaching pure computer science and engineering.  But since the Humanities' hermeneutics gives us a healthy suspicion (from Foucault, Kittler, etc.) about the existence of any such thing as "pure computer science," our work cannot avoid interpretation.

    On the other hand, Jeremy Douglass' presentation in CCSWG Week 2 made clear that this moment when...

     CS teachers tell all their students that they'll be working with programmers from India, from Israel, from Singapore, that understanding the politics of how you name your variables is important in our globalized world, and they'll tell stories about a programmer some CCS person nailed in court, and they'll recommend that in addition to their class dealing with the real and actual practice of computing, they should take this other class about how code appears to others...

    is basically already here. Code is already being read in court (both formal and the courts of public opinion).  Something tells me that lesson on variable names is already being taught. 

    But your vision of a wildly successful CCS is so dystopic because the work of the cultural critics becomes co-opted, functionalized, utilized by TPTB.  How does this stem from the talk of the "arbitrary"?  Is it because over-emphasis of the "arbitrary" or any one aspect or layer of code relegates the reader to a partial understanding of the broader system -- and hence subjugates or subordinates it to the work of those who know "how it all works"?  Is this really about the subordination of those who interpret to those who make things function? Please go further.

     

    ebuswell

    The future

    I think my worries are definitely in analogy to what's already happening or has happened in STS. Before STS had its neat acronym, we had "science studies." Although part of me wonders whether the failure of that name was the impossibility of turning it into an acronym, really, I think the problem was that it was too radical to leave out "Technology." Critiquing technology, of course, is necessary, and can be just as radical as critiquing science, but all too often technology means the simple aftereffects, the (im)proper application of an otherwise uncritiqued science. For examples of each side of this difference, for "science studies" we might choose Haraway's "Situated knowledges: The science question in feminism and the privilege of partial perspective," and for the other side, maybe something like "Boundary Configurations in Science Policy: Modeling Practices in Health Care" from the latest issue of STHV. Of course, I know that STS is often just a name and that both of these types of critique happen in disciplines and functions that go under both names, but these near-synonyms make as convenient a label as any for these two phenomena.

    A couple years ago, I went to the 4S annual meeting. And there were a lot of good presentations, but maybe one or two of them within a weekend's worth were science studies rather than STS, as I've defined these terms. Now, I know 4S and science studies/STS are by no means coterminous, and I'm sure I'd see more science studies types of things at SLSA, but nevertheless I see this as a general movement within STS/science studies that largely has already occurred. I won't speculate as to why, right now, but I definitely see some sort of weakly established link between the STS mode of critique and legitimacy, and the science studies mode and kookiness. And I don't see people within the discipline strongly rejecting these characterizations. Not that I'd especially reject being characterized as kooky, as long as that doesn't also involve a dismissal of my work :-P.

    Now, I want to be very careful here to emphasize that the core--periphery separation is not our separation; this separation is not part of the critique, but part of the object of critique. As such, I want to make clear that I believe in the STS-types of critiques, I believe in the value of looking at the particular, the deviations, the overlooked, etc. In fact, without this, it is doubtful that we could build a discipline. But those of us who have enough philosophical training to always approach core--periphery with a lot of skepticism, should nevertheless be careful not to discard it; the separation of a discourse into a core and a periphery is one of the tools that a discipline uses to consolidate its power. Because this separation is deployed, it is real, and we can't think our way out of it.

    So, to answer your questions, the "arbitrary" is peripheral almost by definition. What the computer science discourse calls "arbitrary" is precisely what the discipline of computer science has decided can be interpreted, reinterpreted, rejected, etc. all without disturbing its power structure. Now, I think I agree with you that a lot of the time this word is deployed, it's deployed as a resistance towards the act of interpretation itself; and in that case, the mere practice of investigating the arbitrary is a valuable critique. But the problem is that if the computer science discourse names some of our critiques as arbitrary, in many senses they are going to be right.

    But the biggest problem is that---as with all discourses---the core--periphery separation lines up with the separation of what can be criticized from what is beyond criticism. Overemphasis on critiques of the "arbitrary" aspects of code, then, could easily reinforce this separation.

    To add some positivity to my doomsaying, the type of angry criticism I'd like to be getting from computer science is not that our critiques are "arbitrary," but that our critiques are "unscientific," that if we wanted to be talking about such-and-such then we should go study computer science and that otherwise we should shut up, or that nobody could actually believe what we're saying because it's obviously not true and goes against everything we've learned since the Scientific Revolution, etc. To me, this would be a sign that we're pushing the envelope and truly challenging computer science. Of course, it would be even nicer to get a positive response, for them to say we computer science people have been thinking about such-and-such all wrong and we've got to be more politically conscious when we theorize about that, or something to that effect. But most discourses won't change without a fight.

    markcmarino

    Conferences and Addressing the Center

    Evan,

    Between this post and your recent fascinating post in the Code Critiques section, you raise a very compelling argument to place our emphasis on the aspects of code that distinguish it as a sign system and to engage with it within the paradigms framed by computer science.  While I'd be wary of throwing the baby out with the interpretive bathwater (on the central point that humans reading code engage with its text also as sign system and discourse environment), this notion of discussing code not as it is "understood by the computer," as people seem to have a fondness of saying, but as it is "understood by computer scientists and programmers" (even taking that as a heterogeneous set of views) is going to be key to bridging the disciplines through productive dialogue.

    Not to neglect the power of these points, while we're on the topic of conferences, I wanted to bring up MLA.  While I continue to find SLSA to be a home for CCS work, I was heartened by the enthusiastic turnout at MLA this past year. 

    At the risk of pushing us too far toward "The literary," I wanted to propose the idea of putting together some panels on Critical Code Studies for next year's conference.  Perhaps this might be a good space to begin discussing possible panels we'd like to see (at MLA or elsewhere).  Also, do people know of other spaces that might be both receptive and productive?  Is there a third space where computer scientists and DH folks might meet to discuss CCS?  We're approaching the point where a few representatives from CS are too few. Speaking of DH, Digital Humanities was another EXTREMELY receptive conference.  But again, is there a conference someplace more in the middle -- or do we need to create that space?

     

     

     

     

    markcmarino

    Performance Evaluation

     

    Evan, I just wanted to follow up on your point here:

    I think we need to be very careful here about not essentializing code---not to say that great care isn't taken in the above threads, just that more discussion will help :-).  Though many people have said that code is performative in a sense way beyond the way that regular language is performative, I want to assert the opposite: code is hardly performative at all.  Unlike the most prototypical Austinian performatives (I christen this boat/child, I do [agree to marry]), code accomplishes nothing by itself, but only through the help of a third party that enacts it, that cleans up after it and makes sure that what it has to say is true.  Further, the obsessive illusion of universal performativity is precisely what gives code its character as code; code claims to destroy the gap between signifier and signified: what it says is supposed to be true just because it has been spoken.

    Code is a command (even if it is written in Prolog), and code is only as performative as its invoker is already in command of something outside of him/herself---this power relationship either precedes the coding or is instituted by it.

    I'm no Austinian, but aren't performatives in language reliant on third parties, too?  Two women can say, "I do," but if the state does not recognize their statements, they haven't performed anything within the system of the law.  The state has to follow through or that utterance means nothing in terms of the operation they are trying to execute.  And if a heterosexual couple says "I really intend to see this thing through" instead of "I do," not adhering to the system of convention, they will likely not be married (of course, giving wiggle room for human officiator interpretation here). In my example, the performer must be not "in command" of but "subject to" the system in order to make an utterance; for the utterance to be submitted for processing, the person must have successfully incorporated (and thereby been incorporated by) the system.  So perhaps by extension, if my code compiles, my statements are now in accord with (inculcated with and subject to) the system.

    I think there is some other distinction you're trying to get at.  Can you develop this point?

    Jarah

    performing code

    I'm not an Austin expert either, but I wrote a bit on this last semester in relation to digital embodiment, so I'd like to explore it a bit in relation to the code here.  This is quite rough, so feel free to push back a bit. :) 

    If we complicate this by saying that the performer only has to be subject to/under the system on the terms through which one approaches that system, then the relationship of the performer to the system changes.

    In the case of your 2 women marrying, they may not be looking for state recognition, so the 'I do' is valid under their terms -- let's say a life-long commitment to each other, rather than state approval -- which then is similar to your human officiator's interpretation of the words "I really intend to see this thing through" as "I do": both are valid, through interpretation, in the system within which they have been performed.

    Just because the system exists doesn't mean it is going to be rigidly accepted, as there are humans to interpret and append the system through their embodied actions in the context within which they are performed. 

    Code might perform a bit differently, because it needs to be compatible with the larger system in order to work.  

    J. L. Austin’s ‘performative’ is where the “issuing of the utterance is the performing of the action“ (Austin, 6).  Here, actions need to have intentional verbal utterances following a specific set of language rules in order for the action to be performative. This type of performative is a regular ‘doing’ in digital spaces through the actions of clicking menu items and choosing what site or application to go to. The language rules in this case are those of the executable code responding to the performative of the click. 

    The performative then is two-fold: the performative of the person who clicks or otherwise begins/enacts the code's program, and the performative actions of the code itself as it executes/runs -- looping, conditionals, starting and stopping other functions.  In this case, the human, not the machine, is the system, and the code is compatible with the system because a human wrote the code.

    A few questions:

    what happens to the performative if it is code that doesn't need to be compiled?

    how does 'interpretation' fit into code execution?

    is there another, less violent word to use instead of 'executing' code?   

     

    Anyone want to help develop this a bit further?

    markcmarino

    analog analogy

    Yes, Jarah,

    It's a kludgy analogy.  For it to hold up, I think we need a much more rigid system, where saying "I do" is replaced with using correctly fitted wheels for O-scale train track in order to perform the statement: "I am a train that will run on this track."

    But it does make me think again about Zach Blas and his Transcoder project, his anti-programming language which will not compile.  He had told me that Judith/Jack Halberstam, in whose class he developed it, had suggested to him that he should consider making it functional or operational to avoid the claim that this Queer Technology could not function in the larger (or perhaps smaller, more restricted) computational space.

    But let's return to the Austinian questions.

    ebuswell

    Performativity

    This is going to get off-topic for a bit as it's starting to get into my other area of study, but I'll get back to code...

    You're absolutely right to point out that Austinian performativity also relies on third parties, and that's the perfect example: in most states, gay couples can say "I do" all they want, yet they'll never be married.  However, I'm not so sure that Austin would admit this.  (It's been a while since I read him, so I hope all of this is at least somewhat right...)  Towards the end of How to Do Things with Words he starts trying to subsume the whole declarative mode into a type of performative ("I declare that...").  Now, it may be that it has to be heard and understood to function as a performative (it has to be *performed*), but it doesn't have to be believed, responded to, etc.  It's this kind of reliance on third parties that Butler takes up as legibility and iteration.  But this still assumes that there are some things at least that are true just because you say they are.  The minimum of these might be the promise: you promise because you say you do, and for no other reason.

    Now, all of this thinking is very modern and tied into a modern legal system.  Prior to about 1500, you didn't promise because you said you did, you promised because you intended to promise, and your speech was just a sign of that promise.  In court, this didn't matter very much, as in either case, you lied; but you can see the evidence for this in the way that written documents are phrased as almost wholly evidentiary, and never really performative.  A promissory note, for example, might have language like "Let it be known to all that [Name] promised [in the past tense] such and such to so and so...[signed: Name]".  It wasn't until the middle of the sixteenth century that we started seeing the type of language that tried to encode what it did, rather than witness it (or rather, the middle of the sixteenth century is the end of a period of transition that started a few hundred years earlier).

    So, the point you make about marriage is equally true for the points I make about code.  Gay couples cannot marry in most states because they don't have the power to marry, and despite giving the proper utterances, they will never be married.  Conversely, consider a successful marriage.  Marriage is a wonderfully messy example, as it contains bits of traditional law preserved from the medieval period mixed in with modern and postmodern law.  The following factors had to be present: a priest or ship's captain or other official who has been granted the power to marry others, a man and a woman, a monetary fee to obtain the marriage license from the state, a properly filled-out form, and some sort of declaration of the intentions of the man and woman.  Then, after the marriage has been performed, it can be annulled, or said to have never actually existed, if the couple has not cohabitated and copulated according to certain traditional rules.  A medieval lawyer probably would have said that the couple was married more by their cohabitation and copulation than by their simple "I do" (though this is just a guess in analogy to other research I've done; I haven't looked at marriage specifically).  And yet we obsess on this moment of "I do," as if every other factor necessary for a felicitous marriage was secondary, trifling, and unimportant compared with the utterance that enacts it.

    That same obsession is what I'm talking about, and you're right: it goes way beyond computing languages.  But code is a particularly paradigmatic example, especially to the lay interpreter.  For the past couple of coding gigs, the core of my work has been protocol design.  Protocol designs are not much different than code.  You make designs, architectural diagrams, and finally write a document describing the syntax and semantics in very exact terms, usually with sections written in BNF or some similarly codic language.  And then, you put it out to everyone involved, and they give you feedback, and you change the specification.  If you were open sourcing it, you would create a sample implementation, and publish it in all the relevant forums, and answer questions, and address feedback, and if you're lucky, a lot of other programmers adopt it, and use it, and then, finally, something from your code becomes action.

    Code on a computer is really not that different.  It depends on compiler writers, users who adopt the program and use it cooperatively with your code, functional hardware, an adequate hardware specification, etc.  So the obsession with the performativity of the code is just that: an obsession.

    maxf

    Matt Kirschenbaum suggested

    Matt Kirschenbaum suggested subversion archives as potential bases for code critiques, a suggestion that I believe deserves further attention and consideration in this theoretical discussion of CCS. I'll expand below, but first, here's what Matt said:

    [begin quote]

    Just have time for a quick comment right now, but here's a concrete suggestion: make a subversion archive the basis of a code critique, rather than the one-off code snippets that are typically the centerpiece of the activity.

    In literary textual editing, the analog would be dispensing with the so-called copy text, the "authoritative" text to which other versions stand in relation as mere variants. This move was accomplished theoretically by the late 1980s, and has since been practically realized in the form of digital text analysis and collation tools.

    [end quote]

    The astute suggestion of critiquing a subversion archive also came up in the CCS Working Group, which took place online in the Spring of 2010 (see this ebr publication for a look at the working group's proceedings). Specifically, in a thread that has not yet been published on ebr, Hugh Cayless pointed out that version control systems provide a step-by-step history of code, thereby allowing authors and CCS practitioners to dissect individual contributions.

    I imagine that critiquing subversion archives might enrich critical code readings by adding a temporal dimension to code that would otherwise be time-dimensionless -- like the average snippet. CCS practitioners could remark on the specific moves - additions, deletions, modifications - and incorporate new elements in readings that would otherwise be impossible to discover.
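    Concretely, assuming a checked-out working copy, a reading of this kind might begin from a handful of standard commands (the revision numbers and filename here are hypothetical):

    svn log -v                # the full revision history, with the paths changed in each commit
    svn diff -r 120:121       # exactly what was added, deleted, or modified between two revisions
    svn blame Tool.java       # which revision, and which author, last touched each line

    (git log -p and git blame play the analogous roles in a git repository.)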

    I'm interested in building on Matt's suggestion, and hashing out the specifics of what's involved with reading a subversion archive. The most obvious obstacle, IMO, would be procuring an actual archive. How frequently does one encounter an archive for a given program? Is the information on Sourceforge enough to get one started on a reading?

    What other aspects of a reading would be enhanced if a subversion archive provided the basis for it? That is, I wonder what features of an archive aren't present in regular snippets. I'd also be curious to know if there's any practical way for a group such as ours to collaboratively use a web-based subversion reader to look at code / comments together.

    I hope I haven't gone astray with this line of inquiry -- I'm looking forward to seeing whether other folks here see potential for subversion archives.

    alenda

    XYZZY

    Thanks for following up on Matt's suggestion, Max. Just offering a potential example for both a versioning study and CCS in general: Dennis Jerz's quirky excavation of Colossal Cave Adventure in DHQ, starting with Will Crowther's original FORTRAN but looking closely at Don Woods's changes and additions (made available on Wikipedia but also on Jerz's own server). What I love about this piece, however, is the wedding of real-world environment to code, the blending of photographic, textual, and algorithmic walkthrough... which literalizes the term "code environment" in lovely ways (this gets at my own desire to embed CCS in not just historical, political, semiotic, and cultural milieus but also environmental ones). Granted, Jerz seems to prioritize textual analysis over algorithmic analysis, but it's a very accessible example to use (and play) with a class.

    markcmarino

    Git me to the SVN

    I just wanted to follow up on this point about bringing SVN and GitHub into our analysis of code.  As Max mentioned, this came up in CCSWG and I have been seeing the recommendation that we turn toward these repositories for a few years now. As we explore the quotation, I'd like to bring in some text from Craig Dietrich and John Bell:

    While open source may not be able to provide a single set of tools to support all work in the humanities, the models for creative collaboration that have flourished in the open source community serve as inspiration.  Along with Linux, other examples include the variety of open source tools designed to manage community contributions to a single project.  Part archive, part message board, and part management tool, sites like SourceForge.net meld project development with open access and documentation.  Version control software like git and subversion facilitates asynchronous collaborations between contributors by standardizing how their work integrates.  If the creative community documents their work in as structured a manner as coders have, and with the same eye toward future integration with the work of others, it will be a boon to those trying to preserve and build upon the cultural artifacts created today.

    In the same way new production techniques change the requirements for tools supporting cultural production, new network-based tools lead to changes in the way artifacts they describe are perceived.  A system that acknowledges the importance of both conceptual and technical creators deemphasizes the idea of the single author “genius,” re-integrating all types of contributions to the creative process.  The tools described below accept these principles in both concept and execution: they are designed to harvest the perspectives of many different sources as a way of better accomplishing their individual goals, and they are also intended to work together to synthesize an expansive view of the work they document."

    Source: http://www.igi-global.com/bookstore/TitleDetails.aspx?TitleId=41892 (sorry, the forum breaks when I post a link)

    SVN and GitHub seem to offer a much more complete and robust environment for our investigations, though we should acknowledge that our objects of study are always partial.

    In our collaborative reading of 10 PRINT, we have the entire line of code:

    10 PRINT CHR$(205.5+RND(1)); : GOTO 10

    However this one line, which is the entire program, is an entryway into talking about the platform, BASIC programming culture in the 1980s, mazes, visual design, graphical displays, randomness, and, as they say, much, much more.
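    For readers without a Commodore 64 at hand, a rough modern approximation of what that line does -- a sketch in Python, substituting Unicode diagonals for the two PETSCII characters the original chooses between -- might look like this:

    import random

    # print one of the two diagonal characters, forever, as the GOTO 10 loop does
    while True:
        print(random.choice("\u2571\u2572"), end="", flush=True)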

    Snippets are also useful, as Jeremy Douglass has argued, because people (computer scientists, lawyers, programmers, etc.) routinely discuss segments of code as they are making other points, much as someone might talk about a quotation  or scene from a play or a few lines of legal code.  Perhaps a gesture toward Roland Barthes' "From Work to Text" will suffice to allow that all of our total objects are always partial.

    Nonetheless, I am excited by this call to look at the code within the context of its development logs and revisions, situated within a larger piece of software.

    Let's take this opportunity to propose open-source projects that we could explore.

    What might we dig into? 

    GrantLS

    Levels of Software Meaning

    It's important to understand that meaning is produced in a given software object at multiple levels, up and down the software stack, and distributed between the server and the client. A web application, for example, will have markup and client-side JavaScript, as well as server-side code.

    The code on the server may require server dependencies that themselves impinge upon the meaning of a given system. Take, for example, Apache's mod_rewrite. It's that handy module that tells Apache to understand a requested URL as something quite different. So, "http://example.com/news/local" could be translated into "http://example.com/index.php?controller=news&action=local". Some frameworks, such as the Zend Framework, simply assume that rewriting capabilities exist on the server. Further down the stack you have the HTTP server itself (Apache in my example), the operating system, and so on. That is not to mention anything between the client and the server that can modify HTTP requests and responses, such as an application-aware firewall, thus changing the meaning of the interaction.
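
    To make the translation concrete, here is a small Ruby sketch of the kind of mapping such a rule performs. It is purely illustrative: the rewrite helper is a made-up name, and mod_rewrite itself is configured through Apache directives rather than written in a general-purpose language. But it shows how the "friendly" URL is turned into the query-string form the application actually receives.

    # Illustrative only: a toy version of the translation a rewrite rule
    # might perform on the path portion of a requested URL.
    def rewrite(path)
      controller, action = path.sub(%r{^/}, "").split("/")
      "/index.php?controller=#{controller}&action=#{action}"
    end

    puts rewrite("/news/local")   # => /index.php?controller=news&action=local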

    Identifying the edges of a given digital artifact is simultaneously, then, easy and complicated. This is my long-winded way of saying that the meaning of such an artifact cannot lie entirely in code; it must, to some degree, be experiential, observed, and qualified across synchronic and diachronic lines. No one can experience it in exactly the same way. Differences in the acuity of one's senses, the version of one's browser or other viewing apparatus, the general level of systemic atrophy (where a distributed application is concerned), and other factors entail differences in the hermeneutics of a given object.

    Nick Montfort

    Starting with the "text" (code)

    Hello, everyone.

    Critical code studies is certainly looking to be a useful term that provokes us to think about computational media and meaningful computational systems in new and useful ways. This discussion offers plenty of examples of how we have been able to describe some of the complexities of code and the way we might put together an "assemblage of approaches," as Mark Marino says, to study code.

    It seems to me that, to the extent that there is a difference of opinion about critical code studies here, it is really a matter of emphasis. For example, Matt Kirschenbaum wrote that:

    Critical code studies has not, in my view, yet refined a sophisticated enough model of the thick *textualities* of code, preferring instead to treat code as an immaculate object on the screen, as when we are presented with a sample or snippet, its stark and unforgiving typography a latticework for critical exegesis.

    I could say in response that critical code studies also has not developed a sophisticated model of code as an immaculate object, as a static text in the traditional sense to be explicated. In other words, I certainly agree that the "textualities" of code - the processes of its development, use, modification, reading; the platforms it runs on and in some cases provides; the social and cultural contexts in which it was developed; and so on, and so on - are important to model. But we do not yet have a good critical and humanistic model that allows us to understand a particular "immaculate," fixed, seemingly uncomplicated snippet of code. If we did, this model would no doubt have been applied to historical and contemporary code of all sorts to yield new and powerful insights not obtainable by other means.

    We should doubtless engage with the quote-textualities-unquote that Matt mentions, but one way I find it useful to begin to do so is by focusing on a single code object.

    (Another way is by focusing on a computational platform, something that need not be made of code at all. The Atari VCS/Atari 2600, which I wrote about recently with Ian Bogost, doesn't have ROM containing code. It's just a computer system upon which code runs. It's also a fascinating way to study how the idea of the home video game developed through an interplay of platform and code.)

    But, back to our immaculate computation, the code snippet: As Mark Marino mentioned, he and I and several others (including some who are part of this discussion) have been writing a long, collaborative discussion of a very short program: A one-line Commodore 64 BASIC program. The conversation started on Mark's critical code studies Ning site.

    I'll be glad to write about this further as part of this discussion - and my co-authors will, I hope, have some things to say as well. For now, to see something of how our thinking has developed, and to see the program that is our focus in action, those who are really intrigued can take a look at the six videos of my recent talk at UC Santa Cruz, "Line of Inquiry: Many Authors Explore Creative Computing through a Short Program." Here's part one.

    clarissal

    critical theory vs code/epistemic other - answering Montfort

    I am taken by what Nick Montfort says here, that "we do not yet have a good critical and humanistic model that allows us to understand a particular "immaculate," fixed, seemingly uncomplicated snippet of code. If we did, this model would no doubt have been applied to historical and contemporary code of all sorts to yield new and powerful insights not obtainable by other means." I face the same issue, as I work on the history, sociology and philosophy of quantum physics, particle physics and the Large Hadron Collider (the reason I have to resort to such a lugubrious, long-winded way of saying what I do, even though none of those terms most accurately defines my work, stands testament to the fact that I have yet to develop a suitable humanistic episteme for framing my work). I find many questions that one asks of that field, especially when one tries to de-anthropomorphize the object of inquiry, to have no vocabulary or language yet available within the humanistic framework; everything is still seen in relation to the human actor, hence my fascination with object-oriented philosophy, even though I do not quite see where that goes. I wonder if we need to develop that humanistic model by working OUTSIDE of the existing humanistic model, working from the code back rather than trying to juxtapose the code to our existing models, and hence developing the model through the object of inquiry. How do we also borrow theoretical framings from CS without inevitably turning the critique into one developed mainly in extension to the conversations in CS? I think if we can find methods and ways to deal with this major question, CCS will have made a major contribution to the humanities' ways of thinking and seeing. But where do we start? Probably by looking deeply into one object at a time, rather than just tackling what we can see on the screen. It means dissolving the 'black box' of code, pre-code and other-than-code.

    Braxton Soderman

    Code Interpreting Code

    Thanks for providing the link, Nick. Very interesting talk! I, for one, would love to hear more about the conclusions the collaborative team is reaching. In your post you mentioned that "a good critical and humanistic model that allows us to understand a particular 'immaculate,' fixed, seemingly uncomplicated snippet of code" does not yet exist. I am curious whether the collaborative team interpreting this chosen line of code is filling that void. Perhaps it is too early to tell or to share? But I imagine that this work will eventually provide insights into answering this question nonetheless...

    I was quite struck by the question that arose, "How can you do software studies by writing programs to answer questions?" (which was posed as one potential interest/conclusion of the group by Michael Mateas, though it has obviously been on the minds of all collaborators working on the project). So, how to write programs which might reveal how a platform works, the limitations of hardware & programming languages, etc.? It makes me think: critical code studies might be done "best" in code rather than in natural language? That the "language" of interpretation of CCS can operate in code itself? As John Cage said a while ago: "The best criticism of a poem is a poem itself." Or, something along those lines. (This might begin to answer Clarissa's idea that new interpretative paradigms that analyze code could be developed "outside" traditional humanities disciplines—i.e. they are not coming from previous humanities terms of analysis but from programmed responses to code). I mean, for example, it might make sense to critique the algorithms of certain politically charged simulations (like climate models), but eventually it makes more sense to write a different algorithm as the critique. So, Fqwiki's quick rewriting of Qwiki, which David Berry brought up in the code critiques section, might already be an example of CCS in action, or at least a methodological first step where one could then compare the code of Fqwiki with that of Qwiki (if you can/could) in order to begin commenting on differences, etc.

    Anyway, I look forward to reading the multiple exegeses of the single line of code that y'all are working on. If you or any other project participants want to share preliminary conclusions, that would be great! Any more thoughts from others on how writing programs about programs (or code) could operate as an interesting form of interpretation from a "humanities" perspective?

    maxf

    CCS@USC Conference Proceedings

    Just want to give everyone a heads-up that the proceedings from the summer 2010 CCS@USC conference have just been published. The conference featured keynote speaker Wendy Chun and a host of prominent scholars, many of whom are present in this conversation. For those who've hesitated to join this discussion for lack of familiarity with CCS, these proceedings, featuring text and videos, are the perfect way to get acquainted with the innovative work that laid the foundation for our conversation.

    Find the proceedings at: http://vectorsjournal.org/thoughtmesh/critcode

    You'll notice that the proceedings were published on a unique platform called Thoughtmesh, which was developed by USC's Vectors journal and presenter Craig Dietrich in particular. Thoughtmesh was chosen for its ability to present and connect publications in much the same way that you'd expect from a live conference. I'm particularly excited about the Peer Review feature, which allows users to create conversation in and around the papers on the site.

    I encourage you to explore the proceedings on Thoughtmesh, and continue the conversation here at HASTAC. Happy meshing!

    clarissal

    Interdisciplinary collaboration between CCS and CS

    I am glad to see how the responses have grown in the days I was away! I'll have to take the time to read through all of them, though reading through one or two has me excited at how the conversation is growing! There are two points I want to bring in here: one in answer to Stephanie August's post on collaboration between CS and CCS, and one in relation to a Twitter conversation I'd been having with Mark Marino via #critcode. The second point relates to the growth of a sort of coder's kit (I don't mean the programmer's toolkits that software developers use, but the sort of tools that allow those with minimal programming and coding experience to build their own interfaces and applications, organize data, or control certain functions). Scripts are some of the most basic, high-level and simple code strings, and others are visual blocks that can be moved around (such as what we can see in Google Maps). However, I may save the second point for another comment in the next day or so.

    In answering the question that Stephanie raised, on how CS and CCS (computer scientists and digital humanists) can collaborate, we may need to think about the possibility of creating a separate space for ourselves outside existing structures. In my own experience trying to attend high-level technical conferences, such as the many plenary sessions one can find at SIGGRAPH, however interesting I find the concepts, I tend to get lost once the presentations go into real technical detail. I am sure computer scientists would feel the same way at a DH conference, having to hear the critical language and theory used to analyze digital media of all forms. I had suggested, at a small level, that a form of 'internship' or lab work in CS labs be made available to DH students during the summer, and that CS students also spend a semester or summer participating in DH-type courses or camps. Mark has suggested the possibility of co-taught courses, and from what I know, there are people who are amenable to that, though balancing material and instruction would require a lot of thought and planning.  It seems that CCS is achieving a level of recognition at the MLA, and it has also been presented at the SLSA. I am probably a little naive when it comes to organizational politics, but if we can garner sufficient interest from people in CS and people who work in CCS (coders or not), I wonder if there's a way to negotiate with organizations such as the ACM for a space to hold cross-disciplinary sessions and meetings that could culminate in a bigger conference and collaboration. But if that were to happen, what would this hybrid CS-CCS collaboration look like? Does CCS need to reach a higher level of CS proficiency in order to be considered rigorous, or can CCS develop its own ontology and epistemology, allowing itself to be enriched by the epistemic culture of CS but not be subsumed by it? I think we really have to spend time thinking about this, as it's not as simple as just attending each other's conferences or co-teaching. This is something I am struggling with too in my work relating to quantum physics and science studies. Even the development of higher-level tools that minimize the need to struggle with code in the production of digital objects is something we need to think about, since many digital humanists work with such tools, developed by coders working within IT and CS. This also connects to what GrantLS has to say about the experiential dimension of the final object, which goes beyond mucking with the code.

    Will write more later. But I hope this can jump start further conversations already started.

    markcmarino

    CS-DH bridges

    Yes, Clarissa, these are the questions.

    For me the process begins on a much smaller level, in my everyday discussions with those who teach Computer Science or program for a living.  I think we can do this on our own campuses and with those we know and love. 

    But first we need to frame the collaboration as an exploration of code and what makes it so meaningful. 

    Does CCS need to reach a higher level of CS proficiency in order to be considered rigorous, or can CCS develop its own ontology and epistemology, allowing itself to be enriched by the epistemic culture of CS but not be subsumed by it?

    I don't see this as either/or.  For CCS to be successful and fruitful, practitioners will need to learn to understand code in the frameworks used by CS; however, that should not prevent the development of its own hermeneutics.  In fact, we're not doing any work if we don't. 

    But I believe we do best when we tie our work into the kinds of conversations computer scientists and programmers already have about code. 

    That said, humanities hermeneutics are very powerful, and as we know from our experiences teaching even first-year students, the methods of performing cultural critique of complex systems are not on the start-up disk, nor are they built into the system that is focused on profitability and efficiency. 

    Take the "Beauty" question.  Don Knuth can write about the elegance of code, but Michael Mateas and Nick Montfort reveal the subculture of "obfuscation" in their article (http://nickm/cis/a_box_darkly.pdf).  That subculture was there before Nick and Michael arrived, but I would argue their article involves a creative and critical intervention, troubling or problematizing the notion of elegance.

    So yes, we need to reach the highest level of literacy possible when speaking of code with CS peers, but we must be ready to make our own paradigms and (current and emerging) hermeneutics intelligible.

    But if we had a gathering, maybe we could start with something online or a smaller exploratory summit?  Who'd like to host? 

    markcmarino

    CCS and CS

    By the way, on this point of bridging:

    Take a look at the discussion we had with Paul Rosenbloom (USC) and Stephanie August (LMU) at the CCS conference at USC as an early gesture toward bringing CS and CCS together.  (http://thoughtmesh.net/publish/381.php)  We were very happy they accepted our invitation to the conference -- and, as you can see, Stephanie is here again. 

    Interestingly, they both research Artificial Intelligence, and from my knowledge of the field and its own interdisciplinary amalgamation of research topics, I can see how they are well positioned to be ambassadors and emissaries.

    clarissal

    Exploratory Summit

    Damn it, I lost my nicely typed reply thanks to the Mac mousepad...arrgh

    As I was saying, I think interdisciplinary humanities programs need to seriously consider making their graduate students go outside the department and outside the comfort zone of humanities classes to take classes in disciplines that are of interest to their research and theorizing, and to make that part of the prerequisites. I wanted to take more math, physics and CS classes to refresh my memory (I was a physics major, so the material is a lot less new to me than to a humanities graduate student who has remained staunchly in the humanities all through their academic life but who has interests in other disciplines at a theoretical, ontological or philosophical level), but I have always been restricted from doing so during my coursework because of the need to take a bunch of classes befitting a Literature graduate student. I understand the current restrictions due to institutional politics and also the politics of graduate funding, but if institutions are really serious about establishing interdisciplinary collaboration, programs in the humanities need scholars who can take the bull by the horns, so to speak, if they really want to talk about genetics or codes; otherwise we risk creating an even higher level of skepticism among those 'in the field' who are already skeptical about the ability of humanists to engage rigorously with fields outside their own domain.

    Also, I would volunteer Duke-HASTAC to be the stomping ground for the summit Mark suggested, and I would even volunteer to be one of the organizers. We could do a sort of THATCamp. We can take a look at the DHSI model in Victoria (if nothing bars me, I will be there in early June!). Would Tara McPherson of Vectors be interested in also hosting such a summit? I think the ACM would be a good place to begin getting the word out to CS people (other than the ones we already personally know) to see if they would be interested in such an endeavor.

    I hope this gets more response soon!

    Taina

    The meaning of code and social media platforms

    Hi everyone,

    I've been following this discussion with great interest, as I did with the CCS working group last winter. It is interesting to see the discussions still revolving around the conceptual nature of code and whether there is something at stake in the CCS enterprise or not. Coming from a more humanistic and social science background, I'm still a bit uneasy about the notion that code can be read akin to literary texts. As with the Saussurean notion of the sign, the relation between sign and referent in source code is arbitrary. The relation between programming syntax and semantics is an agreed-upon convention. Whereas natural languages are open to interpretation, in that the same word or sentence can mean different things, the meaning of a string of code is not up for grabs in the same sense. This is to say that there is a 1:1 relation in code; it means exactly what it says. A line of code cannot mean different things.

    void setup() {
      size(400, 400);
    }

    void draw() {
      background(200);
      fill(255, 200, 0);
      noStroke();
      ellipse(200, 200, 200, 200);
    }

    This code does not mean anything other than "draw an orange circle". It cannot suddenly be read as "draw a green square". It does not lend itself to any interpretation as such. So, whereas the meaning of a sign in artificial languages like programming languages is based on an isomorphic mapping, the same relation in human languages is characterized by implicatures, the same sign potentially meaning a variety of different things. As such I wonder what epistemological advantage "reading" code has over, say, studying code by proxy, whether through the ways in which code circulates as a contested artifact within discourse or as representation? How broadly may Critical Code Studies be understood? I guess my question is what counts as code in CCS? Also, in what ways is code critical? Is it code that acts critically or the interpretation that is critical? If the latter, and code represents an isomorphic mapping, how can any interpretation of this 1:1 relation be critical?

    I'd also like to take the opportunity of having so many distinguished scholars who know their code here to return to something that Jarah brought up in one of the first posts in this discussion, namely the role of code in social media platforms. As far as I could see, no one really responded to this one. During the CCS working group discussions the artist Aymeric Mansoux likewise brought up this difficulty under the heading "critical black box code study", asking, "What kind of strategies can we come up with to provide a critical code study ... without any code?" Indeed this is something that I am very interested in. How should we approach inaccessible code? And how can we make sense of the role of code in the age of social media? Are we confined to the interface after all? What can the humanist do to critique Facebook from a CCS perspective? Given that Facebook as a platform consists of an amalgam of different code, and that the code is constantly changing, what does such a state of always being in becoming imply for how we study such ontogenetic objects?

    steveklabnik

    Just a small note; lines of

    Just a small note; lines of code can certainly have their meanings changed. Usually it's the more 'dynamic' languages that feature something like this, for example, in Ruby:

    def hello
      puts "hello"
    end

    hello

    def hello
      puts "goodbye"
    end

    hello

    This will print "hello" and then "goodbye": the meaning of the invocation hello has changed. While this example is trivial, this sort of technique is actually used by Rubyists so much that it has a special name: monkeypatching.
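
    For what it's worth, the canonical case Rubyists have in mind when they say "monkeypatching" is reopening a class that already exists, often a core or third-party class, and changing its behavior at runtime. A tiny, deliberately bad-mannered sketch:

    # Reopening a core class and redefining one of its existing methods.
    # Every caller of String#upcase in the running program now gets the new
    # behavior, which is exactly what makes the practice contentious.
    class String
      def upcase
        "NOPE"
      end
    end

    puts "hello".upcase   # prints NOPE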

    That said, it's true that almost always, a function has one definition. But that doesn't always mean its semantics are predictable:

      def silly_hello
        # Randomly greet or dismiss: rand(2) returns either 0 or 1.
        if rand(2) == 0
          puts "hello"
        else
          puts "goodbye"
        end
      end

    Ed Finn

    Social media and reading (and rewriting) algorithms

    I’ve enjoyed watching these discussions play out in the CCS Working Group and on HASTAC, and I thought this would be a good point to jump in myself. I want to pick up on a point that Jarah and taina made above about the social ramifications of code. I was struck by the claim in the introduction above that without CCS “we will be confined to cultural critique of the surface effects of a digital culture which functions within in a black box.” Actually, the borders between human and computational interface are never quite sealed, and somehow elements of the analog or the cultural always seem to seep through. I want to make that argument using a version of something I recently posted on my own blog as I started reflecting on all the attention CCS has been getting recently.

    Jarah offered a reading of some code from Facebook. I’m particularly interested not just in code itself but in algorithms and other coded functions, which exist at a layer of abstraction between programmatic intention and human experience.

    A lot of the research I'm doing on the literary marketplace explores how new computational algorithms are changing cultural systems (e.g., the seasons of book production, which operate a little like Hollywood's summer-blockbuster and winter-Oscar-bait formulas). But what I want to dwell on briefly here is how we are all learning to "read" algorithms ourselves on the front end. That's one of the basic sources of challenge in videogames, for instance. An example from my dissertation work might be the way we reverse-engineer recommendation systems (to figure out why something was suggested to us).

    A still better example is Slate's Facebook parodies, which at their best adapt the functionality and rhetoric of the site's algorithms for political satire. For instance, in "100 Days of Barack Obama's Facebook news feed," the authors mimic Facebook's social media tracking for comedic effect:


    Source: http://www.slate.com/id/2217225/

    Here we have humans 'faking' algorithms for their own purposes, and I think the satire effectively skewers Facebook as well as politics. Ultimately, Slate's pieces work because they ask us if American politics is turning into a stylized, algorithmically deterministic system, a sadly unwitting self-parody. Or, as Aaron Sorkin put it, whether "socializing on the Internet is to socializing, what reality TV is to reality."

    Of course, there's no real code here, but I guess my point is that we're all involved in interpreting algorithms in various ways, whether or not we're coders. The challenges of CCS that Mark, Matt and others have been hashing out above relate not only to the “reader” but to the “user,” who has all sorts of opportunities and needs for performing code readings.

    clarissal

    Gentle reminder on using the code critique site

    Hello again all,

    As a co-host for this forum, I would like to request that everyone who wishes to critique code, or even to critique the way code is being critiqued (especially those who post snippets as examples), do so at our still rather 'spacious' code critique section. Come on people, let's populate that site with our contents and discontents! All input of this kind is appreciated!

    kimlacey

    The value of teaching CCS in writing courses

    Hello everyone!

    Yet another fantastic HASTAC forum!  CCS is popping up everywhere, and I'm really excited to learn more about it in this forum.

    I'll admit, however, that I do feel a bit out of my league on this one, but we all have to start somewhere...  I'm studying rhetoric and composition and mostly teach technical writing.  The majority of my students are quite proficient in programming, and I think it would be really exciting to bring CCS to the technical writing classroom.  My students usually write standard tech writing documents: instructions, user-test memos, etc., and I want to incorporate programming into my writing classroom. I have some ideas about how to do this (I was introduced to Inform 7 at a recent THATCamp and am finding some value there), but I'm wondering if I could pick your brains here for some more ideas.  Anyone have any suggestions for assignments that bring together writing technical documents and writing code?

    For myself, I feel like I have some catching up to do on CCS.  I know some basic HTML, but I want to learn more, and I'm not sure what that "more" is or what is most valuable.  Do you have a good suggestion where a rookie can start looking (books, websites, etc.)? What language is a good one to start with? 

    Finally, there have been some books published recently that might be helpful for the theoretical side of things: Program Or Be Programmed: 10 Commands for the Digital Age, and From A to <A>: Keywords of Markup. I haven't finished either, but I'd love to hear others' thoughts on these texts.

    ebuswell

    Entering into CCS

    For myself, I feel like I have some catching up to do on CCS.  I know some basic HTML, but I want to learn more, and I'm not sure what that "more" is or what is most valuable.  Do you have a good suggestion where a rookie can start looking (books, websites, etc.)? What language is a good one to start with?

    That question is liable to start a holy war in many circles, but I suppose the answer depends on how you want to participate in CCS, and how quickly you can glean general programming practices.  Python is probably the best in terms of learnability; you'll quickly learn about programming and coding in general.  You will be able to quickly start doing actual things that are kinda interesting, and probably be able to read pseudocode, which is mostly what people talk about in CS pedagogy.  But on the other hand, if you want to really start diving in and understanding the various discourses around code qua code, learning C and Java will give you the most bang for your buck.  An added bonus is that it is very likely that every programmer in your class will know C and Java, but only about half of them will know Python.

    http://docs.python.org/tutorial/ is a tutorial on Python.

    For C, I might be crazy, but I'd just start here: http://en.wikipedia.org/wiki/The_C_Programming_Language_(book)

    For Java, there's some stuff at http://www.oracle.com/technetwork/topics/newtojava/gettingstarted-jsp-138588.html but I can't vouch for it; maybe someone knows of something better?

    Learn a language and jump in---or rather, reverse the order: jump in first.  I don't think you need to for the types of things you're talking about, but here's some more general stuff about programming:

    For a very University CS approach to code, this http://mitpress.mit.edu/sicp/full-text/book/book.html is probably the most standard, though I confess I haven't read it myself.

    For a very Cybertariat approach to code, there's this: http://www.catb.org/~esr/writings/taoup/

    This isn't the first time this has come up, and I wonder if it wouldn't be a good idea to try to create a page of resources for acquiring "CCS Literacy" or something to that effect?  Anyone else have other opinions?  Anyone have actual experience teaching (themselves or others) who can talk about the pitfalls / what worked for them?

    steveklabnik

    I actually maintain a project

    I actually maintain a project specifically for teaching programming. It uses Ruby. http://hackety-hack.com/ (though right now I'm going through a database upgrade, so it will be down for an hour or two tops)

    You're certainly right that this is the start of several holy wars, though. There's lots of ways to go about this.

    thorst

    thanks for the link!

    I'll be checking it out in the next couple of days. I'm in education and interested in unique ways of learning programming :)

    kimlacey

    It wasn't my intention to

    It wasn't my intention to start a holy war, but I'm sure glad for the primers.  Thank you!  I'll be checking these out!

    Richard Mehlinger

    Good starting programming languages

    If I wanted to just gain a reading knowledge of code, I'd start out by learning Python, which is quite an elegant language, and from there move on to C++, which is much more frequently used than C these days, and whose syntax is fairly similar to that of a number of other languages. Once you've learned those, it should be fairly easy to understand code in a variety of other languages.

    kimlacey

    Great--thank you!

    Great--thank you!

    Violafaithe90

    Code and Literature

    I was incredibly excited to see that code has come up as the topic for this new thread, and I can't wait to read all of the other comments. As an undergraduate student majoring in the traditional humanities (English and Philosophy), for some reason I can't help but gravitate toward computer code, and I cannot wait to learn programming. My first research paper in Digital Humanities focused on N. Katherine Hayles and Codework literature. To me a lot of digital literature is just print literature created through a computer medium, and I feel that Codeworks, or literature that is made up of computer code and programming language, has a digital authenticity that isn't really found in hypertext literature or other popular forms of digital texts. The computer isn't just a medium, it's a thing-in-itself, so learning to understand and use code is not just useful but an important part of creating work in the field of Digital Humanities.

    maxf

    @Alenda, it must be true that

    @Alenda, it must be true that great minds think alike, as both you and the CCS Working Group (Spring 2010) independently suggested an exploration of Colossal Cave Adventure. The online Working Group, which consisted of over 100 scholars engaging in CCS, performed a group code annotation of Will Crowther's original source code. We were even fortunate enough to have Dennis Jerz join the project, which you can see at the Colossal Cave Source Code file and the Colossal Cave Data File. Enjoy!