Code Critiques

Welcome to the Code Critique forum, where we focus on reading and critiquing specific code snippets. To get the discussion started, your hosts have chosen five programs to analyze, ranging across the uses (and abuses) of computer programming.

1. The first example belongs to the original use of computers: solving mathematical equations too complex or time-consuming to work out by hand for the desired data set. This C++ program models the Schroedinger Equation, which describes how the quantum state of a physical system changes in time. Do not be alarmed: you do not need a background in physics or higher mathematics to appreciate the implications of scientific analysis based on crunching large amounts of data, or how a computer program privileges certain results. Screenshots are available, as is the extremely interesting pseudo-code, which describes how the program is supposed to work in a hybrid syntax of English and code.

2 & 3. Next we have two examples of obfuscated code: code that has been intentionally altered to obscure its meaning, either as a brain-teaser or to hide malware from virus scanners and users. Representing the brain-teasers, we have a visually intriguing and arithmetically simple (if tedious) method for computing the first four digits of pi. The malware example redirects you to a malicious website.

4. The fourth snippet is a single line of Perl used to redirect a website (hopefully for beneficial or at least more transparent purposes). Let your imagination run wild on all the little bits of mundane code that structure and direct our internet experiences.

5. Finally, we have a program that draws randomly generated recursive trees (in the mathematical and arboreal sense). The code itself is an exemplar of logical elegance as are the fleeting images it generates. Coincidence? I think not.

So these are our examples. Feel free to refer to these examples in your initial posts, but we also encourage you to contribute snippets that you're interested in reading in a group setting.

Here are some potential questions to ask while critically reading code:

  • At the "global" level: What's the history and context of the code/program? What's the purpose of the software? Who wrote it? Who funded it? Who developed it, extended it, maintained it?
  • At the platform level: How do constraints of the platform impact the shape and performance of the code?
  • At the program level: What language was it written in, and why? What is the programming paradigm? What libraries does it draw upon?
  • At the character level: How do the individual and strings of characters combine to make meaning? What resonances do the signs have with other sign systems including natural language?
  • At the code level: Is the code ugly or beautiful? Does it matter, and why? Is the code efficient in achieving its goal (in building databases, in performing textual markup, in aggregating data online, etc.)?
  • How does one's choice of programming environment influence the coding/reading experiences?


Let the revels begin!


Atomic Bisection

Atomic Physics: Time-Independent Schroedinger Equation Modelling of Periodic One Atoms Using the Bisection Method. The 'DOS' and 'Windows' versions are provided.

I was hoping that discussing some snippets of the code here (though we can link to the entire program) can show how hybrid code privileges the outcome, and the efficiency of the outcome, in certain aspects of scientific computing. Of course, this is not especially good code (it is, after all, the outcome of a four-month project), but one can see how it crunches out data and then visualizes that data from one simple mathematical derivation.

Snapshots of the interface produced from the code


Also, a selection of pseudocode used to design the program


 <Header Files>


Function Module for Opening Data Files when running programme

entering max filename character size;

inputting filename to be opened;

inserting error checking in opening file;

setting format for output of numbers;


 Module Function for Central Potential (V screening)


switch structure is used

N = 1:

break, because hydrogen has no V-screening effect;

N = 2:

c = 0.056; (c being the parameter of each of these alkali atoms)

N = 3:

N = 4:

N = 5:

N = 6:

using two different V-screen formulas depending on whether r <= c or r > c;

return V-screen;

Module Function for Coulomb Potential


received passed values from main;

declaration of variables and arrays;

setting boundary values for the V Potential;

Formatting to output;

Iterating of V Potential;

Return values to main;


Module Function for Iteration of trial energy (main core of the bisection method)


declare variables;

received passed values from main function and Eigenvalue module;

set selection structure with looping structure for iteration of E + DELTA E (energy step size) or E - (DELTA E)/2, depending on the sign of the product of the two wavefunctions;

atom selection by atomic number Z:

Hydrogen ----> Z = 1;

Lithium ----> Z = 3;

Natrium (sodium) ----> Z = 11;

Potassium ----> Z = 19;

Rubidium ----> Z = 37;

Cesium ----> Z = 55;

return E energy;

Module Function for Eigenvalue(trial energy)


received passed values from main;

declare variables and arrays;

set selection structure for trial energy;

define constants for the Schroedinger Time Independent equation;

set boundary values for first set of wavefunction;

assign sum of wavefunctions to a variable;

repeat the steps for the second set by setting now the trial energy to + DELTA E;

do-while looping{

same steps repeated for the third set but this time the function for Iteration is called;

passed values from Eigenvalue to Iteration function;

when eigenvalue begins to converge, set a function that allows halving of DELTA_E until the final stage of convergence;

to test for opposite or similar signs, assign a variable to the multiplication of u[i]_previous and u[i]_next;

input tolerance level to set the scale of accuracy;

set formatting for the output on the file;

set error check for selection structure;

return 0;
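For readers unfamiliar with the technique this pseudocode keeps invoking, here is a minimal, generic sketch of bisection in Python. It is not the project's C++ code; the function name, the tolerance, and the example equation (x^2 - 2 = 0) are all my own, standing in for the sign test on the product of wavefunctions and the halving of DELTA E described above.

```python
def bisect(f, lo, hi, tol=1e-9):
    """Generic bisection: assumes f(lo) and f(hi) have opposite signs.

    The eigenvalue search in the pseudocode works the same way: step the
    trial energy E until the matching condition changes sign, then halve
    the step (DELTA E / 2, DELTA E / 4, ...) until it converges.
    """
    f_lo = f(lo)
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        f_mid = f(mid)
        if f_lo * f_mid <= 0:   # sign change lies in the lower half
            hi = mid
        else:                   # sign change lies in the upper half
            lo, f_lo = mid, f_mid
    return (lo + hi) / 2.0

# Example: the positive root of x**2 - 2 is sqrt(2) ~ 1.41421356
root = bisect(lambda x: x * x - 2.0, 0.0, 2.0)
```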


Module Function for graphing utility


set x scaling, y scaling;

set max x, min x, max y, min y;

set iteration for max x ,minx ,max y, min y for pixel width setting;

set type of graphs and refresh mode;

set graph plotting function;


Main Function


declare variables;

set formatting;

list names of atoms;

receive input from user for filename, angular momentum, step size of radial increment, number of iterations, trial energy, and step size of energy;

call function Eigenvalue;

format output style;

close data file;

close programme;




A Single line of Perl:

print "Location:", "\n\n";

This line will redirect a website. Discussing it would get us into issues of brevity/simplicity in programming (and programming examples), as well as the dynamic nature of webpages (even static ones), the ephemerality of web content (should this lead to archiving the web?), questions of direction and navigation between pages (what is voluntary and what is imposed), and so on.


Technically, this works because the response isn't the body of HTML itself: this generates an HTTP redirect (by default a 302 under CGI). You're not actually creating a webpage; you're manipulating the protocol directly. There's nothing the matter with this; in fact, it might make it more interesting. Normally, you're simply creating the body of the response and letting the rest of the apparatus do its usual thing, but with this example, you're reaching up and out of the body and into the headers.
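For comparison, here is a minimal sketch of the same trick outside Perl: a CGI-style script in Python that emits only a Location header and the blank line that ends the headers. The destination URL here is a placeholder of my own, since the post above elides the real one.

```python
import sys

def redirect(url):
    """Emit a bare CGI redirect: a Location header, then the blank line
    that terminates the headers. No body is written at all -- the 'page'
    is nothing but protocol."""
    sys.stdout.write("Location: " + url + "\r\n\r\n")

# Hypothetical destination, standing in for the elided one:
redirect("http://example.org/")
```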


This is a really cool example of elegant, recursive code. It was written by Mitchell Whitelaw in Processing and draws a randomly generated tree. Head over to Open Processing to see it in action.

I think this program could let us talk about the aesthetics both of the code itself and of what it generates. There is an interesting link (I think) between the simplicity of the lines below and the beauty of the drawings they create. Perhaps it can get us into the mind of the "programmer as artist."

void setup(){
  // [body elided in the forum post]
}

void draw(){
  // [body elided in the forum post]
}

void branch(int depth){
  if (depth < 12) {
    // [drawing code elided in the forum post]
    if (random(1.0) < 0.6){ // branching
      branch(depth + 1);
      branch(depth + 1);
    }
    else { //continue
      branch(depth + 1); // [single continuing call; surrounding lines elided]
    }
  }
}

void mouseReleased(){
  // [body elided in the forum post]
}

My interest is in the analysis of obfuscated code, particularly malicious obfuscated code. I have some examples of malicious JavaScript I'm interested in using; I'm pasting one below. I'm very interested in people's ideas of ways to approach obfuscated code. I noted that in the discussion chaired by Mark Marino at CCSWG 2010 he analyzed the annakournikova virus. However, the focus seems to have been on the code itself rather than the transition from obfuscated to benign, which is what I'm interested in. I'm reposting this text as a discussion element as well. If anyone has ideas, please respond there and we can discuss.

var kBcbixsouogTfIchnyMH = "kV60kV105kV102k

var FVxOkuVZcmEMnhoVehKZ = kBcbixsouogTfIchnyMH.split("kV");

var VqsBzVjmTvtZRroBmJaO = "";

for (var wfbjMQFrxPQSGJUFeDry=1; wfbjMQFrxPQSGJUFeDry<FVxOkuVZcmEMnhoVehKZ.length; wfbjMQFrxPQSGJUFeDry++) {
    // [loop body elided in the original post]
}

var MNwVKuSOmaiBMmiNFikT = ""+VqsBzVjmTvtZRroBmJaO+"";




I think this is an opportunity to discuss obfuscation in general, but first:

Tell us a few more details so we can engage this code critique.  Can you tell more about this code and post what it looks like when it's not obfuscated? What does it do?  When did it show up? Any sense of the author or what program was used to obfuscate?  I found references to what looks to be part of the same code here:


Are these the same?

How did you come across it?  In what context?  Any links to places where it's come up on the Web? The more we have to situate these code critiques the better.

Also, this code might be an opportunity to discuss obfuscation more broadly perhaps in relation to encryption and hashing.

Another important area is the distinction between software-produced obfuscation and "handmade" obfuscation, which, if I understand correctly, is more what comes out of the Obfuscated C contest, the goals of which are:

  • To write the most Obscure/Obfuscated C program under the rules below.
  • To show the importance of programming style, in an ironic way.
  • To stress C compilers with unusual code.
  • To illustrate some of the subtleties of the C language.
  • To provide a safe forum for poor C code. :-)

Again, Nick Montfort and Michael Mateas have their wonderful article about obfuscation.   

But I'd also suggest, given the goals stated above, that this contest is a place where CCS happens, albeit among programmers -- or at least, it's a good place for CCS to examine.

So if your example represents software-produced obfuscation, what can we say about the way code operates before human and machine "readers"?





Peter, Mark-

Here's a simplistic unobfuscation of the above code:

To do this, I performed a half-dozen simple text substitutions, giving the long and complicated names simpler, more expressive ones. As you can see, this is simply a message that's encoded using the ASCII -> decimal method.

Mark, I'd think that this code was obfuscated by hand, mostly because the transformation is incredibly simplistic. Machines would be able to do much more complicated transformations...
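The substitution-and-decode step described above can be sketched in Python. This is my own re-implementation of the scheme (split on the "kV" delimiter, read each piece as a decimal character code), applied only to the visible prefix of the string, since the full payload is truncated in the post.

```python
def decode_kv(blob):
    """Split on the 'kV' delimiter and turn each decimal code into its
    ASCII character -- the entire obfuscation scheme, reversed."""
    return "".join(chr(int(piece)) for piece in blob.split("kV") if piece)

# Only the visible prefix of the obfuscated string:
print(decode_kv("kV60kV105kV102"))  # -> "<if", plausibly the start of an "<iframe" tag
```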


Thanks, Steve,

I wanted to draw a slightly different distinction: not so much obfuscations enacted by hand in terms of substitutions, but obfuscations by hand in the sense of making choices about how the code is organized or occluding its processes. There must be a better way to characterize this distinction.


From; makefile:; hint:

This program calculates pi using its own shape and, I believe, a Monte Carlo method; so help me, I have absolutely no idea how. Anyway, I think this would be a sample for CCS to take a look at (assuming someone hasn't already).

That's a really cool example, Richard; I'll see if I can make sense of how it works... I checked it out, and it needs a specialized C compiler rather than gcc to operate. Sorry, but I'm not installing an alternate compiler just to run it :) The question this raises in my mind: is there a difference between benign (below example) and malicious (above) obfuscated code due to the intentions of the author? --Peter

So, for fun, I decided to decipher this. It's not a Monte Carlo method at all; in fact, it's very simple. The author takes an extremely long time and uses very obfuscated code to repeatedly decrement F and OO from 0. In fact, most of the code consists of OR statements, the latter half of which is short-circuited and thus never executes. The code then outputs 4 * F/(OO*OO). When I calculated it out by hand, I got F = 200 and OO = 15, for an output of 3.555, which isn't quite pi, because I miscalculated the very first step and so decremented each variable one less than I should have. In fact, F = 201 and OO = 16, giving an output of 3.141.
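The arithmetic in that last step is easy to check; a quick sketch (mine, not the contest code) of both the miscounted and corrected values:

```python
# Miscounted values: F = 200, OO = 15
approx_wrong = 4 * 200 / (15 * 15)   # 800/225 = 3.5555..., the "3.555" above

# Corrected values: F = 201, OO = 16
approx_right = 4 * 201 / (16 * 16)   # 804/256 = 3.140625, printed as 3.141
```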


Included above the code itself is a brief conversation I had with Peter Likarish on the HASTAC wiki regarding it. I am curious if people would be interested in seeing a step-by-step deobfuscation of the code. It's space consuming, but I'd be glad to post one.



I would be fascinated to see the step-by-step.  If it's not too insane for you to work through it for us.  Thanks!


You can find the step-by-step at That way, anyone who wants to work through it on their own can do so without the answer getting spoiled... and I don't have to try and hammer this particular square peg into the round hole that is the HASTAC website >_>


One of the things I find interesting about this code is that it suggests to a casual viewer--i.e., one who simply sees the circle without trying to figure out how it works--that the code is calculating pi via a Monte Carlo method. That, at least, is what I assumed, despite the fact that I really should have known better. Basically, the way the Monte Carlo method works is to randomly generate a large number of 2D points in the range ([0,1], [0,1]), and calculate what fraction of these points falls within a quarter-circle of radius of 1 from the origin. We know that the quarter-circle has area pi/4, and the entire range has area 1. Therefore, we can say that, if we use a very large number of randomly generated points, the number of points which fell within the circle divided by the total number of points should be about pi/4.

The classic visualization of this is to imagine throwing a lot of darts randomly at a circular dartboard inscribed in a square, and calculating what fraction of the total wind up in the circle. Embedding a circle in the code like this thus immediately suggests that visualization. Of course, the only thing the code is doing is decrementing.
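For contrast with the decrement-only reality of the C program, here is what an actual Monte Carlo estimate looks like, sketched in Python along the lines of the dartboard description above (the function name and seed are mine):

```python
import random

def monte_carlo_pi(n, seed=0):
    """Estimate pi by sampling n random points in the unit square and
    counting the fraction that land inside the quarter-circle of radius 1.
    That fraction approximates pi/4, so we multiply by 4."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n

estimate = monte_carlo_pi(100000)  # typically close to 3.14, varying with the seed
```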

My question, then, is whether the code is intended to deceive the reader as to the method it uses, and if so, how we can interpret that deception.


I'm particularly intrigued by the question of how one’s choice of environment influences the code readings. The environment that seems most obvious to me is the text-based source code editor, like Notepad++ or TextPad. These sorts of static, text-based environments are naturally some of the best places to produce code, although I can envision more fluid environments in which programmers can input code with movements other than keystrokes (I’m thinking specifically about non-text-based iPad development platforms, though I don’t know if such things exist). My specific concern is whether the code production environment necessarily dictates what the code reading environment should look like. I’m inclined to think that this is not the case. I think code can be written in one locale, and analyzed in another. Moreover, I believe it is crucial for CCS practitioners to examine code from myriad distinct places, so as to garner an especially rich collection of perspectives on what the code does. So then the question becomes this: what environments can we use to look at code?

The search for appropriate 'reading environments,' if you will, probably hinges largely on the specifics of the program at hand, which will clearly vary from one piece of software to the next. For example, a mobile application will necessarily require a reading approach that differs significantly from a desktop application. Maybe the particular layout of the phone's input devices (e.g. buttons, the presence of a GPS device or gyroscope, et cetera) will lend itself to a reading that focuses on human interaction with handheld devices, or something along those lines. Or maybe a piece of software designed for a satellite or space shuttle will feature various components not found in other kinds of computers. Despite the virtually unlimited number of possible reading environments, I would venture to suggest that there may be some standard places for us to start.

One of the environments I have in mind is the debugger. For those not familiar with the role of a debugger, it’s essentially a tool that allows software developers to test out their program and look for any kinks as it runs. At the CCS conference at USC, which took place over the summer of 2010, I presented a paper in which I discuss the benefits of performing critical code readings with a debugger. I borrowed an idea from Wendy Chun’s manuscript Programmed Visions: Software and Memory to create the framework for my interrogation of debuggers: Dr. Chun scrutinizes the relationship between instruction and execution. She remarks on how these two entities simultaneously conflate and separate, in history and in theory. I extend this idea (or at least explicate what she has implied) to include the interaction of instruction and execution as a piece of software runs. I then proceeded to examine the relationship of these two entities in a piece of artificial intelligence software called Soar. At the end of the paper, I conclude that the debugger serves as an example of how a multifaceted approach to CCS can help produce an interesting reading (at least, I thought so :).

I’m curious to know: what other environments would be useful for performing critical code readings? How much mileage could we get out of something like a hex editor to look at closed-source software? Are there other kinds of diagnostic tools that we can explore? One great example comes courtesy of Stephen Ramsay in his terrific "live reading" video, Algorithms are Thoughts, Chainsaws are Tools.

I imagine that seasoned programmers or especially creative non-coders might be especially able to shed light on various coding (or testing, or run-time) environments that may also serve as useful reading environments.


I have often thought about the question of environments influencing code readings without answering it, if any answer exists. I don't know if this will help you; I'm just laying out my thinking.

I have never thought about code outside the machine, and I wonder whether a piece of paper can be an appropriate environment in which to read code. I mean, code is always embodied in the computer/machine/software, and it is meant to have an output. Code has an immaterial materiality, in that it is produced in and through a non-physical space. But what happens if the code is printed? Is it still code? Obviously, down on a piece of paper, code cannot have any output. However, it is still readable. So my questions are: can environments outside the machine be environments for code reading? Is code still code if it has no output? On a piece of paper, code is suddenly touchable, but it is not practicable. The fact is, it is still readable. Does any beauty or enchantment of the code still exist once the code is apart from the machine?

I have no clear answer, even if it seems obvious that code has no interest if it is not practicable and has no output. But the thought of code down on a piece of paper seems so unlikely, and so scary, that it caught my attention.


I've seen printed code--or pseudo-code--a number of times in pedagogical settings. Textbooks routinely include printed code or pseudo-code samples. The advantage of this is that, because the code is not embodied in the machine, the readers must imagine the computation themselves. It turns the reader into the computer, and forces them to look inside the black box. Thus printed code is in fact very pedagogically valuable.


And if a code is created only on paper, I mean entirely, and never processed by the machine, is it still code?


Yes. In the same way that even if no one ever reads it, a book is still a book and its words are still words, even code which is never executed is still code. Code simply refers to a set of instructions--it does not imply that the instructions ever need to be executed. 


A while back, I started tinkering with Processing.  I've only begun to code with Processing, and I've also started to experiment with Arduino boards.  When I first started trying to figure out Processing, I used it to write a program that generates a kind of Mondrian remix.  I wrote about that program on my blog.

Here's a link to my "painting":


Here's the Processing code.  It is by no means elegant.


size(1024, 768); // [some setup lines elided in the post]

line(0,128,1024,128); //horizontal 1

line(0,339,1024,339); //horizontal 2

line(0,658,1024,658); //horizontal 3

line(175,0,175,768); //vertical 1

line(340,0,340,768); //vertical 2

line(585,0,585,768); //vertical 3

line(890,0,890,768); //vertical 4

line(175,558,340,558); //short horizontal

fill(255, 0, 0); // [fill call assumed from the comment; elided in the post]
rect(175,128,165,211); //red rectangle

fill(255, 255, 0); // [assumed]
rect(175,558,165,100); //yellow rectangle

fill(0, 0, 255); // [assumed]
rect(585,128,305,211); //blue rectangle

I think Processing offers some interesting ways to introduce programming into the humanities classroom.  One of the main goals of the language is to allow the non-programmer a way to create sketches.  Instead of "Hello World," students' first program can be a 3D object.  It would still be a very simple program, but it would also be something other than text.


This is exactly why I wanted to introduce Processing as a code example.  I think it's a fascinating language to look at because it was designed to be used by graphical artists with no programming experience.  I wonder if that makes the link between the code itself and the images it generates more explicit. 

I love your Mondrian by the way.  What a great example of how to replicate art in a new medium.


I would be very interested to know if people have any insight into this code, which was used to generate the scrolling text effects in William Gibson's famously self-encrypting poem AGRIPPA (1992). Here's the first of seven pages, which survive as a hard copy in the publisher's archives, available on the Web as a scanned image in the AGRIPPA Files at UCSB:




The remaining pages of the code fragment are here:


Read all about AGRIPPA itself here:



I'm really surprised that AGRIPPA was written in Lisp.

I'm not familiar enough with Lisp's various dialects to determine definitively which one this is, but if I had to guess, it's Macintosh Common Lisp. Noting the

(require 'MyQuickdraw)

this was the source code for the version that would run on a Macintosh, if AGRIPPA was multi-platform.

This is pretty cool. I'd imagine that that big block of characters is the obfuscated text, though it's hard to tell since we only see the end of it. Considering it ends with '")', it seems safe to say that that's the case.

It also appears that the-show is the main function that makes all of the magic happen. But that's not actually the case: the indentation is lying. Note how the entire page seems to be inside the-show, but when I noticed it 'calling itself' further down, I realized that it would recurse infinitely. Reading more closely, the definition of the-show is closed before the (setq *text.. section that's further indented. I'm not sure why he would have done this, as it's kind of confusing. Most of the source is fairly straightforward, and other than the obfuscation involved with the text of the poem, it doesn't appear that he was trying to obfuscate the source itself. So why the extra indentation?

The other thing of interest is at the bottom of page 2, (setf trash... I was looking around to see if I could find out how the poem deletes itself. Yet trash simply opens a file named "Agrippa:Agrippa." Older MacOS versions used : as the directory separator, the way we use \ or / now, so I don't see what this has to do with the Trash... Right after setting that up, it does something (notch 4000) times; notch isn't contained in the text anywhere. In the body, it simply writes that many ASCII '255' characters to that file.

It's all very interesting, and still quite mysterious, but overall pretty well-formed and readable. If only we had the whole text, rather than some chunks...



This is much like what I've been thinking about in terms of code, codons, textuality, and genetics during the brainstorming for this forum (I know people working on literature and science have much to say on this issue). After scanning through the six pages of code and also looking at the printed texts (I would love to actually see these texts up close), I am curious as to what the missing 4th page would have been, since the 5th page is a series of obfuscations. (I was just thinking how we can generate different examples of obfuscated code just by opening encrypted or specially formatted texts in a notepad or textpad.) Most of the other pages seem to be code concentrated on constraining and controlling the interface, as well as the generation of the text, on screen. I like to see it as an early generation of visual-centric coding. Much of it looks hand-coded, but I am not sure if the code is optimized (I don't know LISP at all beyond its reputation, and my rough guess stems from my experience with other languages), since these six pages just represent similar strands of code repeated in variations. I am thinking how much it parallels the sort of code one sees when switching from visual board to code view in Microsoft Frontpage-like software. What does everyone think about the generous parenthetical usage for one line of code? Such usage typically signals a higher-order language rather than something assembly-like. But I stand to be corrected.


I'd be willing to bet that this was entirely hand-coded. Here's why: Lisp code often generates other Lisp code, but it doesn't usually spit it out to a file or anything. If you're writing code that generates code, you just save the initial, handwritten code in the first place. This is called 'metaprogramming.'

> What does everyone think about the generous parenthetical usage for one line code?

It's required by the language.

> The use of it usually signals a higher language order than assembly like machine-language.

Lisp is actually one of the highest-level programming languages you can use; your intuition is correct.


If we look at the physical actions of a computer, code is not present in any of them. We can look at the chain of coding as a series of transformations, with the idea of software turning into an architectural specification on a whiteboard, the whiteboard drawings being implemented as code, the code compiling to assembly code, the assembly code compiling to machine code, the machine code being translated into microcode in the processor core, which is finally translated into a series of binary messages and sent through various logic gates, buses, etc. Nowhere in this chain have we reached anything physical at all. The actual physicality is a series of electrical phenomena totally oblivious to their erstwhile definition as ones and zeros, ands and nors, as codes. This is a system that behaves *as if* code were real, but it is *not* a system that actually, physically consists of code.

So when I see a Kate Hayles-style call to look at the materiality of the code, I worry. Code indeed has a materiality; right now the below code has a material being as a few electrical pulses zipping through a bunch of random semiconductors; that materiality is further transformed into an electrical signal, shot through a medium that translates it to an alteration of the polarity of thousands of squares on a grid, which in turn, by allowing light through or not, eventually brings the code to my eyes. And Kate Hayles' insistence on materiality has its place here: it counteracts the notion of code as some kind of pure subjectivity unbound by the constraints of the world. But we have to look at the nonmateriality as well.

On to the code. In my own coding practices, I do the following a lot:


find . -name \*.[ch]|xargs perl -e 'foreach (@ARGV) { open F, "<$_" or next; open G, ">$_.new"; while($_ = <F>) { s/([^a-zA-Z])?([A-Z])/($1 ? "$1" : "_") . lc($2)/eg; print G $_; }}'

for file in `find . -name \*.new`; do mv "$file" "$(echo $file|sed 's/.new//')"; done; make && ./the_program

---OK, so I've never actually done this, but I've done similar stuff. For those of you who don't grok write-only Perl and regular expressions, this is a pretty naive way of translating "CamelCase" to "not_camel_case" variables. The key part is "s/([^a-zA-Z])?([A-Z])/($1 ? "$1" : "_") . lc($2)/eg", which means search ("s/") through the string for an uppercase letter ("([A-Z])"), possibly preceded by something that's not a letter ("([^a-zA-Z])?"); replace ("/") the string you found with either the preceding non-letter or with an underscore ("($1 ? '$1' : '_')"), followed by a lowercase version of whatever capital letter you found (". lc($2)").
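The same substitution can be written less opaquely. Here is a sketch in Python of the core regular expression (my translation, not the shell pipeline itself):

```python
import re

def decamel(text):
    """CamelCase -> snake_case, mirroring the Perl substitution above:
    an uppercase letter, optionally preceded by a non-letter, becomes
    that non-letter (or an underscore) plus the lowercased letter."""
    return re.sub(
        r"([^a-zA-Z])?([A-Z])",
        lambda m: (m.group(1) or "_") + m.group(2).lower(),
        text,
    )

print(decamel("fooBarBaz"))   # foo_bar_baz
print(decamel("CamelCase"))   # _camel_case -- the naive leading underscore
```

Note that, like the Perl original, this is deliberately naive: an uppercase letter at the start of a string still picks up an underscore.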

This code manipulates code, and this manipulation affects the signifiers; but the way code works---the fact that I could do this and pass it on to another programmer who wouldn't be especially miffed, the fact that I could run the resulting program with no changes, the fact that I could be running this on Linux or *BSD or Solaris or Irix---these all show that, in some sense anyway, I have created a program that alters the *signifiers* of another program while leaving the *signified* intact. And this is part of what worries me: I'm not sure the signifier--signified way of thinking is at all adequate to code. The vast majority of signifiers in any program (the variables, function names, etc.) are created by the programmer himself, or by a community of programmers, and not by what is usually understood as the actual programming language. LISP takes this to the extreme; the only signifiers that are defined as *the language*---rather than features of a particular dialect---are two: "(" and ")". To be sure, the names of computational entities are real signifiers, and they really say something, and CCS should figure out what that something is. And the above transformation must be important for some reason; otherwise, why do it? But the programming language, and the core of what makes code itself really interesting, exists outside of these signifiers. Code is something like a pure grammar, a way of putting signifiers together without the usual attendant large associated set of signifieds---or at least that is part of the claim. It's not that code isn't signification, and it's not that there aren't signifiers in code, but here's my proposal: most of the signification in code happens through counterfactuals, and not through what is actually materially present.

To return to the actual code, the question is: what makes the above code "correct"? By what criteria might it be judged? I can tell you beforehand that I tested this code with a file that I constructed for test purposes, and it worked out just fine. But there are still bugs. I bet that nine times out of ten, it will fail on the "make"; "make" will not be able to compile the program because, for example, a variable exists somewhere at the beginning of a line, a condition the regular expression in the Perl code doesn't check for. The (trick) question is: where in the program is the signifier that produced the error? What we want to do is blame the first piece of code, and use the code we were trying to compile as some exemplary evidence. This is all we really can do, for the only way we can know that the first program is correct is to try to compile the second one. So here is my relatively simple conclusion, which I'm coming to in a very roundabout way: the grammar of code isn't exactly what it *means*, in the present tense, but what it *will* mean, what it *should* mean. This *will* and *should* cast their shadow over all code; they are not physical signifiers (yet), but they are somehow embedded in material reality. I'm not sure we have thought through any way of thinking about the materiality of a counterfactual, but I think that's where a materialist critique of code has to lead us.

So the question is: how is the counterfactual of its correctness present in/with this code?


@Evan, I'm curious to know what Kate Hayles had to say about the materiality of code -- can you tell us specifically what she said that you take issue with?


Specifically as in actually dig out my book and provide a quote along with a criticism :-) ?  Actually, I don't take issue with anything Kate Hayles is doing, I take issue with the notion that it's more than slightly applicable to what we're doing.

What I'm thinking about in her work is the main thrust of argumentation: that information and computers are not disembodied, and that the body matters in interpretations of code.  I'm probably oversimplifying this, but I'm not too worried about it as I'm not critiquing her directly.  For Kate Hayles, this is a particular move in a particular discourse.  Because many cyberfuturists want to spring out of their body and become ethereal bits in a magical world of code, because they want to do away with the Cartesian split by leaving behind the part of that split they see as problematic (the body), Kate Hayles comes at them and says: you know, things from the body affect the mind, the material basis of the system affects it and the system wouldn't be the same without it.  Which is a brilliant move that she then extends towards her critique of electronic texts and such---not that she doesn't make a lot of other points here as well.

But when we start trying to apply that to *code*---and here by "we" I might mean "me a year or so ago," as a particular instance of someone else doing this doesn't come to mind---*code* shows us that it doesn't fit this critique very well.  The problem is that the computopian discourse Hayles is critiquing is reducing body to mind (or signifier to signified) from both sides at once.  If you look at Weaver's introduction to the Shannon-Weaver edition of *The Mathematical Theory of Communication* (the foundational text of information science) you'll see the beginnings of an attempt at encoding not just the signifier, but the signified as well.  The world is thought of as already consisting in itself of discrete enumerable states (probably as analogy to an oversimplified quantum physics), which are then, albeit imperfectly, translated from one representation to another.  This is the idea that the physical can be freely exchanged with information, which is what Kate Hayles is arguing against.

But once you zoom in (out? sideways?) on this process, and look at the actual *code* somewhere in the midst of its transformation, you start quickly realizing that the major opposition we run into in CCS is *not* that everyone is claiming that code is magic and ethereal and mental and such, but just the opposite: the claim is that code *means* what the computer *does*, i.e. that not just the signifier but also the signified is an actual, physical object.  This is also exemplified by the Shannon-Weaver model, as the point in informatizing the world is to reduce the signifier--signified model to a signified--signified model (which can then be mathematically modeled---probably because "signified--signified" basically defines modern mathematics).

So when we start trying to emphasize the materiality of code, what we often end up doing is emphasizing code-as-signifier as *opposed* to code-as-sign.  This doesn't have to be the case.  There's something in Hayles' insistence on the importance of body that's almost alchemical---that posits the physical world as a sort of material signified, as a Being that the form of the object cannot touch.  But I think this is a much more difficult position from which to frame our arguments.  Easier is the position that says, simply, that meaning is not the signifier, but the signified, and that a signifier-to-signifier translation is mere translation (or even transliteration), but not understanding.



This is an interesting example of using hacked code to critique start-up culture.


Qwiki is an app that creates pretty slideshows based on Wikipedia entries. The service won the top award at the last TechCrunch Disrupt conference and just received $8 million in new funding from a group led by Facebook co-founder Eduardo Saverin.

An intrepid hacker who goes by the name of “Banksy the Lucky Stiff” put together Fqwiki, a workable Qwiki clone, in just 321 lines of HTML.


<!-- This code is not pretty, but it doesn't need to be. It's only been 6 hours, but based on funding patterns I should be able to raise a few million off of this ;). Yours, Banksy The Lucky Stiff --> <html>

Load the page and show source to see the code... 




It's interesting that the author chose the name that he did, given that Banksy is the leading graffiti artist in the world and is well known for his guerrilla artwork.