As most of you no doubt know by now, Codecademy’s call to make 2012 the year you (yes, you!) learn to code has been heard throughout the Digital Humanities community. It has led in just the last week to a flurry of blog posts about gender in technology, the cultures of DH vs CS, and many other important and fascinating topics. And let’s not forget that we now have a wonderful array of DH folks working hard to address their code illiteracy.
But, the call to code is also a problem. It suggests that coding is just another skill, like riding a bicycle or sewing, that you can learn easily and then put to use. Among humanities scholars, the first model we often think of is learning a foreign language, which for many of us is a fairly trivial task: learn the grammar, then memorize the vocabulary. Yes, coding requires learning different “languages,” but those languages are not human. This model of coding-as-language-learning fails.
To understand how to program is to possess an entirely different mode of thinking, a way of breaking problems down into small parts that combine to form an algorithm. This mode of thought is far more foreign to many of us than even the strangest human language. As James Gottlieb points out:
What I have learned over the years is that programming and writing are different in one important aspect: programming scales differently than literature. This one important rule is what explains much of the confusion humanists have about programming, and it’s something that programmers must learn if they are to rise above being just a programmer.
Literature scholars are used to literature. Books are singular objects that are self-contained. References to other books are in passing. Legal and respectable books aren’t made by combining libraries of materials, cutting and pasting chapters from a wide range of sources.
That’s how programming works, though. You want to minimize the amount of repetition. You don’t want to cut and paste code. You want to share code across projects. To do otherwise is to increase bugs and decrease productivity.
The problem is that you can’t think about digital projects in the same way you think about non-digital projects. You have to look for areas of commonality and split those out into their own projects. You have to know where to draw the lines across which the parts interact. You have to understand the limits of the computer as well as the strengths.
It is this type of understanding that makes coding both valuable and frustrating. It takes years even to begin to master one method in the broad practice we call “coding.” I remember when I was first learning to code—which I’ve done entirely through books* and my computers—that I often encountered concepts that stumped me. It was never the syntax or even the vocabulary (i.e., the available function calls and libraries) that gave me trouble. Those things you can look up. When you make syntax mistakes (and you will), finding the problem is more a matter of attention and experience than understanding. Yes, it requires a lot of practice and can be incredibly frustrating, but it’s something that all coders must go through and continue to go through. There are practices, like pair programming, that can reduce these problems, but as with learning any new skill, it’s largely a matter of repetition, of making and correcting errors often enough that you learn to see them more quickly.
But when I first tried to learn C, I ran into the idea of a memory pointer. Though obvious to me now (and perhaps immediately obvious to you), I had real trouble wrapping my head around the concept. Likewise object-oriented programming: after years of teaching myself languages without those features, objects and the attendant terminology and concepts took me some real time to figure out.
So, this post is all to say: the call to code is great, but I fear it raises unrealistic expectations. You can’t learn to code in a year. You can learn some parts of coding in a year, and that’s a valuable exercise, but don’t expect to pick it up quickly. If you’re new to coding, you’re trying to learn a new way of thinking, not “just” a new skill.
Ask yourself: “what do I want to learn from this work?” Are you trying to become a full-fledged developer who will be building new tools for research? Or are you trying to gain an understanding of how machines work, to develop a literacy of the concepts? While the two cannot be cleanly separated, a strong awareness of your purpose in learning will help you when you want to bang your head against the keyboard because your code didn’t compile once again or because some mysterious error still eludes you. Because it will happen. Coding is always hard.
*I highly recommend the books in the O’Reilly series, by the way: http://shop.oreilly.com/category/browse-subjects/programming.do