I was able to attend a public lecture given recently by Jim Leach, a former Iowa congressman and the current chairman of the National Endowment for the Humanities. In his talk, Leach used the 2010 Citizens United decision of the U.S. Supreme Court as a framework upon which to hang a discussion of the importance of the humanities in civic life: words have weight, ethical import, and meanings which can be shifted and twisted to serve political goals; it is through an education in the humanities that we acquire the tools to understand and analyze such complex topics.
Thanks both to Mr. Leach's talk and to recent conversations I've had with colleagues, I've been mulling over the importance of using those analytical tools to think about the practice of the humanities itself. As members of the first generation of digital humanists, we're not only able to shape how the digital humanities will be practiced for a long time to come; I think we also have an ethical responsibility to create a model of scholarship which is inclusive and rigorous, and whose practitioners are aware of its pitfalls.
As a historian, I know that there have often been objections to innovations in the education process, and that many of those objections have been proved to be groundless or shortsighted. Socrates objected to writing—he thought that it would lessen our ability to memorize (which is true!), but more importantly he thought that it would result in students who would be "hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality." I think we would all agree that that hasn't come to pass.
Yet I think history also shows us that it's worth playing devil's advocate about the introduction of new technologies, that we should be wary of any claims to "inevitability" (since few things ever are), and that we should always be asking cui bono: who benefits? Blogger Historiann (the pseudonym of Ann M. Little, Associate Professor of History at Colorado State University) recently wrote a post on MOOCs in which she places the current push by a number of university administrations to adopt them in historical context. Historiann is wary of this push because she sees it as part of an effort to commercialize and commodify higher education, a concern which I must admit I share. In making her point, Historiann reminded us of some previous predictions about technology in the classroom:
Phonograph: In an 1878 article on “practical uses of the phonograph,” the New York Times predicted that the phonograph would be used “in the school-room in training children to read properly without the personal attention of the teacher; in teaching them to spell correctly, and in conveying any lesson to be acquired by study and memory. In short, a school may almost be conducted by machinery.”
Movies: “It is possible to teach every branch of human knowledge with the motion picture,” proclaimed Thomas Edison in 1913. “Our school system will be completely changed in 10 years.”
Radio: In 1927, the University of Iowa declared that “it is no imaginary dream to picture the school of tomorrow as an entirely different institution from that of today, because of the use of radio in teaching.”
As a current graduate student at the University of Iowa, I'm pained to admit that my institution's prediction went a little awry.
This is not a Luddite manifesto. I'm a HASTAC scholar, after all. I regularly use technology in the classroom with my students—we create crowd-sourced timelines on TimeGlider; we use Google Maps to visualize data gleaned from medieval texts in a new way; we examine artifacts located in a European museum in far greater detail and from far more angles than would ever be possible in a print textbook. While I'm not a coder, I'm beginning the slow process of learning how to use 3D modeling in my own dissertation research, and I'd be lost without my Zotero database.
It is, instead, a call for reflection. How can we create a digital humanities which is ethical and open? How can we ask students to shell out for often expensive and restricted e-books and apps, or to use complex software with steep learning curves, when tuition costs are rising ever higher and many students arrive at university unable to perform the most basic computing tasks? (I'm ever more convinced that the 'digital native' is a lie.) What ethical issues are associated with the growing push to tailor secondary and tertiary education to incorporate technology in a way which serves not just vocational, but explicitly corporate, needs? How can we ensure that digital humanities tools are adopted because they serve the creation and transmission of valuable scholarship and enhance a teacher's pedagogy, and not out of a fetishization of technology, change for the sake of change? I have to confess that I often wonder what value I'm bringing to my own work and teaching through my adoption of DH tools, and I'd love to hear whether any of you have felt the same. What steps have you taken as a result? How do you feel we can work, particularly in a practical way, to create a self-reflective digital humanities?