Presentation given on Saturday, April 27, 2013, at HASTAC 2013 in Toronto, Canada.
You will notice I have changed the title of my presentation a bit from what is in the program. Partially, it’s because I’m an indecisive academic, but mostly, it is in reaction to my experience of co-hosting a HASTAC forum on “Visualization Across Disciplines” this past week. There has been some amazingly rich conversation thus far, and it’s still open for participation. So, instead of talking about my own research (which is on the simultaneous analytic, aesthetic, and social use of data visualization), I’m going to be a bit more theoretical and hopefully a bit more thought-provoking.
As a new media scholar with one foot in visualization and the other in the digital humanities, I often find myself asking this:
“What exactly is visualization in the digital humanities?”
We’ve already established and can agree upon why we use it.
But what I’m more interested in, and what we never really talk about, is the how. How do we use visualization in the digital humanities? How does it function at the level of epistemology? Is it a tool? A research lens? A communication medium? Something else? What I’m going to do in this presentation is focus on the first option -- a tool -- and try to expand the way we think about visualization in the digital humanities to something beyond this construct.
First, let me clarify what I mean by visualization. Here are two definitions common to the visualization field as it has developed over the last 20 or so years.
We could certainly come up with others, but what I want to note are two important points: 1) the visualizations I’m talking about are digital, and 2) they help us to “make sense” of data.
Usually, when we think of visualization in the digital humanities, we think of it as a digital tool. This comes as no surprise given the field’s historical origins. Early digital humanities projects, like John Burrows’s textual analysis of 17th- and 18th-century verse, used visualization complemented by statistics to help “make sense” of the large volumes of data now available to humanities inquiry.
The culture of digitization that characterized the digital humanities through the 2000s only magnified these volumes, and scholars increasingly began to use visualization to look not only at textual content but also at spatiotemporal data, non-text artifacts, and related metadata over the longue durée.
Over the last 10 or so years, countless digital humanities projects have pushed visualization’s humanistic application in this functional, tool-driven way. A good example is “Mapping the Republic of Letters.” The project, which was not incidentally developed as part of Stanford’s Tooling Up for Digital Histories Project, uses visualization to help scholars explore the over 55,000 letters and documents digitally archived in the Electronic Enlightenment Database. By “mapping out” geographic and related data for senders and receivers of letters from the early modern period, it allows researchers to perceive larger patterns of intellectual exchange.
“Mapping the Republic of Letters” was created for its particular project from the ground up, but we could just as easily look at any of the many projects created by scholars using one of the publicly available visualization toolkits – Gephi, NodeXL, ManyEyes, Google Fusion Tables; the list goes on.
Or perhaps the ultimate example of visualization’s role as “tool” within the field is its integration within software and research environments designed specifically for humanistic research: TAPoR 2.0, RoSE, Lev Manovich’s ImagePlot (a tool that I helped develop), or Tanya Clement’s still-in-development platform for humanities visualization.
In each of these examples, what visualization is allowing us to do is to:
- extend our conceptual scope and reach
- create and discover new knowledge
- and then also represent this process
But digital tools are often more than tools (this is one of the big ideas of HASTAC, right?). It’s not just that visualization is a graphical or cognitive aid to thinking. It is thought itself.
This is not a new idea. It’s an extension of what was originally proposed by Rudolf Arnheim in his 1969 book Visual Thinking. In it, Arnheim argues that all perceiving is thinking and all thinking is also perceiving. The two, as he puts it, are “indivisibly intertwined.”
The idea has since been expanded by many scholars in the name of information visualization, media studies, contemporary technogenesis, theories of extended cognition… and in our forum this past week under the guise of a conversation about process.
The basic idea is this. We think of data visualization as a tool that gives us a product (a.k.a. insight). But it’s not just in the perception of this product that we gain insight, it’s also in the process of its creation. As Elijah Meeks has commented, “Dataviz isn't just a product, but oftentimes it's the exposed computational process.” This is where the visual thinking occurs.
Unfortunately, there’s often a gap between perceiver and creator, between representation and process that makes this visual thought appear unbiased, intuitive, and largely positivistic – all characteristics that stand in marked contrast with the type of uncertain and interpreted data we encounter in the humanities.
So the question becomes, how do we build this iterative process (including the data wrangling, the active visual thinking, and the experience of coming to knowledge) into the representation of visualization?
The most interesting example that came up in conversation (thanks to Mia Ridge) was a Lattice Uncertainty Visualization created by researchers at the University of Calgary and the University of Toronto. It’s essentially a visualization that sits on top of a translation algorithm for an instant message conversation between a German speaker and an English speaker. What happens is that the German speaker types a message in German and what comes out on the other screen is this. Every path through the lattice represents a hypothesis about the translation. The varying transparency of each node reveals the certainty of each word (with dark blue being more certain). The user can then go through and interactively change the horizontal green path through the lattice to indicate a better translation.
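To make the idea concrete, here is a minimal, hypothetical sketch in Python of the kind of data structure such a lattice implies. The names, confidence values, and example sentence are my own illustrative assumptions, not the actual Calgary/Toronto implementation; the point is simply that each position holds competing word hypotheses, each with a certainty score that could drive a node’s opacity, and that a path through the lattice is one candidate translation the viewer can override.

```python
# Hypothetical sketch of a translation lattice (not the actual research code).

from dataclasses import dataclass

@dataclass
class Node:
    word: str          # one candidate translation for this position
    confidence: float  # 0.0 (uncertain) to 1.0 (certain); would drive opacity

# Each position in the sentence holds competing hypotheses;
# every path through the lattice is one candidate translation.
lattice = [
    [Node("the", 0.9), Node("a", 0.4)],
    [Node("meeting", 0.7), Node("session", 0.5)],
    [Node("begins", 0.6), Node("starts", 0.6), Node("opens", 0.2)],
]

def default_path(lattice):
    """The system's initial guess: the most confident node at each position."""
    return [max(column, key=lambda n: n.confidence) for column in lattice]

def render(path):
    return " ".join(node.word for node in path)

# The viewer could then interactively replace any node along the path,
# e.g. choosing "starts" instead of "begins" at the third position.
print(render(default_path(lattice)))
```

Even this toy version exposes what the interface makes visible: the algorithm’s uncertainty lives in the data itself, and the “final” translation is a path chosen through it rather than a single fixed output.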
What I find fascinating about this visualization is not only that it reveals the uncertainty of the algorithm and the influence of viewer interpretation, but that it essentially forces the viewer to go through the creation and reception of the visualization process (even if it is only a small part of the process).
Opinions? Reactions? Ideas about how to incorporate process into visualization? And so I’ll end with the question I began with… What is visualization in the digital humanities? I hope that I’ve made this a little more difficult to answer.