This is the fourth post in a four-part series on user testing for DH projects that I've been posting over on my LiteratureGeek.com blog. On Wednesday I discussed some ways of doing “Quick and Dirty DH User Testing”; on Thursday, my work with user testing for “amateur” user audiences; and on Friday, some examples of DH user testing and evaluation I've encountered.
John Unsworth (1997) argues that the benefits of hypertext (and other aspects of the digital humanities as well) need to be testable. To claim that digital texts offer benefits beyond those of print texts, digital humanists must be able to point to a theory that could potentially be disproven:
When a theorist of hypertext does make claims of a factual nature (such as the claim that hypertext is an improvement over the state of text in printed form)... [he or she] has obliged himself or herself to support those claims with empirical evidence and rational argument... If we do think that we are "reinventing the text"... then we must have a theory to guide that research, and it must be possible for that theory to be proven wrong by the evidence. In short, if failure isn't a possibility, neither is discovery. (Unsworth, emphasis added)
That digital texts offer a plethora of tools not found in traditional resources is not being debated. Whether these tools are actually in frequent use and benefiting their users has not received enough attention; digital humanists must assess not the quality of the digital text as an idealized resource, but its value when accessed by real scholars on a daily basis. Because scholar users are often closely tied to a digital text's development, they can quickly voice any issues with the project they are using; amateur users cannot similarly voice their needs. Thus, evaluating the value of real use by amateur digital text users is of great importance.
Amateur audiences may form the majority of users for many digital texts, and the digital humanities world is ready for a formal evaluation of this group's needs and perceptions. If the use value offered by digital texts in contrast to traditional resources seems obvious, then producing empirical evidence of these benefits should pose no problem. And yet, as demonstrated above, very little in the way of scientifically conducted testing of digital texts has taken place. (There are certainly good practical reasons for this, funding and time foremost among them.)
Digital humanists can remedy this lack of scientific proof of digital texts' value to users. First, digital humanists can test the structure of a digital text, examining the system that delivers resources to the user; this involves a usability approach that follows users in their functional interactions with a site's interface. Digital texts can potentially assist scholarly users with many tasks. The Summit on Digital Tools for the Humanities (2006) identified four areas where attendees believed technology could aid humanities work: "interpretation, exploration of resources, collaboration, and visualization of time, space, & uncertainty" (p. 5). John Unsworth (2000) provides a different list—“discovering, annotating, comparing, referring, sampling, illustrating, and representing”—as basic human activities simple or “primitive” enough to be easily transferable to humanities computing. These individual features and tools of a digital text can be tested by straightforward usability studies.
Second, digital humanists can identify the value of a digital text's content by looking at the usefulness of the content to the scholar. Such a study would test the relevance of a digital text to its proposed audience, measuring the relatedness of the material to a project's users’ research (Saracevic, 2007a, p. 1918). Saracevic (2007b) examined relevance studies that looked at user assessments of web pages, including comments on decision-making and measures of perceived usefulness and authority (p. 2127).
Third, digital humanists can identify the value of a digital text by looking at an audience's actual use of it; such a study would look at user behavior, assessing what users are trying to do with a site and how they go about doing it. A measure of use is different from a measure of usefulness; where usefulness looks to the relevance of content to an audience, evaluating use requires looking at the efficacy of a digital text after its usefulness has been established or assumed (Park, 2000, p. 461).
My master’s thesis study focused on the last of these three value assessments, use. The least well-established benefit of digital texts is not the offloading of scholarly chores, but assistance to scholars in making new inferences and connections as suggested by their personal paths through material that is "both interactive and non-linear... a non-narrative experience for the user" (Katz, 2005, p. 113). Judging the worth of an entire digital text—that sum greater than the tools and resources it contains—is not as easy as quantifying the efficacy of individual features and can be accomplished with neither usability nor relevance studies; instead, a digital text's worth should be judged by whether users are answering their research queries when using that resource, or coming to new, unforeseen realizations. Figuring out the worth of the whole digital text, rather than its component features, requires probing scholars' perceptions of their digital text use.
Read the three other posts in this series on DH user testing.
- Katz, S. (2005). Why Technology Matters: The Humanities in the Twenty-First Century. Interdisciplinary Science Reviews, 30(2), 105-118.
- Park, S. (2000). Usability, User Preferences, Effectiveness, and User Behaviors when Searching Individual and Integrated Full-Text Databases: Implications for Digital Libraries. Journal of the American Society for Information Science, 51(5), 456-468.
- Saracevic, T. (2007a). Relevance: A Review of the Literature and a Framework for Thinking on the Notion in Information Science. Part II: Nature and Manifestations of Relevance. Journal of the American Society for Information Science and Technology, 58(13), 1915-1933.
- ---. (2007b). Relevance: A Review of the Literature and a Framework for Thinking on the Notion in Information Science. Part III: Behavior and Effects of Relevance. Journal of the American Society for Information Science and Technology, 58(13), 2126-2144.
- Summit on Digital Tools for the Humanities (2006). Report from the September 28-30, 2005 Summit. The Institute for Advanced Technology in the Humanities, University of Virginia. Retrieved August 20, 2009, from http://www.iath.virginia.edu/dtsummit/SummitText.pdf.
- Unsworth, J. (1997). Documenting the Reinvention of Text: The Importance of Failure. The Journal of Electronic Publishing, 3(2). Retrieved August 21, 2009, from http://quod.lib.umich.edu/cgi/t/text/text-idx?c=jep;view=text;rgn=main;idno=3336451.0003.201.
- ---. (2000). Scholarly Primitives: What Methods Do Humanities Researchers Have in Common, and How Might Our Tools Reflect This? (Part of a symposium on “Humanities Computing: Formal Methods, Experimental Practice” sponsored by King's College, London, May 13, 2000). Retrieved August 20, 2009, from http://www3.isrl.illinois.edu/~unsworth//Kings.5-00/primitives.html.