Evaluation of Experimental Work

We're heading into the home stretch on the Participatory Chinatown project. The game is nearly complete, and we are working toward launching it at a large community meeting on May 5th. We have tested the game with 15 players in the controlled setting of a lab, but on the 5th we will be assembling 45 players in the same room, all of whom will be playing the game and engaged in our deliberative process. Participatory Chinatown is more than a video game: it's a physical process where people come together to talk and play. The process must be highly choreographed, in that the attention in the room needs to move like a dance between screen and face, face and crowd, in a fashion that produces a meaningful and engaging group experience.

There's a lot we seek to learn in this process, not least of which is: does the game work? But other questions include: have we created a context for good deliberation? Does the game motivate people to continue their involvement? Does it alter expectations of the community meeting? And the list goes on. In short, we are observing a range of phenomena, and as such we have struggled to find appropriate research methods.

The best way to evaluate a process is to establish a control group, whereby you control for certain variables and determine the effectiveness of the process. We discussed holding a parallel community meeting in a traditional format to function as this control group. But despite the fact that this process is part of an ongoing research agenda of mine, I am still not able to isolate variables that can be controlled appropriately. Consequently, we have opted not to use a control group, and instead to adopt methods that include observation, interviews, and a survey. Rather than comparing the process to something else, we are exploring the parameters of the phenomenon. We need first to understand how games can influence decision making, and how the co-presence of players can change the dynamics of the room, before we can make claims about its relative superiority to existing forms of democratic meetings.

When evaluating experimental work of this sort, it is important to fully understand the parameters of the work before making broader claims about its effectiveness. The research should be more exploratory than evaluative. We need to spend time trying to understand the questions before leaping headlong into finding answers. This is what makes this work different from traditional forms of scholarship: we're building a project before we fully understand it, and hopefully, through the process of building, we will come to understand it better. But then the question remains: how do we take it to the next level? How do we move from experimental process to validation? And more importantly, how do we translate the kinds of experimental work MacArthur and HASTAC are invested in to other funding sources, including government agencies, that surely have less tolerance for experimentation? I wonder if there is a need for an evaluative mechanism that specifically addresses this problem, a method that would help legitimize this kind of socio-technical experimentation within the established frameworks of traditional funding. Contextualizing this work within broader frameworks of "legitimate scholarship" is perhaps worthy of our attention.
