Blog Post

Where is the new assessment method for game-based learning?

I spent the past two days at the GLS 2010 conference, where I presented two projects I have been working on over the past few years. One is "Augmented Learning for Middle School Kids," involving the Gaming SMALLab at Q2L; the other is "Mannahatta: The Game." The second is part of a larger presentation titled "Game by Network: Thinking about Game Design for the Development of City-Wide Learning Channels". The conference was awesome; however, while everyone is certain that games are somewhat educational, no one was able to offer a new assessment method that will work for measuring game-based learning. McNemar's test is so last century.
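For readers unfamiliar with the reference: McNemar's test is the classic way to ask whether a paired pre-test/post-test design shows real change, using only the counts of students whose answers flipped. A minimal sketch in Python, standard library only (the counts below are made up for illustration):

```python
import math

def mcnemar(b, c):
    """McNemar's chi-square test for paired binary outcomes.

    b: students who passed the pre-test but failed the post-test
    c: students who failed the pre-test but passed the post-test
    Returns (statistic, p_value) using the chi-square approximation
    with 1 degree of freedom (reasonable when b + c is not tiny).
    """
    stat = (b - c) ** 2 / (b + c)
    # Survival function of chi-square with 1 df: erfc(sqrt(x / 2))
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

# Hypothetical class of 30: 15 improved after playing, 5 regressed
stat, p = mcnemar(b=5, c=15)
print(stat)  # 5.0
```

The point of the "so last century" jab is that this only captures a binary right/wrong snapshot before and after play; it says nothing about the in-game decisions and out-of-game moments the rest of the post argues an assessment must cover.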

I remember at "The Night to Re-imagine Learning", Katie Salen mentioned that we are at a moment when games have the best access to kids' learning. That has proven right, and kids are ready for it. However, I think now is also the time to convince the people around the kids, such as teachers and parents, that this medium really works. To do that, we need a new set of assessment tools for measuring game-based learning, and it has to be standardized.

At Q2L, we work with teachers to create game-like quests as alternatives to classroom-based learning. The common goal of every game we make here is to create the "need to know" (Katie's term) in kids, inside or outside the game. In order to get better at the game, kids often self-motivate to do related research. More amazingly, they sometimes come up with their own methods and strategies for memorizing material for quick reference while playing, or "cheats" in gaming terms. We are experts at designing those "need to know" moments using principles of game design. Our experience tells us that games have the potential to transform memorized concepts into long-term active knowledge.

The other reason it is hard to create assessment tools for measuring game-based learning is that a game is not a self-contained educational system that guarantees positive learning outcomes. Games are interactive systems, and sometimes the results of certain interactions generate negative emotions in kids. In my experience, the dissipation of these negative emotions can be guided into a greater learning moment. We need guidance around the game, and it can take many formats: introductions, discussions, worksheets, a complete curriculum, etc. However, if there is no moment in or after the game when kids reconnect their in-game experiences to a set of learning goals, then those experiences are wasted. The assessment tools have to somehow capture those moments that happen outside the game to give a more accurate evaluation.

 


4 comments

Kanyang, this is a hugely important issue, and you are so right to bring attention to it! I posted a link to your blog on our DML Facebook page, and will also tweet it on @hastac and @dmlcomp. Would be great to hear back from the community to see if anyone has good models or research to share.

Also, there is a fantastic forum discussion (118 comments!) initiated by our HASTAC Scholars on Grading 2.0: Evaluation in the Digital Age. When it comes to games, one of Jayme Jacobson's comments really struck me because the play-based, self-motivating and fun nature of games can die a sudden death when learners find themselves being evaluated. Jacobson wrote, "Sometimes I just want to say, 'This isn't serious, there needs to be a space left where a person can play without worrying about being evaluated.'" 

There is another interesting post on O'Reilly Radar (in their Edu 2.0 section) on the 21st Century Textbook, a post initiated by Marie Bjerede with 43 interesting comments. Bjerede wrote:

"...Could the 21st century textbook hold out a unique promise - that the student who uses this kind of textbook no longer needs to wait for high-stakes, anxiety-inducing tests to determine whether he had learned a topic? What if the digital textbook were instrumented to collect and interpret data in such a way that it could tell a student's level of mastery without test-taking, just from how he engaged with the content? Some of these measurements and interpretations are easy to imagine, such as: 'Which digital tool did the student first pick up to make measurements in the tank-filling problem', and 'What keywords did he search for on the internet?' Other kinds of data will be harder to interpret, such as: 'What solutions did he try on his scratch pad', 'What questions did he ask his peers', and 'Which of his peers' questions did he answer?' But to any degree, what would it mean for a textbook to understand a student's level of mastery in real-time from his work in this digital medium? With what information could a teacher know exactly what next challenge would be optimal for each student’s learning on a daily basis?"

While I think it would be great to toss out the high-stakes, anxiety-inducing tests, I have to admit I cringed thinking about my own learning experiences with a hard topic (for me!) like statistics. I would not want my professor to know what keywords I used to search on the Internet, although I could not articulate exactly why that information is not something I want readily shared. Maybe it would be different if we were accustomed to this type of assessment as early learners, but it seems to me that we can too easily trample on the affective safeguards that make learning feel safe. I could see that same potential occurring in game-based learning if our assessment tools were too invasive.

Thanks for the prompt -- It got me thinking early on a Friday morning! It's one of the big issues we face with learning in the digital age.


An educational game comes as a package: it requires guidance (from teachers) to bring all the students to a set of pre-designed learning goals. The assessment should cover both the game and the curriculum/activities around it. Another interesting discovery at GLS is that, in order to assess learning in games, most of the digital games have some kind of passive data-mining mechanism secretly running in the background. Because the assessment method is uncertain at the moment, they record gameplay (the moments when players make decisions) and hope that these data will help assessment in the future.
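That kind of passive background recording can be sketched as a minimal event logger: timestamp each decision moment and write the stream out for whatever analysis comes later. The class, event names, and fields below are hypothetical, not taken from any actual GLS game:

```python
import json
import time

class GameplayLogger:
    """Hypothetical passive logger: records player decision moments
    during play so the data can be mined for assessment later."""

    def __init__(self):
        self.events = []

    def log(self, player_id, event_type, **detail):
        # Each decision is stored with a timestamp for later sequencing.
        self.events.append({
            "time": time.time(),
            "player": player_id,
            "event": event_type,
            "detail": detail,
        })

    def dump(self, path):
        # Persist the full event stream for offline analysis.
        with open(path, "w") as f:
            json.dump(self.events, f, indent=2)

logger = GameplayLogger()
logger.log("student-01", "tool_selected", tool="graduated_cylinder")
logger.log("student-01", "peer_question_answered", question_id=7)
print(len(logger.events))  # 2
```

The design choice worth noting is the one described in the comment above: nothing is interpreted at capture time, because the assessment method is still unknown; the logger only preserves raw decisions in the hope they become analyzable later.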

 
