
If Bubble Tests Are the Wrong Answer, What's the Right One? Adding to the Washington Post Op Ed

This morning the Washington Post published my op-ed in the “Outlook” section, “Standardized tests for everyone? In the Internet age, that’s the wrong answer.”  You can read it here.  So now I am hearing from concerned, anxious parents asking: if bubble tests are the wrong answer, what’s right?  Good question.  There is history here: the assembly-line production of the Model T inspired Frederick Kelly’s “mass produced,” “standardized” way of testing, devised to address a teacher shortage crisis in 1914.

The point is that there is no easy A-B-C-D answer.  We need a national discussion of the array of complex, important new modes of assessment out there, many of which inspire (rather than de-motivate) learning, and are no more expensive than our current system.  Can we do better than Frederick Kelly did in 1914, modeling the bubble test on the assembly line?  Can the Open Web’s methods of peer evaluation, contribution, and participation provide an inspiration for us in the way the Model T did for Kelly?  I discuss several of these alternatives to the item-response test in Now You See It, from which the op-ed was adapted.  Here are a few things to think about.


First, we are at a point, computationally and in terms of electronic publishing, where more and more educators (and, of course, commercial vendors, all too eagerly) are developing methods of teaching in which testing is intrinsic to the teaching.  By that I mean you learn a subject such as algebra or writing computer code not abstractly, or by memorization, or by choosing the best of four answers, but by doing a problem, solving it, and getting a harder question that builds on that original process and knowledge.  Or, if you get the wrong answer, you receive a simpler question that helps you review your response, work on a problem area, and move ahead.  In the best of these online tests, the process of answering is itself the test.  The watchful teacher can then tune in to the test results, see who is soaring and who is struggling, and tailor lessons specifically to the problems or the promise of the group of students at that level.  This is far more satisfying for teachers, since you can target your lesson to the precise problem area or challenge.  It is the way learning a new sport, or learning to play chess, or any form of informal learning is structured: a challenge leads not to a “time out” or a “recess” but to a greater and more inspiring challenge.


The best of these systems can keep records of who in a class is doing well at what, and can also give the teacher an hourly, daily, weekly, monthly, or annual assessment of how individuals or groups are doing in his or her class.  In other words, as in all great tests, the process of doing and the assessment of that accomplishment are interwoven.  The test isn’t the end result but the inspiration for more learning.


Right now, with the bubble test as our end-of-grade form of testing, we have a very inexact test that de-motivates learning serving as our national “standard.”  And, until the recent (Friday, September 23) announcement by President Obama and Secretary of Education Duncan offering states a chance to adopt alternative assessment methods, we had a national system, No Child Left Behind, in which “failing” scores by too many “failing students” meant “failing schools” that would lose funding by 2014, and “failing teachers” who would be penalized monetarily for low test scores.  So the motivation to “teach to the test” was sky high, even though we know, through educational research, that the tests themselves fail as an adequate, successful means of testing.  That is wrong.  So wrong.


*  *  *

In addition to the in-process, real-time testing-as-learning projects I discuss in Now You See It, I am also involved with a project that helps organizations explore new methods for members to assess and record participation, contribution, quality, standards, and other measures the organization deems important to its success.  I help to administer the MacArthur Foundation’s Digital Media and Learning Competitions, and just last week (September 15) we launched this year’s competition, “Badges for Lifelong Learning.”  In this competition cycle, we are inviting organizations to propose a need for a new system of assessment.  Networks and organizations that want to try new, participatory, peer-driven modes of assessment, to replace old forms of testing or even old ways of measuring so-called Human Resources, can post an application, describe their goals and needs, and explain why conventional forms of assessment are not serving the needs of their members.  In the second round, developers can read the calls from these organizations and propose alternative systems for measuring contribution to the organization.  When the Competition’s second phase closes, there will be a meet-up of organizational representatives and developers, an intense planning session, and then pitches in which the public and judges will help select winning teams.  The winners will receive funding to spend a year working together to create an actual, working system that rethinks how the organization might measure the contribution of its members, ideally in ways that solve the problems or limitations the organization originally perceived.

What we hope to achieve from this competition is a set of working examples of assessment at many different levels, for many different kinds of institutions, in which the individuals who make up an institution can participate in what is being judged, how it is being measured, and how they, as peer members, can contribute to assessment.  Will this have application to our national standardized testing?  We don’t know.  But we do know that at the end we will have not just ideas or critiques but a whole host of working examples for all of us to learn from, examples that will, we hope, inspire even more avenues of research.  You have to begin somewhere, and we want this Competition to be a jumping-off place for possibility, for real, workable solutions to a dilemma that has been around since 1914, when Frederick J. Kelly invented the bubble test to address a national crisis.

We are also running a parallel research competition for researchers who can study and analyze the results and produce very practical white papers on what works best in which system.

“Badges” are emblems of achievement but, in many systems, you can actually click on a badge and find detailed information about the patterns of achievement: what contributes to the success, what is being tested, and so forth.  The point is to show working examples of working systems at the end of this competition so we can all learn from them.

You can read more about this here:   Or, read Sheryl Grant's very thoughtful analysis of badges as motivators:    Or, if you are really a glutton for information, here’s a whole bibliography of current work on the topic, also compiled by Sheryl, which even includes critiques of badging as an idea:


*   *   *

Bubble tests are the wrong answer for the 21st century.  Their inventor, Frederick Kelly, recanted this reductive, standardized mode of assessment as soon as the national crisis of a teacher shortage during World War I ended.  I tell his story in the Washington Post op-ed and at greater length in Now You See It.  I also tell the story of the many teachers, parents, educators, and others who are working to find a better way to measure achievement.  I am confident that, together, we can find much better answers than A, B, C, D, or none of the above.




Cathy N. Davidson is co-founder of HASTAC and author of The Future of Thinking: Learning Institutions for a Digital Age (with HASTAC co-founder David Theo Goldberg) and the forthcoming Now You See It: How the Brain Science of Attention Will Transform the Way We Live, Work, and Learn (Viking Press; publication date August 18, 2011).

A starred review in the May 30 Publishers Weekly notes: "Davidson has produced an exceptional and critically important book, one that is all-but-impossible to put down and likely to shape discussions for years to come."  PW named it one of the "top 10 science books" of the Fall 2011 season.

In the August 9 New York Times, columnist Virginia Heffernan calls the book "galvanic. . .  One of the nation’s great digital minds, she has written an immensely enjoyable omni-manifesto that’s officially about the brain science of attention. But the book also challenges nearly every assumption about American education. . . . As scholarly as “Now You See It” is — as rooted in field experience, as well as rigorous history, philosophy and science — this book about education happens to double as an optimistic, even thrilling, summer read. It supplies reasons for hope about the future. Take it to the beach. That much hope, plus that much scholarship, amounts to a distinctly unguilty pleasure."

NOTE:  The views expressed in NOW YOU SEE IT are solely those of the author and not of any institution or organization.  For more information, visit or order by clicking on the book below.  To find out Cathy Davidson's book tour schedule, visit

  [NYSI cover]

