
Could Badges for Lifelong Learning Be Our Tipping Point?

As more and more fascinating, creative, and surprising applications to our DML Badges for Lifelong Learning Competition flood in (we're at almost 100, and the competition does not close until the end of today), I am wondering whether, a hundred years from now, some historian ploughing through the dusty data archives of the Internet will see this moment as digital learning's tipping point. I mean that.

This could be our tipping point for how we measure, the entry point to an array of new ways of deciding what counts for our era. This Competition isn't the end point but the beginning. Even so, it could tip the balance so that, all over, people are wondering why and how they are measuring quality and contribution the way they are and beginning to think about better ways, ways that fit their organization's values and goals. That's the key. So many of us work in schools or in jobs where what counts has been decided for us. The array of badging systems in these applications suggests that many, many of us are frustrated with our inherited systems and want to come up with new ways of deciding what we want to count, what we want to value, acknowledge, credit, and reward. A badge system can be the symbol of all that: visible proof of some quality of participation and contribution that previously wasn't even defined.

To understand why this is so important, we have to go back to the origins of the system we have now, a system designed for the Industrial Age as part of the Taylorized movement of "scientific labor management." I speak of the item-response, multiple-choice bubble test, invented in 1914 and still the cornerstone of our national educational policy, No Child Left Behind, passed in 2002. It was the tipping point in what I call "scientific learning management," the application of Taylorized theories of uniform, standardized, timed, regulated productivity to education.

The reason I devoted a chapter of Now You See It to "How We Measure" is that, without a uniform method of assessment, there is no standardization. Standardization is the most important ideal of the Industrial Age, but it is quite contrary to the peer-led, interactive, contributory, connected ways of learning and interacting that the World Wide Web affords us. If we are going to truly transform our Industrial Age institutions for the digital age, we have to re-evaluate how we evaluate. We have to come up with interactive, process-oriented new methods in which peers can decide all the different things that count for them and why, and figure out a way to count them. We do not have to know what those outcomes will be. If we did, we would be buying into "scientific learning management" again, where, as in all Taylorization, the outcomes are determined in advance of the process (Taylor called them "quotas"). Outcomes, whether for labor productivity or learning productivity, are defined in advance. They are the bar you have to get over, the scale on which you are measured.

In scientific labor/learning management, there is a set scale that measures only pre-defined kinds of productivity and pre-defined forms of achievement and you are assessed by a standardized form of testing only on those things. Then you are measured against all other workers/learners and rewarded on that scale. Badges for Lifelong Learning offer us other ways of measuring and other ways of thinking about what qualities and contributions we might want to measure.   

Don't you see it? At present, just about everything about school and work rests on evaluation. If the goal is set in advance, it changes the process. Even if you try to modify the process for another end, you are modifying it against a set standard. A standardized assessment metric is a mentality as much as it is a measurement.

 *   *   *

How Did We Get Here?

In the "How We Measure" chapter of Now You See It, I go back to the archives to find out who invented the multiple-choice test, the tipping point in turning the movement toward compulsory, public education into a more uniform, standardized system for the Industrial Age, one conforming to Industrial Age values. That item-response form of standardized assessment, invented in 1914 (and virtually unchanged in the present), is based on Taylor's "scientific labor management," the basis for the assembly-line model of industrial manufacturing: timed, standardized, and uniform, in quality and in method of assessment. Frederick J. Kelly, the inventor of the standardized test, transformed "scientific labor management" into what I call "scientific learning management." The test was the single most important apparatus of an educational mentality that has lasted nearly 100 years.

Here's the background on Kelly. He was a doctoral student at Kansas State Teachers' College in 1914. Men were fighting in Europe in World War I. Women were in the factory. Compulsory public education was now the law of the land in every state, and the age at which you could leave school had risen to 16, meaning that two years of secondary education were no longer just college prep but for everyone. At the same time, the ranks of immigrants coming into the secondary schools of the U.S. public school system were swelling at an extraordinary rate, from 200,000 in 1890 to 1.5 million in Kelly's day. There was a crisis. Kelly looked at Model T's being turned out in standardized fashion and came up with the itemized test: first, because it gave some kind of objectivity to what was slipshod processing of all these students through the educational system and, second, because it was cheap, fast, and easy, like turning out the Model T's.

The reason the bubble test (what Kelly called the Kansas Silent Reading Test) caught on is that, in the decentralized, state-based educational system of the U.S., a standardized test allowed some form of assessment across schools, school districts, and states. From the U.S., the system spread to the world. America tests earlier and more often than any other country on the planet, but virtually every country has adopted some form of bubble testing. And it is an industry, worldwide, with billions of dollars of commercial investment and return. There is a lot at stake.

But does hierarchical, timed, pre-defined, uniform, standardized testing really measure the kinds of intelligence and activity that our kids need for the challenging world they will face as adults? Do similar forms of standardized evaluation really work in the workplace today? The "timed test" is a weird way to measure intelligence, when you think about it. It's hard to imagine even trying to explain it to Newton or Leonardo or Galileo. The idea that a timed bubble test is the pinnacle of intelligence would convince great thinkers of the past that the 21st century was for lemmings running fast off the cliffs. I agree! It is a system for another century. It may have worked for that one. We need better ways of evaluating contribution now.

Sadly, Kelly would have agreed with me. The father of item-response testing himself wanted to abandon this makeshift way of testing "lower-order thinking" (as it was called in 1914) once the First World War was over. He became a Deweyesque integrated thinker who believed all subjects were relevant to one another and that answers were processes, not products to be filled in, bubble after bubble. He went on to be president of the University of Idaho and tried to reform that university toward these more integrated, interdisciplinary, process-oriented "higher-order thinking" goals. His faculty was furious at this presidential plan; they had wanted to hire the father of scientific learning measurement. By then, even the Scholastic Aptitude Test was using a timed bubble test to decide who would or would not get into university. Kelly was fired from his presidency within two years.

You can find a short version of this story in the Washington Post.

Badges for Lifelong Learning: The Competition Closes Today but the Thinking Has Just Begun

If you want to peek in on how this badge competition is unfolding, you can. You can read about the array of organizations that are taking the chance to try something new, to think in new terms about what they want to measure, and how and why. Check out the applications. They are all public. Some have logos, some do not (a logo is not a requirement); if you click on an application's box, you will be able to see the entire application.

From these applications, you can learn and get ideas that might work for your institution or organization. That's the point. If you are a teacher, you can go into class today and ask your students what they think is the most important thing they will learn in the class, what they think they are learning, not just in content but in form. That, for me, is the best part of Badges for Lifelong Learning: we can all learn from this process, from these organizations willing to step back and think about what system might work best for them, now. Standardized testing is not the only way to evaluate quality.

What Makes a Tipping Point?

Before there can be institutional or organizational change, there often has to be a crisis. For Kelly, it was World War I and the immigrants who needed to get through the newly required secondary educational system. For us, now, it is a worldwide economic crisis but also a crisis in how we work and in how we define work, one far more complex and complicated than the systems of education that are supposed to prepare kids for independent adulthood. I don't mean "job preparation" in a simplistic way. I mean systems designed to inspire and reinforce the values and forms of responsibility, self-regulation, self-determination, and maturity that can help us to thrive in a complex world.

This Badges for Lifelong Learning Competition is by no means the end point, but it may be a beginning. It may be the starting point, a tipping point, in helping us to think about How We Measure and to think through better ways. I am gratified beyond words by all those who have taken the last few months to think deeply about what their organization needs and to work together to propose something the rest of us can be inspired by. That process, in and of itself, is an original and bold one that very few organizations ever engage in. It is a bold step towards learning the future together.




Cathy N. Davidson is co-founder of HASTAC and author of The Future of Thinking: Learning Institutions for a Digital Age (with HASTAC co-founder David Theo Goldberg) and Now You See It: How the Brain Science of Attention Will Transform the Way We Live, Work, and Learn (Viking Press). NOTE: The views expressed in NOW YOU SEE IT are solely those of the author and not of any institution or organization. For more information, or to order, click on the book cover below. Cathy Davidson's book tour schedule is also available.

  [NYSI cover]





1 comment

My (RE)newed interest in badges reflects some thinking, and looking at your book on a Toshiba pad downloaded from Amazon via Kindle. This is not so much flattery as the recognition that kids will see books that way from now on. And that such a vision is astoundingly different from Taylor and testing.

At issue is that e-portfolios and badges are the natural alternative to tests (without displacing, or even challenging, tests). In Massachusetts, where state tests came early (and have had an early-bird impact with high test scores), the legislature recognized this early too: paper portfolios were required of all children, K-12, as an alternate option to justify graduation without passing the tests. Of course, the portfolios, as bureaucracies will have it, just got thicker, and the kids, on graduation, dumped a lot of paper in wastebaskets, but... the idea was there, to linger and savor.

In Somerville, with a slightly more traditional bureaucracy, those portfolios got particularly heavy. Meanwhile, it seems that only six kids, over more than 15 years, used that option to get a diploma without the test. The Somerville High School Council, recognizing the techie influence in this very multicultural city next to Cambridge, decided to convert those portfolios to digital form and developed a strategy to make that happen. By a very fortuitous coincidence, as a member of that Council, I had consulted the summer before with Dr. Arnold Packer's program through a local media workshop, creating "verified resumes," or verifying that kids accomplished "soft skills" improvements in areas like responsibility, teamwork, work across cultures, acquiring and applying new knowledge, creativity, listening, planning, and technology. As a further coincidence, I'm lousy with names, so I asked those kids in their first week of a summer program to score themselves.

We talked a little about the terms, but the fancy rubrics that overlay so many other "subjective" measures seemed largely irrelevant. One kid, with a not atypical bravado, gave himself high scores on everything, only to have the rest of the group laugh at him, "help him realize" his strengths, and cut some of those numbers to appropriate size. I didn't need to "teach" rubrics when they taught them themselves so brilliantly. They also used their own scores to build teams for video projects (the theme of the summer jobs), and found people with complementary skills to "keep everybody on task," or generate "creative" questions for interviews, etc. In other words, they took their own assessments at face value.

Your implication that badges may be at a "tipping point" with regard to testing misses some of the other potential points that are also "on the line." This technology, the kids' own, not necessarily that of teachers or schools, employers or colleges, offers a host of other "points" that are, if possible, even more vulnerable than tests. Watching a kid take an online course and, when trapped in a question they barely understand, consult, on the same computer, often in several different windows, Google, Wikipedia, Ask, and a batch of other sites, not for a word-for-word answer but for better information than the question posed, says a whole lot more about this tech than most would guess. Kids are truly learning to teach themselves, in lieu of content-oriented junk.

So one of the tipping points is whether to bother with school at all, or, even more, with college. That gets messy indeed, since parents' histories interfere with their kids' own experience, at least along some vectors. The parents of a Gates or a Jobs could cope, and those with little English may be managed by some fairly sophisticated kids. But that is most surely at risk.

Also at risk is the entire concept of grading. If a kid can show they know what the teacher is trying to say, they may actually show the teacher how to say it better. So another risk is the whole question of teacher competence. Is a teacher who learns from students better than a student who learns from teachers? Wow, what an icky tipping point that reveals.

In any case, badges, which give kids tools to "badger" teachers, give teachers carrots to incent those kids, and give employers, unions, colleges, and others much better illustrations of what kids actually can do, are most surely a very risky tool. Let's hope the transformation is as good as it might be.