
Badges Now

In 2009, when I was researching and writing the "How We Measure" chapter of Now You See It, I could offer a critique of the current system of standardized, summative, end-of-grade testing, but had little to offer that addressed the key positive contributions of standardized testing: its ability to be graded by machine (or by humans with a template) and to provide the basis for comparisons of outcomes across disparate institutions. Originally pioneered in a doctoral dissertation at Kansas State Teachers College by Frederick Kelly in 1914, the item-response test was adopted widely and almost immediately because of these two features.

Many esteemed critics over the years have pointed out that multiple-choice tests are a poor instrument for motivating learning, for instilling an appreciation of complexity, or for accurately testing what one has actually learned. They encourage "teaching to the test" and "learning to the test," especially in our post-2002 No Child Left Behind world, where public school teachers can be penalized and schools closed if kids don't do well on such tests. However, until another system addresses the key attributes of standardization and automation, school life will continue to be governed, preschool to professional school, by EOGs, SATs, ACTs, GREs, LSATs, MCATs, and the like. Many have designed alternative systems, but these have not been widely adopted and have never displaced standardized testing as the most widely accepted cornerstone of our educational assessment apparatus. No one has invented a better educational mousetrap.

Until now. Digital or Open Badging is the first real alternative to one-best-answer testing that retains those two features while addressing other key learning objectives--and, it must be added, it is being adopted in numbers that make its acceptance as an alternative system feasible. With the announcement of the Clinton Global Initiative's adoption of badges, the Chicago Summer of Learning, and universities, informal learning centers, and many other programs devising new badging systems, it is possible that we are on the verge of a major change in "how we measure," one that better counts what we value and values what we count.

What makes digital (sometimes called "open") badges a better mode of assessment than currently used standardized testing? Digital badges can be customized so that they are awarded for a range of skills, achievements, interests, or accomplishments valued by the institution that offers the badge—not by a set of standardized values prescribed by a test that may or may not suit the values of the organization. Badges can be awarded by peers, and the reasons they were awarded travel with the badge, so anyone who wants to see that HASTAC has given me, let's say, a Prolific Blogger Badge can click on my badge and find out that I've blogged on this site XX number of times. HASTAC does not currently offer such badges, but we could decide to. If we did, we could decide what things we wanted to badge and, if someone achieved those, the badge would come with "metadata" telling you all the attributes that, institutionally, HASTAC had arrived at in deciding who would or would not be recognized.

The advantage of digital badges over test scores is that they are customizable and verifiable, and they can be awarded outside the normal, traditional institutions charged, in our society, with credentialing expertise. Unlike other customizable formats—such as resumes—badges are harder to "fudge," since they come with metadata that tells you why they were awarded and can provide contact information for the person or institution that awarded them.
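How does that metadata "travel with" a badge in practice? Under the Open Badges approach, each badge is backed by a machine-readable assertion recording who issued it, to whom, against what criteria, and with what evidence. Here is a minimal sketch in Python; the field names loosely follow Mozilla's early assertion format, and all the specific values (the HASTAC issuer, the Prolific Blogger badge, the URLs) are hypothetical illustrations, not real records.

```python
import json

def make_badge_assertion(recipient, badge_name, issuer,
                         criteria_url, evidence_url, issued_on):
    """Bundle the 'why it was awarded' information into one record
    that can travel with the badge image and be inspected by anyone."""
    return {
        "recipient": recipient,
        "badge": {
            "name": badge_name,
            "criteria": criteria_url,   # what the issuer decided to recognize
            "issuer": issuer,           # who stands behind the award
        },
        "evidence": evidence_url,       # e.g. the blog posts the badge cites
        "issuedOn": issued_on,
    }

# Hypothetical example: a HASTAC "Prolific Blogger" badge
assertion = make_badge_assertion(
    recipient="blogger@example.org",
    badge_name="Prolific Blogger",
    issuer="HASTAC (hypothetical issuer)",
    criteria_url="https://example.org/badges/prolific-blogger/criteria",
    evidence_url="https://example.org/blogs/prolific-blogger-evidence",
    issued_on="2013-07-09",
)

# Anyone who "clicks through" the badge can read the full record:
print(json.dumps(assertion, indent=2))
```

The point of the sketch is the design choice, not the exact fields: because the criteria and evidence are baked into a structured record rather than a free-text resume line, a skeptical reader can follow them back to the issuer and verify the claim.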

Nor is it just institutions. Because learning, achievements, and skills can happen anywhere, you can also have peer-awarded badges that, again, acknowledge how and why you received one. Perhaps you taught yourself to code and another programmer worked with you on an online project (this is how much of the open source coding in the world happens) and wanted to give you credit for your tireless contribution. You might receive a coder badge. Coding communities such as TopCoder and Stack Overflow have been using badges in this way, quite successfully and seriously, for many years; indeed, their use in the open source coding community (tested and tried by anonymous collaborative web developer partners for many years) is the inspiration for this wider application of digital badging. If someone who wants to collaborate with you on a future project sees the badge on your website, they can click on it, find out why it was awarded to you, and even contact the person who awarded it for more detail. Since reputational systems reflect on both the giver and the receiver, there is a high degree of trust—much as one sees with trusted respondents and reviewers on Goodreads, Amazon, Yelp, and other peer-recognizing operations.

Badges offer a possibility for more equitable ways of measuring achievement than many current high-cost structures, ranging from traditional higher education to for-profit institutions to various recertification or retraining programs that are often cost-prohibitive, depend on location, and often come with officially recognized prerequisites (a high school diploma, GED, or college degree). Badges can also take a variety of personal skills and help you "create" a career: I've seen this with street artists, self-taught and highly accomplished, who learn web skills and become designers. Badge systems can help make assorted and even random-seeming skills (graphic design + programming skills + project management ability) visible as "careers."

Mozilla’s Open Badge Initiative has even created standards for badges and a “backpack” where you can keep your badges, and has pioneered work on security, privacy, and interoperability. Check out Mozilla’s beautiful and easy-to-use website to find out more:

Still have questions about badges? You should: as with any major systemic change, there are upsides and downsides, unexpected consequences and uneven developments. This is a sea change. Finally, we may have something flexible enough and widespread enough to displace Kelly's 1914 test. As Dan Hickey, Rebecca Itow, and others on Dan's Indiana University research team have noted in their research on badges (appearing on Dan's website), no system will displace the current one until it is widespread—a bit of a catch-22, and a common enough circularity in many new system designs, until there is a tipping point that bursts through the feedback loop of acceptance-standardization-verifiability intrinsic to most peer review. See, for example, the excellent, lively discussion between Dan Hickey and David Gibson (two major scholar-teacher-researchers in the assessment world) on these issues:

For the last two years, HASTAC, supported by the John D. and Catherine T. MacArthur Foundation and in partnership with Mozilla, has run the Digital Media and Learning Competition on Badges for Lifelong Learning to support a number of institutions interested in developing and actually trying out badging systems. We supported thirty major and highly diverse institutions--schools, museums, after-school programs, universities, the military, the entertainment world, Fortune 400 corporations, small firms, and more--that have spent this time exploring, experimenting, and now implementing an array of badge systems that address some issue within their organization.

A major museum is using badging to help inspire kids who come to summer programs--and to help them maintain a community of other kids who earned badges once they return to Akron or Peoria or Laramie, excited by all they saw and learned at the museum and now with a community of other kids with whom they can still learn together about a favorite subject online. Badges for Vets is finding a way that all the skills and talents a vet learns in the military might be put into a coherent form that will help him or her find a job in civilian life. And schools are pioneering invention, exploration, and discovery as "badgable" skills--not just selecting the one right answer from four available on a very static, siloed subject.

This week, HASTAC will begin to publish final reports from two years of what has been tireless and inspiring work. The reports are long and detailed, addressing the mechanics of a particular badging system as well as the larger issues of lifelong learning. This, along with the DML Research Competition (much of which is being republished on this HASTAC site already), will provide many examples of how your institution can use badging for its own goals.

Finally, there is an alternative that seems to be picking up the kind of widespread use that may displace the existing assessment apparatus. Is badging perfect? Not at all. Is it better? Way better. Stay tuned to this week's reports and, in the meantime, visit our badging resources pages here:




90% of the sites on the HASTAC/Mozilla demonstration present "nice feeling" badges - that is, badges documenting almost random achievements, which may, in fact, be critical to success, careers, employment, college, or graduate school, but which are not easily accessed, nor do they reflect clear and specific achievement (as opposed to the arbitrary and pre-arranged kind, like standardized tests by subject and grade). As a teacher with whom I worked on desegregating colleges in the '60s once put it, "if you want kids to have a warm feeling, tell 'em to pee their pants. School's about more than that."

It doesn't have to be this way. Badges could be more easily compared - over time, across kids, teachers, subjects, and other settings - without becoming dreary test scores, because they reflect a convergence between kid goals and teacher or school goals, and they can, at the same time, embrace goals for jobs, careers, colleges, grad schools, and even grandma back in some home country. They need not ignore "standards" if those standards reflect real-world issues. It is not a matter of remembering how long the Seven Years' War lasted. It is more a matter of "how well do I listen," "how do I show I'm reliable," or "do I ask good and useful questions?"

Such standards were developed and studied exhaustively in the 1990s by the Secretary of Labor's Commission on Achieving Necessary Skills (SCANS). Unions, national employers, labor, and educator groups all participated. Most certainly there are some new "necessary skills" (like using a smartphone), but they can largely be subsumed by the categories of the SCANS model. It is not the skill itself that is critical to amplifying the meaning of test scores but, rather, the category that allows comparisons and how such comparisons are applied.

Ten years after SCANS, its principal investigator, Dr. Arnold Packer, then at Johns Hopkins, piloted a "verified resume," which adapted those same "soft skills" to evaluating student performance. In Somerville, Massachusetts, we extended that concept and asked young people to evaluate themselves and each other in a collaborative, feedback-oriented annual activity. The skills young people want to show off ought, most surely, to reflect something about what teachers try to teach! And that's what those portfolios show. They are not just "I like myself because...," but rather reflect the form, goals, and categories for goals expressed and shared by students as well as teachers.

And they are way better, way more concrete, and way more accountable than the badges at the MacArthur site, even though MacArthur actually funded part of their pilot phase.



Are you saying badges are pointless because they are not e-portfolios? It sure sounds like that. If so, you are conflating assessment practices and credentialing/recognition practices. One of the things that is most interesting about badges is that they can be used to recognize evidence of learning in entirely new ways. That evidence of learning can come from e-portfolios OR standardized tests.

If you look at the initial set of assessment design principles we identified in our study of the assessment practices of the 30 DML awardees, you will see that (naturally) "use portfolios" emerged as a principle (because it was enacted by multiple awardees). We are currently developing working examples that will let everyone find out which projects enacted those practices and (importantly) how the educational context impacts the way each principle was enacted. For example, you will see that the Sustainable Agriculture and Sustainable Future project used "private" portfolios (student privacy = FERPA), while Makewaves used public portfolios (naturally, as they are training young sports journalists). This is the useful knowledge people need about digital badges. More importantly, and highlighting the vision behind badges at MacArthur, badges promise to help communities of practice emerge around particular credentialing practices.

You seem to be implying that portfolios "work." Portfolios are great for some things and terrible for others. Naive exuberance for portfolios and other alternatives in the early 1990s imploded on itself in the mid-1990s, opening the door for NCLB. Work by my colleague Ginette Delandshere and others has shown how problematic portfolios can be in teacher ed programs. In all too many cases, a beautiful e-portfolio is simply evidence of how explicit the instructions for creating it were, how carefully students followed them, and how much individualized feedback the teacher provided. In my observations, much of the discourse around portfolios is about the artifact and not the disciplinary knowledge the artifact is supposed to represent ("Is THIS what you want?").

Don't get me wrong. Portfolios are central in much of my own assessment work (primarily in the form of wikifolios). But I never use them to extract formal evidence of individual understanding. That is what good, well-designed performance assessments do really well in the contexts in which I currently teach and do research (mostly online and hybrid classes in cognition, educational assessment, composition, and telecommunications). And in some (but not all) of these settings, old-school achievement tests have proven quite useful for some very narrow but important purposes (documenting increased learning over time as I iteratively align my portfolio and performance assessments, and comparing learning with others who target the same standards with conventional expository instruction).

Here is the bottom line for me: recognition and assessment are highly contextualized practices. Any discussion of them needs to recognize the contextual features that make some practices more appropriate than others in a particular context. Badges promise to help make this happen, and I am thrilled to be involved in helping make it so. I encourage you to take a broader view rather than focusing narrowly on a practice you are passionate about. Your passion is quite valuable. There is a ton of literature and experience out there with performance assessment. It would be more helpful to consider what features of your contexts made portfolio assessment appropriate in your case. The research literature that you know would also be very helpful for others working in that context. Then work with us to help people who want to use badges to recognize learning learn which practices for assessing learning might be most helpful.


Can you share which of the 10% you don't consider "nice feeling" badges? That will help explain what you mean by badges that document random achievements. And are you gathering this impression from reading through the 30 project Q&As that we posted before the July 4th holiday? I am not sure where else you would find enough information to determine whether these badge systems represent a "nice feeling" or not. The Project Q&As represent some of the richest, most detailed data to come out of the Badges for Lifelong Learning initiative, but they were not designed to convey whether or not the badges reflect clear and specific achievement by the criteria you are implying.

Over half of the K-12 badging systems align to the Common Core or other standards. Do you mean to suggest that the badges are somehow random achievements, but the Common Core standards are not? Or it may be that you are critiquing the individual badge systems for not being mature systems? Giving the Badges for Lifelong Learning grantees a failing grade at this early stage is woefully premature. Most of these systems have been in their grant cycle for less than a year, and they are working through these designs as pioneers, mapping pathways that have never existed before, across very complex collaborations, across disciplines, across in-school and out-of-school learning spaces, online and offline, using a variety of assessment systems and pedagogies in diverse contexts -- nothing of this magnitude has ever been done before, and by definition the ecosystem is intended to be decentralized, but certainly not random. 

It is an ecosystem that these projects are contributing to, with many systems (plural) of assessments (plural) and credentials (plural) providing multiple trajectories (again, plural). It is too early to critique what are likely to become networked learning pathways that are more accessible to broad audiences and reflect well-designed and articulated tracks or strands -- whatever you'd like to call them. And keep in mind that these projects are, for the most part, in the most nascent stages of multi-stage development cycles, with tremendous forethought about the learning and the learning pathways going on at the heart of the badge systems.

Here's an example: the Computer Science Student Network (CS2N) describes what's under the hood of its badge system:

The Certifications are aligned with the newly emerging NSF funded Computer Science Principles framework – the forerunner work for what is expected to become the entry-level Advanced Placement Computer Science exam – and the National Instruments Certified LabVIEW Associate Developer Certification, an industry-standard first-level certification in the visual programming language that powers LEGO MINDSTORMS robots, but also sophisticated aerospace laboratory equipment and self-driving cars. (see more at

Another example is Pathways to Lifelong Learning, the badge system developed by Providence After School Alliance, in which the badges are "designed to support the Hub’s credit-bearing learning initiative." The criteria for each badge "is therefore connected to both programmatic goals and outcomes as defined by ELO (Expanded Learning Opportunities) community-based program providers, as well as to the Common Core, attendance, and participation." (See more at

It would be careless to say that either of these projects is about "nice feeling" badges or random achievements. And what about Intel and the Society for Science and the Public? The youth who engage in that badge system generate college-level rigorous research as part of the Intel Science Talent Search (Intel STS) and the Intel International Science and Engineering Fair (Intel ISEF). I think the students who participate in these science competitions would be insulted by your comment that they earned a "nice feeling" badge.




The same skills - with some modest variations - that were cited in SCANS and the verified resume are in the EuroPass and form the template for its ePortfolio system. We yammer about how advanced the Finns are, but the European Union is spending $16 billion on just what MacArthur is about three years late in recognizing. And it is building, ironically, on our own history with SCANS.


Hi, Joe. I appreciate all the work you have done in this area---badging, ePortfolios, eRubrics, and many other alternative, wise, useful, constructive, and helpful systems have been developed in the 100 years since Kelly. Unfortunately, none has taken away from the primacy of standardized testing. We're hoping this badging initiative will gain traction more widely.


I want to make a note of your tone. While candid critique is appreciated, disrespect and what some would call "trollishness" are not welcome in the HASTAC community. For a decade, HASTAC has tried to model a community of concerned learners, all learning together, including through informed, considered, and considerate disagreement. Your tone is disrespectful and dismissive and not the kind of engagement we want to model.


Today is the day many of our project reports from two years of work on creating and implementing badge systems go live. Throughout the day, more and more of the reports will be up. Anyone who wishes will be able to see for themselves what these institutional partners and grantees have accomplished and why they thought it was valuable to their efforts.






There's nothing wrong about "badges," just as there is little wrong with portfolios or tests or other measures in themselves. Yet without some metric - without some way to say I'm better than I was, or I need help with something, or I'm better at something than you are and I might be able to help - neither a badge, a portfolio nor a test really has much meaning. That was my point about a warm feeling.

The most revealing experience I had with any of these metrics was when I asked summer youth workers to assess themselves on the criteria in the verified resume - responsibility, teamwork, work with cultural diversity, inquiry, creativity, listening, technology, and planning - and they were not only frank but actually sought partners in tasks where they were weak and could learn from each other. The distinctions between assessment and evaluation are external to the process of learning. While badges are excellent incentives, provide useful feedback, and have value in career, college, or professional growth, they lack the open-endedness of these metrics; and while they provide feedback from authorities and perhaps even from peers, they don't structure the kind of collaboration that 21st-century professions so critically need. Nor do they accommodate change and progress with clearly stated comparisons between now and a year, two years, or ten years from now.

Part of the problem is that I stopped in science after college, so I am far less sympathetic to measures of monthly, quarterly, or annual progress. I continue to remember Larry Cremin citing the eight grades of elementary school as largely reflecting the site plan of the first elementary school's builder rather than a real sequence of learning. Montessorians (of whom I'm only an admirer rather than a practitioner) have a continuous cycle and move individual learners as the learners themselves move, rather than according to some chronological lockstep. I think we get ideas almost randomly - largely from each other - and our progress is full of fits and starts, which makes most testing useful only insofar as it provides some feedback on the class rather than the individual student. Badges, likewise, provide great individual feedback but little incentive to collaborate (except when the group awards a badge, which probably happens, but less often than I'd hope).

Finally, look at the EuroPass model. Not only does it use some - although hardly all - of the metrics in the verified resume, but it provides the kind of scaffold most e-portfolios lack: stages or thresholds for comparisons. The problem I found with the badging model is that it does not, in itself, have such scaffolds - without which comparisons are more likely invidious than collaborative, and competitive rather than cumulative.