Blog Post

Introducing the DML Design Principles Documentation Project

I want to introduce the DML Badges Design Principles Documentation project. This two-year project was launched at Indiana University in July 2012.  The project intends to document the badge design principles that emerge from the Badges for Lifelong Learning initiative sponsored by the MacArthur Foundation’s Digital Media and Learning program.  In this post, I describe our general goals and seek input on accomplishing these goals. There are specific questions at the end if you want to jump ahead.

Why Document Badge Design Principles?

The roughly 30 badge content projects funded under DML 2012 are all now underway.  Teams are working hard to implement the plans in their proposals.  Many are refining those plans to accommodate things they had not anticipated and are uncovering unimagined (and unimaginable) new opportunities. Some are completely revising their plans.  Mozilla’s Carla Casilli explained this process in a recent post about similarities and differences in badge system design:

Regardless of where you start, it’s more than likely you’ll end up somewhere other than your intended destination. That’s okay. Systems are living things, and your badge system needs to be flexible. You must embrace a bit of chaos in its design.

This reality was highlighted by Charles Perry of awardee Mentor Mob  at the recent DML Badges workshop.  Charles announced that his team had finally made enough progress that they “could now start bragging about their failures.” In the closing remarks at the workshop, MacArthur’s Connie Yowell applauded awardees for taking risks and trying out new things.

Research on design rationale has shown that most of the knowledge generated when designing complex systems simply “evaporates” as features evolve and teams dissolve. Given tight deadlines and budgets, projects forget why things were done the way they were, or why a plan did not work out.  This knowledge is actually quite valuable across projects and for others who might pursue similar goals.  Our project aims to capture this knowledge. 

Like digital badges, this is uncharted territory.  The closest example I am aware of is the Design Principles Database project led by Yael Kali and Marcia Linn.  Their project also captured design knowledge across multiple projects and helped share that knowledge. To do so, they distinguished between specific practices within projects and more general principles across projects.  The Badges Design Principles Documentation project is organized around this distinction as well.

Project Goals

Our overall goal is identifying appropriate practices for using badges in particular learning contexts. This is important because the features that define contexts are what determine whether a particular practice is appropriate.  This is complicated for the badges initiative because (a) the practices and contexts are mostly new, (b) the practices and contexts are evolving, and (c) the practices and contexts shape each other. Our specific goals aim to tame this complexity.  With input from DML, HASTAC, Mozilla, and some of the projects, here are our specific goals at this time: 

Near-term (by February 2013):  We aim to document how the plans in each proposal (“intended practices”) evolved as they were put in place (“enacted practices”), and link that evolution to relevant aspects of the project contexts.  So far we have summarized the intended practices in all of the proposals and have conducted one-hour interviews on enacted practices with eight of the projects.  We hope to complete the interviews and have all projects review and approve our characterizations before the January workshop.

Medium-term (by June 2013):  We plan to use clusters of similar practices across projects to derive “initial badge design principles” that can be exemplified with specific project features.  Around DML 2013 in April, we hope that projects will be interacting with each other around the principles in mutually useful ways.

Long-term (by June 2014):  We hope to finish with a comprehensive database of general badge design principles. Roughly five sets of about five principles should be manageable.  Each principle will be linked to more specific project practices and features. Our ultimate goal is finding the best way to share this information with projects and other badge design efforts more broadly.

We are also going to be reviewing and cataloguing the relevant research literature; I will ask about these plans in a separate blog post.  One thing we are not doing on this project is evaluating projects or studying learning outcomes.  But we certainly want to help projects share practices and principles for doing so!

Project Team

Three doctoral students in Indiana University's Learning Sciences Program are working on this project: Elyse Buffenbarger, Rebecca Itow, and Andi Rehak.  All three are also HASTAC Scholars, so you can access their profiles and leave messages by following the links.  They will introduce themselves in more detail in a subsequent post that describes our plans to review the relevant research literature associated with the design principles as another part of this project.

Help the DML DPD Project Help You!

All comments and suggestions are welcome and appreciated.  One specific question I have concerns our initial categories of principles.  As I described in a previous post, we have initially organized our search for badge design principles around four categories of badge functions: recognizing/accrediting learning, assessing learning, motivating learning, and evaluating/studying learning.  I just posted several questions there and would love to get feedback and suggestions on that post.

A second question concerns the way design knowledge is shared.  We are currently summarizing intended and enacted practices on a private wiki.  Does it seem reasonable to ask projects to review our summaries before letting other projects see those pages?  Eventually we want the “badge design principles database” to be fully open and self-sustaining. What we really want to do is leave behind a network where these principles are continually refined and spread like “memes” across the open badges ecosystem.  This makes me wonder, for example, how our initial decisions about categories (described on the other post) will impact the spread of principles across such a network. I suspect that some readers are familiar with research on creating self-sustaining interest-driven networks.  I would love to get some pointers and suggestions.

We look forward to hearing from you.  We would love to get feedback and suggestions here as comments.  But we also hope that team members will ponder these questions and share additional insights as part of our project interviews.



Knowing there is a team committed to documenting the research conducted by the various DML Competition winners and the design principles they arrive at is a huge comfort for me, my team, and all the rest of the people working on accrediting informal online learning with digital badges--because it means that we've got a much better shot at remembering how we figure out the things about badging we're currently figuring out!

I'm imagining the eventual findings of the DML Design Principles Documentation Project identifying best practices with regard to:

  • The combination of different KINDS of badges that results in the most effective and fun learning experience (automatically issued "stealth" badges, assessed "competency" badges, and comprehensive "mastery" badges are the 3 kinds that seem to have gotten the most press at the September workshop).
  • The kinds of assessments that are most effective, and in what combinations (self, peer, administrator, results-based).
  • The motivating factors that encourage the most participation by learners and instructors (this pertains to the pacing, social aspects, and satisfactions level of the learning experience).
  • The best methods for sharing knowledge gains between teams working on specific badging projects.

So if you'll just keep reminding us to take off the blind flaps now and then to think about HOW we know what we know about badging, we should be in good shape!


Thanks Charles--yes it is indeed the combination of kinds of badges and types of assessments that are probably going to end up being most interesting. 

I hope to get a second post up soon about the second aspect of our project that we are really excited about.  As the design principles emerge from looking at practices across projects, we will also begin attaching the most relevant research literature to those principles and specific practices.  We will have one team member focusing on each set of principles.  So for each of the more general principles that we find, we should be able to point people to the appropriate contexts for enacting that principle, AND some relevant research literature to inform those principles.  But the principles and the literature should also be linked back to specific practices in particular projects, which will make it all even more useful.

Consider, for example, the great insights that Angela Elkordy and others have already shared on the previous post about badge functions.  That conversation is interesting to the two of us because we bring situated experiences in which to ground a relatively academic discussion of two different approaches to categorizing things (one from information sciences and another from anthropology).  To most badgers right now this discussion is pretty abstract--if they even bother to read it.  But as their projects go forward, some badgers are likely to encounter challenges and gain experience that might make that conversation relevant and useful.

Our near-term challenge is designing a system for connecting the literature reviews with the practices.  For example, Rebecca Itow (who is heading up the assessment reviews) recently ran across a reference to peer assessment while working on a class paper.  She thought it would be relevant, so she dropped it into our growing Zotero database and gave it some initial tags.  As we were reviewing the summaries of intended and enacted practices that we just completed for one of the projects, she figured out how that reference connects to that project in a pretty specific way, and annotated the reference accordingly.  With four of us doing that over the next year, we should develop a pretty cool resource for everybody.

Our longer-term challenge is creating a system that lets other people contribute and take credit for their contributions.  For example, the exchange that Angela and I had will likely end up being very relevant to one or more principles about recognizing learning (Andi Rehak's area).  If that exchange happened over there, or was at least cross-referenced, the aforementioned badger who is struggling with that issue could articulate that struggle, and we and others could chime in.  That would leave behind a highly contextualized strand of discussion that could draw in others as well.

I have no idea how to make that happen, but I am learning a lot from watching how effectively HASTAC and Mozilla are able to construct deliberate knowledge building networks.  I hope folks can provide specific suggestions now and in the future!




Whilst at the DML workshop, a lot of the conversation was around how we catalogue and cross-reference between badges. It seemed many teams had, like us, created similar tables detailing each badge level, its relevant skills and competencies, examples of evidence that would fit, whether other badges were required, and whether it was an OBI or internal badge.

I wondered if a quick start might be for all projects to share their badge criteria tables, and the DML research team could help to draw some threads between them. Particularly in the context of adding some sort of competency/badge equivalency info to the metadata, which I know there is a lot of discussion around already.

In my utopian vision we all add a "competency key" field to our badge metadata that correlates to other similar badges in the ecosystem, so users can move easily between them and discover/create new learning paths across issuers and organisations.

Adding a blank field to the metadata is one step we can do quite easily now. Joining the dots on paper between our existing badge plans and tables may be another step in this direction?
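To make the idea concrete, here is a sketch of how such a field might sit in the same microdata style used elsewhere in this thread. The property name "competencyKey" and the key format are purely hypothetical, not part of the OBI or any current standard:

```html
<!-- Hypothetical "competency key" on a badge's criteria page.          -->
<!-- Neither the itemprop name nor the key format is standardized;      -->
<!-- they only illustrate how cross-issuer equivalency might be tagged. -->
<meta itemprop="competencyKey" content="webmaking.writing-for-web.level2" />
```

A badge from another issuer tagged with the same key could then be surfaced as an equivalent step on a learner's path, though who maintains the shared key registry would remain an open question.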

Hope this helps and isn't too far off at a tangent, but it is where my head is at, and it is related to how we share information more effectively between badge projects and the badges themselves.



I think this is a great idea, Cliff, and possibly a necessary component our communities will require for broader badge uptake. Those who accept the badges of our earners (i.e. post-secondary institutions, employers, etc.) will want to know -- indeed, already do want to know. Being proactive in addressing the inevitable questions seems wise.


Great points Cliff and Stacy-

I discussed this with the DPD project team and we did conclude that this is a worthwhile goal and one that we should help lay the groundwork for.  It mirrors some of the discussions of course equivalence that is so important in European universities.  I do think that it is beyond the scope of our current project, but we very much want to help lay the groundwork.  I am assuming that the way we will do that is by watching projects for whom equivalency is an important goal, and seeing how they enact that practice and whether it is successful.

This will, however, raise some very complicated issues around assessment and context.  Even for specific skills and content knowledge, the context in which that learning takes place has a lot of implications for the usefulness of that knowledge in subsequent learning or performance contexts.  Take, for example, the skill of preparing content and writing for the web, as you might learn it in a hackjam setting.  A hacker who earns a Wordsmith badge writing a story based on a superhero in the HACKtivities has mastered something very different than someone who learns writing for the web by, say, working on a social justice campaign or an e-marketing effort.

The equivalency issue might be less of an issue for the Super Styler badges because that is all about CSS.  But the equivalency issue would be much greater for the Mentor badges.  Could a Mentor badge for Hackasaurus ever be equivalent to the Mentor badge from the Global Kids summer program?  I don't know the answer, but this sure does seem to raise a lot of questions.


This is a good discussion. The biggest problem I have encountered is how to be consistent over time with regard to where a badge is situated in a specific learning continuum.

I wanted to mention that I started by leaning towards a capability for our system to show:

a) specific standards alignment (for all)

b) learning mastery for some specific contexts, by adding a description to the LRMI strategy we have chosen to manifest in the web page specified by the criteria URL.

In order to show mastery, we will design some activities (such as solving for volumetric pressure, or say naming the planets) to have sequential repetition.

Understanding the equation and then solving it, while "dicing" the parameters, would show a learner had mastered this physics concept. But how, then, to "show" this in context to someone (for example a potential employer or teacher, or your future self)? The method we chose is to tie it to the criteria URL of the awarded badge using LRMI, and to use standards. There is some flex here because "grill and drill" (my colloquialism) is certainly not the only path to mastery, and we mustn't be married to that path, but the LRMI does look something like this:

    <meta itemprop="targetUrl" content=";page=178" />
    <meta itemprop="suggestedCCSS" content="Science.8.UTW.S1" />
    <meta itemprop="successfulSequentialAttempts" content="3" />


Following a detailed explanation and link references, this LRMI subtext (if you will) mentions that the learner got the correct answer 3 times in a row without fail. It happens that this is the criterion (and hence requirement) of the awarded badge.

Considering that those definitions of the science common core are still to be published, and that this picture is not complete, there will be more defined standards references, as well as other activity types than "drill and practice."

Regarding Activity Types we might now add:

<meta itemprop="medium" content="video game" />

<meta itemprop="activityclass" content="drill and practice" />

Activities are very varied; for example, one list is:

concept mapping
group discussion
guest speakers
interactive lecture
just-in-time lecture
panel discussion
poster session
rubric design
story telling
artistic expression
case studies
drill and practice
event production
peer assessment
problem solving
role playing
student teaching
team building
field trips
job shadowing
portfolio building
scavenger hunt
service learning*


We will work towards, for example, a common taxonomy relating types of activities, and should evolve it to be more inclusive. It would simply be excellent and desired if, for example, I could add "activityType" and then have a standard set to choose from... including types that involve "team" or "reading" or "research" or "communication (social)" as options. I may not want someone to solve relativity equations, but I might have wanted them to read Feynman's QED, and that should be Badge-Able.
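In the same markup style as the earlier snippets, that wish might look something like this. The property name "activityType" and the vocabulary values are illustrative only, not an existing standard:

```html
<!-- Hypothetical "activityType" drawing values from a shared taxonomy. -->
<!-- Repeating the property would let one badge activity carry several  -->
<!-- types, e.g. a team-based problem-solving exercise.                 -->
<meta itemprop="activityType" content="problem solving" />
<meta itemprop="activityType" content="team building" />
```

Agreeing on the controlled vocabulary itself, rather than the markup, would be the hard part.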

I would like to know if anyone recommends a more standardized list of learning activity types?