5 Buckets of Badge System Design Revisited: Where to Put Assessment?

This is a follow-up to the 5 Buckets of Badge System Design: You Are Here blog post that I wrote a few weeks ago. I wrote that I'm interested in making badge system design less daunting, so I made choices while developing the framework that err on the side of simplicity. Of course, making things simple means leaving things out. This blog post discusses what I left out: assessment.

To recap, here are the 5 Buckets:

  • New build. The badge system, learning content, and technological platforms are designed simultaneously.
  • Integrated build. The badge system and learning content are co-created and integrated into a pre-existing technological platform.
  • Layered build. The badge system is layered on top of pre-existing learning content and pre-existing technological platform.
  • Responsive build. The badge system responds to pre-existing learning content, and the technological platform does not yet exist, is optional, or is distributed.
  • Badge-first build. The badges are designed first and the learning content and technological platform are designed around the badges.

This is a design consideration conversation more than anything else. It isn’t about how to build assessment into a badge system, or different ways to assess. (For that, see the Design Principles Documentation work that Dan Hickey is leading.) It’s more about where assessment belongs in a badge system framework. How does assessment influence badge system design? At what point do people need to talk about it when designing their system? How does the technology part influence the assessment part, and vice versa? 

When I was initially messing around with the buckets, I debated how to treat assessment in a system in which it plays such an important role. I decided to treat assessment as a design implication that follows a design approach, even though assessment may be built into a technology platform or coupled with the learning content before the badge system is designed. In other words, it might be part of a design approach (buckets), and it might also be a design implication (if this, then that).

So why is this important?

The main reason this matters is for communicating what we’re talking about when we talk about badges and assessment in the design process. Some conversations will be independent of the technology or badge platform. Some will be dependent on the technology or badge platform. If you want to do something cool with assessment, maybe that means you have to rethink your technology platform. Maybe it means looking for a specific kind of badge issuing platform that can handle what you want it to do. If you decide that choice-based assessment measures what matters most to your organization, you may need a system designed to handle that.

Being able to communicate what an organization wants or values makes it easier to understand what decisions need to be made, and when. 

An organization that already has a technology platform with built-in assessment features (like rating, ranking, voting up, voting down, or some kind of social assessment) will have a different design approach than an organization whose system lacks those affordances.

An organization that has high-tech, super-slick assessment features built into its technology platform will have implications tucked right inside the design approach. If your technology platform has no assessment built in, that's a different story. Assessment is then hammered out during a different part of the badge system design process.

Here's the problem, though, which gets to the highly iterative nature of badge system design: What if an organization decides it wants to use adaptive assessment built right into the platform, so educators can use learning analytics and automated features that are too hard to do without computer support? That decision might mean finding a new platform, or building new features into a pre-existing platform.

In a conversation that's going on in the Open Badges Google group, the assessment system is being discussed independent of the technology platform. I think this happens a lot. Choices about assessment are going to widen and narrow depending on what kind of affordances a technology platform has. Same thing if your technology system already has a badge system built right in.

The discussion in the Open Badges group is an interesting one about “binary assessment” and badges (i.e., students either get credit or they don’t for doing an assignment). Maybe there is a system that does something particularly creative that automates a binary assessment in interesting ways that are too hard to manage without technology. 

This is why I separated the technology platform from the badge platform in the 5 buckets. Your assessment functions might live inside the technology platform. They might live inside the badge platform. Or you might be assessing people independent of both the technology platform and the badge platform. So assessment is something we can talk about independent of the technology platform, even though it won’t always be that way.

Ross Higashi of Computer Science Student Network, one of our Badges for Lifelong Learning grantees, immediately recognized that I'd left assessment out of the 5 Buckets. I'd like to include what Ross wrote me because it gets to the significant role assessment plays during the initial design stage. In fact, it can be so important that an organization will want to jump immediately to a different bucket (if they have the resources). For example, moving from a Responsive Build where the technology platform and learning content are already in place to a New Build where the learning content, technology platform, and badges are designed simultaneously.

Here's what Ross had to say: 

Having some form of Assessment is unavoidable for any Badge containing a claim and an Evidence link. But in particular, it is possible to have Content with Explicit Assessment, and Content without Explicit Assessment. Badging learning from these two different types of activities will take on substantially different forms, because the Evidence each provides will fall into different categories. Speculatively, I think that we will see Badges cleave along this line:

  • "Representative Assessment" Badges will emerge, whose claims of significance are tied to performance on assessment instruments meant to be representative of something larger than themselves. These include your traditional "passed the test" or "Certification" exam badges. The key is that the claim is stronger than the literal evidence, and the mechanism necessarily involves an assessment intermediary. Value to a consumer is contingent upon the consumer's willingness to trust the assessment's claims of representation.
  • "Self-Evident" Badges will be those whose claims of significance are meant to be taken at face value. They exist to document a specific achievement in and of itself, and make no assertion of value beyond that. Evidence attached to such a Badge is focused on establishing that the claim is factually correct. No intermediary step is involved. Value to a consumer is contingent upon what that consumer wants to read into the fact of the accomplishment.

This is relevant to the discussion of the You Are Here buckets, because it behooves an organization to think carefully about whether it will also need to develop Assessments for badge-earning purposes, and what those look like. It is also important to figure out whether Assessments are a category unto themselves, or whether they fit in with Content, or with the Badge System.
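Ross's two categories can be sketched as simple data structures. The field names below loosely echo the Open Badges assertion format, but everything here (the URLs, the helper function, the example claims) is purely illustrative, not the official schema:

```python
# Hypothetical sketch of Ross's two badge categories. All names and URLs
# are invented for illustration; this is not the Open Badges schema.

representative_badge = {
    # The claim is broader than the literal evidence: it relies on an
    # assessment intermediary (here, a certification exam).
    "claim": "Proficient in robotics programming",
    "evidence": "https://example.org/results/cert-exam-42",
    "assessment": "scored 85% on a certification exam",
}

self_evident_badge = {
    # The claim is meant to be taken at face value: the evidence simply
    # documents that the specific achievement happened. No intermediary.
    "claim": "Completed the community robotics project",
    "evidence": "https://example.org/projects/robot-entry",
    "assessment": None,
}

def is_representative(badge):
    """True when the badge's claim depends on an assessment intermediary."""
    return badge["assessment"] is not None

assert is_representative(representative_badge)
assert not is_representative(self_evident_badge)
```

The point of the sketch is only that the two badge types diverge at the `assessment` field: a consumer trusting a Representative badge must trust the intermediary instrument, while a Self-Evident badge asks the consumer to weigh the documented fact itself.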

In terms of the buckets, Ross nailed it: "The difference between Content and Assessment of that Content is not clear. It's tempting to lump them together, because neither of them is Badges and neither of them is Technology, but the specific role of Assessment in Badging ends up being more important than one might expect." 

So assessment is more than a design approach, and more than a design implication. As I build out the framework, I'm beginning to see the steps more clearly. Badge system design is a highly iterative process, but it makes sense to have some conversations before others. And some decisions are going to influence not only how the system behaves, but how the system is built. I'm curious to hear where others think the assessment conversation fits into the design process. 


Flickr image courtesy of Sugar Creek via http://www.flickr.com/photos/sugarcreekphoto/907296140/


2 comments

Sheryl (and Ross)--this is another brilliant step forward for the deep thinking of badges. For me personally, my commitment to badges is that I find our current educational system's obsession with end-of-course summative standardized assessment to be so offensive and, in the end, so discriminatory. When we can chart test scores so precisely according to income level, we know we have a method that is about class, not about learning. I keep hoping that in badging we can come up with a form of assessment that is real-time (formative), flexible, positive, comprehensive (institutionally determined), adaptive, inspiring, motivational, and, of course, customizable so that it can help provide kids with pathways through their own skills, interests, passions, and competencies (in school and out) to other ones.

 

Teaching to the test is an almost cynical reductio ad absurdum of learning to that which can be measured.   It is not only easy to game such a system; it reduces learning to a circular game, not a pathway for those who don't have the resources to know how to play the game well.

 

What I like about the original open source community badging by open software designers is you don't grade negatively. You only award points for success. This is the opposite of grade inflation. By not grading low, or high, you instead have a system where points are awarded for accomplishments, and that metadata follows, so you can see why person or institution X awarded 50 points for Y. In the open source world, you would not offer to partner in a competition with some unknown programmer out there who only had 300 points when you could partner with someone who had 3000 . . . unless, that is, you follow through and find out the 300 points were awarded for a first and only job performed brilliantly and in exactly the skillset area where you need a partner. Conversely, the person with 3000 points may have never earned 300 points from one job in a long career of mediocre collaboration. The point is the number has a lot of content behind it. Badges could be loaded with content, quite transparently, in the same way, including all the data about what was required to earn certain points in certain projects or subject areas.

 

I can see some version of that system adapted to badges and useful in both in-school and out-of-school programs. It's already difficult for institutions to design badges to simply recognize participation in and completion of a project---we're seeing a lot of those kinds of badges now with all our DML and Summer of Learning projects. The next and crucially important phase would be adding crediting and accruing assessment as well. THEN we have a chance of displacing the dismal system designed in 1914 to deal with a teacher shortage and that, sadly, now floats the entire world's educational boat . . . except, of course, in Finland.

 

Thanks again for initiating this hugely important conversation.


Over on the original 5 Buckets post, Jim Diamond commented that, "...badges are different—they're not about the doing, so much, as about the done."

This is a profoundly important point -- and you are describing the same thing, Cathy, by way of your examples. Jim is acknowledging that a certain type of badge system design carries with it the danger of describing the outcome first, which narrows the way everything is designed -- not only the learning and the doing, but the entire system, technology included. That's a very expensive, resource-intensive way to replicate the status quo. And as you point out so well, the status quo is discriminatory and demoralizing.

Lucas and I are working on a post that addresses Jim's comment, but I wanted to address his point here in some depth because it's true -- when and where the assessment conversation takes place in the design process is critical. The designers of the points system you describe, Cathy, might have made decisions about their assessment system early in the process, maybe even before they began building the technology platform. Or did they? Maybe they designed their assessment system to mimic the "doing" that was already being done, and the assessment system was iterated to reflect what was already embedded in their process. 

Again, quoting Jim, "...if we start with the achievement—that is, if we start with the badge and then build backwards—we should be incredibly clear about the process that the badge is supposed to represent, rather than the outcome."

I keep coming back to Jim's points because he's describing the beating heart of badge system design. Badge systems poorly designed will wag the dog. So how can we describe a framework that guides people to the right decision points at the right juncture during the design process? If we can minimize the complexity, I think it allows people to make more sophisticated decisions about how their badge system will upend the status quo. Like you say, we're seeing a lot of badges awarded for participation and completion of a project. There's so much more potential here. How can we simplify the assessment conversation without dumbing down the system?

Take a look at the brilliant cards that Nate Otto made to reflect the Design Principles Documentation project that Dan Hickey is leading, and the post Dan wrote about workshops they're doing on campus. For some additional context, Nate wrote about how he used the cards at MozFest here. What is so exciting about Nate's cards is that they give people tangible objects that represent super complex principles. Because badge system design is highly iterative (our 30 Badges for Lifelong Learning Competition grantees can vouch for that), simplifying things so that the decision points are clear is crucial. 

Thanks for your comment, Cathy.
