This is a follow-up to the 5 Buckets of Badge System Design: You Are Here blog post that I wrote a few weeks ago. In that post I wrote that I’m interested in making badge system design less daunting, so I made choices while developing the framework that err on the side of simplicity. Of course, making things simple means leaving things out. This blog post discusses what I left out: Assessment.
To recap, here are the 5 Buckets:
- New build. The badge system, learning content, and technological platforms are designed simultaneously.
- Integrated build. The badge system and learning content are co-created and integrated into a pre-existing technological platform.
- Layered build. The badge system is layered on top of pre-existing learning content and pre-existing technological platform.
- Responsive build. The badge system responds to pre-existing learning content, and the technological platform does not yet exist, is optional, or is distributed.
- Badge-first build. The badges are designed first and the learning content and technological platform are designed around the badges.
This is a design consideration conversation more than anything else. It isn’t about how to build assessment into a badge system, or different ways to assess. (For that, see the Design Principles Documentation work that Dan Hickey is leading.) It’s more about where assessment belongs in a badge system framework. How does assessment influence badge system design? At what point do people need to talk about it when designing their system? How does the technology part influence the assessment part, and vice versa?
When I was initially messing around with the buckets, I debated how to treat assessment in a system in which it plays such an important role. I decided to treat assessment as a design implication that follows a design approach, even though assessment may be built into a technology platform or be coupled with the learning content prior to designing the badge system. Meaning, it might be part of a design approach (buckets), and it might also be a design implication (if this, then that).
So why is this important?
The main reason this matters is for communicating what we’re talking about when we talk about badges and assessment in the design process. Some conversations will be independent of the technology or badge platform. Some will be dependent on the technology or badge platform. If you want to do something cool with assessment, maybe that means you have to rethink your technology platform. Maybe it means looking for a specific kind of badge issuing platform that can handle what you want it to do. If you decide that choice-based assessment measures what matters most to your organization, you may need a system designed to handle that.
Being able to communicate what an organization wants or values makes it easier to understand what decisions need to be made, and when.
An organization that already has a technology platform with built-in assessment features (like rating, ranking, voting up, voting down, or some kind of social assessment) will have a different design approach than an organization whose system lacks those affordances.
An organization that has high-tech, super slick assessment features built into its technology platform will find the design implications tucked right inside the design approach. If your technology platform has no assessment built in, that’s a different story: assessment then gets hammered out during a different part of the badge system design process.
Here’s the problem, though, which gets to the highly iterative nature of badge system design: What if an organization decides it wants to use adaptive assessment built right into the platform so educators can use learning analytics and automated features that are too hard to do without computer support? That decision process might mean finding a new platform. Or building in some new features to a pre-existing platform.
In a conversation that’s going on in the Open Badges Google group, the assessment system is being discussed independently of the technology platform. I think this happens a lot. Choices about assessment are going to widen and narrow depending on what kind of affordances a technology platform has. Same thing if your technology system already has a badge system built right in.
The discussion in the Open Badges group is an interesting one about “binary assessment” and badges (i.e., students either get credit or they don’t for doing an assignment). Maybe there is a system that does something particularly creative that automates a binary assessment in interesting ways that are too hard to manage without technology.
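To make "binary assessment" concrete, here's a minimal sketch of what an automated credit/no-credit check might look like. Everything here is invented for illustration; a real platform would define its own criteria and its own badge-issuing hooks.

```python
# Hypothetical sketch of automated binary (credit / no credit) assessment.
# The rule below -- "every required field is present and non-empty" -- is
# an invented example, not any platform's actual criteria.

def binary_assess(submission: dict, required_fields: set) -> bool:
    """Return True (earns credit) only if every required field is present
    and non-empty. There is no partial credit: the outcome is binary."""
    return all(submission.get(field) for field in required_fields)

# A platform could run this check on each submission and issue the badge
# automatically whenever it passes.
earned = binary_assess(
    {"name": "Ada", "project_url": "http://example.org/p/1", "reflection": "..."},
    required_fields={"name", "project_url", "reflection"},
)
print(earned)  # True: all required fields are present
```

The point of automating a check like this isn't the rule itself, which is trivial, but that the technology platform can apply it consistently at a scale that would be too hard to manage by hand.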
This is why I separated the technology platform from the badge platform in the 5 buckets. Your assessment functions might live inside the technology platform. They might live inside the badge platform. Or you might be assessing people independent of both the technology platform and the badge platform. So assessment is something we can talk about independent of the technology platform, even though it won’t always be that way.
Ross Higashi of Computer Science Student Network, one of our Badges for Lifelong Learning grantees, immediately recognized that I'd left assessment out of the 5 Buckets. I'd like to include what Ross wrote me because it gets to the significant role assessment plays during the initial design stage. In fact, it can be so important that an organization will want to jump immediately to a different bucket (if they have the resources). For example, moving from a Responsive Build where the technology platform and learning content are already in place to a New Build where the learning content, technology platform, and badges are designed simultaneously.
Here's what Ross had to say:
Having some form of Assessment is unavoidable for any Badge containing a claim and an Evidence link. But in particular, it is possible to have Content with Explicit Assessment, and Content without Explicit Assessment. Badging learning from these two different types of activities will take on substantially different forms, because the Evidence each provides will fall into different categories. Speculatively, I think that we will see Badges cleave along this line:
- "Representative Assessment" Badges will emerge, whose claims of significance are tied to performance on assessment instruments meant to be representative of something larger than themselves. These include your traditional "passed the test" or "Certification" exam badges. The key is that the claim is stronger than the literal evidence, and the mechanism necessarily involves an assessment intermediary. Value to a consumer is contingent upon the consumer's willingness to trust the assessment's claims of representation.
- "Self-Evident" Badges will be those whose claims of significance are meant to be taken at face value. They exist to document a specific achievement in and of itself, and make no assertion of value beyond that. Evidence attached to such a Badge is focused on establishing that the claim is factually correct. No intermediary step is involved. Value to a consumer is contingent upon what that consumer wants to read into the fact of the accomplishment.
This is relevant to the discussion of You Are Here Buckets, because it behooves an organization to think carefully about whether they will also need to develop Assessments for badge-earning purposes, and what those look like. It is also important to figure out whether Assessments are a category unto themselves, or whether they fit in with Content, or with the Badge System.
In terms of the buckets, Ross nailed it: "The difference between Content and Assessment of that Content is not clear. It's tempting to lump them together, because neither of them is Badges and neither of them is Technology, but the specific role of Assessment in Badging ends up being more important than one might expect."
So assessment is more than a design approach, and more than a design implication. As I build out the framework, I'm beginning to see the steps more clearly. Badge system design is a highly iterative process, but it makes sense to have some conversations before others. And some decisions are going to influence not only how the system behaves, but how the system is built. I'm curious to hear where others think the assessment conversation fits into the design process.
Flickr image courtesy of Sugar Creek via http://www.flickr.com/photos/sugarcreekphoto/907296140/