Design Principles for Assessing Learning with Digital Badges

This post is cross-posted at Remediating Assessment.

Rebecca Itow and Daniel Hickey

This post introduces the emerging design principles for assessing learning with digital badges. It is the second of four posts introducing the Design Principles Documentation Project's (itself introduced in a previous post) emerging design principles around recognizing, assessing, motivating, and evaluating learning.

At their core, digital badges recognize some kind of learning. But if one is going to recognize learning, there is usually some kind of assessment of that learning so that claims about learning can be substantiated by evidence. Over the course of the last year, we have tracked the way that assessment practices have unfolded across the 30 DML Badges for Lifelong Learning competition winners. We have categorized these practices into ten more general principles for assessing learning with digital badges. These principles are not presented as “best practices.” Rather, these principles are meant to represent appropriate practices that seemed to work for particular projects as they designed and refined their badge systems.

No one project embodies all of these principles, and the principles mean somewhat different things to different projects. The general principles have been broken down into specific practice categories, but they are still being refined. Some of these categories are discussed here. We seek input from individuals and projects at this time as we attempt to firm up these principles and categories.

Design Principles for Assessing Learning

The following principles are ordered by their prevalence among the different projects. The first principles are employed by many badging projects, and the last principles are employed by just a few. These principles are fluid and are being refined as we enter the second year of our project. Feedback and comments on these principles are welcome and encouraged.

Use leveled badge systems. A majority of badging projects use some kind of "leveling system" that learners work through as they complete activities and projects. For example, some projects use competency levels to structure their system so that learners earn badges as they master each level of the content (e.g. learning to add fractions before learning to multiply them). Some projects have set up a system where learners earn smaller recognitions like stars or points as they complete smaller goals, and these add up to a larger badge that learners can push out to their backpack (we are currently calling these metabadges; see the sketch below). Other projects have categories of badges, such as leadership badges and collaboration badges; these categories may not be levels, but they are included in this principle because they act as levels that learners can master.
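To make the metabadge idea concrete, here is a minimal sketch of how points might roll up through levels into a larger badge. All names and thresholds (Level, MetaBadge, points_required) are hypothetical illustrations, not any competition winner's actual system.

```python
# Minimal sketch of a leveled badge system; names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Level:
    name: str                  # e.g. "Adding fractions"
    points_required: int       # smaller recognitions needed to clear the level
    points_earned: int = 0

    def is_mastered(self) -> bool:
        return self.points_earned >= self.points_required

@dataclass
class MetaBadge:
    """A larger badge earned once every level beneath it is mastered."""
    name: str
    levels: list

    def is_earned(self) -> bool:
        return all(level.is_mastered() for level in self.levels)

# Usage: stars/points roll up into a badge the learner can push to a backpack.
fractions = MetaBadge("Fraction Fluency", [
    Level("Adding fractions", points_required=3),
    Level("Multiplying fractions", points_required=3),
])
fractions.levels[0].points_earned = 3
print(fractions.is_earned())  # False: multiplying fractions not yet mastered
```

The design choice worth noting is that the larger badge is computed from the smaller recognitions rather than awarded separately, which keeps the levels and the metabadge consistent with each other.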

Enhance validity with expert judgment. In an effort to substantiate the claims made about learning, many badging projects are using some kind of expert to assess learners and judge the artifacts they create. Projects are doing this in a variety of ways: some are using human experts like teachers or field experts; others are using some kind of computer scoring system or artificially intelligent tutors. Several projects reported that a combination of human and computer experts provides a useful balance of nuance and automaticity. Interestingly, some badging projects are actually giving their human experts badges. In this way, the assessors can earn credentials that show off their expertise, and in some cases badges may give them privileges to be a more prominent leader in a community.

Align assessment activities to standards: Create measurable learning objectives. Almost all of the badging projects have decided to align their activities to some established standard. In some cases these are established state or national standards; in other cases they are internal standards developed by their organization. By aligning assessment activities to standards, projects are better positioned to make and warrant particular claims about the kinds of learning that take place within their badging system.

Use performance assessments in relevant contexts. There are many different types of assessments. Projects are naturally thinking carefully about the claims they wish to make about learning and the type of learning they are supporting when selecting assessment formats. Some projects are using performance assessments. In some cases these are summative or final assessments that ask learners to apply the knowledge they have practiced in previous activities to a new and sometimes complex context. In other cases these are more formative assessments that share some features with portfolio assessment.

Use e-portfolios. Portfolio assessment can be cumbersome, but when implemented well, portfolios trace a learner's growth over time, and extensive, valuable feedback conversations can occur around them. Some projects encourage the learning community to provide feedback on the portfolios to the learners, while others make the portfolios public and encourage family and local community members to provide insights and feedback so that learners benefit from an array of opinions and points of view. Still other projects use portfolios primarily to track each learner's growth. In these cases, experts evaluate and provide feedback, although this interaction may be limited to a small number of exchanges between the learner and expert.

Use formative functions of assessment. Formative feedback was incorporated into enough of the projects to qualify as a specific principle. Many projects use some kind of formative assessment; this means that they create opportunities to provide feedback to the learner that shapes their ongoing and future work rather than just providing a score at the completion of an activity. Some projects do this in the form of expert feedback on artifacts. Some encourage peer feedback and collaboration, although in many cases the peer feedback is informal and does not directly influence formal assessment.

Use mastery learning. Some projects’ goals for learners involve mastering very specific skills. These projects use conventional feedback techniques where learners practice a skill in a particular context until they master it. Many of the projects using mastery learning do so in conjunction with some kind of assessment that provides the learners with more general formative feedback about their overall progress.

Use rubrics. Some projects have invested substantial effort in identifying or developing rubrics for assessing student work. Rubrics provide both the learner and the assessor a clear idea of expected levels of mastery. Some projects create their own rubrics while others use rubrics created by schools, districts, states, or organizations. Some rubrics are developed for specific activities, while others are developed more generally to be used for a variety of activities.

Promote "hard" and "soft" skill sets. In addition to learning specific content, some projects encourage the development of skills such as leadership and collaboration. Different projects have designed creative ways for learners to demonstrate these difficult-to-assess skills, such as designing activities so that they cannot be completed without collaborating with peers. Some projects are building in places for learners to acknowledge those who helped them reach certain goals, which in turn is helping to build collegiality and encourage collaboration within their communities.

Involve students at a granular level. A few projects have decided to involve their community in the design and assessment processes. They encourage learners to think about the kinds of activities that they think would best demonstrate desired skills, and involve them in the basic design of the badge system. They ask for learner input on different aspects, from the particulars of the activities, to the kinds of assessments that will be used, to the design of the badges and their specifications.

We Want Your Feedback!

Once again, not every badge system will use all of these general principles in its design. These principles outline assessment practices that should be considered when designing a badge system. Which principles inform a project will depend on the context in which the project is being implemented and on its goals. We welcome your feedback on these general principles, and hope that this work is useful as you move your badging project forward.

David.Gibson

Some thoughts about assessment

After glancing at all the principles, I began to wonder if we need psychometric validity as well as the validity of the expert community; if so, there should be some mention of how reliability is monitored and improved. What if one expert scores a work with an x but the next one scores it with a y - is this OK? An old rule of thumb used to be that if the stakes are low, then don't worry so much about reliability, but I'm not so sure that rule is a great one. (The broken watch is reliable, after all, but fails on validity; yet if we do NOT have reliable measures, then the validity suffers.)
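To make the reliability worry concrete, here is a minimal sketch of one standard inter-rater agreement check, Cohen's kappa, in pure Python. The expert scores are invented for illustration, not drawn from any badging project.

```python
# Minimal sketch: Cohen's kappa for two raters scoring the same works.
# kappa = (observed agreement - chance agreement) / (1 - chance agreement)
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    chance = sum((counts_a[c] / n) * (counts_b[c] / n)
                 for c in set(counts_a) | set(counts_b))
    return (observed - chance) / (1 - chance)

# Two experts score ten artifacts on a 1-3 rubric (hypothetical data).
expert_x = [1, 2, 2, 3, 3, 1, 2, 3, 1, 2]
expert_y = [1, 2, 3, 3, 2, 1, 2, 3, 1, 1]
print(round(cohens_kappa(expert_x, expert_y), 2))  # 0.55: moderate agreement
```

If kappa stays low, the rubric or the rater training probably needs work before the badge's claims can be trusted.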
 
I am also reminded of the triad of models needed for assessment ("Knowing What Students Know," Pellegrino et al.), and you might have your principles covering these three, but you might want to elaborate.
 
(1) You need a task that elicits the knowledge and skill you want to measure (and this is tricky - you sort of have that with the levels, standards, hard and soft skills, and relevant context);
 
(2) you need a model of what a performance of that task looks like when done very well, not so well, etc. (this overlaps with your levels and trajectory concepts and relies on expert judgment - you had mentioned auto-methods in the recognition blog, maybe bring those back here again); and
 
(3) you need a model of how you are going to use the evidence of the specific performance of someone to make a judgment about how much of the knowledge and skill they showed you (you have the rubrics, but there are many other ways to build an evidence chain).
 
One point you might want to make is that if the badges are going to be signs that a valid assessment has taken place (as opposed to say just motivating people to keep going), then the system has to have all three of these components.
 
Cheers from Curtin University.
 
 
Cathy Davidson

Triad of Models Needed for Assessment

I very much like this reminder of the Pellegrino et al. "knowing what students know" triad of models. In the online course I'm teaching in the Spring, I'm thinking of using a version of these models in every assignment, a variation of the old medical school "See one. Do one. Teach one." And I'd add "And then Share one" (tell us what you saw, did, and taught to someone else and how it worked). I'm also very interested in the validity/reliability and low and high stakes issue. I've been learning from Al Filreis's long experience with online community learning of modern poetry, going back to 1994, including really difficult, edgy, experimental poetry. He now teaches via a Coursera course and uses an interesting system of peer grading where, basically, every writing assignment has six or seven people responding to it. Possibly three or four don't contribute very much---but three or four others do, he says, and so each person ends up with far more commentary than a normal student would receive from a professor. AND he builds in something interesting: discernment, on the part of the learner, in evaluating which of the peer critiques are useful and which are not---all part of the larger project of learning how to read deeply and critically (including in peer-to-peer evaluation). The circularity of that intrigues me. I don't know that badging would fit into it, but I see how it could. I may quote your three nice, succinct restatements of the models in my own course description. Thanks for that.

 

And, Rebecca, thanks for this very interesting discussion---and, yes, I vote for visuals too.  That would be really powerful.    For our recent DML Competition, we asked applicants to use their institutional logos or create new ones and the power of seeing 200+ visuals is really inspiring in a very different way than text:  http://dmlcompetition.net/competition-category/summer-youth-programming

David.Gibson

Open feedback with scoring of the feedback

I like the idea of increasing the amount of critique in the feedback stream, especially when there is an expectation of recognition of the quality of that feedback. Negative or unhelpful critique from a few people is a normal part of being in an open community, and as long as flaming doesn't get in the way, it's just part of the natural range of external validity. A few years ago when the Web Project was a new idea (kids posting artwork for artists to critique), we had a simple three-step process and a simple rubric for the high school kids to apply.

REQUEST. First, the person posting a work for feedback had to ask for "specific feedback" - the more specific and the more it used a critical vocabulary, the better. By "specific," we meant to use the vocabulary of the art form (we were dealing with painting, music, dance, etc., and each form has its own vocabulary) and point to a particular aspect of the work that one had a legitimate question about or a need or desire to hear from others about.

e.g. a nonspecific request for feedback = "Do you like what I did?" "What do you think about this piece?" etc. 

e.g. a better request = "Did I capture the idea of silence by using these particular motions near the end of the piece?"

RESPOND. The feedback person had a 3-level rubric: 1) reply doesn't relate to the request 2) reply acknowledges the request or 3) reply responds to the request and offers an elaborative option for the requester to consider.

REPLY. The requester, to get the full value of the points for this exchange, had to do two things: 1) say thank you to each person who gave feedback and 2) when appropriate, tell the feedback person how they used the feedback to edit, amend, think about or change the work in any way.
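Here is a minimal sketch of how such an exchange might be tallied in code; the point weights and function names are hypothetical (we applied the rubric by hand):

```python
# Hypothetical tally for one REQUEST/RESPOND/REPLY feedback exchange.
# The weights are illustrative, not the Web Project's actual scoring.

RESPONSE_RUBRIC = {
    1: "reply doesn't relate to the request",
    2: "reply acknowledges the request",
    3: "reply responds and offers an elaborative option",
}

def score_exchange(request_is_specific, response_level,
                   requester_thanked, requester_told_how_used):
    """Return the points earned by a single feedback exchange."""
    points = 1 if request_is_specific else 0        # REQUEST: used the art form's vocabulary
    points += response_level                        # RESPOND: 1-3 per the rubric above
    points += 1 if requester_thanked else 0         # REPLY: thanked each responder
    points += 1 if requester_told_how_used else 0   # REPLY: said how feedback was used
    return points

# "Did I capture the idea of silence...?" earns a level-3 reply and a full REPLY.
print(score_exchange(True, 3, True, True))  # 6, the maximum under this sketch
```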

dthickey

On peer grading

Cathy--

Your points about peer grading are interesting. I am looking for everything I can find in advance of my Assessment BOOC this Fall. I really liked a post last year by Audrey Watters. Audrey mentions the poetry course, raises the issue of automated scoring within things like poetry, and goes on to say that Coursera's coming peer assessment system sounds fine. But a more recent piece by Jonathan Rees points to a huge flaw in that system. Coursera tries to motivate students to give grades by not letting them see the grades others gave unless they give grades themselves. This explains why so many of the peer grades have no comments at all. They created a vicious cycle.

To me this is a good example of why peer grading or any sort of peer-driven summative assessment will always fail. The summative functions of assessment often overwhelm the formative. I have been experimenting with a very different approach that has been much more formative and low stakes. It is entirely public and has no formal stakes attached. Students simply post comments starting with an ampersand as a "stamp" that serves to "endorse" a classmate's wikifolio each week. In my current class I just use endorsements to encourage commenting and discussion. In my BOOC I plan to require at least one endorsement for the wiki to count. Not certain it is going to work, but Google thinks it is worth a try. Interactive learning at scale is going to be difficult.

In some cases we are linking peer-awarded stamps and teacher-awarded badges.  Peers give each other stamps and teachers mediate these recognitions and issue badges accordingly.  
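Here is a minimal sketch of that stamp-to-badge flow; the marker, threshold, and names are hypothetical, and in practice the teacher's mediation is a human judgment, not a script:

```python
# Hypothetical sketch: peer-awarded "stamps" gate a wikifolio for teacher review.
# Comments that begin with the stamp marker count as endorsements.

def count_stamps(comments, marker="&"):
    """Count peer endorsements: comments beginning with the stamp marker."""
    return sum(1 for c in comments if c.strip().startswith(marker))

def ready_for_teacher(comments, min_stamps=1):
    """A wikifolio 'counts' once it has enough endorsements; the teacher
    then mediates the peer recognition and decides whether to issue a badge."""
    return count_stamps(comments) >= min_stamps

comments = ["& Nice use of evidence in your policies section!",
            "Could you say more about reliability?"]
print(ready_for_teacher(comments))  # True: one stamp, ready for teacher review
```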

 

And regarding the graphic images, absolutely!  I was just at the proposal site and was really happy to see icons for every project.  

mbranson

For students who always got trophies, regardless. . .

Your last point, to me, is the most salient: "as opposed to say just motivating people to keep going." Keep in mind that the current generation of students always got the trophies, always got the ribbons, always got the accolades, regardless of their "mastery" of any feat.

What we find in public two-year colleges is this very conundrum: how do we motivate students to keep going, to keep striving, or--in Tennyson's words--"To strive, to seek, to find, and not to yield." The idea of badges as motivators along the way is something I have encouraged my faculty to explore and use.

Concomitant with the badge notion is also the encouragement to use game modality for freshman students. We all know the hours of interest games have generated from students--why not exploit this approach for delivering instruction?

Our students are not us. . . .

rcitow

re:trophies

I'll say that I think the "everybody gets a trophy" phenomenon is pretty problematic. As a teacher, my students expected "trophies" for mediocre work because they were used to being rewarded for everything. I tried to teach my students the value of different kinds of praise and rewards, and the value of working toward a goal and reaching it. 

I am at the Games, Learning, and Society conference right now, and I have been thinking about what it means to use games and game formats in the classroom. I think that rather than trying to make curriculum more like a game, we should take these broader lessons from games and work them into our teaching:

  • It is OK to fail; we can learn as much if not more from our failures as from our successes, so experiment more and try new approaches to problems
  • learning content can be relevant and fun
  • use what you learn in a lot of different contexts

I worry that focusing too much on adding a game format to curricula can undermine the learning process. I designed a curriculum for a Hackjam that used a badging system with a point system, and in the pilot, the participants were so caught up in getting points that they weren't thinking about how they were learning these new technical skills.

We do want to keep students going, but I don't think that means we need to award a badge for every action. Show them that the content is relevant to their world and how to think about their own learning processes, and they will want to learn more. That's what I focused on in my own teaching. 

dthickey

On trophies and motivation

Mark and Rebecca--

Great exchange. It always amazes me how big the void is between motivation researchers and assessment researchers. In educational contexts, the relationship between motivation and assessment is massive and complex. It is going to be the case with badges for sure. A new post on design principles for motivating learning with digital badges was just posted. It gets at some of these issues.

ottonomy

re:gamifying curriculum

Rebecca,

I love a couple things you mentioned here, and I wish that more of the hype on "gamify all the learning!" paid more attention to what really makes games fun. 

Becoming familiar with how things work in a game domain, applying experience, experimenting with different strategies, and maybe gaining mastery is a fun and natural process - if the game is well designed, tested, and balanced.

Even in a software engineering course I took that used an autograder on homework assignments, there was some playfulness in overcoming the limitations of its programming. Got a halfway solution? Try it and see what the feedback is. Hmm, 50%. Iterate... Got a working product, but the autograder can't tell it's right. Fiddle some more. 90%. Try another tack. Ahh, there's the 100! This had the same feel as testing out the rules of a game and running different strategies through. I'm sure some people found this process a little more frustrating than I did.

Slapping some "game elements" on a basic lesson plan doesn't automatically make for the type of engaging experience that keeps the Xbox on until the wee hours. The best games provide a space for familiarization, experimentation, iteration, and mastery. You don't win every time.

I'm not even always against the sort of slap-a-game-on-it lesson plans like "Topic X Jeopardy!" for unit review, but there should be no confusing that with the quality of the best games or the best lessons.

Developing good games and good principles takes a similar process too. You probably don't have a great one if you got it "right" before you've iterated a few times.

Vanessa Gennarelli

Choice and Community

Hey Rebecca, hey Dan! A few comments:

  • Visuals and more examples. This is comprehensive, and yet I think I might only follow because I've watched all the DML folks as their projects evolve. Maybe screenshots of the mechanisms themselves will make clearer how / in what ways rubrics have been deployed, e-portfolios are used, etc.
  • Where is the community? I see the piece on Experts, and that is definitely something we are tinkering with at P2PU. But so much of being part of a learning community is a.) learning how to become/take part in a learning community and b.) giving feedback in the context of a community's norms and values.
  • What about choice? Badges are unique because, in many contexts, learners elect to apply for one. Indeed, perhaps the only thing that really interests me about "leveling up" is the option to watch learners choose their own paths. I've been reading Dan Schwartz's "Measuring What Matters Most" and I think additional coverage about choice-based assessments/interest-driven learning might strengthen the document.

As always, you know where to find me: vanessa@p2pu.org

rcitow

re: choice and community

Thanks for these comments!

Others have asked us for visuals as well. We are working on making all of this more accessible. It is good to know that you think visuals would help. 

While I haven't addressed community directly, under peer feedback I describe how different badge systems encourage community building and collaboration. I will work to make this more distinct.

I am reading Schwartz right now as well, and was just writing about the importance of choice in the literature review we are putting together. I'll be sure to highlight this. 

Thanks for your feedback. I'll incorporate it into the next stages. 

dthickey

On Visuals and Community

Vanessa--

Sorry for my delayed comments.  I was on vacation and then was at the Games Learning + Society and Computer Supported Collaborative Learning conferences.  

Thanks for the great comments and suggestions. We have struggled a lot with the first two. In response, I have decided that we need to create Working Examples (www.workingexamples.org) for the projects we are studying. This will make it simple to bring in graphics and will also help create community around these principles. Our project charge was pretty narrow--HASTAC and Mozilla are the leads on the community, so our job is just to support it.

But I am pretty excited about making WEs for the projects. WEs are organized into ideas at the seed, sprout, and bloom phases. These phases correspond nicely to the way our project identified intended practices (in proposals), enacted practices (going forward), and formal practices (after the DML funding was over). I figure that if we have a separate WE for the principles in each of the four areas of learning (recognizing, assessing, motivating, and studying), those can be linked to graphic WEs.

And.... I think it will be a good way to get the community involved. In particular, I hope we can get the community involved in identifying the relevant research and external resources. Sheryl Grant made a great start at this in her annotated bibliography, and I hope we can keep that momentum going by linking research to more specific practices and design principles.

mbranson

21st Century Learners

I believe that our discussion of gaming here is tied to a recent piece by Terry Heick, "How 21st Century Thinking Is Just Different." At one point he makes this critical assertion:

"If the 20th century model was to measure the accuracy and ownership of information, the 21st century’s model is form and interdependence."

This sentence challenged me in ways no other commentary I have read in the last month has. It confirmed for me that Millennials (or Gen Y'ers) have evolved into a tribal structure because of the internet. It confirmed for me that gaming has extreme potential for today's learners. It confirmed for me that my faculty who are always incensed by issues of plagiarism all "cut their teeth" in the 20th century model of learning.

David.Gibson

Before the 20th C model of

Before the 20th C model of double checking accuracy (and scientific peer scrutiny, etc.), some ideas took hold simply because they were in print (see http://www.amazon.com/Man-Misconceptions-Life-Eccentric-Change/dp/1594488711 for an example).

So I think it will be interesting to see if there is an assumption of "buyer beware" on all knowledge, which has become the new norm, so that no amount of authority-producing formalism is good enough anymore. If so, then we all must simply do our own double checking - if we feel like it.

Does this lead to a "gut check" level of acceptance of knowledge on a par with other things that we either "like or not"? (Does knowledge consist primarily of what I like and believe, including what my friends also like and believe?) David Williamson Shaffer talks about "multi-sub-culturalism" as the state of affairs now with global social networking (me and my friends are all the culture I know, and perhaps all I need). So does the shared knowledge of my friends amount to the only validity needed?

If the new C is about form and interdependence I hope it is not without some regard for and awareness of accuracy and ownership of ideas...or we might all live in our own increasingly separated (and friend-driven) echo chambers.

dthickey

David--The problem with "gut

David--

The problem with "gut level" IMO is that it places all the onus of validity on what individuals think about knowledge on the web. The whole point about knowledge networks is that they distribute credibility issues across users. The assessment community dismisses "face validity" as an "unsanctioned" form of validity. But that misses the point. I think Carla Casilli got it right when she talked about how credibility forms in knowledge networks and is likely to do so around badges as well.

mbranson

Everything on the internet is true, isn't it?

This reference to the commercial running now about a young woman awaiting her "French" boyfriend, whom she met online, is one way to continue this conversation. From the ME's perspective, there is little differentiation between what is online and what is "true." Also, we are finding that many ME's and others are using "reviewers" of online content to guide them in shaping their opinions. And I have to consider: is there an objective "truth" outside of us that we are seeking through "double checking accuracy (and scientific peer scrutiny, etc.)"?

I am increasingly cautious of this last sense of "objective truth." It may be due to my age. It may be due to the inherent mistrust I have for data (for too long, I have looked at data and seen ways to make the data do what I want). And it may be my own philosophical (for lack of a better word) perspective of the world. . . .

I confess that even tho' I relished the 1950s Superman who defended "Truth, Justice, and the American way," I am embracing a different model illustrated by Five for Fighting's song, "It's Not Easy Being Me" (which, ironically, is the voice of the 21st century "Superman" from the TV series Smallville).

 

David.Gibson

Re: "Gut" reactions and validity

Dan, I like that you have brought up the issue of validity. It seems to me that there is a two-fold problem with validity (and I was not aware that "face validity" has a bad rap, if I correctly caught your meaning). One problem is that my personal gut reaction to things is a kind of validity that forms a filter for my experience - so if I don't personally see the validity, and then if there are two or three others who also don't see it, then what happens to that knowledge? Certainly if we take it out to the whole of humanity and nobody sees the validity, then we might say that it has attained a kind of low status in knowledge (it is invalid).

The second problem is the group's shared validity, which I think is what we mean by "external validity" (but please correct me if I'm wrong on this). We certainly would say that if the "knowledge network" (which I assume means people who know some things and share with each other) says that something is valid, then for that group, it is valid.

This is where Shaffer's idea of "multi-subculturalism" comes into play in my thinking. Let's suppose that Group A says that (a) is valid and Group B says that (b) is valid, and these two ideas conflict with each other. Both (a) and (b) cannot possibly be true at the same time (I realise that there are many other cases where they can be true and conflicting at the same time, but let's just consider this special case).

Now how do we assess knowledge in these groups (is there no between-group assessment possible)? Peer-to-peer assessment in Group B should confirm (b) and Group A should confirm (a) - but the fact is that only one group gets it right. Now where do we go? Do we need to form a Group C and mediate a dialog? Is this some kind of Hegelian synthesis problem? I'd love to know what people think: what this means for knowledge, and what this means for assessment of knowledge.

dthickey

RERE on gut reactions

David--

Oh, this is getting deep into the trenches of validity now. One of the reasons I have sometimes cross-posted longer versions at Remediating Assessment is that with friends like you things go pretty far pretty fast. But let us do this here and see if we can pull some of the members of the badges community into the messy world of validity.

As for external validity.... Validity has traditionally been viewed strictly within the assumption that an assessment represents the proficiency or tendency of one individual, and that the validity concerns the claim that is made regarding that score. Even though everybody says stuff like "valid assessment" and "valid badge," there is really no such thing. When I put on my traditional assessment hat, I go for Sam Messick's six distinguishable aspects of construct validity (content-related, substantive, structural, generalizability, external, and consequential). Messick described external validity as the extent to which the scores' relations with other measures and behaviors reflect the expected relations implicit in the domain theory. Of course, I care much more about consequential validity than Messick did. I wrote a pretty readable paper about this when I was a postdoc at ETS and got to see Messick before he died.

But as a situative theorist, I only put on the traditional hat when I need to (to obtain widely convincing traditional evidence). And yes, it is a Hegelian synthesis. The reason I am so passionate about situative theories is that Jim Greeno helped me appreciate a dialectical reconciliation of individual assessment evidence and social assessment evidence. This is why Carla Casilli's points about "credibility" are so important. The credibility of the claims contained in a particular badge WILL be crowdsourced. If you embrace the more common aggregative reconciliation of individual and social evidence, then the social evidence is simply an aggregation of the individual evidence. Given the way BJ Fogg has shown credibility emerges in a communal fashion in social networks, it is far more appropriate to take the dialectical stance and treat what individuals find credible as a "special case" of the communally formed knowledge in a particular network.

We will work some of this stuff into our project reports and papers.  There is a special issue of Assessment in Education coming out this year on Sociocultural approaches to validity that should help us all figure out what we need to figure out.  Badges turn out to be a near-perfect context for thinking about it.

Cathy Davidson

Hope You'll Join Future Class!

This is an incredible exchange.  I am enjoying and learning so much.  Fabulous.  We'll make sure to capture it in the Digital Badges collection and bibliography and highlight it.   The back and forth is some of the most sophisticated arguing out of points and implications that I've seen anywhere.

 

A proposition: if either/both of you are teaching in the term beginning January 2014, or even if you host a workshop, I hope you will join a "History and Future of (Mostly Higher) Education" project that HASTAC is mounting then at many universities around the world. We really want STUDENTS to be taking their own future education seriously and will be providing plenty of platforms through which they can communicate with one another and the larger world about their own goals and aspirations and ideas and research. I think we're going to have medical school deans and law schools communicating---maybe even an executive education center---and I'll be doing a Coursera MOOC on the topic in addition to a face-to-face class and a PhD Lab in Digital Knowledge informal seminar on this. Your assessment work would be incredible to add to the roster---so this discussion could continue with many students participating too. Here's what we have posted so far, but lots more is brewing:

http://hastac.org/collections/history-and-future-higher-education

 

 

 

dthickey

RE: Future Class

Cathy--

Thanks. I have been following your progress pretty closely and have been considering that course. I put off some of these issues until I dove full on into getting my own open course up and running, and now I see that there is quite a bit already up there to draw from. Thanks for all your leadership in this area!

I want to try to really get out in front of the badges issue. The assessment principles really don't surface one of the most important things of all: what do we have to do to make a badge so meaningful that earners will push it out over their social networks, and what do we have to do to make the claims in the badge so credible that it will accomplish real work for the earner? For example, I will issue a State Assessment Expert badge for people who complete all three sections of my BOOC (Practices, Principles, and Policies). I want the earners to be able to email that badge to their administrators to request continuing education credit and have that be readily honored. And I want them to push it out to Facebook because they are proud of earning it. What do I put in the metadata to make that happen? Stay tuned. I am trying it out in my current for-credit online version of the course.
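Here is a rough sketch of the kind of metadata I mean, loosely following the shape of the Open Badges assertion and BadgeClass structures; every URL and value is a placeholder for illustration, not the actual BOOC badge:

```python
# Rough sketch, loosely following the shape of the Open Badges spec.
# All URLs and values are placeholders.

badge_class = {
    "name": "State Assessment Expert",
    "description": "Completed all three sections of the assessment BOOC: "
                   "Practices, Principles, and Policies.",
    "image": "https://example.edu/badges/assessment-expert.png",
    "criteria": "https://example.edu/badges/assessment-expert/criteria",
    "issuer": "https://example.edu/issuer.json",
    # Alignment to published standards is one lever for credibility: it lets
    # an administrator judge the claim against something they already trust.
    "alignment": [{
        "name": "Example state assessment-literacy standard",
        "url": "https://example.gov/standards/assessment-literacy",
    }],
}

assertion = {
    "uid": "booc-0001",
    "recipient": {"type": "email", "hashed": False,
                  "identity": "earner@example.edu"},
    "badge": "https://example.edu/badges/assessment-expert.json",
    "issuedOn": "2013-12-15",
    # Evidence is the other lever: a link back to the earner's actual
    # wikifolios and endorsed artifacts, so the badge can do real work.
    "evidence": "https://example.edu/booc/earner/wikifolio",
    "verify": {"type": "hosted",
               "url": "https://example.edu/assertions/booc-0001.json"},
}
```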

Cathy Davidson

Good luck!

What a great project. Keep us posted. Really interesting work, Dan. I cite back to it in my recent omnibus badge post "Badges Now." This discussion between the two of you is extremely thoughtful and models the kinds of questions we need to address next. Thank you as always for this engaged participation and for your research. I'd love to see widespread adoption of a better system than what we have now and, until we have that real coverage across many and diverse institutions, we're pretty much stuck with what we have---despite various innovative and far superior systems that many have tried to put in place.

 

I think your comment that it won't work unless and until it is widely accepted is exactly right. As with all peer review systems, it is the development of the peers that is prerequisite for the system. Lots of people over the years have had great ideas (portfolios in the 1990s, peer-to-peer editing systems recently) but until they are massively scaled and implemented, tested and deployed, they are ideas, not structural and systemic disruptions of an existing apparatus and the underlying social structures that support them.

 

For me personally (and I'm not speaking for anyone beyond myself here, although I happen to know others share my conviction), what is most exciting about the momentum around badging now is that we have such a terrible income disparity---and an increasing one. Sadly, higher education is no longer the stepping stone into the middle class; because of increasing costs and increasing student debt (a tragedy that Congress just made worse), higher education more and more reinforces income inequality rather than remedying it. So we know that the high school dropout rate is hugely influenced by "relevance" in the sense that, if you know you have no chance at future educational advantages and advancement, you are more likely to drop out of high school. Badges for skills and abilities that are useful in the world, that provide careers and jobs (including creative jobs from sound mixing to graphic arts and many others), but that do not require a college degree might be one of the only roads to advancement for an increasingly disempowered underclass. Perhaps that is too much hope to place on an assessment system. But it is a hope. To me it is tragic how much the present system of testing reinforces the present system of inequality that leads to testing failure that leads to greater inequality . . . a devolution so circular it strangles optimism of any kind.

 

Your thoughts? 

 

David.Gibson

Granularization of accreditation

I agree that badges may be the technology needed to undo the "testing system" (which I think is also the "course system" and "diploma system," etc.) and to open the doors of access to education and life advancement. Gotta run to a session in Torun, Poland right this sec at the WCCE... having just heard Geo. Siemens via Skype to this international crowd... on exactly these same issues... so the world is "abuzz" with the challenge.

Cathy Davidson

Enjoy--and hi to George Virtually!

Sounds like a great conversation to continue.  I admire George's ideas on this point and learn a lot from all of you.   Maybe we should do an online Forum sometime---on hastac.org---on new forms of assessment and credentialing.  Let's think about it.  Invite George and others too.