Cathy Davidson's post on her use of crowdsourcing techniques to facilitate grading in her courses has sparked a lot of interesting commentary, both on the HASTAC site and on this post at the Chronicle of Higher Education's "Wired Campus" blog.
I can't think of a more meaningless, superficial, cynical way to evaluate learning than by assigning a grade. It turns learning (which should be a deep pleasure, setting up for a lifetime of curiosity) into a crass competition: how do I snag the highest grade for the least amount of work? how do I give the prof what she wants so I can get the A that I need for med school? That's the opposite of learning and curiosity, the opposite of everything I believe as a teacher, and is, quite frankly, a waste of my time and the students' time.
Although I agree with this assessment of traditional grading, in fairness I have to say that traditional grading isn't entirely worthless. In many ways, traditional grading prepares students for some of the types of evaluation that they will see when they leave the academy. Along these lines, Ian Bogost's comment on Cathy's original post is a reasoned and thoughtful defense of traditional grading that emphasizes how this method of evaluation teaches students the importance of avoiding the suck. According to Bogost,
as educators, we are doing our society a disservice by equating "doing fine" with doing well, or with being excellent. When I hear about starting students out at "A" and letting things fall off from there, I wince. A is arbitrary, sure, but if we have to use this arbitrary scale of A - F then we might as well balance it against some non-arbitrary concept. And one non-arbitrary concept is the difference between sucking, doing fine, and being excellent. When showing up and getting by is enough to count as excellence, something's wrong.
I think part of my attitude comes from having spent a lot of time in industry. Do you know how hard it is to find people who want to do more than "fine" in industry of any kind? It's very rare. And like I said, nobody gets hurt, but nobody gets excited either. I believe that the university is a good place and a good time for students to learn about the difference between getting by (note that I didn't say "mediocrity") and striving for excellence. I also think they should be allowed to make the choice, which is not to say that I shouldn't be required to encourage them.
I think it's safe to say that no one involved in this debate thinks that encouraging students to be excellent is anything other than a worthwhile goal. However, excellent outcomes are merely one standard for evaluation, and it is not entirely clear that methods that rely solely on outcomes are the ideal ones for the academy. As Evan Donahue points out in his comment, "the model of evaluation...does more than encourage or discourage work, it shapes the very way work is imagined."
In light of this comment, it seems that the primary questions to ask when choosing a grading method are these: what do we want to measure, and how will our choice of measurement affect our students' learning? Traditional grading measures outcomes, giving its highest rewards to those outcomes--papers, test scores, projects--that are deemed excellent. I would argue that the system Cathy is proposing measures participation, which is itself a highly worthwhile goal. Effective, thoughtful participation is an important part of the model of social interaction that the internet is enabling. With regard to traditional grading, we can legitimately ask whether measuring excellent outcomes leads to an increase in excellent work from students or--better yet--to those students developing into lifelong learners.
For the past few years, I have used the Learning Record to evaluate student work in my courses. The Learning Record is a portfolio-based grading system that measures students' learning along five dimensions: confidence and independence, skills and strategies, knowledge and understanding, the use of prior and emerging experience, and reflection. Throughout the semester, students record observations about their learning process and collect samples of their work. Then at the midterm and final they evaluate their own work against a set of criteria--typically, generic grade criteria combined with course-specific goals--and must make arguments for why the evidence of their learning meets those criteria.
While the Learning Record isn't perfect, in my experience using it I have found it to be far superior to traditional grading. Not only does it encourage student participation, both in their own learning and in evaluating their fellow students' learning records, it often leads to better outcomes because it allows students greater freedom to fail--or, to borrow Bogost's terminology, to suck. In other words, it allows students to experiment and take risks. To return again to Evan Donahue's comment on Cathy's post:
If I produce a poor assignment at first because I attempted something that didn't work very well, but continue working with those ideas and eventually yield something really nice, how should that be graded? One could argue that the grading will balance out if you do well on the later assignment to make up for the first, but what that glosses over is the fact that if I do poorly on something I was initially working with, I may very well be discouraged from continuing with that.
How many times have our most instructive experiences been the result of sucking? By privileging excellent outcomes, traditional grading systematically ignores the type of learning that comes from making mistakes (and, subsequently, from identifying and correcting those mistakes in the future). The Learning Record avoids this blind spot of traditional grading because its object of interest isn't excellent outcomes (although students in my classes frequently produce excellent outcomes) but rather, as Donahue says, the student's "development throughout and beyond the course."
(And the Learning Record avoids what I think is the most damning critique of traditional grading: the possibility that a student can achieve an excellent outcome without learning anything at all. When I was an undergraduate, I was required to take an introductory Algebra course during my freshman year. I had a strong math background, so I spent each class period talking to my friends in the back of the room or reading the newspaper. I aced all the tests. At the end of the semester, I received an A, yet I believe I can say without qualification that I learned nothing in that course. In the years that I have been teaching introductory writing courses, I have encountered a number of students who came to my class possessing strong writing skills. When I graded my courses in the traditional way, these students had no motivation to learn--that is, to push themselves to improve--because they were already able to produce writing that was worthy of an A. If I had pushed myself in that Algebra class--for example, by working the more complicated, unassigned problems in our textbook--I would have gained a better understanding of mathematics. Similarly, the strong writer in my course who pushes him- or herself can become a more nuanced writer. In my experience, making the student's learning the object of measurement in my courses has prompted students to drive themselves toward this kind of improvement.)
While traditional grading can be ideal for those classes where the goal is to closely model the kinds of situations students will face after they leave school, for other classes this method may make less sense. It is in these courses that evaluating participation, as Cathy's proposal does, or evaluating learning via the Learning Record has its place.
If you are interested in learning more about the Learning Record, you can read the full documentation here. Additionally, in the video below, Dr. Margaret Syverson outlines the use of the Learning Record as a tool for achieving social justice. You can view the video in HD here, which may make it easier to read some of the slides.