Crowdsource Grading: Or, How Prof D Got an A!

Yesterday I received the sweetest gift of all:  two of my deans wrote me a formal letter of "heartfelt appreciation," with cc's to my department chair and my DUS, letting me know that my teaching evaluations from last spring's "This Is Your Brain on the Internet" put me among the "top 5% of all undergraduate instructors at Duke."  That blew me away.   For about a minute.  And then I began to wonder (and put this in my letter back to my deans), what does it mean that a prof who has become infamous for a blog about "how to crowdsource grading" is beaming with joy that she got a great grade?   Methinks it's time for a blog!


So here's the backstory:  it was in last spring's version of "This Is Your Brain on the Internet" that a few of my top students (A+, not B+, students) got together and, in their evaluations, wrote me a very carefully worded suggestion that I rethink grading.  Everything about the course had been so revolutionary and eye-opening, from the peer grading to the unusual combination of disciplines (from poetry to neuroscience to digital theory), and all the peer-driven conversations had been so engaging and important . . . but then grades were assigned in the same old way.   "Assessment"--whether IQ, multiple-choice testing, SATs, NCLB, learning disability tests, aptitude tests, or tenure and promotion rules--was a topic in the course.   We talked about how, if you measure for one thing, you overlook other qualities, marking them as deficiencies rather than differences, and that means a lot of human talent gets overlooked and wasted.  But then my mode of assessment had been entirely traditional: Prof gave assignments.  Students did assignments.  Prof marked assignments.  Prof gave grades.  Not very thoughtful or original, to say the least. 


If you are a teacher worth her salt, you take what the A+ students say seriously.   So I spent the summer really thinking about and doing research on assessment.   I came up with a new method, wrote about it in "How To Crowdsource Grading," and that blog quite literally went all over the world.  (You can read it here.  I had no idea it would be incendiary.  See what you think.)


So now this semester's version of "This Is Your Brain on the Internet" uses a combination of contract grading and crowdsourced or peer grading.   The contract is something I came up with in tandem with the TA in the course, Alex, and the two teaching apprentices, Ashon and Bill.   None of them has ever taught before, so I try to include them in all I do, a way of having them learn by teaching.  We think through pedagogical issues together, including this one of grading.  (NB:  they are amazing.  I couldn't work with three better grad students.  They add to the class every day, and I'm charmed by how accepting the undergrads are of these three grad students, plus one returning student earning her MA, plus two HASTAC team members.  The unusual combination is part of the wonder of this course.)  


The contract took us about two weeks to come up with because it is, after all, a contract.  We presented one version to the ISIS 120/English 173 students, they gave us feedback, we revised it, presented that, and they gave us more feedback.  Once we had negotiated the contract, everyone signed it, and we copied and returned a signed copy to all.   The contract specifies all of the course requirements (there are a lot) and allows students to contract to earn an A if they satisfactorily complete each item on the long list of requirements.   The contract also spells out the consequences/penalties for failure to meet any part of the contract.  


But now here is the crowdsourcing part.  The two students in charge of each week's readings (I have made up a proposed syllabus, and they can accept, reject, augment, or propose something entirely different for their week as peer leaders) are also in charge of reading that week's blog posts by each student in the course, responding thoughtfully to each post, and then reporting back to TA Alex on whether everyone has done them well.   If they believe someone has not done the assignment to the A level required contractually, they give the student one more chance to redo it, directing the rewrite.  If it is satisfactory the second time, they report to Alex, and the student fulfills the contract.   Then, for the next class, those two peer leaders write blogs that are read, commented upon, and evaluated against the contractual obligation by two other students.


Of course, I'm also reading everything--the blogs and the comments on the blogs--and making posts too.  And students are posting on one another's work.   All blogs are due at 11:59 pm the night before the class led by the students.   So class time is often spent three or four levels removed from the readings, because everyone has already staked out an idea, written about it, and often received an evaluation and comment back before class.   The result is a class discussion among experts on the topic.   That is, they have all not only done the reading but presented their take on it before the class.  The shared knowledge with which we go into each class means we can roam anywhere the peer leaders want us to go.   I leave astonished, over and over again, by the depth and importance of the class discussion, the seriousness and honesty, and by the incredible insights that come out in the blogs.    


I've never had a better classroom experience.    I am full of admiration for these students, most of whom are about to graduate.  We keep saying: this is the class for the rest of your education, for the one that happens the day you graduate.  We challenge them to ask questions for that education, the one that comes after the grading and the courses and the requirements and the major stops. 


But isn't it hypocritical to be kvelling about this course, where grades have been rendered trivial by these peer-responsible, contractual, crowdsourced methods, and to be so happy that Prof D got her A?   I thought so at first, then realized my deans had done the same thing I am espousing in my class.   They have no idea how good a teacher I am.  Neither George nor Lee has ever attended a single one of my classes.   They haven't installed a "nanny cam" to watch me.  The only evidence they have that I'm a good teacher comes from aggregating the evaluation scores of the students who took my class.   They are congratulating me for being a good teacher--by trusting the evaluative abilities of my students.  


In short, my deans have crowdsourced grading.   They have trusted the wisdom of crowds (in this case, Duke students) to possess a good enough, sound enough, and mature enough ability to evaluate what is or isn't good teaching. They haven't brought in a professional assessment team. They haven't imposed some "rigorous" method of assessing teaching (as many states are proposing right now).   They have entrusted students with the ability to judge who teaches them well.  What a concept!   We know this is true in our private lives; if our child comes home weeping over a cruel teacher, or is bored to death, or is failing a subject they should be bright enough to learn, we don't need a formal assessment team to tell us the teacher is bad.   If we are good parents, we act on it.   But now, as a nation, we are in this sick spiral of needing formal assessment for all kinds of things that, in fact, we are perfectly capable of assessing more directly and experientially.


In this case, I was lucky enough to have wise deans (this is called positive reinforcement!) who trusted non-experts (i.e., students) to render verdicts.   They trust this crowdsourced evaluation mechanism enough that they have written to those profs who scored highly, thanking us for "infusing instruction with a sense of dynamic engagement and inspired learning."  That makes me proud.  I feel great today.   And I feel especially great that the students who graded me so highly are the ones who gave me a far greater gift:  they challenged me to rethink grading.  
