I read Jonathan Rees’ post on the flaws of peer grading in MOOCs with interest.
Because of the size of the course I think I can safely assume that many of my fellow MOOC students inevitably had no history background at all, yet the peer grading structure forced them to evaluate whether other students were actually doing history right.
The implicit assumption of any peer grading arrangement is that students with minimal direction can do what humanities professors get paid to do and I think that’s the fatal flaw of these arrangements. This assumption not only undermines the authority of professors everywhere; it suggests that the only important part of college instruction is the content that professors transmit to their students.
Read more: http://www.insidehighered.com/views/2013/03/05/essays-flaws-peer-grading-moocs#ixzz2MiKxNP7b
Inside Higher Ed
What are the assumptions about the backgrounds of MOOC students? Do we know whether one’s fellow MOOC students had any (history) background at all?
What are the assumptions behind peer grading? I can see the values, merits and limitations of peer grading in certain settings, such as evaluations of group projects and individual assignments, but in the case of MOOCs, would there be huge variations in the grading when the same work is subject to the assessment of different peers or professors? The use of 0, 1, 2, 3, etc. as a grading scale is appropriate when the performance criteria are clearly understood and a concise marking guide is provided. However, given the “unknown” abilities of the peers assessing the work, such as this “professor”, how would one be able to judge professionally, except from the report of this student professor?
Professional judgment in assessment requires one to comprehend the significance and application of validity, reliability, authenticity and sufficiency in the evaluation and assessment of a piece of work (essay, report, project or artifact). That’s why we need quality assessment (control over variation in “standards”, measured against concrete and reliable performance standards). Could super professors do this for hundreds, or tens of thousands, of students in MOOCs? That is mission impossible. Maybe one could first develop a course “training” the students in how to assess in a professional manner (based on what professors would normally do). Even then, one would realize that there are always variations in the assessment tools and methodology when students are asked to do the assessment, as an “experiment” in MOOCs.
There are many assumptions made here, and I just can’t help but quote the Assumptions Theory that I suggested. The xMOOCs are based on the assumption that people can learn from the best professors in the world, with peer assessment and grading rendered possible by advances in technology. We have also assumed that participants (students in particular) have the skills to peer assess and to provide valued comments to other students. I would like to continue stating the assumptions, but think it better for you to share your own assumptions on this interesting topic.