Peer and self assessment in massive online classes

Abstract: Peer and self-assessment offer an opportunity to scale both assessment and learning to global classrooms. This article reports our experiences with two iterations of the first large online class to use peer and self-assessment. In this class, peer grades correlated highly with staff-assigned grades. In the second iteration, 42.9% of students' grades were within 5% of the staff grade, and 65.5% were within 10%. On average, students assessed their own work 7% higher than staff did. Students also rated peers' work from their own country 3.6% higher than work from elsewhere. We performed three experiments to improve grading accuracy. We found that giving students feedback about their grading bias increased subsequent accuracy. We introduce short, customizable feedback snippets that cover common issues with assignments, providing students more qualitative peer feedback. Finally, we introduce a data-driven approach that highlights high-variance rubric items for improvement. We find that rubrics that use a parallel sentence structure, unambiguous wording, and well-specified dimensions have lower variance. After revising rubrics, median grading error decreased from 12.4% to 9.9%.


@article{Kulkarni:2013:PSA:2562181.2505057,
  author     = {Kulkarni, Chinmay and Wei, Koh Pang and Le, Huy and Chia, Daniel and Papadopoulos, Kathryn and Cheng, Justin and Koller, Daphne and Klemmer, Scott R.},
  title      = {Peer and Self Assessment in Massive Online Classes},
  journal    = {ACM Trans. Comput.-Hum. Interact.},
  issue_date = {December 2013},
  volume     = {20},
  number     = {6},
  month      = dec,
  year       = {2013},
  issn       = {1073-0516},
  pages      = {33:1--33:31},
  articleno  = {33},
  numpages   = {31},
  url        = {},
  doi        = {10.1145/2505057},
  acmid      = {2505057},
  publisher  = {ACM},
  address    = {New York, NY, USA},
  keywords   = {MOOC, Peer assessment, design assessment, design crit, massive online classroom, online education, qualitative feedback, self-assessment, studio-based learning},
}