The impact of peer assessment on mathematics students’ understanding of marking criteria and their ability to self-regulate learning


  • Chris Brignell University of Nottingham
  • Tom Wicks University of Nottingham
  • Carmen Tomas University of Nottingham
  • Jonathan Halls University of Nottingham



Keywords: peer-assessment, assessment criteria, formative assessment, rubric-based scoring, analytic rubrics


At the University of Nottingham, peer assessment was piloted with the objective of helping students gain a greater understanding of marking criteria, so that they might improve their comprehension of, and solutions to, future mathematical tasks. The pilot involved a large first-year mathematics class who completed a formative piece of coursework prior to a problem class. At the problem class, students were trained in the use of marking criteria before anonymously marking peer work. The pilot was evaluated using questionnaires (97 responses) administered at the beginning and end of the problem class. The questionnaires elicited students' understanding of the criteria before and after the task, and students' self-efficacy in relation to assessment, self-control and self-regulation.

The analysis of students' descriptions of the assessment criteria shows that their understanding of the requirements for the task was expanded. After the class, explanation of the method and notation (consistent and correct) were much more prominent in students' descriptions. Furthermore, 67 per cent of students stated that they had specific ideas on how to improve their solutions to problems in the future. Students' self-perceived abilities to self-assess and improve were positively impacted, with improvement observed across all four factors of observation, emulation, self-control and self-regulation, thus providing evidence of a positive impact on student learning. The pilot gives strong evidence for the use of peer assessment to develop students' competencies as assessors, both in terms of their understanding of marking criteria and, more broadly, their ability to self-assess and regulate their learning.

Author Biographies

Chris Brignell, University of Nottingham

Lecturer in Statistics, School of Mathematical Sciences, University of Nottingham

Tom Wicks, University of Nottingham

Assistant Professor, School of Mathematical Sciences, University of Nottingham

Carmen Tomas, University of Nottingham

University Assessment Advisor, Teaching Transformation, University of Nottingham

Jonathan Halls, University of Nottingham

Researcher, Teaching Transformation, University of Nottingham


References

Bidgood, P. & Cox, B., 2002. Student Assessment in MSOR. MSOR Connections, 2(4), pp.9-13.

Boud, D., Ajjawi, R., Dawson, P. & Tai, J., eds., 2018. Developing evaluative judgement in higher education: Assessment for knowing and producing quality work. Abingdon, Oxon: Routledge.

Brookhart, S.M., 2018. Appropriate criteria: Key to effective rubrics. Frontiers in Education, 3, 22.

Dawson, P., 2017. Assessment rubrics: towards clearer and more replicable design, research and practice. Assessment & Evaluation in Higher Education, 42, pp.347-360.

Evans, C., 2013. Making sense of assessment feedback in higher education. Review of Educational Research, 83, pp.70-120.

Iannone, P. & Simpson, A., 2011. The summative assessment diet: How we assess in mathematics degrees. Teaching Mathematics and its Applications, 30, pp.186-196.

Jones, I. & Alcock, L., 2014. Peer-assessment without assessment criteria. Studies in Higher Education, 39, pp.1774-1787.

Jones, I. & Sirl, D., 2017. Peer assessment of mathematical understanding using comparative judgement. Nordic Studies in Mathematics Education, 22, pp.147-164.

Jönsson, A. & Svingby, G., 2007. The use of scoring rubrics: reliability, validity and educational consequences. Educational Research Review, 2, pp.130-144.

Mertler, C.A., 2001. Designing scoring rubrics for your classroom. Practical Assessment, Research and Evaluation, 7, pp.1-10. Available at: [Accessed 4 September 2019].

Messick, S., 1994. The interplay of evidence and consequences in the validation of performance assessments. Educational Researcher, 23, pp.13-23.

Newton, P.E., 1996. The reliability of marking of GCSE scripts: mathematics and English. British Educational Research Journal, 22, pp.405-420.

Panadero, E. & Broadbent, J., 2018. Developing evaluative judgement: a self-regulated learning perspective. In D. Boud, R. Ajjawi, P. Dawson & J. Tai, eds., Developing evaluative judgement in higher education: Assessment for knowing and producing quality work. Abingdon, Oxon: Routledge.

Panadero, E. & Jönsson, A., 2013. The use of scoring rubrics for formative assessment purposes revisited: A review. Educational Research Review, 9, pp.129-144.

Price, M., Rust, C., O’Donovan, B., Handley, K. & Bryant, R., 2012. Assessment literacy: The foundation for improving student learning. Oxford: Oxford Centre for Staff and Learning Development.

Reddy, Y.M. & Andrade, H., 2010. A review of rubric use in higher education. Assessment & Evaluation in Higher Education, 35, pp.435-448.

Sadler, D.R., 2009. Indeterminacy in the use of preset criteria for assessment and grading. Assessment & Evaluation in Higher Education, 34, pp.159-179.

Swan, M. & Burkhardt, H., 2012. Designing assessment of performance in mathematics. Educational Designer, 2, pp.1-41. Available at: [Accessed 4 September 2019].

Winstone, N.E., Nash, R.A., Parker, M. & Rowntree, J., 2017. Supporting learners’ agentic engagement with feedback: A systematic review and a taxonomy of recipience processes. Educational Psychologist, 52, pp.17-37.