The impact of peer assessment on mathematics students’ understanding of marking criteria and their ability to self-regulate learning

Chris Brignell, Tom Wicks, Carmen Tomas, Jonathan Halls

Abstract


At the University of Nottingham, peer assessment was piloted with the objective of helping students gain a greater understanding of marking criteria so that they might improve their comprehension of, and solutions to, future mathematical tasks. The study found improvement in all four factors of observation, emulation, self-control and self-regulation, thus providing evidence of a positive impact on student learning.

The pilot involved a large first-year mathematics class in which students completed a formative piece of coursework prior to a problem class. At the problem class, students were trained in the use of the marking criteria before anonymously marking peers' work. The pilot was evaluated using questionnaires (97 responses) administered at the beginning and end of the problem class. The questionnaires elicited students' understanding of the criteria before and after the task, and their self-efficacy in relation to assessment, self-control and self-regulation.

Analysis of students' descriptions of the assessment criteria shows that their understanding of the requirements for the task was expanded. After the class, explanation of the method and notation (consistent and correct) were far more prevalent in students' descriptions. Furthermore, 67 per cent of students stated they had specific ideas on how to improve their solutions to problems in the future. Students' self-perceived abilities to self-assess and improve were positively affected. The pilot gives strong evidence for the use of peer assessment to develop students' competencies as assessors, both in terms of their understanding of marking criteria and, more broadly, their ability to self-assess and regulate their learning.


Keywords


peer-assessment; assessment criteria; formative assessment; rubric-based scoring; analytic rubrics

DOI: https://doi.org/10.21100/msor.v18i1.1019
