"

Enhancing Student Engagement in Numerical Methods

Through Peer Evaluations and Self-Assessment Rubrics

Vivek Singhall, Ph.D.; Kenan Baltaci, Ph.D.; and Oai Ha, Ph.D.

Abstract

Student engagement is critical for achieving positive academic and social outcomes. However, maintaining high engagement levels in challenging math-based courses like numerical methods can be difficult. This study addresses this challenge by implementing two assessment-based strategies: a bonus point self-assessment rubric and peer evaluations. These interventions aim to incentivize engagement and reward essential skills such as effort and teamwork, which are often overlooked by traditional grading systems but are crucial for success beyond the classroom. While existing research highlights the potential benefits of self-assessments and peer evaluations, there is limited evidence on their use in technical courses like numerical methods. Additionally, few studies have explored how these strategies impact specific outcomes such as engagement, anxiety, and learning in a higher education STEM context. This study aims to address these gaps by investigating the effectiveness of bonus point self-assessment rubrics and peer evaluations in a numerical methods course, providing valuable insights for educators and researchers. The effectiveness of these strategies is evaluated by comparing student engagement levels in this course to those in other courses. Results indicate that for many students these strategies fostered active participation, reduced anxiety, and enhanced overall learning experiences, offering valuable insights into improving student engagement in challenging academic contexts.

Keywords: student engagement, alternative grading, peer evaluations, bonus points, math-based courses

Introduction

Student engagement is essential for achieving positive educational outcomes, both academically and socially (Guenther & Miller, 2011; Tinto, 2012; Hu & Kuh, 2002). However, sustaining high levels of engagement in challenging math-based courses, such as numerical methods, continues to be a significant challenge for educators. These courses are typically perceived as difficult and abstract, which can lead to decreased engagement and higher levels of anxiety among students (Freeman et al., 2014). Addressing these challenges requires targeted strategies to support learners and promote engagement.

Traditional grading systems often focus narrowly on academic performance, such as exam scores and assignment grades, while overlooking essential non-academic skills like effort, teamwork, and self-regulation (Brookhart, 2013). This narrow focus can discourage students from engaging in behaviors that are critical for long-term success, such as collaboration, persistence, and self-reflection. Additionally, traditional grading systems may contribute to grade anxiety, which can negatively impact student motivation and engagement (Pulfrey et al., 2011).

To address engagement challenges in math-based courses, educators have employed various strategies. Mindfulness practices and growth mindset approaches (Samuel et al., 2023) aim to cultivate a positive attitude toward learning and resilience in the face of challenges. Journal responses and hands-on learning centers (Finlayson, 2014) promote active learning and personal reflection, enhancing students’ engagement and understanding. Additionally, virtual and anonymous platforms for quantitative literacy (Latiolais & Laurence, 2009) provide alternative avenues for student participation, particularly benefiting those uncomfortable in traditional classroom settings. These strategies collectively contribute to fostering a supportive and inclusive learning environment that encourages both academic growth and personal development.

This study explores the implementation of two easily applied assessment-based strategies—peer evaluations and bonus point self-assessment—in a numerical methods course. Previous research, though limited in the context of STEM higher education, suggests that these strategies can effectively improve student engagement (Ingalls, 2018) and address limitations of traditional grading by incentivizing broader skill development and reducing student anxiety (Panadero et al., 2016).

Literature Review

Peer Evaluations

Peer evaluations serve as a valuable assessment tool that allows students to critically assess their classmates’ contributions in group or collaborative settings. This approach has been shown to foster a sense of responsibility, enhance communication skills, and encourage active participation, all of which contribute to greater student engagement (Topping, 2009). Moreover, these evaluations not only increase participation in group work but also help prepare students for real-world professional environments by simulating industry practices of performance assessment and feedback (Tessier, 2012; Chen & Lou, 2004; van Helden et al., 2022). Additionally, research suggests that peer evaluations provide students with exposure to diverse perspectives, facilitating deeper learning and self-improvement (Li et al., 2010).

Peer evaluations typically assess students’ contributions based on four key metrics: contribution to goals, communication, collaboration, and reliability. The contribution to goals metric encourages students to align their efforts with team objectives, reinforcing accountability and project management skills (Gaitonde et al., 2017; Menezes et al., 2021). Collaboration, a critical component of effective teamwork, emphasizes the integration of diverse perspectives and problem-solving strategies, which is particularly valuable in engineering disciplines where interdisciplinary cooperation is essential (Johri & Olds, 2011; Colston et al., 2017). Additionally, collaboration fosters interpersonal skills such as conflict resolution and negotiation, both of which are crucial in professional settings (Menezes et al., 2021).

The reliability metric ensures that students fulfill their responsibilities and meet deadlines, thereby promoting a strong work ethic and accountability (Temel et al., 2013; Francis et al., 2022). Reliable contributions are essential for maintaining workflow efficiency, an important aspect of engineering project management. Finally, the communication metric assesses students’ ability to convey ideas clearly and effectively, reducing misunderstandings and improving teamwork—especially in complex technical discussions that characterize engineering projects (Balta & Awedh, 2017; Stump et al., 2011). Evaluating communication skills through peer assessments helps students identify areas for improvement, thereby contributing to the development of essential professional competencies (Berge & Weilenmann, 2014).

By incorporating these structured assessment criteria into peer evaluations, students not only refine their technical and teamwork skills but also cultivate professional attributes that align with the collaborative nature of engineering practice (van Helden et al., 2022). Prior research has established that assigning 25–40% of group work grades to peer evaluations is an effective strategy for enhancing engagement and accountability; this study follows Holland and Feigenbaum’s (1998) recommendation of allocating 30% of the grade to peer assessments.
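
To make the weighting concrete, the sketch below shows one way a 30% peer-assessment share could be blended with an instructor-assigned project grade. The article specifies only the allocation percentage and the 8-point peer rubric (see the appendix); the averaging of peer ratings, the 0–100 scale, and the function itself are illustrative assumptions rather than the authors’ exact procedure.

```python
def combined_group_grade(instructor_score, peer_ratings, peer_weight=0.30):
    """Illustrative blend of an instructor-assigned project grade with peer evaluations.

    instructor_score: instructor's project grade on a 0-100 scale (assumed scale).
    peer_ratings: one student's peer rubric totals, each out of 8 points
                  (communication, collaboration, reliability, contribution to goals).
    peer_weight: share of the group-work grade given to peer assessment
                 (0.30, following Holland & Feigenbaum, 1998).
    """
    # Average the peers' rubric totals and rescale from the 0-8 rubric to 0-100.
    peer_score = 100 * (sum(peer_ratings) / len(peer_ratings)) / 8
    return (1 - peer_weight) * instructor_score + peer_weight * peer_score

# Example: the project earns 90 from the instructor; three peers rate the student 8, 7, and 6.
print(round(combined_group_grade(90, [8, 7, 6]), 1))  # 89.2
```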

Bonus Point Self-Assessments

Self-assessment rubrics provide students with a structured framework to evaluate their own performance based on predefined criteria, fostering self-regulation, metacognitive skills, and a heightened sense of accountability (Andrade, 2019). Engaging students in the assessment process helps clarify learning objectives and expectations, leading to a deeper understanding of course material (Boud & Falchikov, 2006). Additionally, incorporating self-assessment can alleviate anxiety associated with evaluation. The National Council of Teachers of Mathematics (NCTM) suggests that self-assessment strategies can reduce math anxiety (Furner & Gonzalez-DeHass, 2011), which is a known barrier to student engagement and achievement (Latiolais & Laurence, 2009; Samuel et al., 2023; Finlayson, 2014; Furner & Gonzalez-DeHass, 2011). Given that disengagement can further exacerbate this anxiety (Latiolais & Laurence, 2009), encouraging active participation through self-assessment can contribute to improved performance in mathematics and other STEM disciplines (Wahid et al., 2014; Head & Lindsey, 1983; Chan, 2001).

More broadly, assessment-related anxiety is a prevalent challenge in high-stakes or rigorous STEM courses (Zeidner, 1998). Alternative assessment strategies, including self-assessment rubrics and peer evaluations, have been shown to mitigate this issue by granting students greater control over their learning and assessment experiences (Panadero et al., 2017). These approaches also enhance students’ perceptions of fairness by increasing transparency and inclusivity in the evaluation process (Rust et al., 2003).

Incentive-based assessment methods, such as awarding bonus points, have been employed in educational settings to encourage participation in class discussions, completion of assignments, and collaboration with peers (Cameron & Pierce, 1994). However, research on the specific effects of bonus points on student motivation, engagement, and course performance in higher education remains limited (Moll & Gao, 2022). Existing studies suggest that well-designed incentive systems can strengthen intrinsic motivation by aligning rewards with meaningful learning outcomes (Deci et al., 1999). When implemented effectively, bonus points can enhance engagement without becoming an undue burden, thereby contributing to improved academic performance (Dunn et al., 2020; Rassuli, 2012). Moreover, research indicates that structured incentive programs can lead to sustained increases in student participation and learning outcomes (Ingalls, 2018; Moll & Gao, 2022).

The Bonus Point Self-Assessment Rubric provides students with the opportunity to evaluate their own performance across multiple criteria, with each metric contributing up to 1% toward a potential maximum of 5% in bonus points. These metrics include attendance at office hours, active participation in class discussions, comprehension of course content, effort exerted, and points lost on class assignments due to gaps in prerequisite knowledge. This rubric not only incentivizes engagement but also reinforces behaviors that enhance learning outcomes. Recognizing class participation and effort has been shown to increase student interest and academic achievement. Additionally, encouraging attendance at office hours—an intervention positively correlated with academic success—provides students with valuable support and guidance (Guerrero & Rod, 2013; Schinske & Tanner, 2014; O’Connor, 2013).

A particularly notable aspect of the rubric is its inclusion of a prerequisite knowledge category, which specifically supports students struggling with foundational concepts. By acknowledging their efforts and the challenges they face, this approach fosters confidence and promotes fairness, ensuring that students from diverse academic backgrounds receive equitable consideration.

This structure aligns with broader educational strategies aimed at fostering inclusive learning environments and reducing barriers to student success.
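
Relating the rubric in the appendix (five criteria scored 0–2 points each, 10 points total) to the 1%-per-metric, 5% cap described above suggests a simple linear conversion. The sketch below assumes that mapping; the article does not state the exact formula, so the function and its names are illustrative only.

```python
def bonus_percentage(rubric_points, max_points=10, max_bonus=5.0):
    """Convert a bonus point self-assessment total into a course bonus percentage.

    rubric_points: self-assessed points across the five criteria (office hours,
                   participation, understanding, effort, prerequisite knowledge),
                   each scored 0-2 points.
    Assumes a linear mapping: 2 rubric points per criterion equal a 1% bonus,
    capped at 5% of the course grade (an assumption, not a stated formula).
    """
    return min(rubric_points, max_points) / max_points * max_bonus

print(bonus_percentage(7))   # 3.5 -> a 3.5% bonus for 7 of 10 points
print(bonus_percentage(10))  # 5.0 -> the full 5% maximum
```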

Materials and Methods

At the start of the semester, students are provided with both rubrics (see the appendix): one for peer evaluation and one for the bonus point self-assessment. The rubrics were developed by reviewing the engineering education literature to identify metrics with proven importance in real-world engineering practice. They feature specific, measurable questions on an ordinal scale to capture varying levels of peer and self-perceptions. To ensure validity and reliability, the rubrics underwent pilot testing with feedback from colleagues, which led to minor revisions to improve clarity and effectiveness.

Providing the rubrics at the start of the semester establishes clear expectations, ensuring that students understand the standards they and their peers will be evaluated against, which fosters accountability and transparency. At the end of the semester, students assess their own and their peers’ performance using these rubrics. This process encourages reflection on their own work and contributions, as well as those of their classmates. Finally, students complete an anonymous survey to assess the effectiveness and success of the rubrics. This survey is designed to gather detailed feedback on various aspects of the rubrics, including clarity, fairness, and their impact on learning and engagement. The rubrics were implemented across two semesters: 29 students participated in the study during the first semester, and another 22 participated during the second semester. Ethics approval for the study was obtained from our Institutional Review Board (protocol #IRB-FY2024-10). Survey data are reported only for students who signed an IRB-approved consent form at the beginning of the semester.

Results

Survey Questions and Student Responses to Measure Impact

Table 1 outlines the survey questions designed to measure the impact of the bonus point self-assessment rubric and peer evaluations on various aspects of student engagement, learning, anxiety alleviation, and overall course performance. The student responses in Figure 1 provide valuable insights into how these alternative grading practices influenced their motivation and academic outcomes.

Table 1: Survey Questions to Measure Impact of Bonus Point Self-Assessment Rubric and Peer Evaluation
Question # Impact Area Survey Question
Q1 Group Engagement Do you believe that the survey-based grading criteria influenced your motivation to actively engage with your group?
Q2 Engagement in Class Discussions and Assignments Do you believe that the survey-based grading criteria influenced your motivation to actively engage in class discussions and assignments?
Q3 Seeking Help and Clarifications Has the survey-based grading criteria encouraged you to seek help or clarification when you encounter difficulties?
Q4 Learning Do you believe that the survey-based grading criteria has positively affected your learning in the course material?
Q5 Relieving Grade Anxiety Did the survey-based grading criteria help alleviate some of the anxiety associated with grades?
Q6 Performance in Course Did you perceive your academic performance improve due to the introduction of the survey-based criteria?

 

Grouped bar chart displaying survey responses for questions Q1 through Q6. For each question, four response categories are shown: Significantly, Yes, Neutral, and No. Q1 results: Significantly 9, Yes 24, Neutral 12, No 6. Q2 results: Significantly 3, Yes 24, Neutral 18, No 6. Q3 results: Significantly 7, Yes 27, Neutral 9, No 8. Q4 results: Significantly 6, Yes 18, Neutral 22, No 5. Q5 results: Significantly 3, Yes 19, Neutral 21, No 8. Q6 results: Significantly 1, Yes 20, Neutral 29, No 1.
Figure 1: Results of survey questions to measure impact of bonus point self-assessment rubric and peer evaluation.

Survey Questions and Student Responses to Measure Clarity, Fairness, and Future Implementation

To further understand the effectiveness and reception of the alternative grading practices, students were surveyed on the clarity, fairness, and potential for future implementation of these strategies. Table 2 presents the survey questions that students were asked, and Figure 2 presents the student responses to these questions, providing insights into how well the grading criteria were understood, perceived as fair, and whether students would recommend their use in other courses.

Table 2: Survey Questions to Measure Clarity and Fairness of Interventions and Recommendations for Future Courses
Question # Impact Area Survey Question
Q7 Process Clarity How satisfied are you with the clarity of the survey-based grading criteria provided for this course?
Q8 Process Understanding How well do you understand the objectives and expectations of the survey-based grading criteria?
Q9 Process Fairness How fair do you consider the survey-based grading criteria in evaluating your performance?
Q10 Recommendations for Use in Future Courses Would you recommend this survey-based grading criteria in other courses?

 

Four bar charts showing responses to Q7–Q10, each with four categories. Q7: most respondents answered Satisfied. Q8: most answered Well. Q9: most answered Fair. Q10: most answered Neutral. Bars use consistent colors for each response category.
Figure 2: Results of survey questions to measure clarity and fairness of interventions and implementation recommendations in future courses.

Notable Student Comments

To gain a deeper understanding of the personal impact and perceived benefits of the alternative grading strategies, students were invited to share their comments. This section highlights notable student feedback, illustrating how the bonus point self-assessment rubric and peer evaluations influenced their engagement, learning experiences, and overall satisfaction with the course.

“Incentivizes introverted people to attempt to be more extroverted. Being introverted myself it showed what things I need to work on, like asking the professor for help when needed.”

“Yes, it pushed me to get work done earlier so I could ask questions if I had any issues.”

“Relieved grading anxiety and made me feel more accomplished at the end of the course.”

“Strongest influence when it comes to group work. If you are not participating, they can review you negatively.”

“Felt the need to be able to explain my work rather than just answering questions.”

Discussion

The alternative assessment strategies implemented in this study—peer evaluations and the bonus point self-assessment rubric—had positive impacts on student engagement, learning, and satisfaction. Specifically, 64.7% of students reported that the assessment criteria motivated them to engage in group work (Q1), while 52.9% noted increased participation in class discussions and assignments (Q2). Additionally, 66.7% indicated that the assessment structure encouraged them to seek help when needed (Q3). These results highlight the efficacy of the assessment strategies in promoting active student engagement and fostering a more collaborative learning environment.

Regarding learning outcomes, 47.1% of students perceived a positive effect on their learning experience (Q4). The assessment approach also contributed to reducing grade-related anxiety for 43.1% of students (Q5), and 41.2% believed it supported their academic performance, while 56.9% remained neutral on this aspect (Q6). These findings align with previous research indicating that assessment strategies that allow for more student input and self-reflection can reduce anxiety and improve learning outcomes.
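
The percentages reported above can be reproduced from the Figure 1 counts by pooling the “Significantly” and “Yes” responses over the 51 participants (29 plus 22 across the two semesters); the short check below assumes that pooling.

```python
# Response counts from Figure 1: (Significantly, Yes, Neutral, No) for each question.
counts = {
    "Q1": (9, 24, 12, 6),
    "Q2": (3, 24, 18, 6),
    "Q3": (7, 27, 9, 8),
    "Q4": (6, 18, 22, 5),
    "Q5": (3, 19, 21, 8),
    "Q6": (1, 20, 29, 1),
}

for q, (sig, yes, neutral, no) in counts.items():
    total = sig + yes + neutral + no           # 51 respondents per question
    agree = 100 * (sig + yes) / total          # "Significantly" + "Yes"
    print(f"{q}: {agree:.1f}% agree, {100 * neutral / total:.1f}% neutral")
# Q1 64.7%, Q2 52.9%, Q3 66.7%, Q4 47.1%, Q5 43.1%, Q6 41.2% agree (56.9% neutral)
```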

An important aspect of the assessment strategies was the clarity and fairness with which students perceived them. Most students indicated that they found the assessment criteria to be clear and understandable, and they viewed the grading system as fair. Specifically, 82.4% of students were satisfied with the clarity of the assessment criteria (Q7), 78.4% reported a clear understanding of the assessment objectives (Q8), and 84.3% found the assessment strategy to be fair (Q9). These results are crucial because transparency in the assessment process can foster a sense of trust and accountability between instructors and students.

Although the assessment strategies implemented in this study were well-received, student responses to whether they would recommend these strategies for future courses (Q10) were more moderate. While 45.1% of students indicated they were likely or extremely likely to recommend these assessment strategies (Q10), 50.9% remained neutral. This mixed response may reflect the subjective nature of assessment preferences and the challenge of finding an assessment system that satisfies all students. Some students may feel more comfortable with traditional grading methods and might not appreciate the self-reflection and peer evaluation components, which can introduce a level of uncertainty. The neutral responses may indicate that, while students generally appreciate these assessment methods, they may need more time to fully adapt to the self-assessment and peer evaluation processes. In courses that integrate peer evaluations and self-assessments, it may be helpful for instructors to provide additional guidance and support to help students feel more comfortable with these processes.

These tools—peer evaluations and the bonus point self-assessment rubric—can be adapted to various courses across disciplines, particularly in settings where student engagement and teamwork are essential. For example, in project-based courses in fields such as engineering, business, or computer science, peer evaluations can be used to assess group dynamics, communication, and individual contributions. The bonus point rubric, on the other hand, can incentivize students to engage more actively in class discussions, attend office hours, and seek help when needed, regardless of discipline. The rubrics are managed through the Canvas learning management system, ensuring that even as class sizes grow, the process of conducting peer evaluations remains manageable for both students and instructors. Moreover, the time needed for students to fill out the rubrics is minimal, typically taking only 10–20 minutes, and does not interfere with their learning experience, especially in courses with demanding workloads.

Limitations and Challenges

Peer evaluations in group work, while beneficial in many respects, also present several limitations that can affect their effectiveness and fairness. One significant limitation is the potential for bias and inconsistency in the assessments provided by students (Bidna, 2024; Falchikov & Goldfinch, 2000). Research indicates that peer evaluations can be influenced by personal relationships, leading to favoritism or unfair scoring (Cook et al., 2017; Khalid, 2023). The reliability of peer evaluations can also diminish when the group size exceeds a manageable number, as larger groups may lead to less engagement and accountability among members (Yoon et al., 2018). Another challenge is free-riding, where some students contribute less while still receiving evaluations similar to those of their more active peers, compromising the fairness of the evaluation system (Eguchi et al., 2020; Khalid, 2023).

The impact of peer evaluations on diverse student populations must also be considered. Students from different cultural backgrounds may have varying perceptions of teamwork and individual contributions, which can affect their evaluation practices (Li, 2011). For instance, students who are less vocal or assertive may receive lower scores due to their quieter demeanor, despite making valuable contributions in other ways (Li, 2011; Garcés et al., 2023). This disparity can lead to inequitable outcomes, where certain groups of students are disadvantaged by the evaluation process.

The use of bonus point self-assessment rubrics in educational settings presents several limitations and challenges that can impact their effectiveness. While these rubrics are designed to enhance self-regulation, provide clarity in expectations, and promote accountability among students, they also face significant hurdles. One of the primary criticisms of self-assessment rubrics is the potential for subjectivity in the evaluations provided by students. Research indicates that students may struggle with accurately assessing their own contributions, leading to inflated self-ratings (Chowdhury, 2018). This subjectivity can undermine the reliability of the assessment process (Asli & Matore, 2023). Additionally, excessive reliance on extrinsic rewards may undermine intrinsic motivation and lead to superficial engagement (Kohn, 1999). Balancing incentives with opportunities for authentic learning is therefore essential. Addressing these challenges requires careful design, clear guidelines, and ongoing support to ensure that self-assessment rubrics achieve their intended educational goals.

Conclusion

The study highlights the promising impact of bonus point self-assessment rubrics and peer evaluations in enhancing student engagement in challenging math-based courses like numerical methods. These strategies have been successful in incentivizing participation, reducing grade-related anxiety, and fostering a more inclusive and motivating learning environment. Many students reported increased involvement in group work, greater participation in class discussions, and a stronger willingness to seek help, indicating that these methods effectively address common challenges in student engagement. Additionally, the clarity and fairness of these assessment criteria were well-received, with many students recommending their implementation in other courses.

Despite these benefits, these assessments are not without limitations. Peer evaluations, while effective in fostering collaboration and accountability, can introduce concerns related to bias, free-riding, and scalability, potentially impacting the overall fairness and effectiveness of assessments. Similarly, self-assessment rubrics may be subject to issues of subjectivity and varying levels of student self-awareness, which can influence the reliability of the results. Furthermore, ensuring that these strategies are equitable across diverse student populations remains a key consideration for their successful implementation.

Although the study did not include a direct control group using traditional grading, student feedback and engagement metrics suggest substantial improvements. For example, 64.7% of students reported that the grading criteria motivated them to participate in group work, while 66.7% indicated that it encouraged them to seek help when needed. Additionally, 52.9% of students noted increased engagement in class discussions and assignments, suggesting a shift in student behavior compared to conventional grading models. These findings provide strong preliminary evidence that these assessment strategies can play a crucial role in fostering student engagement and motivation.

The overall findings of this research align with those of other studies, though such work remains limited in higher education and even more so in STEM.

To strengthen the validity of these findings, future studies should incorporate comparative analyses by implementing both traditional and alternative grading methods within the same course or across different sections. This would allow for a more concrete, data-driven assessment of their impact. Additionally, longitudinal studies could provide valuable insights into the long-term effectiveness and sustainability of these grading practices, ensuring that they continue to support student motivation and success over time.

Another important consideration is the refinement of survey methodologies to minimize bias in student feedback. Some survey questions may have been framed in a way that encourages positive responses, potentially skewing the results toward a more favorable perception of the grading strategies. For instance, using “actively engage” instead of “engage” in survey questions 1 and 2 could introduce bias. Additionally, survey question 5 may also be considered leading. To address this, future surveys should incorporate neutral wording, a mix of Likert-scale and open-ended questions, and opportunities for students to express both positive and critical feedback. This approach will provide a more balanced and accurate representation of student experiences and perceptions.

In conclusion, the peer evaluation and bonus point self-assessment strategies represent a valuable opportunity to shift the focus towards more collaborative, self-reflective, and effort-based evaluation models. By reducing math anxiety, enhancing teamwork skills, and promoting a more supportive learning environment, these methods have the potential to improve student engagement and academic performance. As education continues to evolve, integrating innovative assessment approaches will be essential in fostering meaningful learning experiences and ensuring student success in mathematically intensive courses.

Acknowledgements

Special thanks to: UW Stout Provost’s Office, OPID, Valerie Barske, Heather Pelzel, Sylvia Tiala, all my OPID peers.

References

Andrade, H. L. (2019). A critical review of research on student self-assessment. Frontiers in Education, 4, 87. https://doi.org/10.3389/feduc.2019.00087

Asli, N., & Matore, M. (2023). Dear second language learners (L2): The complete guide of primary trait scoring rubric for self and peer assessment (SAPA). International Journal of Academic Research in Progressive Education and Development, 12(2), 2498–2512. https://doi.org/10.6007/ijarped/v12-i2/18096

Balta, N., & Awedh, M. (2017). The effect of student collaboration in solving physics problems using an online interactive response system. European Journal of Educational Research, 6(3), 385–394. https://doi.org/10.12973/eu-jer.6.3.385

Berge, M., & Weilenmann, A. (2014). Learning about friction: Group dynamics in engineering students’ work with free body diagrams. European Journal of Engineering Education, 39(6), 601–616. https://doi.org/10.1080/03043797.2014.895708

Bidna, T. (2024). Impact of rubrics on students’ self-assessment and overall performance in an EAP writing course. GJELT, 4(1), 1–6. https://doi.org/10.20448/gjelt.v4i1.6060

Boud, D., & Falchikov, N. (2006). Aligning assessment with long-term learning. Assessment & Evaluation in Higher Education, 31(4), 399–413. https://doi.org/10.1080/02602930600679050

Brookhart, S. M. (2013). How to create and use rubrics for formative assessment and grading. ASCD.

Cameron, J., & Pierce, W. D. (1994). Reinforcement, reward, and intrinsic motivation: A meta-analysis. Review of Educational Research, 64(3), 363–423. https://doi.org/10.3102/00346543064003363

Chan, E. (2001). Improving student performance by reducing anxiety. Positive Pedagogy: Successful and Innovative Strategies in Higher Education, 1(3), 1–4.

Chen, Y., & Lou, H. (2004). Students’ perceptions of peer evaluation: An expectancy perspective. Journal of Education for Business, 79(5), 275–282. https://doi.org/10.3200/JOEB.79.5.275-282

Chowdhury, F. (2018). Application of rubrics in the classroom: A vital tool for improvement in assessment, feedback and learning. International Education Studies, 12(1), 61–68. https://doi.org/10.5539/ies.v12n1p61

Colston, N., Thomas, J., Ley, M., Ivey, T., & Utley, J. (2017). Collaborating for early‐age career awareness: A comparison of three instructional formats. Journal of Engineering Education, 106(2), 326–344. https://doi.org/10.1002/jee.20166

Cook, A., Hartman, M., Luo, N., Sng, J., Fong, N., Lim, W., Chen, M. I., Wong, M. L., Rajaraman, N., Lee, J. J., & Koh, G. (2017). Using peer review to distribute group work marks equitably between medical students. BMC Medical Education, 17, Article 172. https://doi.org/10.1186/s12909-017-0987-z

Deci, E. L., Koestner, R., & Ryan, R. M. (1999). A meta-analytic review of experiments examining the effects of extrinsic rewards on intrinsic motivation. Psychological Bulletin, 125(6), 627–668. https://doi.org/10.1037/0033-2909.125.6.627

Dunn, B. L., Fontanier, C., Luo, Q., & Goad, C. (2020). Student perceptions of bonus points in terms of offering, effort, grades, and learning. NACTA Journal, 65, 168–172.

Eguchi, H., Sakiyama, H., Naruse, H., Yoshihara, D., Fujiwara, N., & Suzuki, K. (2020). Introduction of team‐based learning improves understanding of glucose metabolism in biochemistry among undergraduate students. Biochemistry and Molecular Biology Education, 49(3), 383–391. https://doi.org/10.1002/bmb.21485

Falchikov, N., & Goldfinch, J. (2000). Student peer assessment in higher education: A meta-analysis comparing peer and teacher marks. Review of Educational Research, 70(3), 287–322. https://doi.org/10.3102/00346543070003287

Finlayson, M. (2014). Addressing math anxiety in the classroom. Improving Schools, 17(1), 99–115. https://doi.org/10.1177/1365480214521457

Francis, R., Riedner, R., & Paretti, M. (2022). Building research capabilities at the intersection of engineering education, systems engineering, and writing studies. In Proceedings of 9th Research in Engineering Education Symposium and 32nd Australasian Association for Engineering Education Conference (REES AAEE 2021). Research In Engineering Education Network. https://doi.org/10.52202/066488-0093

Freeman, S., Eddy, S. L., McDonough, M., Smith, M. K., Okoroafor, N., Jordt, H., & Wenderoth, M. P. (2014). Active learning increases student performance in science, engineering, and mathematics. Proceedings of the National Academy of Sciences, 111(23), 8410–8415. https://doi.org/10.1073/pnas.1319030111

Furner, J. M., & Gonzalez-DeHass, A. (2011). How do students’ mastery and performance goals relate to math anxiety? Eurasia Journal of Mathematics, Science and Technology Education, 7(4), 227–242. https://doi.org/10.12973/ejmste/75209

Gaitonde, U., Tembe, B., & Kamble, S. (2017). Computer-assisted collaborative concept mapping in engineering education. International Journal of Information and Education Technology, 7(6), 469–473. https://doi.org/10.18178/ijiet.2017.7.6.914

Garcés, A., Sánchez‐Barba, L., Pérez‐Quintanilla, D., Conde, M., Martínez, G., Márquez, E., Montoro, Ó. R., & Vargas, C. (2023). Design and validation of a rubric to assess problem-solving skills of students in a chemistry context. In Proceedings of The 6th International Conference on New Trends in Teaching and Education. Diamond Scientific Publishing. https://doi.org/10.33422/6th.ntteconf.2023.05.111

Guenther, C. L., & Miller, R. L. (2011). Factors that promote student engagement. In R. L. Miller, E. A. Kowalewski, B. M. Kowalewski, B. C. Beins, K. D. Keith, & B. F. Peden (Eds.), Promoting Student Engagement (Vol. 1, pp. 10–17). Society for the Teaching of Psychology.

Guerrero, M., & Rod, A. B. (2013). Engaging in office hours: A study of student–faculty interaction and academic performance. Journal of Political Science Education, 9(4), 403–416. https://doi.org/10.1080/15512169.2013.835554

Head, L. Q., & Lindsey, J. D. (1983). Anxiety and the university student: A brief review of the professional literature. College Student Journal, 17, 176–182.

van Helden, G., Zandbergen, B., Specht, M., & Gill, E. (2022). Student perceptions on a collaborative engineering design course. In Proceedings of SEFI: Towards a New Future in Engineering Education, New Scenarios That European Alliances of Tech Universities Open Up. Universitat Politècnica de Catalunya. https://doi.org/10.5821/conference-9788412322262.1323

Holland, N., & Feigenbaum, L. (1998). Using peer evaluations to assign grades on group projects. Journal of Construction Education, 3(3), 182–188.

Hu, S., & Kuh, G. D. (2002). Being (dis)engaged in educationally purposeful activities: The influences of student and institutional characteristics. Research in Higher Education, 43, 555–575. https://doi.org/10.1023/A:1020114231387

Ingalls, V. (2018). Incentivizing with bonus in a college statistics course. REDIMAT-Journal of Research in Mathematics Education, 7(1), 93–103. https://doi.org/10.17583/redimat.2018.2497

Johri, A., & Olds, B. (2011). Situated engineering learning: Bridging engineering education research and the learning sciences. Journal of Engineering Education, 100(1), 151–185. https://doi.org/10.1002/j.2168-9830.2011.tb00007.x

Khalid, F. (2023). Accounting students’ perceptions towards group work. In European Proceedings of Finance and Economics (pp. 1006–1018). European Publisher. https://doi.org/10.15405/epfe.23081.93

Kohn, A. (1999). Punished by rewards: The trouble with gold stars, incentive plans, A’s, praise, and other bribes. Houghton Mifflin Harcourt.

Latiolais, M. P., & Laurence, W. (2009). Engaging math-avoidant college students. Numeracy, 2(2), Article 5.

Li, L. (2011). How do students of diverse achievement levels benefit from peer assessment? International Journal for the Scholarship of Teaching and Learning, 5(2). https://doi.org/10.20429/ijsotl.2011.050214

Li, L., Liu, X., & Steckelberg, A. L. (2010). Assessor or assessee: How student learning improves by giving and receiving peer feedback. British Journal of Educational Technology, 41(3), 525–536. https://doi.org/10.1111/j.1467-8535.2009.00968.x

Menezes, F., Rodrigues, R., & Kanchan, D. (2021). Impact of collaborative learning in electrical engineering education. Journal of Engineering Education Transformations, 34, 116–120. https://doi.org/10.16920/jeet/2021/v34i0/157117

Moll, J., & Gao, S. (2022, October). Awarding bonus points as a motivator for increased engagement in course activities in a theoretical system development course. In 2022 IEEE Frontiers in Education Conference (FIE) (pp. 1–8). IEEE.

O’Connor, K. (2013). Class participation: Promoting in-class student engagement. Education, 133(3), 340–344.

Panadero, E., Brown, G. T. L., & Strijbos, J. W. (2017). The future of student self-assessment: A review of known unknowns and potential directions. Educational Psychology Review, 28(4), 803–830. https://doi.org/10.1007/s10648-015-9350-2

Panadero, E., Jonsson, A., & Strijbos, J. W. (2016). Scaffolding self-regulated learning through self-assessment and peer assessment: Guidelines for classroom implementation. In D. Laveault & L. Allal (Eds.), Assessment for learning: Meeting the challenge of implementation (pp. 311–326). Springer. https://doi.org/10.1007/978-3-319-39211-0_18

Pulfrey, C., Buchs, C., & Butera, F. (2011). Why grades engender performance-avoidance goals: The mediating role of autonomous motivation. Journal of Educational Psychology, 103(3), 683–700. https://doi.org/10.1037/a0023911

Rassuli, A. (2012). Engagement in classroom learning: Creating temporal participation incentives for extrinsically motivated students through bonus credits. Journal of Education for Business, 87(2), 86–93. https://doi.org/10.1080/08832323.2011.570808

Rust, C., Price, M., & O’Donovan, B. (2003). Improving students’ learning by developing their understanding of assessment criteria and processes. Assessment & Evaluation in Higher Education, 28(2), 147–164. https://doi.org/10.1080/02602930301671

Samuel, T. S., Buttet, S., & Warner, J. (2023). “I can math, too!”: Reducing math anxiety in STEM-related courses using a combined mindfulness and growth mindset approach (MAGMA) in the classroom. Community College Journal of Research and Practice, 47(10), 613–626. https://doi.org/10.1080/10668926.2022.2050843

Schinske, J., & Tanner, K. (2014). Teaching more by grading less (or differently). CBE—Life Sciences Education, 13(2), 159–166. https://doi.org/10.1187/cbe.CBE-14-03-0054

Schuman, H., Walsh, E., Olson, C., & Etheridge, B. (1985). Effort and reward: The assumption that college grades are affected by quantity of study. Social Forces, 63(4), 945–966. https://doi.org/10.2307/2578600

Stump, G., Hilpert, J., Husman, J., Chung, W., & Kim, W. (2011). Collaborative learning in engineering students: Gender and achievement. Journal of Engineering Education, 100(3), 475–497. https://doi.org/10.1002/j.2168-9830.2011.tb00023.x

Temel, S., Scholten, V., Akdeniz, R., Fortuin, F., & Omta, O. (2013). University–industry collaboration in Turkish SMEs. The International Journal of Entrepreneurship and Innovation, 14(2), 103–115. https://doi.org/10.5367/ijei.2013.0109

Tessier, J. T. (2012). Effect of peer evaluation format on student engagement in a group project. Journal of Effective Teaching, 12(2), 15–22.

Tinto, V. (2012). Enhancing student success: Taking the classroom seriously. Student Success, 3(1), 1–8. https://doi.org/10.5204/intjfyhe.v3i1.119

Topping, K. J. (2009). Peer assessment. Theory into Practice, 48(1), 20–27. https://doi.org/10.1080/00405840802577569

Trowler, V. (2010). Student engagement literature review. The Higher Education Academy. https://www.advance-he.ac.uk/knowledge-hub/student-engagement-literature-review

Wahid, S. N. S., Yusof, Y., & Razak, M. R. (2014). Math anxiety among students in higher education level. Procedia – Social and Behavioral Sciences, 123, 232–237. https://doi.org/10.1016/j.sbspro.2014.01.1419

Yoon, H., Park, W., Myung, S., Moon, S., & Park, J. (2018). Validity and reliability assessment of a peer evaluation method in team-based learning classes. Korean Journal of Medical Education, 30(1), 23–29. https://doi.org/10.3946/kjme.2018.78

Zeidner, M. (1998). Test anxiety: The state of the art. Springer.

Appendix

Bonus Point Self-Assessment Rubric and Peer Evaluation Rubric

Note: While some sections of the rubrics were created anew, others were adapted from preexisting ones to suit the context.

Individual Bonus Point Self-Assessment Rubric
Excellent (2 points) Good (1 point) Satisfactory/Needs Improvement (0 points)
Office Hours Office hours attended (5+). Office hours attended (2–4). Office hours attended (0–1).
Participation Actively participates in class discussions and asks thoughtful questions. Occasionally participates in class discussions and asks thoughtful questions. Rarely participates in class discussions; contributions are minimal or lack depth.
Understanding Can self-replicate all the work on all the assignments. Can self-replicate most work on all the assignments. Can self-replicate only some of the work on all the assignments.
Effort
Excellent (2 points): I worked extremely hard for this class. I worked through assignments even when things got challenging. I actively sought clarifications and guidance from my professor and peers when I got stuck.
Good (1 point): I worked relatively hard for this class. I mostly worked through assignments even when things got challenging. I often sought clarifications and guidance from my professor and peers when I got stuck.
Satisfactory/Needs Improvement (0 points): I put in some effort in the class, though not as much as I should have. I occasionally worked through challenges in assignments. I rarely sought clarifications and guidance from my professor and peers when I got stuck.
Pre-requisite Knowledge The prerequisite math knowledge caused me to lose points on 3+ assignments. The prerequisite math knowledge caused me to lose points on 1–2 assignments. The prerequisite math knowledge did not cause me to lose points on any of the assignments.
Total Points Earned:     / 10

 

 

Group Work Peer Evaluation Rubric
Excellent (2 points) Good (1 point) Satisfactory/Needs Improvement (0 points)
Communication Always effectively communicated ideas and actively and respectfully listened to others. Mostly communicated ideas effectively and actively and respectfully listened to others. Occasionally/rarely communicated ideas effectively or listened actively and respectfully to others.
Collaboration Worked well with all group members to achieve common goals. Worked well with most group members and contributed positively to the group dynamic. Minor conflicts or disagreements occasionally arose during collaboration.
Reliability Consistently fulfilled responsibilities, met deadlines, and could be relied upon to contribute to the project’s progress. Mostly fulfilled responsibilities and met deadlines, but occasional instances of missed deadlines may have occurred. Generally completed tasks on time, but reliability issues occasionally/often led to delays.
Contribution to Goals Consistently contributed to group goals and achieved the project’s success. Generally contributed to group goals and achieved the project’s success. Occasionally/rarely fulfilled assigned tasks and responsibilities to help achieve group goals.
Group Work Peer Evaluation Rubric: Scoresheet
Member Name #1 Self: #2: #3: #4:
  Communication
  Collaboration
  Reliability
  Contribution to goals
Point Total     / 8     / 8     / 8     / 8

 


License


Journal on Empowering Teaching Excellence, Fall 2025 Copyright © 2025 by Utah State University is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.