Approaches to Evaluating Blended Courses

Mateja R. Savoie-Roskos, Ph.D., MPH, RD; Stacy Bevan, MS, RD; Rebecca Charlton, MPH, RD; and Marlene Israelsen Graf, MS, RD

Abstract

Blended learning, sometimes referred to as hybrid or flexible learning, is becoming increasingly common in higher education. Unfortunately, many instructors receive limited training on how to effectively evaluate blended courses and, as a result, commonly rely solely on end-of-semester evaluations. Because blended courses are more complex to design and implement, instructors should consider utilizing a variety of course evaluation methods. This article presents research-based approaches for evaluating blended courses based on feedback from students, peers, and instructional designers. This combination of formalized feedback is offered as one strategy to ensure instructors achieve course learning objectives and meet student learning needs. Most importantly, feedback gathered through these various evaluation methods can be used for continued course improvement.

Introduction

Blended learning, sometimes referred to as hybrid or flexible learning, is becoming increasingly common in higher education. Although the overall layout and structure of blended courses can vary considerably, all blended learning courses consist of both synchronous and asynchronous instruction (Wengreen, Dimmick, & Israelsen, 2015). Synchronous instruction occurs in real time and typically describes instructor-led, face-to-face interaction in a classroom. In contrast, asynchronous learning usually occurs in an online environment where students and the instructor are not all present or online at the same time (Wengreen et al., 2015).

Flipped, or inverted, learning is a specific form of blended learning. While various definitions of flipped learning exist, it is generally a format in which (a) students complete pre-class work individually before coming to class and engage in group work or collaborative learning activities during class; (b) lectures are recorded as videos for students to view outside of class, and class time is used for discussion, application, and problem-solving; and/or (c) the in-class learning environment is student-centered instead of instructor-focused (Honeycutt, n.d.). For the purpose of this paper, the term blended learning will be used to refer to all of the aforementioned formats.

There are many benefits to using a blended learning model. Oftentimes, students demonstrate improved in-class engagement, attendance, and overall academic achievement in blended courses compared to traditional face-to-face courses (United States Department of Education [USDE], 2010; Wengreen et al., 2015). The combination of different learning environments, as seen in a blended model, minimizes the limitation of meeting one specific learning style, which can occur when a single form of delivery is used (Wengreen et al., 2015). For example, face-to-face courses foster learning through interaction and connection with an instructor and peers. Online courses, on the other hand, offer flexibility to students by expanding options on what, when, where, and how students learn (USDE, 2010). A blended course can offer the advantages of both of these learning formats and free up time for more student-centered learning in the synchronous setting (Moskal, Dziuban, & Hartman, 2013; O’Flaherty & Phillips, 2015; USDE, 2010; Wengreen et al., 2015). Most students appreciate the flexibility of the asynchronous component while also valuing the interactions with students and faculty offered in the synchronous component (Moskal et al., 2013). At Utah State University (USU), any course in which 21% to 79% of the time is spent in an asynchronous format can be designated as a blended course, once approval is obtained from a campus administrator. This application process is outlined on the Center for Innovative Design and Instruction (CIDI) website (http://cidi.usu.edu/requestforms/blendedlearning).

Although blended courses are becoming more mainstream at USU and in higher education in general, many instructors receive limited training on how to effectively develop and evaluate blended courses. Determining the quality of blended courses requires comprehensive feedback from students, faculty, and instructional designers. Feedback provided through these evaluations helps determine the quality of in-class content, in addition to the online methods used, to ensure course objectives and student educational needs are being met (Smythe, 2012). The purpose of this article is to discuss blended learning resources and evaluation methods available to instructors at USU and other institutions of higher education.

Student Evaluation and Assessment

Student evaluation of teaching (SET), typically conducted at the end of each semester, is the most common way courses are evaluated in higher education (Dziuban & Moskal, 2011). This form of evaluation, often referred to as summative evaluation, can help instructors improve overall course effectiveness and determine whether course objectives are being met. Student ratings are particularly well-suited to determining whether a teacher has sufficient clarity, student-teacher connection, and commitment to the course to be an effective educator (Benton & Cashin, 2009). Furthermore, high student ratings of the instructional dimensions listed above are moderately correlated with higher exam scores and student achievement in the course being evaluated (Benton & Cashin, 2009).

However, student evaluations alone are not adequate for guiding the design and presentation of blended courses, as students are not trained in effective pedagogical methods (O’Flaherty & Phillips, 2015). For example, a review of 28 studies found that although student grades, attendance, and perceived development of skills increased, student reactions toward the courses were negative (O’Flaherty & Phillips, 2015). It is possible that a student’s internal locus of control, including a willingness to take risks and engage in innovative approaches (both vital to the success of flexible learning environments), may impact summative evaluation results (Drennan, Kennedy, & Pisarski, 2005).

Because end-of-semester evaluations of blended courses have limitations, instructors should consider utilizing other student evaluation methods. For example, mid-semester evaluations can be used to get feedback on course content, teaching methods, and learning activities to help improve teaching and learning. One of the main benefits of mid-semester evaluations is the ability of the instructor to apply feedback to the course immediately (Bullock, 2003). Students’ attitudes about courses and instructors have been found to improve when instructors implement changes based on mid-semester evaluations, which may influence their overall learning experience in the course (Keutzer, 1993).

In addition to student evaluations, student assessment data can be used for course evaluation and improvement. For example, pre/post assessments can help determine changes in knowledge or skills that are aligned with course objectives and have been found to be a valuable addition to evaluating teaching and course effectiveness (Stark-Wroblewski, Ahlering, & Brill, 2007). Because blended courses often utilize skill-based learning, assessments should incorporate the demonstration of these skills, in addition to changes in knowledge and understanding. Reviewing other course assessment data can also help instructors identify which course objectives and course content need revising to improve student understanding.

Peer Evaluations

In addition to SET, instructors should consider scheduling regular peer evaluations for their blended courses. Peers can provide an added perspective on course design and teaching approaches that students are not equipped to assess. To ensure the desired information about course effectiveness is obtained, the instructor should consider the following before initiating a peer evaluation: (1) the type and purpose of the peer evaluation, (2) the evaluator’s training or knowledge related to assessing blended courses, and (3) the evaluation rubric that will be used.

Peer evaluations may be summative or formative. Summative evaluations are comparable to a final grade or overall score, such as a course evaluation letter written by peers as part of the promotion and tenure process (Duke AHEAD, 2015; Vega Garcia, Stacy-Bates, Alger, & Marupova, 2017). Limitations of summative peer faculty evaluations include feedback not being communicated well, not being relevant, or not being applicable (Iqbal, 2014; Smith, 2012). Some of these drawbacks result from a lack of formal training on how to conduct peer evaluations, a lack of objective standards for comparing teaching, and a reluctance to negatively impact the promotion and tenure progress of a colleague (Iqbal, 2014). In addition, one classroom observation may not be typical of overall teaching or provide enough context to fully assess teaching (Iqbal, 2014; Smith, 2012).

Formative evaluations are more appropriate when specific feedback is desired for course improvement or professional growth. They are initiated voluntarily by the instructor and benefit both parties by promoting active discussion and insights into effective teaching (Iqbal, 2014; Smith, 2012; Vega Garcia et al., 2017). Ideally, a formative evaluation includes a pre-observation meeting to discuss the areas the observed faculty member wants assessed, the actual observation, and then a follow-up meeting to discuss specific insights into what was observed (Iqbal, 2014; Smith, 2012; Vega Garcia et al., 2017). The evaluation form or letter received following a formative evaluation may be added to promotion and tenure documentation to show improvements in teaching, or it may remain private and be used solely for professional growth.

Peer evaluations of blended courses need to utilize an evaluation tool that addresses course design, teaching in the online component, and face-to-face classroom instruction. There should be a focus on how well these elements blend to meet the course objectives. Many evaluation rubrics for assessing teaching have been based on Bloom’s taxonomy of learning objectives and Chickering and Gamson’s Seven Principles of Good Practice in Undergraduate Education (Baldwin & Trespalacios, 2017; Bloom, 1956; Chickering & Gamson, 1987; Yang et al., 2009). Some rubrics focus primarily on learner effectiveness, but Yang et al. (2009) acknowledged the importance of evaluating instructional design as well. Baldwin and Trespalacios (2017) reviewed 28 higher education online course evaluation instruments and found most rubrics assessed only student-faculty contact, cooperation among students, and active learning, while failing to assess prompt feedback, time on task, high expectations, and diverse talents and ways of learning. Bowyer and Chambers (2017) recognized the importance of acknowledging all aspects of teaching and learning and developed their own framework for evaluating blended courses.

Overall, the greatest benefits will come from peer evaluation when adequate planning, pre- and post-observation meetings, and training of peer evaluators take place, and an appropriate evaluation tool for blended courses is utilized (Bowyer & Chambers, 2017).

Instructional Design Evaluations

With blended courses, it is important not to forget the value of course development, instructional design, and the use of various technologies (Smythe, 2012). “Good instructional design is vitally important to the success of a blended learning course, perhaps even more so than in a traditional classroom or in fully online courses” (Glazer, 2012, p. 5). Oftentimes, these vital components of course quality are missed by the more common evaluation methods, such as those discussed above (Smythe, 2012). Working with instructional designers during the development of blended courses and throughout course improvement can help ensure the online learning environment is conducive to student engagement and success. More specifically, instructional designers help ensure that course objectives are aligned with assessments and activities, that the online course content complements the in-class instruction, and that the course is developed with intentionality. In addition, instructional designers can provide feedback and assistance with, for example, the layout and design of online course content, developing or improving assessment rubrics, and ensuring materials are accessible. Before a blended course is made available to students, instructors should strongly consider having an instructional designer evaluate the online portion of the course using a standardized course design rubric. Many universities, including USU, have such resources available for instructors.

Furthermore, course development trainings provided by instructional designers offer an opportunity for faculty to receive continued feedback while the course is being developed. While not an official evaluation, this formative process can help ensure the upfront time and resources spent developing a blended course are utilized efficiently and effectively. Utilizing on-campus course development support provided by instructional designers helps to ensure that the course and instructor adequately incorporate student engagement and assessment, allowing for optimal student outcomes (Moskal et al., 2013). If a course is already designed and implemented, instructional designers can be an excellent resource for continued course improvement. At USU, CIDI has a variety of resources for instructors, including a course mapping worksheet, course development assistance, seminars and workshops, and course evaluations. These resources can be especially beneficial for instructors new to blended or online learning.

Conclusion

Although blended courses are becoming more mainstream in higher education, many instructors receive minimal training on how to effectively develop and evaluate them. Due to the more complex nature of how blended courses are designed and implemented, instructors should consider a variety of course evaluation methods. A combination of formalized feedback from students, peers, and instructional designers before, during, and after the course has been offered is one strategy to ensure courses achieve learning objectives and meet student learning needs. Most importantly, feedback gathered through these various evaluation methods should be used for continued course improvement.

References

Baldwin, S., & Trespalacios, J. (2017). Evaluation instruments and good practices in online education. Online Learning, 21(2). doi:10.24059/olj.v21i2.913.

Benton, S., & Cashin, W. (n.d.). Student ratings of teaching: A summary of research and literature. Retrieved from http://www.ideaedu.org/Portals/0/Uploads/Documents/IDEA%20Papers/IDEA%20Papers/PaperIDEA_50.pdf.

Bloom, B. S. (1956). Taxonomy of Educational Objectives. Vol. 1: Cognitive Domain. New York: McKay.

Bowyer, J., & Chambers, L. (2017). Evaluating blended learning: Bringing the elements together. Research Matters, 23, 17-26.

Bullock, C.D. (2003). Online collection of midterm student feedback. New Directions for Teaching and Learning, 95–102.

Chickering, A. W., & Gamson, Z. F. (1987). Seven principles for good practice in undergraduate education. AAHE Bulletin, 3-7.

Drennan, J., Kennedy, J., & Pisarski, A. (2005) Factors affecting student attitudes toward flexible online learning in management education, The Journal of Educational Research, 98(6), 331-338, doi:10.3200/JOER.98.6.331-338.

Duke AHEAD. (2015). Assess and evaluate learning. Retrieved from https://dukeahead.duke.edu/educator-competencies/assessment.

Dziuban, C., & Moskal, P. (2011). A course is a course is a course: Factor invariance in student evaluation of online, blended and face-to-face learning environments. Internet and Higher Education, 14(4), 236-241. doi:10.1016/j.iheduc.2011.05.003.

Glazer, F. S. (2012). Blended learning: Across the disciplines, across the academy. Sterling, VA: Stylus Publishing.

Honeycutt, B. (n.d.). Defining the flipped classroom and finding flippable moments. Retrieved from http://barbihoneycutt.com/defining-flipped-classroom-flippable-moments/.

Iqbal, I. (2014). Don’t tell it like it is: Preserving collegiality in the summative peer review of teaching. Canadian Journal of Higher Education, 44(1), 108-124.

Keutzer, C. (1993). Midterm evaluation of teaching provides helpful feedback to instructors. Teaching of Psychology, 20(4), 238-240.

Moskal, P., Dziuban, C., & Hartman, J. (2013). Blended learning: A dangerous idea? Internet and Higher Education, 18, 15-23. doi:10.1016/j.iheduc.2012.12.001.

O’Flaherty, J., & Phillips, C. (2015). The use of flipped classrooms in higher education: A scoping review, The Internet and Higher Education, 25, 85-95. doi: 10.1016/j.iheduc.2015.02.002.

Smith, H. (2012). The unintended consequences of grading teaching. Teaching in Higher Education, 17(6), 747-754. doi:10.1080/13562517.2012.744437.

Smythe, M. (2012). Toward a framework for evaluating blended learning. In Proceedings of ASCILITE 2012 – Australasian Society for Computers in Learning in Tertiary Education Annual Conference. Retrieved from https://www.learntechlib.org/p/42695/.

Stark-Wroblewski, K., Ahlering, R., & Brill, F. (2007). Toward a more comprehensive approach to evaluating teaching effectiveness: Supplementing student evaluations of teaching with pre-post learning measures. Assessment & Evaluation in Higher Education, 32(4), 403-415. doi:10.1080/02602930600898536.

U.S. Department of Education, Office of Planning, Evaluation and Policy Development. (2010). Evaluation of evidence-based practices in online learning: A meta-analysis and review of online learning studies. Retrieved from https://www2.ed.gov/rschstat/eval/tech/evidence-based-practices/finalreport.pdf.

Vega Garcia, S., Stacy-Bates, K., Alger, J., & Marupova, R. (2017). Peer evaluations of teaching in an online information literacy course. Libraries and the Academy, 17(3), 471-483. Retrieved from https://preprint.press.jhu.edu/portal/sites/ajm/files/17.3garcia.pdf.

Wanner, T., & Palmer, E. (2015). Personalising learning: Exploring student and teacher perceptions about flexible learning and assessment in a flipped university course. Computers & Education, 88, 354-369. doi:10.1016/j.compedu.2015.07.008.

Wengreen, W., Dimmick, M., & Israelsen, M. (2015). Evaluation of a blended design in a large general education nutrition course. NACTA Journal.

Yang, J. F., Hsiao, C. M., Liu, H. Y., & Lin, N. C.-M. (2009). Modes of delivery and learning objectives in distance education. International Journal of Instructional Media, 36(1), 55-71.
