By Susan Sportsman, PhD, RN, ANEF, FAAN
Online discussion boards provide the most effective learning when students are given the opportunity to demonstrate higher order thinking skills. Regardless of the topic of discussion, students must be able to demonstrate that they can:
- Find relevant information
- Examine underlying assumptions
- Demonstrate logical reasoning
- Question the validity of arguments, assertions, and in some cases, facts
- Analyze information to choose the most appropriate response(s)
- Predict relevant outcomes in unpredictable contexts
It is a complicated task, both to develop questions to encourage this higher order thinking and to evaluate the students’ responses. In the March 2020 CMC blog, we discussed ways to structure questions to enhance students’ thinking. Now let’s consider how we might evaluate the extent to which students’ responses reflect higher order thinking.
Rubrics to Evaluate Higher Order Thinking
Students at all levels have difficulty “just discussing” a topic without direction. Faculty-developed rubrics give students clear expectations for how such discussions, or other written assignments that are subjective in nature, will be graded. Rubrics require faculty to state the goals of the assignment explicitly, providing a map for assigning grades systematically. This specificity helps students know what they should produce, and it reduces the likelihood of inconsistency when multiple graders are involved. So, what steps should faculty take to develop a rubric for specific course discussion boards?
Step 1: Determine the Objective of the Rubric
All of us have likely seen (and probably used) grading rubrics. Most rubrics for discussion boards are analytical, designed to highlight the essential elements required in a post. Developing an effective rubric takes several steps. The first is determining the objective of the rubric: in our discussion, the objective is to evaluate students’ ability to demonstrate higher order thinking when responding to a discussion of an identified topic (MGH Institute of Health Professions, 2020).
Step 2: Identify Key Criteria for Evaluation
Identification of the key criteria to be used to evaluate students’ work is the second step in developing a rubric for discussion board posts. Some of the criteria should be included in the evaluation of any discussion board post: for example, the quality of the writing and the use of proper online etiquette. The remaining criteria should be chosen based upon the purpose of the discussion board assignment. Criteria that focus on developing higher-level thinking might include:
- Critical analysis of reading
- Participation in the learning community
- Relevance of post
- Interaction with others
Step 3: Establish Ideal Behaviors for Each Category
The third step is to establish behaviors in each category that represent an excellent post. Additional information on criteria that represent higher order thinking, as well as behaviors that represent each criterion, can be found here.
Step 4: Choose the Number of Performance Levels and Labels
Once the behaviors representing an excellent post are determined, faculty must identify how many performance levels, typically three or four, to include in the rubric. Regardless of the number of levels, each level should represent a gradation of quality based upon the degree to which the identified behaviors are met (Teachers First, 2020). Whatever labels you give the levels, the difference in quality should be distributed evenly across the continuum.
After settling on the levels, write out the behaviors that represent each one. Although this may be challenging, the MGH Institute of Health Professions suggests circling the words in the description of the excellent post that can vary; these are the words you will change as you write the descriptions of the lower levels. Avoid relying on comparative language across the performance levels. For example, if the highest level of performance uses “thoroughly written,” the middle level should not be “less thoroughly written.”
Reliability and Validity
When developing a rubric, faculty should also consider the instrument’s reliability. A “good” rubric allows different graders to arrive at similar scores for the same discussion. Reliability also extends across time: a post should receive the same score whether it is the first one you grade or the 37th (Teachers First, 2020).
Before implementing the rubric, several faculty members with knowledge of the course should review it and provide feedback. It is also helpful to have several faculty members grade the same sample of work to determine the similarities of their scores. Discussing this grading experience as a group can also provide some understanding of the reliability of the rubric.
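The comparison of graders’ scores described above can also be summarized numerically. Here is a minimal sketch, assuming a hypothetical 0–4 rubric scale; the scores and the helper name `agreement_stats` are invented for illustration, not drawn from any of the cited sources:

```python
# Hypothetical reliability check: two faculty members grade the same five
# discussion posts with the rubric, and we summarize how closely they agree.

def agreement_stats(scores_a, scores_b):
    """Return (exact-agreement rate, mean absolute score difference)."""
    pairs = list(zip(scores_a, scores_b))
    exact = sum(1 for a, b in pairs if a == b) / len(pairs)
    mean_diff = sum(abs(a - b) for a, b in pairs) / len(pairs)
    return exact, mean_diff

# Illustrative rubric scores (0-4 scale) for the same five posts
grader_1 = [4, 3, 2, 4, 3]
grader_2 = [4, 3, 3, 4, 2]

exact, mean_diff = agreement_stats(grader_1, grader_2)
print(f"Exact agreement: {exact:.0%}")      # prints "Exact agreement: 60%"
print(f"Mean difference: {mean_diff:.1f}")  # prints "Mean difference: 0.4"
```

A low agreement rate or a large mean difference would suggest the level descriptions are ambiguous and should be revisited in the group discussion.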
The validity of a rubric is demonstrated by the extent to which the instrument focuses both the student and the grading faculty on the important behaviors students should develop, rather than on the behaviors that are easiest to see. For example, a student should receive feedback (and some reduction in points) for poor grammar, punctuation, or inappropriate citation of references. However, these behaviors are not nearly as important as the student’s ability to use higher order thinking in discussion posts. The rubric can help students understand what higher order thinking skills are and how to demonstrate them in their discussion posts, giving them a target for their work. This, coupled with discussion questions that require the use of these skills, will prepare nursing students at any level for the practice arena.
Benner, P., Hughes, R., & Sutphen, M. (2008). Clinical reasoning, decisionmaking, and action: Thinking critically and clinically. In Patient Safety and Quality: An Evidence-Based Handbook for Nurses (Chapter 6). Rockville, MD: Agency for Healthcare Research and Quality. https://www.ncbi.nlm.nih.gov/books/NBK2643/. Accessed April 2020.
Chen, B., deNoyelles, A., Thompson, K., Sugar, A., & Vargus, J. (2014). Discussion rubrics. In B. Chen, A. deNoyelles, & K. Thompson (Eds.), Teaching Online Pedagogical Repository. Orlando, FL: University of Central Florida Center for Distributed Learning. https://topr.online.ucf.edu/discussion-rubrics/. Revised July 14, 2017. Accessed April 2020.
Henning, M. (2020). Rubrics to the Rescue. TeachersFirst. https://www.teachersfirst.com/lessons/rubrics/index.cfm. Accessed April 2020.
MGH Institute of Health Professions. (2020). Rubrics. https://www.mghihp.edu/faculty-staff-faculty-compass/rubrics. Accessed April 2020.
3 thoughts on “Evaluating Online Discussions”
Thank you for this. I admit to times in the past when I just looked to see if the student answered the 2 questions and posted a response to a classmate’s posts. As long as it looked reasonable on a quick scan, it got the grade. This reminds me to go back to my rubric and slow down (or not get so far behind that I have to rush).
Thanks BJ! We’re glad it was helpful!
BJ, we have all taken this shortcut, but as you say, taking some time to think about the outcome we want can be very helpful. Thanks for your response.