The feedback system
Learner-centered courses encourage students to manage their own learning. This means they must be able to assess their own knowledge. They need to know when they don’t understand something, so they can seek help.
CoreDogs has about a zillion (well, more than 150) exercises students can do. They enter their solutions directly into CoreDogs. If someone checked their solutions, students would know whether they understood the material.
Learning research tells us that feedback should be:
- Fast. Available a day or two (at most) after the student asks for it.
- Frequent. CoreDogs has many small exercises. Students can do each one relatively quickly. This lets them continuously evaluate themselves, rather than having to wait for the next large assignment.
- Formative. If a solution is wrong, the feedback should say why it is wrong, and suggest how to correct the problem. Students should have a chance to correct it, and resubmit.
Note that I’m talking about feedback for learning (formative feedback), not feedback for grading (summative feedback). When I teach with CoreDogs, I give students regular formative feedback on their exercise solutions. If a solution isn’t correct, the student can change it, and ask for feedback again.
Grading is a separate process. I give exams with time limits, and no “do overs.” There are larger projects as well that are graded.
The problem with F4 – fast, frequent, formative feedback – is the time it takes for someone (instructor, mentor, ...) to evaluate each solution. With many students doing many exercises, it can get overwhelming.
CoreDogs’ solution is to streamline the work process for reviewing exercise solutions. “Streamlining” means reducing both:
- The number of physical actions, like mouse clicks, needed to give feedback.
- Cognitive load, the mental effort required to provide feedback.
There is one aspect of typical CoreDogs use that creates a higher cognitive load than normal. When a grader in a traditional course gets items to grade, they are all usually of the same type. For example, fifty solutions for the midterm.
When students are in control of their learning, they work at different speeds. In a particular week of a course, a class might submit solutions to six different exercises. This creates a higher cognitive load for the grader, who must switch contexts while working through the day’s submissions.
Let’s see how the feedback system works. Note that the screenshots below come from the first operational feedback system. It might have changed a little by the time you read this.
Students must grant permission to reviewers before reviewers can offer feedback. Here’s Renata’s navigation menu:
She clicks the Your exercises link. She then clicks the Settings tab, and enters the user names of people allowed to review her work. Here’s Renata naming me as a reviewer.
She can enter as many user names as she likes, separated by commas. This gives course designers and students some options, such as:
- Several reviewers for one large class.
- Students can review each other’s work.
- Older students can be mentors for younger students.
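The permission check itself can be sketched in a few lines. This is an illustrative sketch, not CoreDogs’ actual code; the function names are assumptions. It shows the idea: split the student’s comma-separated reviewer list, and allow feedback only from someone named in it.

```python
# Illustrative sketch (not CoreDogs' actual code): parse a student's
# comma-separated reviewer list and check whether a user may review.

def parse_reviewers(raw: str) -> set[str]:
    """Split a comma-separated list of user names, ignoring stray spaces."""
    return {name.strip() for name in raw.split(",") if name.strip()}

def may_review(reviewer: str, raw_reviewer_list: str) -> bool:
    """A reviewer may offer feedback only if the student named them."""
    return reviewer in parse_reviewers(raw_reviewer_list)

print(may_review("kieran", "kieran, renata_mentor"))    # True
print(may_review("stranger", "kieran, renata_mentor"))  # False
```

Stripping whitespace around each name means students don’t have to be careful about spaces after the commas.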
Asking for feedback
Suppose Renata is doing the exercise that asks her to find her computer’s IP address. This is (part of) what she sees:
She clicks Your solution, and an area drops down where she can type her solution.
When she’s done, she can request a review of her solution:
When I log in, the system tells me how many review requests are waiting.
When I enter the feedback system, I see a list of review requests:
I can sort the list in various ways. If I sort by title, solutions to the same exercise are listed together. Processing items in that order reduces the cognitive load of switching from exercise to exercise.
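The grouping effect of sorting by title can be sketched as follows. The field names here are assumptions for illustration, not CoreDogs’ actual data model; the point is that a title-first sort makes submissions to the same exercise adjacent in the queue.

```python
# Illustrative sketch: sort pending review requests by exercise title
# so solutions to the same exercise are reviewed back to back.
# Field names are assumptions, not CoreDogs' actual data model.

pending = [
    {"student": "renata", "title": "Find your IP address", "submitted": "2011-03-02"},
    {"student": "omar",   "title": "Create a home page",   "submitted": "2011-03-01"},
    {"student": "li",     "title": "Find your IP address", "submitted": "2011-03-01"},
]

# Sort by title first, then by submission time, so identical
# exercises appear together, oldest submission first.
queue = sorted(pending, key=lambda r: (r["title"], r["submitted"]))

for request in queue:
    print(request["title"], "-", request["student"])
```

Once identical exercises are adjacent, the reviewer loads the exercise and its rubric into working memory once, then applies it repeatedly.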
I click on an item to start reviewing it, and see a feedback interface.
There are four parts to the screen (you’ll see a zoom of each one below). They are:
- The exercise.
- The student’s solution, and a rubric for assessing the solution.
- An area for entering feedback.
- Two lists of clickable phrases. They are canned feedback phrases reviewers can use.
I look at Renata’s solution, and compare it to the rubric.
Renata has entered an IP address. The rubric only has one requirement for a complete solution: entering an IP address. This means that Renata has completed the exercise.
I look at the clickable phrase area:
There are two lists of phrases. The top list shows general phrases that apply to any exercise. The bottom list gives phrases that are specific to this exercise. I click on a phrase:
The phrase appears in the feedback area:
The request is automatically marked as closed, that is, a reviewer has responded to it. I can add more comments if I want, like “Thnx, Renata!” I click the Append button. This appends my comment to an ongoing conversation with Renata about her solution.
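The mechanics of that step can be sketched as below. This is a hypothetical model, not CoreDogs’ implementation; the class and field names are assumptions. It captures the behavior described: appending feedback extends the conversation and automatically closes the request, and marking completion is a separate judgment.

```python
# Illustrative sketch (names are assumptions): appending feedback
# extends the conversation thread and closes the review request;
# marking the exercise completed is a separate step.

from dataclasses import dataclass, field

@dataclass
class ReviewRequest:
    student: str
    conversation: list[str] = field(default_factory=list)
    closed: bool = False      # a reviewer has responded
    completed: bool = False   # the solution meets the rubric

def append_feedback(request: ReviewRequest, text: str) -> None:
    """Add a comment to the ongoing conversation and close the request."""
    request.conversation.append(text)
    request.closed = True

req = ReviewRequest(student="renata")
append_feedback(req, "Complete. Good work!")  # a canned phrase
append_feedback(req, "Thnx, Renata!")         # an extra comment
req.completed = True                          # reviewer's judgment
```

Keeping “closed” (responded to) separate from “completed” (meets the rubric) lets a reviewer answer a request with corrective feedback without marking the exercise done.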
I can mark the exercise as completed. That is, I judge that Renata’s solution meets the exercise requirements.
The exercise is crossed off my list:
I can get a sense of progress as the waiting items are crossed off.
The feedback process is efficient. All the information I need to judge Renata’s solution is on one screen. The clickable phrases mean I can give appropriate feedback very quickly. The phrases match the rubric for the exercise, so my feedback can accurately reflect the rubric. It takes less than a minute to evaluate Renata’s solution.
Back to the student
When Renata looks at the exercise, she will see my comments and a completion stamp:
She can get reports on exercises she has done:
Independent learners need feedback. It should be fast, frequent, and formative. CoreDogs has a streamlined feedback workflow that helps reviewers do their work quickly and accurately.