At a technology conference I recently attended, a story was shared about an airline that asked its pilots to reduce fuel consumption by a certain percentage. The pilots achieved the goal, but in doing so they reduced the use of climate control in the cabin, which made travelers less comfortable. They also flew more slowly, which resulted in fewer on-time departures. In short, they hit their fuel target but missed the mark on customer satisfaction and efficient, on-time flights.
Implementing educational technology is a little bit like reducing fuel consumption. You may solve one problem while you create two others. An adaptive learning technology might provide immediate feedback to learners while also making them feel more disconnected from their instructor and peers. Artificial intelligence may promise to discover previously undetectable patterns in human behavior, and it might also reinforce pre-existing, negative biases.
Every instructional designer knows that the evaluation phase of a technology implementation project is essential, but it is difficult to determine what should be evaluated, especially when the goal is an evaluation of educational quality. We want our innovations to reflect evidence-based best practices, but which evidence?
Thankfully, the airline in our example was collecting passenger satisfaction data and on-time departure data in addition to fuel consumption data. They realized the error of their ways and abandoned the goal for pilots, choosing instead to tackle fuel efficiency in a different way. I wonder if we're this intentional when we implement new technologies. Peter Honebein writes about an iron triangle in instructional design: appeal, efficiency, and effectiveness. Since learning about these three dimensions of design, I see them existing in tension in every experience I have, from restaurants to Uber rides to classrooms. Quality is found somewhere in the middle, in the tension between the dimensions.
When implementing a new instructional technology, look for methods to measure the appeal, efficiency, and effectiveness of that technology. The chances are good that if a vendor emphasizes one corner of the triangle too heavily (e.g., "your students are going to love these educational games": appeal), there will be sacrifices in other areas (e.g., learning outcome attainment: effectiveness).
In a recent adaptive learning pilot program at my institution, we measured student satisfaction, the time learners spent using the technology (all online), and the students' final exam scores. We compared these three metrics with students from the same course who did not use adaptive learning and found very few differences. Other studies have found significant differences when adaptive learning is used, so why did our results differ? We believe the answer lies in the starting point: the instructional context into which a technology is introduced is foundational, yet it is often not well described.
If adaptive learning is implemented in a lecture-based course, it has a good chance of producing better learning outcomes, more efficiency, and probably higher satisfaction. The effectiveness of large-scale lecture-based learning has long been questionable. On the other hand, if adaptive learning is implemented in a small, project-based learning course, the story could be quite different.
The bottom line is that during the implementation and evaluation of any educational technology, proactive consideration for the measures of success should include appeal, efficiency, and effectiveness. Otherwise, you may reduce your expenses but end up with uncomfortable, dissatisfied customers at the end of the journey.
Let’s talk about a practical example in the category of assessment technology. Adaptive learning involves providing learners with content that aligns with their specific areas of academic need. Sometimes the content has been pre-programmed by a subject-matter expert, and sometimes the technology uses algorithms and artificial intelligence to predict the content that the learner is going to find most helpful. Most adaptive learning is some combination of the two approaches.
In our pilot project, six sections of a course incorporated the adaptive learning technology while six others used traditional methods. The traditional methods did not involve long lectures or large crowds of students. Rather, the design of the courses incorporated small-group problem-solving and online instructor interaction via discussion forums. The metrics for comparison between the two groups of sections were student and faculty satisfaction survey results, the time students and faculty spent in the course platform, and students' final exam scores.
Results indicated that students and faculty were more satisfied with the sections that included small-group problem-solving than with the adaptive learning sections. Students reported feeling stuck in endless loops of robotic question-and-answer when using the adaptive learning tool, struggled to trust the feedback the technology provided, and reported lower levels of satisfaction with and connection to their instructor. Exam scores were nearly identical for the two groups, and students in the adaptive learning sections spent a bit more time in the course platform.
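For readers planning a similar pilot, this kind of three-metric comparison (appeal, efficiency, effectiveness) can be sketched in a few lines of code. The group names, all numeric values, and the `cohens_d` helper below are illustrative assumptions, not the pilot's actual data or analysis:

```python
# Hypothetical sketch: comparing pilot metrics between two groups of sections.
# All numbers are made up for illustration; they are not the pilot's data.
from statistics import mean, stdev

def cohens_d(a, b):
    """Standardized mean difference between two samples (pooled SD)."""
    na, nb = len(a), len(b)
    pooled_sd = (((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
                 / (na + nb - 2)) ** 0.5
    return (mean(a) - mean(b)) / pooled_sd

# Final exam scores (effectiveness), satisfaction ratings on a 1-5 scale
# (appeal), and hours spent in the platform (efficiency) for each group.
adaptive = {
    "exam":  [78, 82, 75, 88, 80, 77],
    "sat":   [3.1, 2.8, 3.4, 2.9, 3.0, 3.2],
    "hours": [14.2, 15.0, 13.8, 16.1, 14.9, 15.3],
}
traditional = {
    "exam":  [79, 81, 76, 87, 80, 78],
    "sat":   [4.2, 4.0, 4.4, 3.9, 4.1, 4.3],
    "hours": [12.5, 13.0, 12.1, 13.8, 12.9, 13.2],
}

for metric in ("exam", "sat", "hours"):
    d = cohens_d(adaptive[metric], traditional[metric])
    print(f"{metric}: adaptive mean={mean(adaptive[metric]):.2f}, "
          f"traditional mean={mean(traditional[metric]):.2f}, d={d:.2f}")
```

Reporting an effect size alongside group means, rather than means alone, makes it easier to judge whether a "nearly identical" result like ours reflects a genuinely negligible difference.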
Learning outcome data, satisfaction data, and system data (time spent in the course) were collected and provided to a group of faculty members and the instructional designer. The group reviewed the business case for the adaptive learning tool, which promised to reduce grading time for faculty while improving student learning outcomes. While those benefits could certainly be realized in other environments, very few were realized in ours, and for now, that particular tool was not implemented. We learned the following:
- Plan your criteria for success before you pilot a new technology.
- Include metrics that show appeal, efficiency, and effectiveness.
- Collaborate. It takes time, but the end result saves time and frustration in the future.
- When implementing formative assessment technology, make intentional efforts to humanize other aspects of the learning experience.
- Provide avenues for learners to question the credibility and accuracy of the technology.
Technology has tremendous potential to transform teaching and learning, but it will be increasingly important for designers to work collaboratively, design intentionally, and evaluate objectively as we implement the technologies of the future.