If you are involved in higher education in any way, you have heard about Massive Open Online Courses (MOOCs). I first heard of them back in the Fall of 2011, when I was one of 160,000 students to sign up for an online class in Artificial Intelligence (AI). I have a PhD in Computer Science, and my area of specialization was Machine Learning, a branch of AI. I have taught AI at the undergraduate level. So I wasn’t signing up for the course because I wanted to learn the content. Instead, I wanted to understand the delivery method of the course. In particular, I wanted to figure out how two preeminent AI researchers, Sebastian Thrun and Peter Norvig, would teach a class to over 150,000 students. I spent a couple of weeks working through online video lectures and problem sets, commenting on my fellow students’ answers and reading their comments on my answers. I stopped participating after a few weeks as other responsibilities ate up my time. It isn’t clear to me how many other people signed up for the class for reasons similar to mine. 23,000 people finished the course and received a certificate of completion. Based on the “success” of this first offering, Thrun left his tenured faculty position at Stanford University and founded a startup company called Udacity.
There has been a lot of hype about MOOCs in general and Udacity in particular. It’s interesting to me that many of these MOOCs seem to employ pedagogical techniques that are heavily criticized in face-to-face classrooms. In this advertisement, for example, Udacity makes a big deal about the prestige and reputations of the people involved in talking at students about various topics. Want to build a blog? Listen to Reddit co-founder Steve Huffman talk about building blogs. In other words, these classes rely heavily on video lectures. Yet the lecture format in face-to-face classrooms is much maligned as ineffective for student learning and mastery of course content. Why, then, do we think online courses built on video lectures (from people who have little training in how to structure a lecture effectively) will be effective? The ad also makes a big deal about the fact that the average length of their video lectures is one minute. Is there any evidence that shorter lectures are more effective? It depends on what else students are asked to do. The ad makes bold claims about the interactive, hands-on nature of these courses, but how that interactivity is implemented is unclear from the ad.
Several people have written thoughtful reviews of Udacity courses based on participating in those courses. Robert Talbert, for example, wrote about his mostly positive experiences in an introductory programming class in The Chronicle of Higher Education. Interestingly, his list of positive pedagogical elements reads like a list of game elements. The course has clear goals, both short- and long-term. There is immediate feedback on student learning, as students complete frequent quizzes that are graded immediately by scripts. There is a balance between the challenge presented and the student’s ability level, so that as the student becomes more proficient as a programmer, the challenges become appropriately more difficult. And the participants in the course feel a sense of control over their activities. This is classic gamification and should result in motivated students.
So why are so many participants having trouble with these courses now? Earlier this year, Udacity set up a partnership with San Jose State to offer courses for credit for a nominal fee (typically $150 per course). After just two semesters, San Jose State put the project on hold earlier this week because of alarmingly high failure rates. The courses were offered to a mix of students, some of whom were San Jose State students and some of whom were not. The failure rate for San Jose State students was between 49 and 71 percent. The failure rate for non-San Jose State students was between 55 and 88 percent. Certainly, in a face-to-face class or in a non-MOOC class, such high failure rates would cause us to at least entertain the possibility that there was something wrong with the course itself. And so it makes sense that San Jose State wants to take a closer look at what’s going on in these courses.
One article about the Udacity/San Jose State project that made me laugh because of its lack of logic is this one from Forbes magazine. The title of the article is “Udacity’s High Failure Rate at San Jose State Might Not Be a Sign of Failure.” Huh? What the author means is that a bunch of students failing a class doesn’t mean there’s something wrong with the class itself. He believes that the purpose of higher education is to sort people into two categories: those smart enough to get a degree and those not smart enough. So, his logic goes, those who failed this class are simply not smart enough to get a degree. I would argue with his understanding of the purpose of higher education, but let’s grant him his premise. What proof do we have from this set of MOOCs that they accurately sort people into these two categories? Absolutely none. So it still makes sense to me that San Jose State would want to investigate the MOOCs and what happened with them. Technology is rarely a panacea for our problems. I think the MOOC experiment is likely to teach us some interesting things about effective online teaching. But I doubt MOOCs are going to fix what ails higher education.