We are fortunate that seven of our teachers and professors agreed to sit down with us to talk about their experiences teaching, and teaching with Ponder.
We have a lot of anecdotal evidence that Ponder has a noticeable impact on student engagement and class discussion. But how big is the impact? And compared to what? Many of our teachers had tried using blogs or discussion forums, but many more had never tried any technology to support student reading. In short, we wanted to turn subjective observations of correlation into objective, reproducible evidence of causation.
A Controlled Study
To measure Ponder’s effect more rigorously, we need a controlled environment where, to the extent possible, the only difference between two groups of students is the specific online discussion-focused tool they are using for class. Only then can we draw any credible conclusions about Ponder’s effectiveness.
Last summer, two researchers at San Francisco State University designed a long-term controlled study of Ponder’s impact on the classroom learning environment. Geoff Desa, Ph.D., and Meg Gorzycki, Ed.D., completed a review with the SFSU Institutional Review Board to ensure that the study met its guidelines. For those who are curious, or who would like to replicate the process, take a look at the IRB Protocol Review Documents.
Everything but the name…
The study involved four classes of roughly 30 students each, all taking the identical course with the same professor during an intensive, five-week summer semester. Two of the classes were instructed to use SFSU’s iLearn discussion forums to post and discuss articles relating to the course – the “control” for the experiment – and two were instructed to use Ponder. In both cases, the professor introduced the tools with identical scripts, with only the name of the tool changed. The “quantity and quality” of students’ contributions in these tools were to be incorporated into their class participation grades.
For those of you who are familiar with both discussion forums and Ponder, you might wonder how the same script could describe both, since they are functionally quite different from one another. You are right to wonder! Not only are they quite different, but most students were probably familiar with iLearn or something very much like it, and completely unfamiliar with Ponder or anything remotely like it. Still, it was essential to keep the differences between the two experimental cohorts to a minimum. Consequently, students in the Ponder classes were left to figure out the tool on their own. Despite this, they encountered few problems getting up and running with Ponder.
Over 90% of Ponder students participated. On average, each student in the Ponder group contributed 26.9 responses while spending 329 minutes reading 208 documents – all of it self-directed reading that fell outside the assigned materials for the course. By contrast, students in the control group posted an average of only 2.2 times, and a quarter never contributed at all. The Ponder classes were not only more inclusive, they generated over 12 times the volume of participation. In iLearn, there were a total of 123 posts; in Ponder, 1,747.
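As a quick sanity check, the participation ratios implied by these figures can be computed directly. This is a minimal sketch using only the numbers reported above; the variable names are our own:

```python
# Participation figures reported in the study.
ilearn_total_posts = 123
ponder_total_posts = 1747
ilearn_posts_per_student = 2.2
ponder_posts_per_student = 26.9

# Ratio of contribution volume between the two cohorts.
total_ratio = ponder_total_posts / ilearn_total_posts
per_student_ratio = ponder_posts_per_student / ilearn_posts_per_student

print(f"total volume ratio: {total_ratio:.1f}x")        # prints "total volume ratio: 14.2x"
print(f"per-student ratio:  {per_student_ratio:.1f}x")  # prints "per-student ratio:  12.2x"
```

Both ratios comfortably exceed the “over 12 times” figure quoted above, whether measured in total posts or in posts per student.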
In addition, preliminary analysis of the data shows that:
- The average final grade for Ponder students was B+ as compared to B for the control.
- There was a positive correlation between Ponder activity and quiz scores, class participation, and group projects.
- There was a negative correlation between iLearn activity and class participation.
- More students participated in class discussion in the Ponder sections, with more participation per student.
What conclusions can we draw from these numbers?
Ponder is either more engaging and fun to use than iLearn, or it’s simply easier to use, so students are more likely to use it. It’s hard to tell which, because no student experienced both tools, so we can’t ask for subjective feedback comparing the two.
Either way, what’s important from the perspective of educational impact is that the students in the Ponder classes produced higher-quality work: when the final grades came in, Ponder students averaged a statistically significant half grade higher than the students in the control group.
The study itself has many more components. In the qualitative survey data, more students reported being confused by Ponder’s interface, which is unsurprising given the circumstances. The survey results also showed that students in the Ponder group appreciated the online component of the course more, said it made them want to read more, and would recommend the tool to other professors more often than the iLearn control group did.
So what’s next?
These early results are very encouraging, but what we’re really curious about is the forthcoming analysis of Ponder data, which will start to paint a picture of how student reading habits change over time with Ponder.
- Do students read from a broader range of sources by the end of the course? Do they read longer articles? Do they read more deeply around individual topics?
- How do students affect each other’s reading? Do students discover new areas of interest through their peers? In other words, we don’t just want to know whether each student’s topic cloud grows over the course of the semester, but whether the students’ topic clouds increase in overlap.
- Which students are the best at engaging other students in reading?
In short, what we’re interested in measuring with Ponder is not only student engagement but “intellectual curiosity” as defined by the reading behaviors listed above.
The second phase of the study is running right now, and we are looking to replicate the study and broaden the data set to other levels and institutions. If you would be interested in collaborating with us or the SFSU researchers, get in touch!