Reflections Newsletter

What do students actually do with feedback from clicker questions? (Fall 2010)

By Ken N. Meadows, Educational Researcher, Teaching and Learning Services

Traditionally in academia, research is made public through conference presentation and publication in academic journals. These are obviously critical avenues for research dissemination, but I do not believe that they reach a wide enough audience for research on teaching (i.e., the scholarship of teaching and learning). Ideally, research on teaching would reach all post-secondary educators. Although that may be beyond the reach of Reflections, I would like to take full advantage of this opportunity to share the findings from a research project that Debra Dawson, Tom Haffie, and I (Dawson, Meadows, & Haffie, 2010) conducted on how students use the feedback that answering clicker questions provides.

Clickers have become commonplace in large university classrooms and are an increasingly common focus of research on teaching. The research has demonstrated that clickers are positively received by both students and faculty members (e.g., Addison, Wright, & Milner, 2009), but their impact on student learning is less well understood. One of the most commonly cited benefits of clickers is that they provide students with feedback on their learning (Barnett, 2006), but what is unclear is what students actually do with that feedback. The assumption seems to be that students modify their learning strategies based on the feedback, but is that what actually happens? This was the primary question that motivated our research. That said, we decided it was first necessary to establish whether or not clicker questions have diagnostic value for students’ performance in the course. This is a critical question because, if students’ clicker performance is not predictive of their performance in the course (i.e., it does not have that diagnostic value), there would be no reason for students to make changes to their learning strategies based on that performance.

To examine these questions, we conducted two large-scale online surveys in successive years of an introductory Biology course. In both surveys, we asked participants for permission to access their clicker performance, their grades for the course, and their university admission averages. We found that clicker performance for the course (operationalized for this research as the proportion of correctly answered clicker questions relative to the number of clicker questions the participant answered) and participants’ test grades were positively and significantly correlated (study 1: r = .63, p < .001), as were their clicker performance before the first midterm and their grade on that midterm (study 2: r = .51, p < .001). Those relationships remained significant in both studies when partialling out admission average, gender, clicker participation (the number of clicker questions answered), learning self-efficacy (study 1 only), and science self-efficacy (study 1 only), although the magnitude of the relationships was diminished (partial r’s = .47 and .31 for studies 1 and 2, respectively).
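
For readers who would like to see the mechanics behind these numbers, here is a minimal sketch in Python of how such an analysis can be run. The data and variable names are hypothetical, and the residual-based partial correlation shown is the standard textbook approach rather than necessarily the exact procedure used in the study.

```python
import numpy as np
from scipy import stats

# Hypothetical data: one row per student (values are illustrative only).
rng = np.random.default_rng(0)
n = 200
answered = rng.integers(20, 50, n)        # clicker questions answered
correct = rng.binomial(answered, 0.7)     # of those, answered correctly
admission_avg = rng.normal(85, 5, n)      # university admission average

# Operationalization from the study: proportion of answered
# clicker questions that the student got right.
clicker_perf = correct / answered

# Simulated test grades, loosely tied to clicker performance.
test_grade = 40 + 40 * clicker_perf + 0.2 * admission_avg + rng.normal(0, 5, n)

# Zero-order (Pearson) correlation, the kind reported above.
r, p = stats.pearsonr(clicker_perf, test_grade)
print(f"r = {r:.2f}, p = {p:.3g}")

# Partial correlation controlling for a covariate: correlate the
# residuals of each variable after regressing the covariate out.
# (The p-value's degrees of freedom are slightly off in this shortcut.)
def residuals(y, covariate):
    X = np.column_stack([np.ones(len(y)), covariate])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

pr, pp = stats.pearsonr(residuals(clicker_perf, admission_avg),
                        residuals(test_grade, admission_avg))
print(f"partial r = {pr:.2f}, p = {pp:.3g}")
```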

Thus, participants’ clicker performance was a strong predictor of their performance on the tests. This message is important to convey to students so that they understand that if they are performing below their desired level on the clicker questions, they are likely to underperform on the tests as well. They may therefore need to adjust their learning strategies to achieve their desired performance level. Of course, to have diagnostic value it is important that instructors use clicker questions that are congruent with the questions that are asked on the tests (Carnaghan & Webb, 2005).

To determine if students modify their learning strategies based on their clicker feedback, we examined the relationship between students’ clicker performance and their self-reported help-seeking behaviours in study 1. Overall, neither of the two help-seeking measures was significantly correlated with participants’ clicker performance (r’s = -.08 and -.01, ns). Participants’ clicker performance appears to have had no appreciable impact on whether or not they sought help.

Of course, although participants do not seem to be seeking help when they are getting the clicker questions wrong, that does not mean that they are not adjusting other learning strategies to increase their understanding of the material and their performance on the clicker questions. In study 2, we examined the relationship between students’ clicker performance and self-reported changes in their use of eight learning strategies (i.e., rehearsal, elaboration, organization, critical thinking, self-regulation, effort regulation, peer learning, help-seeking). Eighty-five percent of participants indicated that they had changed one or more of their learning strategies based on the clicker feedback, but these self-reported changes were not significantly related to participants’ actual clicker performance (r’s ranged from -.16 to .02, ns). Thus, although they reported changing their learning strategies based on their clicker performance, these reported changes were not related to their actual clicker performance.
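
By way of illustration, this kind of per-strategy screening could look something like the following; the column names and Likert-style change scores are hypothetical stand-ins for the survey measures.

```python
import numpy as np
import pandas as pd

# Hypothetical data: self-reported change scores (e.g., a 1-5 Likert scale)
# for the eight learning strategies, plus each student's clicker performance.
strategies = ["rehearsal", "elaboration", "organization", "critical_thinking",
              "self_regulation", "effort_regulation", "peer_learning",
              "help_seeking"]
rng = np.random.default_rng(1)
df = pd.DataFrame({s: rng.integers(1, 6, 200) for s in strategies})
df["clicker_perf"] = rng.uniform(0.3, 1.0, 200)

# Correlate each self-reported strategy change with actual clicker performance.
for s in strategies:
    r = df[s].corr(df["clicker_perf"])   # Pearson r by default
    print(f"{s:>18}: r = {r:+.2f}")
```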

Although the clicker performance–test performance relationship was made explicit to students at the beginning of the course in the year that study 2 was undertaken, there was still no relation between the changes students reported making in their learning strategies and their clicker performance (or their help-seeking in study 1). Svinicki (2004) suggests that postsecondary students tend to try harder rather than modify their learning strategies. Further, Pintrich and Zusho (2007) suggest that first-year students may not be aware of different learning strategies. It may be that students’ reported changes in learning strategy use reflect more an effort to try harder than the adoption of a new strategy or more effective use of an existing one. To help students succeed academically, it may be necessary to teach them a range of possible learning strategies and how to employ them effectively.

Hopefully this summary of our research has piqued your interest. To read the full article (i.e., Dawson, Meadows, & Haffie, 2010), including more detailed pedagogical conclusions and suggestions, please go to http://ir.lib.uwo.ca/cjsotl_rcacea/vol1/iss1/6

References

Addison, S., Wright, A., & Milner, R. (2009). Using clickers to improve student engagement and performance in an introductory biochemistry class. Biochemistry and Molecular Biology Education, 37, 84-91.

Barnett, J. (2006). Implementation of personal response units in very large lecture classes: Student perceptions. Australasian Journal of Educational Technology, 22, 474-494.

Carnaghan, C., & Webb, A. (2005). Investigating the effects of group response systems on learning outcomes and satisfaction in accounting education. Retrieved from http://www.arts.uwaterloo.ca/ACCT/research/publications/

Dawson, D. L., Meadows, K. N., & Haffie, T. (2010). The effect of performance feedback on student help-seeking and learning strategy use: Do clickers make a difference? The Canadian Journal for the Scholarship of Teaching and Learning, 1(1). Retrieved from http://ir.lib.uwo.ca/cjsotl_rcacea/vol1/iss1/6

Pintrich, P., & Zusho, A. (2007). Student motivation and self-regulated learning in the college classroom. In R. Perry & J. Smart (Eds.), The scholarship of teaching and learning in higher education: An evidence-based perspective. The Netherlands: Springer.

Svinicki, M. (2004). Learning and motivation in the postsecondary classroom. Bolton, MA: Anker.


Teaching Support Centre
Room 122, The D.B. Weldon Library
Western University
London, Ontario N6A 3K7
(519) 661-2111, ext. 84622
tsc@uwo.ca