There is a substantial body of educational research validating the efficacy of quizzes as a formative assessment tool, emphasising their utility in student motivation, engagement, feedback, self-monitoring and learning autonomy (Boud 2000, Kerka and Wonacott 2000, Tuttle 2001, Gaytan and McEwen 2007, Gikandi et al 2011, Kearns 2012). These qualities are particularly important in online learning environments, where students engage with all or part of the instruction without a teacher or lecturer present. For example, knowledge checks, a specific form of formative quiz, are often used in digitally delivered instruction to enhance conceptual understanding, encourage periodic self-evaluation and emphasise important material (Lewis 2010). Good-quality, timely feedback on student progress is acknowledged in the literature as a key component of effective assessment (Brown et al 1996, Nicol and Milligan 2006, Hattie 2012), and quizzes can provide this seamlessly in an online, learner-centred context. Much of the research into the use of online quizzes for formative assessment focuses on STEM subjects (Dufresne et al 2002, Roberts 2006, Salas-Morera et al 2012, Berrais 2015, Cohen and Sasson 2016), while their use in the humanities is much less well represented.
This presentation looks at the use of online knowledge-check quizzes as a formative assessment strategy and describes the findings of a small research study, conducted at Hibernia College, that used VLE learning analytics to examine the implementation of such quizzes in a blended-learning post-primary teaching programme. Using captured historic VLE data for a single cohort (n=126), patterns of use of formative Knowledge Check quizzes were analysed, with particular regard to completion and retakes. Three hypotheses were tested using appropriate correlation methods. Completion levels for Knowledge Checks were correlated with completion levels for other online tasks, to see whether an increase in task workload resulted in a decrease in quiz engagement. A second statistical test compared levels of quiz re-attempts with completion levels for other online tasks, to see whether different patterns of quiz attempts were linked to different levels of online engagement. A third statistical test examined the relationship, if any, between student gender and patterns of quiz attempts, to see whether gender might be a factor in quiz engagement. The findings suggested that the decrease in engagement with quizzes was not connected to increased task workload, and that there was a relationship between quiz re-attempts and higher module engagement.
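By way of illustration only: the study does not specify which statistical tests were used or how the VLE export was structured, so the sketch below is a hypothetical reconstruction of how the three hypotheses might be examined on a per-student activity summary. It assumes Spearman rank correlations for the two engagement comparisons and a chi-square test of independence for the gender comparison; all column names and the synthetic data are invented for the example.

```python
# Illustrative sketch only: column names, data and test choices are assumptions,
# not the study's documented method.
import pandas as pd
from scipy import stats

# Hypothetical per-student summary derived from a VLE activity export.
df = pd.DataFrame({
    "kc_completed":    [8, 5, 10, 3, 9, 7, 6, 10, 4, 8],        # Knowledge Checks completed
    "tasks_completed": [12, 9, 14, 6, 13, 11, 10, 15, 7, 12],   # other online tasks completed
    "retakes":         [2, 0, 4, 0, 3, 1, 1, 5, 0, 2],          # quiz re-attempts
    "gender":          ["F", "M", "F", "M", "F", "F", "M", "F", "M", "M"],
})

# H1: is quiz completion related to completion of other online tasks (workload/engagement)?
rho1, p1 = stats.spearmanr(df["kc_completed"], df["tasks_completed"])

# H2: are quiz re-attempts associated with wider module engagement?
rho2, p2 = stats.spearmanr(df["retakes"], df["tasks_completed"])

# H3: is gender associated with attempt pattern (re-taker vs single-attempt)?
df["retaker"] = df["retakes"] > 0
contingency = pd.crosstab(df["gender"], df["retaker"])
chi2, p3, dof, _ = stats.chi2_contingency(contingency)

print(f"H1 Spearman rho={rho1:.2f}, p={p1:.3f}")
print(f"H2 Spearman rho={rho2:.2f}, p={p2:.3f}")
print(f"H3 chi-square={chi2:.2f}, dof={dof}, p={p3:.3f}")
```

With a real cohort export, the same pattern applies: aggregate the activity log to one row per student, then run the chosen association test for each hypothesis; the specific tests would depend on the distribution and scale of the measures captured.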
The results of the analysis are presented and discussed in the context of what they might tell us about student engagement with online formative strategies in humanities-based subjects. Options are considered for improving the design of quizzes for enhanced engagement and formative value in this specific teaching and learning context; the potential and limitations of learning analytics in informing evidence-based improvements in digital learning design are also assessed.
Topics: Assessment and feedback in a digital age; Learning analytics: research and practice