Evaluating Predictive Models of Student Success: Closing the Methodological Gap

Joshua Patrick Gardner
Christopher Brooks

Abstract


Model evaluation – the process of making inferences about the performance of predictive models – is a critical component of predictive modeling research in learning analytics. In this work, we present an overview of the state-of-the-practice of model evaluation in learning analytics, which overwhelmingly uses only naïve methods for model evaluation or, less commonly, statistical tests which are not appropriate for predictive model evaluation. We then provide an overview of more appropriate methods for model evaluation, presenting both a frequentist and a preferred Bayesian method. Finally, we apply three methods – the naïve average commonly used in learning analytics, a frequentist null hypothesis significance test (NHST), and hierarchical Bayesian model evaluation – to a large set of MOOC data. We compare 96 different predictive modeling techniques, including different feature sets, statistical modeling algorithms, and tuning hyperparameters for each, using this case study to demonstrate the different experimental conclusions these evaluation techniques provide.


DOI: https://doi.org/10.18608/jla.2018.52.7