Evaluating Predictive Models of Student Success: Closing the Methodological Gap


  • Joshua Patrick Gardner, The University of Michigan – Ann Arbor
  • Christopher Brooks, The University of Michigan – Ann Arbor




Keywords: predictive modeling, methodology, model evaluation, Bayesian, machine learning, MOOCs


Model evaluation – the process of making inferences about the performance of predictive models – is a critical component of predictive modeling research in learning analytics. In this work, we present an overview of the state-of-the-practice of model evaluation in learning analytics, which overwhelmingly uses only naïve methods for model evaluation or, less commonly, statistical tests which are not appropriate for predictive model evaluation. We then provide an overview of more appropriate methods for model evaluation, presenting both frequentist and a preferred Bayesian method. Finally, we apply three methods – the naïve average commonly used in learning analytics, frequentist null hypothesis significance testing (NHST), and hierarchical Bayesian model evaluation – to a large set of MOOC data. We compare 96 different predictive modeling techniques, including different feature sets, statistical modeling algorithms, and tuning hyperparameters for each, using this case study to demonstrate the different experimental conclusions these evaluation techniques provide.




How to Cite

Gardner, J. P., & Brooks, C. (2018). Evaluating Predictive Models of Student Success: Closing the Methodological Gap. Journal of Learning Analytics, 5(2), 105-125. https://doi.org/10.18608/jla.2018.52.7



Special Section: Methodological Choices in Learning Analytics