Using Instruction-Embedded Formative Assessment to Predict State Summative Test Scores and Achievement Levels in Mathematics
Keywords: Intelligent Tutoring Systems, Formative Assessment, Mathematics Education, Accountability, Assessment, Predictive Modeling
If we wish to embed assessment for accountability within instruction, we need to better understand the relative contribution of different types of learner data to statistical models that predict scores and discrete achievement levels on assessments used for accountability purposes. The present work scales up and extends predictive models of math test scores and achievement levels from existing literature and specifies six categories of models that incorporate information about student prior knowledge, socio-demographics, and performance within the MATHia intelligent tutoring system. Linear regression, ordinal logistic regression, and random forest regression and classification models are learned within each category and generalized over a sample of 23,000+ learners in Grades 6, 7, and 8 over three academic years in Miami-Dade County Public Schools. After briefly exploring hierarchical models of these data, we discuss a variety of technical and practical applications, limitations, and open questions related to this work, especially concerning the potential use of instructional platforms like MATHia as a replacement for time-consuming standardized tests.
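The modeling setup described above can be illustrated with a minimal sketch. The code below is not the authors' pipeline: it fits two of the three named model families (linear regression and random forest regression for scores, plus a random forest classifier for discrete achievement levels) to synthetic stand-ins for the three feature categories; the feature definitions, coefficients, and level cut points are all hypothetical, and ordinal logistic regression (omitted here) would typically be fit with a dedicated ordinal model such as statsmodels' `OrderedModel`.

```python
# Illustrative sketch only: synthetic stand-ins for prior knowledge,
# socio-demographics, and within-tutor performance features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.normal(size=n),          # prior-knowledge proxy (e.g., a pretest score)
    rng.normal(size=n),          # within-tutor performance (e.g., mastery rate)
    rng.integers(0, 2, size=n),  # a socio-demographic indicator
])
# Hypothetical continuous test score and 5 ordinal achievement levels.
score = 300 + 25 * X[:, 0] + 15 * X[:, 1] - 5 * X[:, 2] \
    + rng.normal(scale=10, size=n)
level = np.digitize(score, bins=[280, 300, 320, 340])

X_tr, X_te, y_tr, y_te, lvl_tr, lvl_te = train_test_split(
    X, score, level, random_state=0)

# Continuous score prediction (regression).
lin = LinearRegression().fit(X_tr, y_tr)
rf_reg = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Discrete achievement-level prediction (classification).
rf_clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, lvl_tr)

print("linear R^2:", round(lin.score(X_te, y_te), 3))
print("forest R^2:", round(rf_reg.score(X_te, y_te), 3))
print("forest level accuracy:", round(rf_clf.score(X_te, lvl_te), 3))
```

In practice, such models would be compared within each of the six feature categories on held-out students, evaluating both continuous-score fit and achievement-level classification accuracy.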
Anozie, N. O., & Junker, B. W. (2006). Predicting end-of-year accountability assessment scores from monthly student records in an online tutoring system. AAAI Workshop on Educational Data Mining (AAAI-06), 17 July 2006, Boston, MA, USA. https://www.aaai.org/Papers/Workshops/2006/WS-06-05/WS06-05-001.pdf
Ayers, E., & Junker, B. W. (2008). IRT modeling of tutor performance to predict end-of-year exam scores. Educational and Psychological Measurement, 68(6), 972–987. http://dx.doi.org/10.1177/0013164408318758
Baker, R. S. J. d., Corbett, A. T., Roll, I., & Koedinger, K. R. (2008). Developing a generalizable detector of when students game the system. User Modeling and User-Adapted Interaction, 18, 287–314. http://dx.doi.org/10.1007/s11257-007-9045-6
Baker, R. S. J. d., Gowda, S. M., Wixon, M., Kalka, J., Wagner, A. Z., Salvi, A., Aleven, V., Kusbit, G. W., Ocumpaugh, J., & Rossi, L. (2012). Towards sensor-free affect detection in Cognitive Tutor Algebra. In K. Yacef, O. Zaiane, A. Hershkovitz, M. Yudelson, & J. Stamper (Eds.), Proceedings of the 5th International Conference on Educational Data Mining (EDM2012), 19–21 June 2012, Chania, Greece (pp. 126–133). International Educational Data Mining Society.
Beck, J. E., Jia, P., & Mostow, J. (2004). Automatically assessing oral reading fluency in a tutor that listens. Technology, Instruction, Cognition and Learning, 2(1–2), 61–81.
Binet, A. (1909). Les idées modernes sur les enfants. [Modern concepts concerning children.] Paris: Flammarion.
Black, P. J., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education: Principles, Policy and Practice, 5(1), 7–73. http://dx.doi.org/10.1080/0969595980050102
Bloom, B. S. (1968). Learning for mastery. Evaluation Comment, 1(2). https://eric.ed.gov/?id=ED053419
Breiman, L. (1996). Bagging predictors. Machine Learning, 24(2), 123–140. http://dx.doi.org/10.1023/A:1018054314350
Breiman, L. (2001). Random forests. Machine Learning, 45(1), 5–32. http://dx.doi.org/10.1023/A:1010933404324
Brookhart, S. M. (2009). Editorial: Special issue on the validity of formative and interim assessment. Educational Measurement: Issues and Practice, 28(3), 1–4.
Buckingham, B. R. (1921). Intelligence and its measurement: A symposium XIV. Journal of Educational Psychology, 12, 271–275. http://dx.doi.org/10.1037/h0066019
Campione, J. C., & Brown, A. L. (1985). Dynamic assessment: One approach and some initial data. Technical Report No. 361. Champaign, IL: University of Illinois at Urbana-Champaign, Center for the Study of Reading.
Carpenter, S. K., Cepeda, N. J., Rohrer, D., Kang, S. H. K., & Pashler, H. (2012). Using spacing to enhance diverse forms of learning: Review of recent research and implications for instruction. Educational Psychology Review, 24, 369–378. http://dx.doi.org/10.1007/s10648-012-9205-z
Corbett, A. T., & Anderson, J. R. (1994). Knowledge tracing: Modeling the acquisition of procedural knowledge. User Modeling and User-Adapted Interaction, 4(4), 253–278. http://dx.doi.org/10.1007/BF01099821
Danks, D., & London, A. J. (2017). Algorithmic bias in autonomous systems. Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI’17), 19–25 August 2017, Melbourne, Australia (pp. 4691–4697). Palo Alto, CA: AAAI Press.
Demšar, J. (2006). Statistical comparisons of classifiers over multiple data sets. Journal of Machine Learning Research, 7, 1–30.
Evans, C., & Lyons, S. (2017). Comparability in balanced assessment systems for state accountability. Educational Measurement: Issues and Practice, 36(3), 24–34. http://dx.doi.org/10.1111/emip.12152
Feng, M., Heffernan, N. T., & Koedinger, K. R. (2006). Predicting state test scores better with intelligent tutoring systems: Developing metrics to measure assistance required. In M. Ikeda, K. Ashley, & T.-W. Chan (Eds.), Intelligent Tutoring Systems, ITS 2006. Lecture Notes in Computer Science, vol. 4053. Berlin, Heidelberg: Springer.
Florida Department of Education. (2014). FCAT 2.0 and Florida EOC Assessments Achievement Levels. http://www.fldoe.org/core/fileparse.php/3/urlt/achlevel.pdf
Florida Department of Education. (2017). Florida Standards Assessment: 2016–17 FSA English Language Arts and Mathematics Fact Sheet. http://www.fldoe.org/core/fileparse.php/5663/urlt/ELA-MathFSAFS1617.pdf
Friedman, M. (1940). A comparison of alternative tests of significance for the problem of m rankings. Annals of Mathematical Statistics, 11(1), 86–92.
García, S., Fernández, A., Luengo, J., & Herrera, F. (2010). Advanced nonparametric tests for multiple comparisons in the design of experiments in computational intelligence and data mining: Experimental analysis of power. Information Sciences, 180, 2044–2064. https://dx.doi.org/10.1016/j.ins.2009.12.010
Gardner, J., & Brooks, C. (2018). Evaluating predictive models of student success: Closing the methodological gap. Journal of Learning Analytics, 5(2), 105–125. https://dx.doi.org/10.18608/jla.2018.52.7
Gettinger, M., & White, M. A. (1980). Evaluating curriculum fit with class ability. Journal of Educational Psychology, 72, 338–344. http://dx.doi.org/10.1037/0022-0663.72.3.338
Grigorenko, E. L., & Sternberg, R. J. (1998). Dynamic testing. Psychological Bulletin, 124(1), 75. http://dx.doi.org/10.1037/0033-2909.124.1.75
Harlen, W., & James, M. (1997). Assessment and learning: Differences and relationships between formative and summative assessment. Assessment in Education: Principles, Policy & Practice, 4(3), 365–379. http://dx.doi.org/10.1080/0969594970040304
Hart, R., Casserly, M., Uzzell, R., Palacios, M., Corcoran, A., & Spurgeon, A. (2015). Student testing in America’s great city schools: An inventory and preliminary analysis. Washington, DC: Council of Great City Schools.
Heritage, M. (2010). Formative assessment and next-generation assessment systems: Are we losing an opportunity? Washington, D.C.: Council of Chief State School Officers.
Hodges, J. L., & Lehmann, E. L. (1962). Rank methods for combination of independent experiments in analysis of variance. Annals of Mathematical Statistics, 33, 482–497.
Hoerl, A. E., & Kennard, R. W. (1970). Ridge regression: Biased estimation for nonorthogonal problems. Technometrics, 12(1), 55–67.
Joshi, A., Fancsali, S. E., Ritter, S., Nixon, T., & Berman, S. (2014). Generalizing and extending a predictive model for standardized test scores based on cognitive tutor interactions. In J. Stamper et al. (Eds.), Proceedings of the 7th International Conference on Educational Data Mining (EDM2014), 4–7 July 2014, London, UK (pp. 369–370). International Educational Data Mining Society. http://educationaldatamining.org/EDM2014/uploads/procs2014/posters/45_EDM-2014-Poster.pdf
Junker, B. W. (2006). Using on-line tutoring records to predict end-of-year exam scores: Experience with the ASSISTments project and MCAS 8th grade mathematics. In R. W. Lissitz (Ed.), Assessing and modeling cognitive development in school: Intellectual growth and standard setting. Maple Grove, MN: JAM Press.
Koedinger, K. R., Corbett, A. T., & Perfetti, C. (2012). The knowledge–learning–instruction framework: Bridging the science–practice chasm to enhance robust student learning. Cognitive Science, 36(5), 757–798. https://dx.doi.org/10.1111/j.1551-6709.2012.01245.x
Kruskal, W. H., & Wallis, W. A. (1952). Use of ranks in one-criterion variance analysis. Journal of the American Statistical Association, 47, 583–621.
Lazarín, M. (2014, October). Testing overload in America’s schools. Washington, DC: Center for American Progress. https://cdn.americanprogress.org/wp-content/uploads/2014/10/LazarinOvertestingReport.pdf
Lehman, B., Hebert, D., Jackson, G. T., & Grace, L. (2017). Affect and experience: Case studies in games and test-taking. Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA ’17), 6–11 May 2017, Denver, Colorado, USA (pp. 917–924). New York: ACM. http://doi.org/10.1145/3027063.3053341
Mislevy, R. J., Steinberg, L. S., & Almond, R. G. (2003). On the structure of educational assessments. Measurement: Interdisciplinary Research and Perspectives, 1(1), 3–62. http://dx.doi.org/10.1207/S15366359MEA0101_02
Moses, S. (2017, March 28). State testing starts today; Opt out CNY leader says changes are “smoke and mirrors.” https://www.syracuse.com/schools/index.ssf/2017/03/opt-out_movement_ny_teacher_union_supports_parents_right_to_refuse_state_tests.html
Nelson, H. (2013, July). Testing more, teaching less: What America’s obsession with student testing costs in money and lost instructional time. Washington, D.C.: American Federation of Teachers. http://www.aft.org/sites/default/files/news/testingmore2013.pdf
O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. New York: Broadway Books.
Pane, J. F., Griffin, B. A., McCaffrey, D. F., & Karam, R. (2014). Effectiveness of cognitive tutor algebra I at scale. Educational Evaluation and Policy Analysis, 36(2), 127–144. https://dx.doi.org/10.3102/0162373713507480
Pardos, Z. A., Baker, R. S., San Pedro, M., Gowda, S. M., & Gowda, S. M. (2014). Affective states and state tests: Investigating how affect and engagement during the school year predict end-of-year learning outcomes. Journal of Learning Analytics, 1(1), 107–128. https://doi.org/10.18608/jla.2014.11.6
Pardos, Z. A., Heffernan, N. T., Anderson, B., Heffernan, C. L., & Schools, W. P. (2010). Using fine-grained skill models to fit student performance with Bayesian networks. In C. Romero, S. Ventura, M. Pechenizkiy, & R. S. J. d. Baker (Eds.), Handbook of educational data mining (pp. 417–426). Boca Raton, FL: CRC Press.
Pashler, H., Cepeda, N. J., Wixted, J., & Rohrer, D. (2005). When does feedback facilitate learning of words? Journal of Experimental Psychology: Learning, Memory, and Cognition, 31, 3–8. http://doi.org/10.1037/0278-7393.31.1.3
PDK/Gallup. (2015). 47th annual PDK/Gallup poll of the public's attitudes toward the public schools: Testing doesn't measure up for Americans. Phi Delta Kappan, 97(1).
Perie, M., Marion, S., & Gong, B. (2009). Moving toward a comprehensive assessment system: A framework for considering interim assessments. Educational Measurement: Issues and Practice, 28(3), 5–13. https://dx.doi.org/10.1111/j.1745-3992.2009.00149.x
Raftery, A. E. (1995). Bayesian model selection in social research. Sociological Methodology, 25, 111–163.
Razzaq, L., Feng, M., Nuzzo-Jones, G., Heffernan, N. T., Koedinger, K. R., Junker, B., ... & Livak, T. (2005). Blending assessment and assisting. In C.-K. Looi, G. I. McCalla, B. Bredeweg, & J. Breuker (Eds.), Proceedings of the 12th International Conference on Artificial Intelligence in Education (AIED 2005), 18–22 July 2005, Amsterdam, The Netherlands (pp. 555–562). Amsterdam, The Netherlands: IOS Press.
Ritter, S., Anderson, J. R., Koedinger, K. R., & Corbett, A. (2007). Cognitive Tutor: Applied research in mathematics education. Psychonomic Bulletin & Review, 14(2), 249–255. https://dx.doi.org/10.3758/BF03194060
Ritter, S., Joshi, A., Fancsali, S. E., & Nixon, T. (2013). Predicting standardized test scores from Cognitive Tutor interactions. In S. K. DʼMello et al. (Eds.), Proceedings of the 6th International Conference on Educational Data Mining (EDM2013), 6–9 July 2013, Memphis, TN, USA (pp. 169–176). International Educational Data Mining Society/Springer.
Roediger, H. L., & Karpicke, J. D. (2006). The power of testing memory: Basic research and implications for educational practice. Perspectives on Psychological Science, 1, 181–210. https://dx.doi.org/10.1111/j.1745-6916.2006.00012.x
Schwarz, G. (1978). Estimating the dimension of a model. The Annals of Statistics, 6(2), 461–464. http://dx.doi.org/10.1214/aos/1176344136
Shute, V. J., & Ke, F. (2012). Games, learning, and assessment. In D. Ifenthaler, D. Eseryel, & X. Ge (Eds.), Assessment in game-based learning. New York: Springer. http://dx.doi.org/10.1007/978-1-4614-3546-4
Shute, V. J., Levy, R., Baker, R., Zapata, D., & Beck, J. (2009). Assessment and learning in intelligent educational systems: A peek into the future. In S. D. Craig & D. Dicheva (Eds.), Proceedings of the 14th International Conference on Artificial Intelligence in Education (AIED ʼ09), Vol. 3: Intelligent Educational Games, 6–10 July 2009, Brighton, UK (pp. 99–108). Amsterdam, The Netherlands: IOS Press.
Shute, V. J., & Moore, G. R. (2017). Consistency and validity in game-based stealth assessment. In H. Jiao & R. W. Lissitz (Eds.), Technology enhanced innovative assessment: Development, modeling, and scoring from an interdisciplinary perspective (pp. 31–51). Charlotte, NC: Information Age Publishing.
Sirin, S. (2005). Socioeconomic status and academic achievement: A meta-analytic review of research. Review of Educational Research, 75(3), 417–453. http://www.jstor.org/stable/3515987
Slade, S., & Prinsloo, P. (2013). Learning analytics: Ethical issues and dilemmas. American Behavioral Scientist, 57(10), 1509–1528. https://dx.doi.org/10.1177/0002764213479366
Snow, R. E., & Lohman, D. F. (1989). Implications of cognitive psychology for educational measurement. In R. L. Linn (Ed.), Educational Measurement (3rd ed., pp. 263–331). New York: American Council on Education, Macmillan.
State of Minnesota. (2017). Standardized student testing: 2017 evaluation report. Office of the Legislative Auditor. http://www.auditor.leg.state.mn.us/ped/pedrep/studenttesting.pdf
Tagami, T. (2018, February 15). Alternative testing bill passes Georgia Senate. Politically Georgia. https://politics.myajc.com/news/state--regional-education/alternative-testing-bill-passes-georgia-senate/AANXBjII7whHivfY2QuCKI/
Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society Series B, 58(1), 267–288. https://dx.doi.org/10.1111/j.2517-6161.1996.tb02080.x
US Department of Education. (2016). Table 215.30: Enrollment, poverty, and federal funds for the 120 largest school districts, by enrollment size in 2014 (selected years, 2013–14 through 2016). Digest of Education Statistics. National Center for Education Statistics. https://nces.ed.gov/programs/digest/d16/tables/dt16_215.30.asp
Venables, W. N., & Ripley, B. D. (2002). Modern applied statistics with S (4th ed.). New York: Springer.
Walker, T. (2018, January 4). Educators strike big blow to overuse of standardized testing in 2017. neaToday: News and Features from the National Education Association. http://neatoday.org/2018/01/04/educators-strike-big-blow-to-the-overuse-of-standardized-testing/
Wiliam, D. (2011). Embedded formative assessment. Bloomington, IN: Solution Tree Press.
Zerr, C. L., Berg, J. J., Nelson, S. M., Fishell, A. K., Savalia, N. K., & McDermott, K. B. (2018). Learning efficiency: Identifying individual differences in learning rate and retention in healthy adults. Psychological Science, 29(9). http://dx.doi.org/10.1177/0956797618772540
Copyright (c) 2019 Journal of Learning Analytics
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.