Topic Dependency Models: Graph-Based Visual Analytics for Communicating Assessment Data
Educational environments continue to evolve rapidly to address the needs of diverse, growing student populations while embracing advances in pedagogy and technology. In this changing landscape, ensuring consistency among assessments for different offerings of a course (within or across terms), providing meaningful feedback on student achievement, and tracking student progress over time are all challenging tasks, particularly at scale. Here, a collection of visual Topic Dependency Models (TDMs) is proposed to help address these challenges. The collection uses statistical models to determine and visualize student achievement on one or more topics and their dependencies, both in a course-level reference TDM (e.g., CS 100) and in class-level TDMs built from assessment data (e.g., students in CS 100 Term 1 2016 Section 001), each at a single point in time (static) and over time (dynamic). All TDMs in the collection share a common two-weighted graph foundation. Exemplar algorithms are presented for creating the course reference TDM and selected class (static and dynamic) TDMs; the algorithms are illustrated with a common symbolic example. Case studies applying the TDM collection to datasets from two university courses are presented; these studies use the open-source, proof-of-concept tool under development.
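The two-weighted graph foundation mentioned above can be sketched as a small data structure in which each node (a topic) carries its own achievement weight and each edge (a dependency between two topics) carries a second, independent weight. This is a minimal illustrative sketch, not the paper's implementation; the class name, field names, and the [0, 1] achievement scale are assumptions for illustration only.

```python
from dataclasses import dataclass, field

@dataclass
class TopicDependencyModel:
    """Hypothetical sketch of a two-weighted graph for a TDM:
    one weight per node (topic) and one weight per edge (dependency)."""
    node_weights: dict = field(default_factory=dict)  # topic -> achievement, assumed in [0, 1]
    edge_weights: dict = field(default_factory=dict)  # (topic_a, topic_b) -> achievement

    def add_topic(self, topic, achievement):
        # Node weight: aggregate achievement on questions covering this topic.
        self.node_weights[topic] = achievement

    def add_dependency(self, topic_a, topic_b, achievement):
        # Edge weight: achievement on questions covering both topics.
        # Store edges with a canonical ordering so (a, b) and (b, a) coincide.
        self.edge_weights[tuple(sorted((topic_a, topic_b)))] = achievement

# Example: a tiny class-level TDM with two topics and one dependency.
tdm = TopicDependencyModel()
tdm.add_topic("recursion", 0.72)
tdm.add_topic("linked lists", 0.65)
tdm.add_dependency("recursion", "linked lists", 0.58)
```

Keeping node and edge weights as separate mappings makes the "two-weighted" property explicit: a topic's standalone achievement can be read and updated independently of achievement on its dependencies.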
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.