A New Era in Multimodal Learning Analytics

Twelve Core Commitments to Ground and Grow MMLA

DOI:

https://doi.org/10.18608/jla.2021.7361

Keywords:

ethics, data collection, data mining, artificial intelligence, data dissemination, multimodal, sensor data, research paper

Abstract

Multimodal learning analytics (MMLA) has increasingly been a topic of discussion within the learning analytics community. The Society for Learning Analytics Research is home to the CrossMMLA Special Interest Group and regularly hosts workshops on MMLA during the Learning Analytics Summer Institute (LASI). As MMLA grows in use, it is important to articulate a set of core commitments that can guide both MMLA researchers and the broader learning analytics community. In this paper, we articulate 12 commitments that we believe are critical for creating effective MMLA innovations. These commitments are deeply rooted in the origins of MMLA and reflect the ways that MMLA has evolved over the past 10 years. We organize the 12 commitments in terms of (i) data collection, (ii) analysis and inference, and (iii) feedback and data dissemination, and argue why they are important for conducting ethical, high-quality MMLA research. Furthermore, in using the language of commitments, we emphasize opportunities for MMLA research to align with established qualitative research methodologies and important concerns from critical studies.

Published

2021-11-16

How to Cite

Worsley, M., Martinez-Maldonado, R., & D'Angelo, C. (2021). A new era in multimodal learning analytics: Twelve core commitments to ground and grow MMLA. Journal of Learning Analytics, 8(3), 10–27. https://doi.org/10.18608/jla.2021.7361