RiPPLE: A Crowdsourced Adaptive Platform for Recommendation of Learning Activities




Adaptive learning, Crowdsourcing, Recommender Systems


This paper presents RiPPLE (Recommendation in Personalised Peer-Learning Environments), a platform that recommends personalized learning activities to students, based on their knowledge state, from a pool of crowdsourced learning activities generated by educators and the students themselves. RiPPLE integrates insights from crowdsourcing, the learning sciences, and adaptive learning, aiming to narrow the gap between these large bodies of research while providing a practical platform-based implementation that instructors can easily use in their courses. This paper provides a design overview of RiPPLE, which can be employed as a standalone tool or embedded into any learning management system (LMS) or online platform that supports the Learning Tools Interoperability (LTI) standard. The platform was evaluated in a pilot in an introductory course with 453 students at The University of Queensland. Initial results suggest that use of RiPPLE led to measurable learning gains and that students perceived the platform as beneficial to their learning.


Abdi, S., Khosravi, H., Sadiq, S., & Gašević, D. (2019). A multivariate Elo-based learner model for adaptive educational systems. In C. F. Lynch, A. Merceron, M. Desmarais, & R. Nkambou (Eds.), Educational Data Mining 2019: 12th International Conference on Educational Data Mining (EDM2019), 2–5 July 2019, Montreal, Canada (pp. 228–233). International Educational Data Mining Society.

Anderson, J. R., Boyle, C. F., & Reiser, B. J. (1985). Intelligent tutoring systems. Science, 228(4698), 456–462.

Anderson, J. R., Corbett, A. T., Koedinger, K. R., & Pelletier, R. (1995). Cognitive tutors: Lessons learned. Journal of the Learning Sciences, 4(2), 167–207.

Austin, P. C. (2011). An introduction to propensity score methods for reducing the effects of confounding in observational studies. Multivariate Behavioral Research, 46(3), 399–424.

Banks, J., Cochran-Smith, M., Moll, L., Richert, A., Zeichner, K., LePage, P., . . . McDonald, M. (2005). Teaching diverse learners. In L. Darling-Hammond & J. Bransford (Eds.), Preparing Teachers for a Changing World: What Teachers Should Learn and Be Able to Do (pp. 232–274). San Francisco, California, USA: Jossey-Bass.

Barak, M., & Rafaeli, S. (2004). On-line question-posing and peer-assessment as means for web-based knowledge sharing in learning. International Journal of Human-Computer Studies, 61(1), 84–103.

Bates, S. P., Galloway, R. K., & McBride, K. L. (2012). Student-generated content: Using PeerWise to enhance engagement and outcomes in introductory physics courses. AIP Conference Proceedings, 1413(1), 123–126.

Bates, S. P., Galloway, R. K., Riise, J., & Homer, D. (2014). Assessing the quality of a student-generated question repository. Physical Review Special Topics — Physics Education Research, 10(2), 020105.

Biggs, J. (2012). What the student does: Teaching for enhanced learning. Higher Education Research & Development, 31(1), 39–55.

Bloom, B. S., Englehart, M., Furst, E. J., Hill, W. H., & Krathwohl, D. R. (1956). Taxonomy of Educational Objectives: Handbook I, Cognitive Domain. New York, USA: David McKay.

Bodily, R., Kay, J., Aleven, V., Jivet, I., Davis, D., Xhakaj, F., & Verbert, K. (2018). Open learner models and learning analytics dashboards: A systematic review. In Proceedings of the 8th International Conference on Learning Analytics and Knowledge (LAK ’18), 5–9 March 2018, Sydney, Australia (pp. 41–50). New York, USA: ACM.

Bovill, C. (2013). Students and staff co-creating curricula: A new trend or an old idea we never got around to implementing? In C. Rust (Ed.), Improving Student Learning through Research and Scholarship: 20 Years of ISL (pp. 96–108). Oxford, UK: Oxford Centre for Staff and Learning Development.

Brusilovsky, P. (2012). Adaptive hypermedia for education and training. In P. J. Durlach & A. M. Lesgold (Eds.), Adaptive Technologies for Training and Education (pp. 46–66). Cambridge University Press.

Brusilovsky, P., & Millán, E. (2007). User models for adaptive hypermedia and adaptive educational systems. In P. Brusilovsky, A. Kobsa, & W. Nejdl (Eds.), The Adaptive Web: Methods and Strategies of Web Personalization. Lecture Notes in Computer Science (Vol. 4321, pp. 3–53). Springer Berlin Heidelberg.

Bull, S., Ginon, B., Boscolo, C., & Johnson, M. (2016). Introduction of learning visualisations and metacognitive support in a persuadable open learner model. In Proceedings of the Sixth International Conference on Learning Analytics & Knowledge (LAK ’16), 25–29 April 2016, Edinburgh, Scotland (pp. 30–39). New York, USA: ACM.

Bull, S., & Kay, J. (2010). Open learner models. In R. Nkambou, J. Bourdeau, & R. Mizoguchi (Eds.), Advances in Intelligent Tutoring Systems (Vol. 308, pp. 301–322). Springer Berlin Heidelberg.

Chambers, J. M., Cleveland, W. S., Kleiner, B., & Tukey, P. A. (1983). Graphical Methods for Data Analysis. Boca Raton, Florida, USA: CRC Press.

Cohen, J. (1992). A power primer. Psychological Bulletin, 112(1), 155–159.

Collins, A., & Halverson, R. (2018). Rethinking Education in the Age of Technology: The Digital Revolution and Schooling in America. New York, USA: Teachers College Press.

Cooper, K., & Khosravi, H. (2018). Graph-based visual topic dependency models: Supporting assessment design and delivery at scale. In Proceedings of the 8th International Conference on Learning Analytics and Knowledge (LAK ’18), 5–9 March 2018, Sydney, Australia (pp. 11–15). New York, USA: ACM.

Denny, P., Hamer, J., & Luxton-Reilly, A. (2009). Students sharing and evaluating MCQs in a large first year engineering course. In 20th Annual Conference for the Australasian Association for Engineering Education (AAEE ’09), 6–9 December 2009, The University of Adelaide, Adelaide, Australia (pp. 575–580). Barton, Australia: Engineers Australia.

Denny, P., Hamer, J., Luxton-Reilly, A., & Purchase, H. (2008). PeerWise: Students sharing their multiple choice questions. In Proceedings of the Fourth International Workshop on Computing Education Research (ICER ’08), 6–7 September 2008, Sydney, Australia (pp. 51–58). New York, USA: ACM.

Draper, S. W. (2009). Catalytic assessment: Understanding how MCQs and EVS can foster deep learning. British Journal of Educational Technology, 40(2), 285–293.

Essa, A. (2016). A possible future for next generation adaptive learning systems. Smart Learning Environments, 3(1), 16.

Falmagne, J.-C., Cosyn, E., Doignon, J.-P., & Thiéry, N. (2006). The assessment of knowledge, in theory and in practice. In R. Missaoui & J. Schmidt (Eds.), Formal Concept Analysis. Lecture Notes in Computer Science (Vol. 3874, pp. 61–79). Springer Berlin Heidelberg.

Galloway, K. W., & Burns, S. (2015). Doing it for themselves: Students creating a high quality peer-learning environment. Chemistry Education Research and Practice, 16(1), 82–92.

Heffernan, N. T., & Heffernan, C. L. (2014). The ASSISTments ecosystem: Building a platform that brings scientists and teachers together for minimally invasive research on human learning and teaching. International Journal of Artificial Intelligence in Education, 24(4), 470–497.

Heffernan, N. T., Ostrow, K. S., Kelly, K., Selent, D., Van Inwegen, E. G., Xiong, X., & Williams, J. J. (2016). The future of adaptive learning: Does the crowd hold the key? International Journal of Artificial Intelligence in Education, 26(2), 615–644.

Jose, F. (2016). White Paper: Knewton Adaptive Learning: Building the World’s Most Powerful Recommendation Engine for Education.

Karataev, E., & Zadorozhny, V. (2017). Adaptive social learning based on crowdsourcing. IEEE Transactions on Learning Technologies, 10(2), 128–139.

Khosravi, H., & Cooper, K. (2018). Topic dependency models: Graph-based visual analytics for communicating assessment data. Journal of Learning Analytics, 5(3), 136–153.

Khosravi, H., Cooper, K., & Kitto, K. (2017). RiPLE: Recommendation in peer-learning environments based on knowledge gaps and interests. Journal of Educational Data Mining, 9(1), 42–67.

King, A. (1992). Facilitating elaborative learning through guided student-generated questioning. Educational Psychologist, 27(1), 111–126.

Kitto, K., Lupton, M., Davis, K., & Waters, Z. (2017). Designing for student-facing learning analytics. Australasian Journal of Educational Technology, 33(5), 152–168.

Ma, W., Adesope, O. O., Nesbit, J. C., & Liu, Q. (2014). Intelligent tutoring systems and learning outcomes: A meta-analysis. Journal of Educational Psychology, 106(4), 901–918.

Matthews, K. E. (2017). Five propositions for genuine students as partners practice. International Journal for Students as Partners,1(2).

May, M., George, S., & Prévôt, P. (2011). TrAVis to enhance online tutoring and learning activities: Real time visualization of students tracking data. Interactive Technology and Smart Education, 8(1), 52–69.

Meer, N., & Chapman, A. (2014). Co-creation of marking criteria: Students as partners in the assessment process. Business and Management Education in HE, 1–15.

Mojarad, S., Essa, A., Mojarad, S., & Baker, R. S. (2018). Studying adaptive learning efficacy using propensity score matching. In A. Pardo et al. (Eds.), Companion Proceedings of the 8th International Conference on Learning Analytics and Knowledge (LAK ’18), 5–9 March 2018, Sydney, Australia. Society for Learning Analytics Research.

Morrison, B. B., & DiSalvo, B. (2014). Khan Academy gamifies computer science. In Proceedings of the 45th ACM Technical Symposium on Computer Science Education (SIGCSE ’14), 5–8 March 2014, Atlanta, Georgia, USA (pp. 39–44). New York,USA:ACM.

Mulryan-Kyne, C. (2010). Teaching large classes at college and university level: Challenges and opportunities. Teaching in Higher Education,15(2),175–185.

Oxman, S., Wong, W., DV X Innovations, & DeVry Education Group. (2014). White Paper: Adaptive Learning Systems. Integrated Education Solutions.

Park, O.-C., & Lee, J. (2004). Adaptive instructional systems. In D. Jonassen (Ed.), Handbook of Research on Educational Communications and Technology (2nd ed., pp. 651–684). Mahwah, New Jersey, USA: Lawrence Erlbaum Associates Publishers.

Pelánek, R. (2016). Applications of the Elo rating system in adaptive educational systems. Computers & Education, 98(C), 169–179.

Pressley, M., Wood, E., Woloshyn, V. E., Martin, V., King, A., & Menke, D. (1992). Encouraging mindful use of prior knowledge: Attempting to construct explanatory answers facilitates learning. Educational Psychologist, 27(1), 91–109.

Purchase, H., Hamer, J., Denny, P., & Luxton-Reilly, A. (2010). The quality of a PeerWise MCQ repository. In Proceedings of the Twelfth Australasian Conference on Computing Education (ACE ’10), 1 January 2010, Brisbane, Australia (Vol. 103, pp. 137–146). Darlinghurst, Australia: Australian Computer Society, Inc.

Ritter, S., Anderson, J. R., Koedinger, K. R., & Corbett, A. (2007). Cognitive Tutor: Applied research in mathematics education. Psychonomic Bulletin & Review, 14(2), 249–255.

Ritter, S., Carlson, R., Sandbothe, M., & Fancsali, S. E. (2015). Carnegie Learning’s adaptive learning products. In O. Santos et al. (Eds.), Educational Data Mining 2015: 8th International Conference on Educational Data Mining (EDM2015), 26–29 June 2015, Madrid, Spain.

Rosenbaum, P. R., & Rubin, D. B. (1983). The central role of the propensity score in observational studies for causal effects. Biometrika, 70(1), 41–55.

Smart Sparrow. (2016). Smart Sparrow — Adaptive eLearning Platform.

Solemon, B., Ariffin, I., Din, M. M., & Anwar, R. M. (2013). A review of the uses of crowdsourcing in higher education. International Journal of Asian Social Science, 3(9), 2066–2073.

Sullivan, G. M. (2011). Getting off the “gold standard”: Randomized controlled trials and education research. Journal of Graduate Medical Education, 3(3), 285–289.

Tackett, S., Raymond, M., Desai, R., Haist, S. A., Morales, A., Gaglani, S., & Clyman, S. G. (2018). Crowdsourcing for assessment items to support adaptive learning. Medical Teacher, 40(8), 838–841.

Tai, J., Ajjawi, R., Boud, D., Dawson, P., & Panadero, E. (2018). Developing evaluative judgement: Enabling students to make decisions about the quality of work. Higher Education, 76(3), 467–481.

VanLehn, K. (2011). The relative effectiveness of human tutoring, intelligent tutoring systems, and other tutoring systems. Educational Psychologist, 46(4), 197–221.

Walsh, J. L., Harris, B. H., Denny, P., & Smith, P. (2018). Formative student-authored question bank: Perceptions, question quality and association with summative performance. Postgraduate Medical Journal, 94(1108), 97–103.

Whitehill, J., Aguerrebere, C., & Hylak, B. (2019). Do learners know what’s good for them? Crowdsourcing subjective ratings of OERs to predict learning gains. In C. F. Lynch, A. Merceron, M. Desmarais, & R. Nkambou (Eds.), Educational Data Mining 2019: 12th International Conference on Educational Data Mining (EDM2019), 2–5 July 2019, Montreal, Canada (pp. 462–467). International Educational Data Mining Society.

Williams, J. J., Kim, J., Rafferty, A., Maldonado, S., Gajos, K. Z., Lasecki, W. S., & Heffernan, N. (2016). Axis: Generating explanations at scale with learnersourcing and machine learning. In Proceedings of the Third (2016) ACM Conference on Learning @ Scale (L@S 2016), 25–26 April 2016, Edinburgh, Scotland (pp. 379–388). New York, USA: ACM.

Yilmaz, B. (2017). Effects of Adaptive Learning Technologies on Math Achievement: A Quantitative Study of ALEKS Math Software (Unpublished doctoral dissertation). University of Missouri-Kansas City.




How to Cite

Khosravi, H., Kitto, K., & Williams, J. J. (2019). RiPPLE: A Crowdsourced Adaptive Platform for Recommendation of Learning Activities. Journal of Learning Analytics, 6(3), 91–105.
