Over the past decade, high-frequency sensors (such as eye-trackers, motion sensors, and wearables) have become affordable and reliable, opening new doors for capturing students' behavior. Educational researchers can now collect significantly larger datasets: the field of Multimodal Learning Analytics (MMLA) exploits this development, namely that sensors and data-mining techniques have both reached a level of maturity that allows researchers to tackle new research questions and develop new educational interventions.
In this project, we study markers of productive collaboration. Joint Visual Attention (JVA), for example, has been studied for decades by developmental psychologists and educational researchers. The extent to which people synchronize their visual attention has been shown to be a robust predictor of collaboration quality, and sometimes of learning gains in groups of students. This project leverages eye-tracking technology to study JVA more rigorously across a variety of contexts.
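One common way to operationalize JVA from dual eye-tracking data is to count the proportion of moments when the two partners' gaze points land close together (within some spatial threshold), allowing for a small temporal lag between them. The sketch below illustrates this idea; it is a minimal, hypothetical implementation, not the project's actual pipeline, and the function name, thresholds, and assumption that both gaze streams are mapped into a shared coordinate frame are all illustrative.

```python
import numpy as np

def jva_proportion(gaze_a, gaze_b, dist_thresh=100.0, max_lag=5):
    """Fraction of samples where partner B's gaze falls within
    dist_thresh (e.g., pixels) of partner A's gaze, allowing up to
    max_lag samples of temporal offset between the two streams.
    Assumes both streams are in the same coordinate frame."""
    gaze_a = np.asarray(gaze_a, dtype=float)
    gaze_b = np.asarray(gaze_b, dtype=float)
    n = min(len(gaze_a), len(gaze_b))
    joint = 0
    for i in range(n):
        # window of partner B's samples around time i
        lo, hi = max(0, i - max_lag), min(n, i + max_lag + 1)
        dists = np.linalg.norm(gaze_b[lo:hi] - gaze_a[i], axis=1)
        if np.any(dists <= dist_thresh):
            joint += 1
    return joint / n

# Two short synthetic gaze streams: aligned except for one sample.
a = [(100, 100), (110, 105), (300, 300), (120, 110)]
b = [(105, 102), (112, 108), (900, 900), (118, 112)]
print(jva_proportion(a, b))  # → 0.75
```

In practice, the spatial threshold and lag window are tuned to the task and display geometry, and mobile eye-tracking adds the extra step of mapping each person's egocentric gaze into a common scene before any such comparison.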
Bertrand Schneider, Tancredi Castellano Pucci, Jeff Balkanski, Joseph Reilly
Schneider, B., Tancredi, C. P., Balkanski, J., & Reilly, J. (submitted). Unpacking Collaborative Processes of Hands-on Learning Activities Using Dual Mobile Eye-Tracking. ACM Conference on Computer Supported Cooperative Work.