EZ-MMLA toolkit

While Multimodal Learning Analytics (MMLA) is becoming a popular methodology in the LAK community, most educational researchers still rely on traditional instruments for capturing learning processes (e.g., click-streams, log data, self-reports, qualitative observations). MMLA has the potential to complement and enrich these traditional measures of learning by providing high-frequency data on learners’ behavior, cognition, and affect. However, there is currently no easy-to-use toolkit for recording multimodal data streams: existing approaches rely on physical sensors and custom-written code for accessing sensor data. To address this gap, we designed the EZ-MMLA toolkit. The toolkit is implemented as a website that provides easy access to the latest machine learning algorithms for collecting a variety of data streams from webcams: attention (eye-tracking), physiological states (heart rate), body posture (skeletal data), hand gestures, emotions (from facial expressions and speech), and lower-level computer vision algorithms (e.g., fiducial and color tracking). It runs in any modern browser and requires no special hardware or programming experience. The project is under development and can be accessed at mmla.gse.harvard.edu.
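For illustration, the sketch below shows the kind of in-browser, webcam-based tracking such a toolkit can expose, using TensorFlow.js's pose-detection package with the MoveNet model as a stand-in. The specific libraries, model, and function names here are assumptions for the sketch, not necessarily what EZ-MMLA uses under the hood.

```typescript
// Illustrative sketch only: client-side skeletal tracking from a webcam.
// Assumes TensorFlow.js pose-detection (MoveNet) as a stand-in for the
// kind of model a browser-based MMLA tool can wrap.
import * as tf from '@tensorflow/tfjs-core';
import '@tensorflow/tfjs-backend-webgl';
import * as poseDetection from '@tensorflow-models/pose-detection';

async function trackPoseFromWebcam(): Promise<void> {
  // Use the WebGL backend so inference runs on the GPU in the browser.
  await tf.setBackend('webgl');
  await tf.ready();

  // Request the webcam and attach the stream to a <video> element.
  const video = document.createElement('video');
  video.srcObject = await navigator.mediaDevices.getUserMedia({ video: true });
  await video.play();

  // Load a lightweight pose model that runs entirely in the browser.
  const detector = await poseDetection.createDetector(
    poseDetection.SupportedModels.MoveNet
  );

  // Estimate keypoints on every animation frame; a real tool would render
  // them or export them as a timestamped data stream (e.g., CSV).
  const loop = async () => {
    const poses = await detector.estimatePoses(video);
    if (poses.length > 0) {
      console.log(
        poses[0].keypoints.map(k => ({ name: k.name, x: k.x, y: k.y, score: k.score }))
      );
    }
    requestAnimationFrame(loop);
  };
  requestAnimationFrame(loop);
}

trackPoseFromWebcam();
```

In this setup everything runs client-side, so raw video never needs to leave the participant's machine; only the extracted keypoints are logged or exported as a data stream.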


Figure 1: Some capabilities of the website, from left to right: emotion detection, skeletal tracking, hand tracking, and heart rate estimation.


To learn more

Hassan, J., Leong, J., & Schneider, B. (accepted). Multimodal Data Collection Made Easy: The EZ-MMLA Toolkit. ACM International Conference on Learning Analytics and Knowledge (LAK ’21).