Context: One of the most compelling aspects of Fabrication Labs / Makerspaces is that they integrate formal concepts in STEAM (Science, Technology, Engineering, Arts, and Mathematics) with 21st-century skills such as collaboration, critical thinking, creativity, and communication. Students learn complex technical skills by working in teams, creating their own personalized artifacts, and coming up with creative solutions to problems in their lives and communities. These learning experiences can have a powerful impact on students' ability to adapt to future challenges and to develop essential skills that are not well taught in traditional curricula.
Problem: Makerspaces, however, still have much room to grow in accommodating the needs of different students, especially non-technical and underrepresented participants. This proposal aims to address this issue in fabrication labs by detecting and preventing challenges that students commonly experience (e.g., moments of frustration, feelings of isolation, perfectionism, poor time management) using a semi-automated approach.
Approach: We propose to personalize learning in makerspaces and automate parts of the process using Multi-Modal Learning Analytics (MMLA). The goal of this research is to design a system that detects student needs and provides personalized feedback to both students and teachers.
Figure 1: Left side: the client associated with each Kinect sensor records a screenshot of its view of the room, along with the body positions of each participant in the space. This information is sent to the server (right side). The server merges the different streams of information, removes duplicate detections, and performs face recognition.
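The server-side merging step described in Figure 1 can be sketched as follows. This is a minimal illustration, not the actual implementation: the proposal does not specify the data format, so the `BodyRecord` structure, the shared room coordinate frame, and the distance threshold for treating two detections as duplicates are all assumptions made for this example.

```python
from dataclasses import dataclass
from math import dist

# Hypothetical record: one tracked body reported by one Kinect client,
# with its position already mapped into a shared room coordinate frame.
@dataclass
class BodyRecord:
    client_id: str
    position: tuple  # (x, y) in meters, room coordinates

def merge_streams(streams, threshold=0.5):
    """Merge body records from several clients, dropping near-duplicate
    detections of the same person seen by overlapping sensors.

    `threshold` (meters) is an assumed tolerance: two records closer
    than this are treated as the same participant.
    """
    merged = []
    for records in streams:
        for rec in records:
            # Keep a record only if no already-kept record is nearby.
            if all(dist(rec.position, kept.position) > threshold
                   for kept in merged):
                merged.append(rec)
    return merged

# Two sensors see the same person near (1.0, 2.0): the duplicate is dropped,
# leaving two distinct participants.
kinect_a = [BodyRecord("kinect_a", (1.0, 2.0)),
            BodyRecord("kinect_a", (4.0, 1.0))]
kinect_b = [BodyRecord("kinect_b", (1.1, 2.05))]
people = merge_streams([kinect_a, kinect_b])
```

In a real deployment the duplicate check would likely compare full skeleton joint sets and timestamps rather than a single 2D point, but the structure of the merge is the same.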
Maria McLaughlin, Ryan Jiang, Jhenna Voorhis, Kahyin Cheong, William Yao, Lucia Ramirez, Iulian Radu, Bertrand Schneider