Third Multimodal Learning Analytics Workshop and Grand Challenges (MLA 2014)
Istanbul - Turkey, 12 November 2014
Hosted by the ACM International Conference on Multimodal Interaction (ICMI 2014)
Istanbul - Turkey, 12-16 November 2014
Advances in Learning Analytics are expected to contribute new empirical findings, theories, methods, and metrics for understanding how students learn. They could also contribute to improving pedagogical support for students’ learning through the assessment of new digital tools, teaching strategies, and curricula.
The most recent direction within this area is Multimodal Learning Analytics, which emphasises the analysis of natural rich modalities of communication during situated learning activities. This includes students’ speech, writing, and nonverbal interaction (e.g., gestures, facial expressions, gaze, etc.). A primary objective of multimodal learning analytics is to analyse coherent signal, activity, and lexical patterns to understand the learning process and provide feedback to learners in order to improve it. The Third International Workshop on Multimodal Learning Analytics will bring together international researchers in multimodal interaction and systems, cognitive and learning sciences, educational technologies, and related areas to advance research on multimodal learning analytics.
Following the First International Workshop on Multimodal Learning Analytics in Santa Monica in 2012 and the ICMI Grand Challenge on Multimodal Learning Analytics in Sydney in 2013, this third workshop will also incorporate two data-driven grand challenges. It will be held at ICMI 2014 in Istanbul, Turkey, on November 12th, 2014. This year, the workshop has been expanded to include a session for hands-on training in multimodal learning analytics techniques and two dataset-based challenges. Students and postdoctoral researchers are especially welcome to participate.
* Important Dates
March 24, 2014: Both datasets are made available to interested participants
July 1, 2014: Deadline for workshop papers
August 1, 2014: Deadline for grand challenge papers
August 21, 2014: Notification of acceptance
September 15, 2014: Camera-ready papers due
November 12, 2014: Workshop event
The workshop will focus on the presentation of multimodal signal analysis techniques that could be applied in Multimodal Learning Analytics. Instead of research results, which are usually presented at the Learning Analytics and Knowledge (LAK) or International Conference on Multimodal Interaction (ICMI) conferences, this event will require presenters to concentrate on the benefits and shortcomings of methods used for the multimodal analysis of learning signals.
* Grand Challenges
Following the successful experience of the Multimodal Learning Analytics Grand Challenge at ICMI 2013, this year's event will provide two datasets with diverse research questions for interested participants to tackle:
** Math Data Challenge:
The Math Data Corpus (Oviatt, 2013) is available for analysis. It involves 12 sessions in which small groups of three students collaborate while solving mathematics problems (i.e., geometry, algebra). Data were collected on their natural multimodal communication and activity patterns during these problem-solving and peer tutoring sessions, including students’ speech, digital pen input, facial expressions, and physical movements. In total, approximately 18 hours of multimodal data are available from these situated problem-solving sessions. Coding is also available for the lexical content of speech, written representations, and problem-solving performance and learning.
** Presentation Quality Challenge:
This challenge includes a data corpus of 40 oral presentations by Spanish-speaking students, in groups of 4 to 5 members, presenting projects (entrepreneurship ideas, literature reviews, research designs, software designs, etc.). Data were collected on their natural multimodal communication in regular classroom settings. The following data are available: speech, facial expressions, and physical movements in video; skeletal data gathered from Kinect for each individual; and slide presentation files. In total, approximately 10 hours of multimodal data are available for analysis of these presentations.
Xavier Ochoa, ESPOL, Ecuador ([log in to unmask])
Marcelo Worsley, Stanford, USA ([log in to unmask])
Katherine Chiluiza, ESPOL, Ecuador ([log in to unmask])
Saturnino Luz, Trinity College Dublin, Ireland ([log in to unmask])
Xavier Ochoa ([log in to unmask])