Dear all,

(apologies for cross-posting)

Please find below the Call for Participation for the workshop EmAI – Embodied Perspectives on Musical AI, and feel free to distribute it.

Call for Participation

We are now welcoming submissions for the workshop Embodied Perspectives on Musical AI that will take place in a hybrid format at the University of Oslo, Norway, on 21-22 November 2022.

Submission deadline: September 1, 2022, Anywhere on Earth (AoE)

Embodiment, or more concretely musical embodiment, denotes how the body shapes our musical experiences. For example, you may exert more or less effort depending on the uncertainty of a musical situation or the technical difficulty of what you are playing. Such varying levels of effort during a live performance can lead to particular affective states, resulting in bodily arousal. You can also use your body functionally, such as swaying your whole body to keep the groove or nodding your head to signal your bandmate to return to the tune’s main melody. From an enactive perspective, our perception is shaped by our actions. As such, cognition emerges not just through information processing but mainly from the dynamic interaction between agent and environment. All in all, we experience music with our bodies, using more modalities than hearing, whether we perform it or listen to it.

Most musical artificial intelligence (AI) and multi-agent systems (MAS) focus on musical information found in the auditory domain. Modeling instrumental acoustics, synthesizing raw audio, and generating symbolic music data are highly complex tasks that AI can already accomplish to some extent. Still, it is unclear how these technologies can collaborate with musicking humans. How will machines perceive humans as diverse embodied entities, and how will humans communicate with machines through multiple modalities? How can embodiment theories contribute to creating intelligent musical agents? At large, how will we make music with AI in the future? Interaction and artistic contexts, perceptual-motor constraints, affective states, environmental features, sensing devices, and computational systems are just a few of the factors that come into play when attempting to answer such questions.

An interdisciplinary research model encompassing natural sciences, humanities, cognition, and performing arts is often necessary to understand conventional forms of musical collaboration and create novel music technologies. Thus, in this two-day workshop, we will convene a group of scholars, artists, and engineers from diverse disciplines to emphasize embodied perspectives on musical human-AI interactions. Together we will explore theories and practices in this intriguing domain and discuss musical AI's past, present, and future through the lenses of embodied cognition.


The two-day workshop will consist of thematic sessions in addition to two keynote lectures and an evening performance. We invite submissions for the thematic sessions. To encourage dialogue, presenters will give short 10-minute presentations followed by a panel discussion. Proposals can include demos, completed projects, works-in-progress, and provocations; we welcome anything relevant to the topic of Embodied Perspectives on Musical AI. Short proposal texts (200-300 words) can be submitted at:


RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo

Interaction and Robotics Lab (Çağrı Erdem, Sayed Mojtaba Karbasi, Riccardo Simionato, Alexander Refsum Jensenius)

Çağrı Erdem [he/him]
Doctoral Research Fellow
RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion
University of Oslo  


