Workshops
Workshops are scheduled on Friday 21 November. Registration will be available onsite. For further questions, please contact the organizers.
1. MoNoDeC: The Mobile Node Controller Platform
Nick Hwang & Anthony Marasco
INFO
10:30 am-12:30 pm | Salle Varese (1st floor)
Workshop limited to 10 participants
contact: hwangn@uww.edu
MoNoDeC is a web-based multichannel audio system that uses audience mobile phones and IoT-hardware-driven speakers as point sources for configurable, dynamic immersive audio, with an audience interface built on Web Audio protocols. Audience participants register their current location within a customizable audience space (rows, cloud, or free-form) on their mobile phones. Each phone then becomes a point source within the immersive experience (performance or installation). During a performance or installation, audience members interact with the mobile interface, which affects the experience in various ways, such as altering the musical form, contributing to a collective canvas, or modifying the timbre of their localized instrument. A performance/installation ‘controller’ sends audio, control, and interface data to participants throughout the experience.
The workshop will demonstrate the use of MoNoDeC in the context of sound diffusion throughout an audience/performance space, featuring example compositions and shorter sound files. Part of the demonstration will cover newer features, including active audience participation, in which a digital instrument is presented on participants’ mobile interfaces and they ‘perform’ that instrument. The demonstration will also include ‘autonomous hubs’: IoT-based speakers that receive playback and diffusion data. These autonomous hubs are designed for larger sound diffusion performance settings, as well as installation and fixed point-source scenarios.
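The point-source idea above, where each registered phone plays at a level tied to its position in the audience space, can be sketched as a distance-based gain computation. This is a minimal illustration in plain JavaScript under a simple 2-D grid assumption; the function names (`registerSeat`, `pointSourceGain`) and the inverse-distance model are illustrative, not MoNoDeC's actual API.

```javascript
// Minimal sketch: each registered phone is a point source whose gain
// for a virtual sound position falls off with distance (inverse law,
// clamped to 1). Names and the 2-D grid model are assumptions for
// illustration, not MoNoDeC's actual API.

// A participant registers a seat position in a row/column grid.
function registerSeat(row, col, spacing = 2.0) {
  return { x: col * spacing, y: row * spacing };
}

// Gain for one phone given a virtual source position in the same space.
function pointSourceGain(seat, source, refDist = 1.0) {
  const dx = seat.x - source.x;
  const dy = seat.y - source.y;
  const dist = Math.hypot(dx, dy);
  return Math.min(1.0, refDist / Math.max(dist, refDist)); // clamp to [0, 1]
}

// Example: a virtual source hovering over the middle seat of a 3-seat row.
const seats = [registerSeat(0, 0), registerSeat(0, 1), registerSeat(0, 2)];
const source = { x: 2.0, y: 0.0 };
const gains = seats.map(s => pointSourceGain(s, source));
console.log(gains); // middle seat loudest, neighbors attenuated
```

In a real deployment, each gain would feed a per-phone `GainNode` so the controller can move virtual sources through the room by streaming updated positions.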
2. Ensonification: Accessible Web-Based Data Sonification Ensemble Improvisation
Tristan Peng, Chris Chafe, Hongchan Choi, Nilam Ram
INFO
2:00-4:00 pm | Salle Varese (1st floor)
Workshop limited to 10 participants
contact: pengt@ccrma.Stanford.EDU
Ensonification is a new approach to creating and performing sonifications, in which ensemble performers interpret data visualizations as musical scores through improvisation. Its main goal is to be accessible to both performers and audiences, but a key challenge is the creation of these visualization scores, which often requires technical skills and domain knowledge. The Ensonification Interface addresses this by providing an easy-to-use web platform for creating scores and enhancing the sonification experience. It features collaborative human-AI music performance and a customizable sample-based sonification system built on the Web Audio API and Audio Worklets.
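The sample-based sonification described above can be sketched as a mapping from a data series to per-sample playback events, of the kind an Audio Worklet playback engine might consume. This is a minimal illustration in plain JavaScript; the mapping (linear value to playback rate) and all names (`rescale`, `sonify`) are assumptions for illustration, not the Ensonification Interface's actual API.

```javascript
// Minimal sketch: map a data series onto playback parameters for a
// sample-based sonifier. The linear value-to-rate mapping and all
// names are illustrative assumptions, not the actual Ensonification API.

// Linearly rescale a value from a data range into a target range.
function rescale(value, lo, hi, outLo, outHi) {
  const t = (value - lo) / (hi - lo);
  return outLo + t * (outHi - outLo);
}

// Turn a data series into sample-trigger events: each data point is
// scheduled at a fixed step, with playback rate tracking its value.
function sonify(series, { minRate = 0.5, maxRate = 2.0, step = 0.25 } = {}) {
  const lo = Math.min(...series);
  const hi = Math.max(...series);
  return series.map((v, i) => ({
    time: i * step,                              // seconds from start
    rate: rescale(v, lo, hi, minRate, maxRate),  // pitch follows the data
  }));
}

// Example: a rising series yields rising playback rates.
const events = sonify([10, 20, 30, 40]);
console.log(events.map(e => e.rate)); // rates climb from minRate to maxRate
```

Each event could then trigger an `AudioBufferSourceNode` (or a custom Audio Worklet voice) at `time` with `playbackRate.value = rate`, so the contour of the data becomes the contour of the sound.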
