Marcelo M. Wanderley
Schulich School of Music - McGill University
Montreal, July 2002.
Traditionally, the main research directions in gesturally controlled real-time computer music have been the design of novel input devices (Paradiso 1997) and research into new sound synthesis algorithms (Borin, De Poli and Sarti 1997).
New input devices - also known as (gestural or musical) controllers, control surfaces, or (hardware) interfaces - currently allow the acquisition of virtually all possible performer gestures and movements (Mulder 1994). Conversely, existing synthesis algorithms are capable of creating an unlimited range of sounds in real-time using affordable hardware.
But once gestural (or performance) variables - signals resulting from performer movements - are available in digital form, one needs to devise methods to relate them to the available synthesis variables - the inputs of the sound generating system. This relationship is commonly known in computer music as (parameter) mapping.
The three aforementioned parts - the gestural controller, the synthesis algorithm and the mapping strategies between them - constitute what can be called a digital musical instrument (DMI) (Wanderley 2001). But given the focus on devices and on synthesis algorithms noted above, DMIs with simple one-to-one mappings between gestural variables and synthesis variables have been the rule.
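The distinction between a simple one-to-one mapping and a richer coupled strategy can be sketched as follows. This is an illustrative sketch only: the gestural variables (pressure, position, velocity), synthesis variables (amplitude, brightness, pitch) and coupling weights are hypothetical examples, not taken from any instrument described in this issue.

```python
def one_to_one(gesture):
    """Each gestural variable drives exactly one synthesis variable."""
    return {
        "amplitude": gesture["pressure"],
        "pitch": gesture["position"],
    }

def many_to_many(gesture):
    """Each synthesis variable depends on several gestural variables,
    loosely analogous to the cross-coupled behaviour of acoustic
    instruments. Weights are arbitrary illustrative values."""
    return {
        "amplitude": 0.7 * gesture["pressure"] + 0.3 * gesture["velocity"],
        "brightness": gesture["pressure"] * gesture["velocity"],
        "pitch": gesture["position"] + 0.1 * gesture["pressure"],
    }

# A single frame of gestural data, e.g. from a controller's sensors.
gesture = {"pressure": 0.8, "position": 60.0, "velocity": 0.5}
print(one_to_one(gesture))    # each output traces back to one input
print(many_to_many(gesture))  # outputs are coupled across inputs
```

In the one-to-one case each synthesis parameter can be traced back to a single sensor; in the coupled case a change in one gesture affects several synthesis variables at once, which is part of what the contributions in this issue examine.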
Recently, we have witnessed the emergence of a trend to broaden the scope to include considerations on the intrinsic role of different mapping strategies, including their influence on instrument design (Bowler, Purvis, Manning and Bailey 1990; Winkler 1995; Garnett and Goudeseune 1999; Hunt, Wanderley and Kirk 2000).
In fact, mapping is often viewed from two different perspectives: a) as a constituent part of a DMI, as explained above, or b) as part of a composition. In both cases, gestural variables are mapped to sound synthesis variables, but in the first case mapping strategies are determinants of instrument expressivity (Rovan, Wanderley, Dubnov and Depalle 1997; Favilla 1997; Hunt and Kirk 2000), whilst in the second case they are the essence of the composition itself (Doornbusch 2000). At a higher level, effort is being made to bridge these two aspects into a unified view of mapping as the key to system design (Oppenheim 2001).
In this issue of Organised Sound, our goal is to analyse in detail the various approaches to the definition of mapping strategies, both in the design of new digital musical instruments and as part of interactive music systems.
Questions addressed in this issue include:
The ten original contributions that follow focus on the role of mapping and on the design of mapping strategies, providing an overview of several of the main existing developments concerning mapping in computer music:
This edition of Organised Sound constitutes, to the best of our knowledge, the first editorial attempt to explicitly address the several questions related to mapping strategies in real-time computer music. Due to space constraints, it nevertheless cannot completely settle the discussion on parameter mapping.
We therefore invite the reader to participate further in the ICMA/EMF Working Group on Interactive Systems and Instrument Design in Music (ISIDM), where one interest group discusses the different aspects of mapping (Hunt and Wanderley 2000-2002). There the reader will find links to many of the papers referenced in the contributions that follow, as well as texts and on-line discussions of the several topics related to parameter mapping, including its importance (Hunt, Wanderley and Paradis 2002) and limitations (Chadabe 2002).