[IN]VISIBLE


The Instrument

In this chapter I will outline how I came to form my personal performance setup (or instrument), which will be presented at the end of the chapter. This performance setup will be used in both the lecture-performance and the concert accompanying this thesis, where its full potential will be demonstrated in an actual performance situation. By gradually extending a basic, general instrumental model (in relation to the performer) I will develop a more generic instrumental model for computer music performance, which will then serve as the basis for my own performance setup. In parallel, I will discuss the affordances and constraints of these instrumental models with regard to gesture, expression and communication. Furthermore, two case studies concerning the composition of my instrument will be presented in this chapter. In addition, I will demonstrate that it is impossible to detach the choices made with regard to the formation of the instrument from the idiomatic nature of the compositions performed on it (see the subchapter Mapping, Gesture and Expression).



Basic Instrumental Models

From my experience, most instrumental models34 are built in a similar fashion and contain the same agents: a performer conveys energy through the interface of the instrument; this energy is then processed and eventually leaves the instrument in the form of sound (fig. 1).

[fig. 1: basic instrumental model]

An easy example can be found in the piano: the performer strikes a key, which activates a hammer to hit a string (or strings), which then results in an audible tone. Depending on the nature of the agents that make up the instrument, every instrument has its own intrinsic affordances and constraints. Although this general instrumental model is also applicable to the computer musician's instrument3, we find that all agents of that instrument are dynamic, thus leaving the computer musician with a myriad of options (fig. 2).

[fig. 2: the computer musician's instrumental model]

The input agent can be anything ranging from a control interface to the air surrounding the performer. Similarly, the processing can be done by various types of software, and the output can be anything from mono to 4D sound. Due to these dynamic agents, the affordances and constraints of the computer musician's instrument are continuously shifting. This obstacle often proves tough to overcome, not only for the electronics performer but also for the composer: since composing for an instrument requires a certain degree of knowledge of it, this problem also complicates the composer's task when writing with an electronic or electro-acoustic aesthetic in mind. Since I position myself as a multi-threaded performer, I somewhat bypass this dilemma. By operating in a closed ecosystem, the compositional choices are immediately reflected in the construction of the instrument and vice versa4. Accordingly, the affordances and constraints of the instrument are known to the composer before the process of composition takes place. It goes without saying that this greatly increases the possibilities for gestural behaviour, expression and virtuosity in electronics performance.
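
To make the idea of dynamic, interchangeable agents more concrete, the sketch below (in Python, with purely illustrative class names that do not refer to any existing software) models the instrument as three swappable components through which the performer's energy flows:

```python
# A minimal sketch of the generic instrumental model: every agent is swappable.
# All names here are illustrative, not taken from any existing framework.

class InputAgent:
    """Turns a performer action into control data (e.g. a key press, a fader move)."""
    def read(self) -> dict:
        raise NotImplementedError

class ProcessingAgent:
    """Maps control data onto sound synthesis parameters and renders audio."""
    def process(self, control_data: dict) -> bytes:
        raise NotImplementedError

class OutputAgent:
    """Delivers the rendered audio, from mono to multichannel."""
    def play(self, audio: bytes) -> None:
        raise NotImplementedError

class Instrument:
    """The performer's instrument is simply a particular combination of agents."""
    def __init__(self, inp: InputAgent, proc: ProcessingAgent, out: OutputAgent):
        self.inp, self.proc, self.out = inp, proc, out

    def tick(self) -> None:
        # One pass of the energy flow: performer -> interface -> processing -> sound.
        self.out.play(self.proc.process(self.inp.read()))
```

Swapping any one of the three agents yields a different instrument with different affordances and constraints, which is exactly what makes the computer musician's instrument so fluid.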


Mapping, Gesture and Expression

The music instrument is more than a mere machine; it is an energy conversion device with expressive goals. (Dufourt, 1995)

Intrinsic to the computer musician's instrument is the concept of mapping. Mapping can be described as the liaison or correspondence between control parameters (derived from performer actions) and sound synthesis parameters (Hunt, Wanderley & Kirk, 2000) (fig. 3, after Wanderley 2000; Leman 2008; Wessel and Wright 2001). Such a separation between control and sound production is impossible in the case of traditional acoustic instruments, where the gestural interface is also part of the sound production unit (Iazzetta 1997).

[fig. 3: mapping between control parameters and sound synthesis parameters]

In contrast to traditional acoustic instruments, where there is a direct connection between the mechanical structure of the instrument and the actions of the performer's body, the computer musician's instrument initially lacks a causal relation between the input device and the gestures produced by the performer. These connections have to be designed for every instrumental construction. In other words, apart from creating connections in order to control the sound synthesis part of the processing agent in our instrumental model, mapping also directly influences the gestural affordances and constraints of the instrument. This brings us to a pertinent question: how can we design these mappings to guarantee an effective performance instrument that allows for a gestural language capable of conveying expression during performance?

Although mapping is often described as 'the designing of constraints' (Magnusson, 2010)6, it is from this decoupled nature of the instrument that affordances regarding expression emerge. In other words, the performer is able to design the input agent of the instrument according to his or her expressive needs during performance. On this subject we can state that the more options an instrument offers in terms of mapping, the more affordances regarding expressivity the performer has at his or her disposal when creating the instrument. This reveals a correlation between the mapping strategy and expressivity, which is in turn linked to the gestural behaviour of the performer7. However, this correlation does not guarantee a high degree of precision in controlling the sound engine of the instrument. A Leap Motion controller8, f.i., can easily be regarded as a control interface capable of complex mapping strategies, but since the air surrounding the performer is not easily quantifiable on sight, it becomes hard to pinpoint specific values during performance. This makes the Leap Motion a desirable interface for certain compositional and performative aesthetics9, but not for others. This in turn reveals a correlation between the choice of input device and the idiomatic nature of the composition, the performance or both. In this regard we can conclude that, apart from the ergonomics of the input device (which of course also plays an important role in this context), modes of control, mapping strategy, gestural behaviour and expression are inextricably linked to one another, and that they are greatly influenced by the aesthetics of the composition to be performed.
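
As a minimal sketch of such a mapping layer (illustrative only, not tied to Max, Ableton Live or any other environment mentioned here), one control parameter can be routed to several synthesis parameters, each through its own scaling function:

```python
# A minimal mapping-layer sketch: control parameters (derived from performer
# actions) are routed to sound synthesis parameters. All names are illustrative.

class MappingEngine:
    def __init__(self):
        # One control may drive several synthesis parameters (one-to-many mapping).
        self.routes = {}  # control name -> list of (setter function, scaling function)

    def connect(self, control, setter, scale=lambda v: v):
        self.routes.setdefault(control, []).append((setter, scale))

    def update(self, control, value):
        # Called whenever the performer moves a control on the input agent.
        for setter, scale in self.routes.get(control, []):
            setter(scale(value))

synth = {"cutoff": 200.0, "reverb": 0.0}   # stand-in for the sound engine's parameters

engine = MappingEngine()
# One fader (0.0-1.0) simultaneously shapes filter cutoff and reverb amount.
engine.connect("fader1", lambda v: synth.update(cutoff=v), lambda v: 200 + v * 8000)
engine.connect("fader1", lambda v: synth.update(reverb=v), lambda v: v ** 2)

engine.update("fader1", 0.5)
print(synth)   # {'cutoff': 4200.0, 'reverb': 0.25}
```

Whether "fader1" is a physical fader, a Lemur widget or a row of grid buttons is irrelevant to the engine; this interchangeability is precisely the decoupling described above.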


Input and Throughput

Nowadays, controlling software is possible in multifarious ways. Nevertheless, these performance actions all stem from the same basic idea of discrete and continuous value changes. A discrete value change can be performed by f.i. pushing a button, changing the state of the coupled parameter in the software from one value to another at once. A continuous value change represents a gradual, continuous change of a parameter, f.i. raising the volume of an audio channel by sliding up a fader. While creating the instrument, the choice between discrete and continuous value changes is greatly influenced by the aesthetics of the composition to be performed. For example, discrete value changes are more suitable for the performance of a percussive passage, whereas continuous value changes are best applied when creating a gradually changing soundscape. As mentioned above, the mapping engine also allows for alternative connections between controls and parameters: a (physical) fader can be mapped to a discrete parameter and, similarly, a button can be used for controlling a fader. Most DAW software features the basic mapping capabilities with which the above-mentioned mappings can be created. However, specialised programming environments10 for electronics performance allow for a more advanced mode of interaction between the input and throughput agents of the instrument. One could, for example, program a volume slider to go from 0 to 127 (the MIDI maximum11) in 2 seconds by pressing a button, or play the piano in a certain key or mode by manipulating a fader. Although the above is applicable to most standard performance interfaces, a few types of interfaces allow for an even more advanced interaction. Through earlier research12 I found interesting performance platforms for advanced human-computer interaction (HCI) in screen-based and grid-based controllers. The modular nature of these controller types is a huge asset in terms of 'interface building'. For this research, I focused in particular on the Lemur13 application and Monome grids14.
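
The first of the examples above, a button press launching a two-second fader ramp from 0 to 127, could be sketched as follows. The send_cc function is a hypothetical stand-in for whatever actually talks to the throughput agent (MIDI, OSC or a DAW API); the point is only to show how a discrete action can drive a continuous value change:

```python
import time
import threading

def send_cc(value: int) -> None:
    # Stand-in for the real connection to the throughput agent (MIDI, OSC, ...).
    print(f"volume -> {value}")

def ramp(start: int, end: int, duration: float, steps: int = 64) -> None:
    """Continuous value change: glide a parameter from start to end over `duration` seconds."""
    for i in range(steps + 1):
        send_cc(round(start + (end - start) * i / steps))
        time.sleep(duration / steps)

def on_button_press() -> None:
    """Discrete value change: one button press launches a 2-second ramp from 0 to 127."""
    threading.Thread(target=ramp, args=(0, 127, 2.0), daemon=True).start()

on_button_press()
time.sleep(2.5)  # keep the sketch alive long enough for the ramp to complete
```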

Lemur is a professional MIDI and OSC controller app that can control DJ software, live electronic music performance software, visual synthesis software, stage lighting and more. Apart from extensive out-of-the-box functionality, it offers custom controller building in a comprehensive programming environment. These features are extended by the option of scripting (in a language similar to C# or Java) and the possibility of using the mobile device's sensors (f.i. gyroscope, accelerometer), adding f.i. gravitational properties to the presented controls (liine.net, last accessed July 2016). This makes the Lemur application a tremendously versatile controller.

Monome grids are controller/display boxes that can be interfaced with music software. The control surface is a completely uniform matrix of identical, anonymous, back-lit buttons. One of the strong suits of this concept is the decoupled nature of the buttons and the LEDs: pushing a button might cause that button to light up, but it can just as well result in f.i. a whole row of buttons lighting up. In other words, the way the LEDs behave is completely at the discretion of the programmer's imagination. This concept will play an important role in the chapter 'Communication'. Monome grids require a connection with a computer16 to run. Once connected, a grid communicates through OSC with any software that can handle this protocol. Although there is no official (or commercial) software released by Monome, many users have contributed over the years (and still do) to a vast library of apps written in various programming environments, ranging from Max to ChucK17. Some models are equipped with an accelerometer that detects tilt in two dimensions. Monome grids come in various dimensions: 8x8, 16x8, 16x16 and 32x16. The models I used during my previous and current research are the 2012 128 (16x8) and the 256 (16x16), both equipped with variable LED brightness spanning 16 levels. Although the monome may not be as versatile as the Lemur application, it has one big advantage: physical buttons, which greatly improve the tactile experience of the controller. Since both control surfaces rely extensively on a (software-driven) visual representation of controls, they open up the possibility of leaving the computer out of the performative space18. Although this removes one visual feedback system from the instrumental model, another one is put in its place (fig. 4).

[fig. 4: instrumental model with the control interface as the performer's visual feedback system]


Apart from providing communication between the performer and the sound engine in the software, the visual aspect of these interfaces can be programmed to display other animations related to the performance, such as the ongoing process of sound manipulation.
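
The decoupled key/LED behaviour described above could be sketched as follows with python-osc. The OSC addresses follow serialosc's usual /grid/key and /grid/led/set conventions, but the host, ports and prefix are placeholders that depend on the actual device configuration:

```python
# Sketch: a monome key press lights up its entire row rather than just itself.
# Assumes serialosc-style addresses (/monome/grid/key, /monome/grid/led/set);
# host, ports and prefix are placeholders and will differ per setup.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer
from pythonosc.udp_client import SimpleUDPClient

GRID_WIDTH = 16                                # e.g. a 128 grid (16x8)
device = SimpleUDPClient("127.0.0.1", 17000)   # placeholder serialosc device port

def on_key(address, x, y, state):
    # Key down: light the whole row; key up: clear it again.
    for col in range(GRID_WIDTH):
        device.send_message("/monome/grid/led/set", [col, y, state])

dispatcher = Dispatcher()
dispatcher.map("/monome/grid/key", on_key)

# Listen for key messages coming from the device (port is another placeholder).
server = BlockingOSCUDPServer(("127.0.0.1", 8000), dispatcher)
server.serve_forever()
```

The same mechanism can just as easily animate LEDs that have nothing to do with the key pressed, which is what makes the grid usable as a display for ongoing sound processes.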

Apart from providing information on control interfaces, my 2012 research project (Controllerism, a Minimalistic Approach) also presented a couple of options regarding software. Since I would be performing live electronics, the choice for an advanced and specialised type of music performance software was obvious. A couple of trends can be distinguished: digital audio workstation (DAW) performance software, visual programming environments and text-based programming environments19.

Ableton Live is by far the most popular DAW performance software today. In contrast to many other software sequencers, Ableton Live is designed to be an instrument for live performances as well as a tool for composing, recording, arranging, mixing and mastering. It is also used by DJs, as it offers a suite of controls for beatmatching, crossfading and other effects used by turntablists, and it was one of the first music applications to automatically beatmatch songs20. Apart from standard DAW functions and specific performance functions, Ableton Live can be extended with Cycling '74's Max, which operates as a Max for Live (M4L) device inside Ableton Live. Similar software can be found in the recently released Bitwig.

An example of a visual programming environment is Max. During its history, it has been used by composers, performers, software designers, researchers and artists to create recordings, performances and installations. The Max program is modular: most routines exist as shared libraries, and an application programming interface (API) allows third-party development of new routines (named external objects). Thus, Max has a large user base of programmers unaffiliated with Cycling '74 who enhance the software with commercial and non-commercial extensions to the program. Because of its extensible design and graphical user interface (GUI), which represents the program structure and the user interface simultaneously, Max has been described as the lingua franca for developing interactive music performance software22. The graphical environment of Max can be extended with Gen, a real-time code generation and compilation environment. Similar to Max is Pure Data (Pd), also developed by Miller Puckette, which stems from the initial versions of Max.

An example of code-based performance software is SuperCollider, an environment and programming language originally released in 1996 by James McCartney for real-time audio synthesis and algorithmic composition. Since then it has evolved into a system used and further developed by both scientists and artists working with sound. It is an efficient and expressive dynamic programming language providing a framework for acoustic research, algorithmic music, interactive programming and live coding23. Similar software can be found in f.i. ChucK.

Before starting any case study I decided to exclude code-based performance software. This decision was made for two reasons: firstly, I was already well acquainted with DAW performance software and visual programming environments; secondly, the creation of performance mappings in code-based environments is a cumbersome task compared to the other proposed environments. This left me with two types of hardware and software to test during my case studies.


Case Studies: Coalesce [5], CollidR

In order to test both instruments in a live performance situation I composed Coalesce [5] and CollidR. Coalesce [5] is a composition for live electronics and alto saxophone24 which can be regarded as a study for electronics performance. The score is set up in such a way that the composition can be performed with any type of computer-based instrument. CollidR, on the other hand, is an improvisation based on collidr.maxpat, a program I initially wrote for monome 128 grids. Having both idioms of fixed composition and improvisation side by side would provide me with valuable information on the composition of my instrument, which I plan to use in both situations. Although the lecture-performances including these performances were mainly focused on audience understanding of electronics performance, they presented interesting case studies in relation to the instrument as well.


Coalesce [5] | score | software

[Coalesce [5]: Lemur performance interface]
The piece was performed twice during a lecture-performance entitled 'One setup to rule 'em all' on November 14th 2014 at the Royal Conservatory in Brussels, once with a 'Lemur-Ableton Live' setup and once with a 'Monome-Ableton Live' setup. Although a detailed report of this case study will be presented in my defence's lecture-performance, I already want to point out a couple of notable differences in the modus operandi applied during the preparation of both performances and in their performative outcome. During the preparation of the Lemur performance the mapping was done in a classical way: the interface contained several faders for manipulating the effects and some push buttons to activate these effects, play notes, trigger samples, etc., which were mapped to the associated parameters in Ableton Live. In order to obtain a similar result on the monome I faced several options: I could have created an interface in Max and then created MIDI mappings to Ableton Live by routing the MIDI signals internally. Instead, I decided to stay as much as possible within the Ableton Live ecosystem and created a Max for Live device which allowed me to control the associated parameters through Ableton's API26.

The difference in approach in terms of instrument building and practice between the first and second performance was noticeable. Not only did composing the monome interface take a lot more time, it was also more complicated and, in regard to practicing the piece, more difficult compared with the Lemur interface. The latter might, among other things, be attributable to the fact that, although it offered an adequate visual representation of the controls, the performance interface still consisted of nothing but a series of buttons, which had a disorienting effect. In both cases the study trajectory was very similar to preparing a piece for an acoustic performance: it included a lot of slow practice in order to properly sequence the actions necessary to perform the piece. In an attempt to solve the problem of composing the interface I created Control, a dynamic MIDI surface map for monome grids; more information on this topic can be found below. Although this case study turned out in favour of the Lemur-based performance setup, it will become clear that the monome setup is more adequate in the context of performer-audience communication. This will be discussed in the chapter 'Communication'.


CollidR | download the software



This piece was also performed twice, this time during a lecture-performance at the Royal Conservatory in Brussels on January 23rd 2015 called 'When Glints Collide - composing and performing with (inter)active interfaces'. This time the setup would include the same input devices as the previous case study, but with Max as the main throughput agent instead of Ableton Live. Since the performance required more space than available on both control interfaces, I added an interface page in the Lemur template to the Lemur setup and a monome arc 4 to the monome setup. Primarily due to the short time interval between the two case studies, it immediately became clear that I wouldn't be able to confine myself to the use of Max. Since Ableton Live provided me with most, if not all, of my performance needs, I decided to use Max solely as a mapping engine for both performances. CollidR27 is built completely on collidr.maxpat, a program which uses the idea of Newton's Cradle29 to generate notes and/or control changes. The patch is largely inspired by other monome patches such as flin by Brian Crabtree (tehn) and Kradle by Jon Sykes (madebyrobot). Although Ableton Live was used during the performances, the piece (and its performance) could not exist without Max, since it depended heavily on the process of Newton's Cradle, something which would be hard to program in Ableton Live. In regard to the instrument, having a visual programming environment at my disposal proved to be a valuable asset. As input agents, both monome and Lemur proved to be stable and viable solutions to the problems presented. Learning how to perform the piece was never hard: since I was interacting with the visual representation of a process (run by the computer) which I had created myself, I was well aware of its affordances and constraints even before I started practicing the performance. These affordances and constraints also heavily influenced the options regarding gesture and expression during performance (f.i. changing a pattern in collidr.maxpat required both hands). A report on this case study will be presented during my defence's lecture-performance with the performance of Coalesce [6], a piece built with similar performative and improvisational aesthetics in mind and constructed upon the same mapping and synthesis engine as used in this case study. Although both setups provided a more than satisfactory performance result, it will again be shown that monome grids have the advantage in performer-audience communication.
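
As an aside to the description of collidr.maxpat above: the patch itself is a Max program, but the underlying idea of a Newton's-Cradle-like process generating notes can be illustrated with a small toy sketch. This is my own simplification, not a reconstruction of the actual patch:

```python
# Toy illustration (not the actual collidr.maxpat): a ball travels along one row of
# grid positions; every impact against either end emits a note, loosely like the
# outer balls of a Newton's Cradle swinging out on each collision.
WIDTH = 16                      # one row of a 16x8 grid
SCALE = [60, 62, 64, 67, 69]    # MIDI note numbers to cycle through on impact

def cradle(width=WIDTH):
    pos, direction, hits = 0, 1, 0
    while True:
        at_edge = pos in (0, width - 1)
        yield pos, SCALE[hits % len(SCALE)] if at_edge else None
        if at_edge:
            hits += 1
            direction = 1 if pos == 0 else -1
        pos += direction

for step, (pos, note) in zip(range(40), cradle()):
    if note is not None:
        print(f"step {step:2d}: impact at column {pos:2d} -> note {note}")
```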

Control | download the software

Control is a dynamic MIDI surface map for monome grids. In other words, it turns a monome grid into an open platform for custom MIDI controller making. Control gives the user the ability to draw controls (faders, momentary buttons, toggle buttons, XY pads and keyboards) onto any place on the grid by directly interfacing with it. A paging system is implemented, allowing up to 16 simultaneous layouts (= 16 MIDI channels), which can be saved together with their presets. After creating these layouts, the controls can be mapped to the user's DAW of choice. Control exists as a standalone macOS app, a Max patch and an M4L device30.
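
To give an idea of what such a surface map can boil down to internally (purely my own illustration, not the actual implementation of Control), each page can be held as a list of control regions on the grid, each occupying a rectangle of buttons and bound to a MIDI channel and CC or note number:

```python
# Illustrative sketch of a grid surface map in the spirit of Control: pages of
# drawn controls, each occupying a rectangle of buttons and bound to a MIDI message.
# This is not Control's actual implementation, just a plausible data model.
from dataclasses import dataclass

@dataclass
class GridControl:
    kind: str        # "fader", "toggle", "momentary", "xy", "keyboard"
    x: int           # left column of the control on the grid
    y: int           # top row of the control
    width: int
    height: int
    cc: int          # MIDI CC (or base note) the control is mapped to

@dataclass
class Page:
    midi_channel: int            # one page per MIDI channel, up to 16 pages
    controls: list

# A page with one vertical 8-step fader and one toggle button.
page_1 = Page(midi_channel=1, controls=[
    GridControl("fader", x=0, y=0, width=1, height=8, cc=20),
    GridControl("toggle", x=2, y=0, width=1, height=1, cc=21),
])

def hit_test(page: Page, x: int, y: int):
    """Find which drawn control (if any) a key press at grid position (x, y) belongs to."""
    for c in page.controls:
        if c.x <= x < c.x + c.width and c.y <= y < c.y + c.height:
            return c
    return None

print(hit_test(page_1, 0, 5).kind)   # -> "fader"
```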





Personalised Instrumental Model

As mentioned above, it is impossible to detach the choices made in regard to the formation of the instrument from the idiomatic nature of the compositions performed on it. At the same time, the instrument has an undeniable influence on the aesthetic nature of the compositions. This conundrum is a familiar one for the multi-threaded performer, who is burdened with pursuing a balance between all given factors in order to create as much artistic freedom as possible in the context of both the composition of the instrument and the aesthetic language applied in the compositions. Nevertheless, choices have to be made. What follows is a description of the choices made in regard to the formation of my performance setup. During the lecture-performance I will give an in-depth presentation of the pros and cons of these choices.

As the processing/throughput agent of the instrument I adopted both Ableton Live and Max. Ableton Live comes with many built-in features regarding live electronics performance; the most prominent is its ability to incorporate Max patches as native devices. Considering Max's extension possibilities with Gen, this combination brings together the best of the three worlds mentioned above.

As the input agent of the instrument I decided to use both the Lemur application and monome grids simultaneously in my setup. Both interfaces are more or less equally clear in terms of instrument-performer communication, present similar affordances and constraints regarding gestural behaviour and expression31, and allow for a vast set of possibilities regarding interface building. Nevertheless, it is the tactile experience offered by the monome grid's ergonomic design that gives it a slight advantage over the Lemur application. Therefore the monome grid(s) will be used to interface directly with the music, while the Lemur app is used for manipulating macro functions in the software (changing scenes, launching click tracks, etc.). This division thus implies a split between the figurative and effective gestures32 produced during performance. Another welcome by-product of this approach is the possibility of excluding the laptop as a visual feedback system for the performer. Although the presence of the computer is a prerequisite for the actual performance, it can be obscured (on stage) in order to enhance the perception of the performance as an instrumental one, which will prove positive in terms of performer-audience communication (see chapter 'Communication'). This brings me to the following personalised instrumental model (fig. 5); a rough sketch of the routing it implies follows below the figure.

[fig. 5: personalised instrumental model]
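
The sketch below illustrates the division of labour implied by this model (all addresses and function names are hypothetical): messages coming from the grid are routed to the sound-shaping mappings, while messages coming from the Lemur template only reach macro functions such as scene changes.

```python
# Hedged sketch of the input routing implied by fig. 5: the grid drives the music
# directly, the Lemur template drives macro functions. Addresses are placeholders.
def set_synthesis_parameter(x, y, state):
    print(f"grid key ({x},{y},{state}) -> sound engine")   # effective gesture

def launch_scene(index):
    print(f"Lemur -> launch scene {index}")                # macro function

ROUTES = {
    "/monome/grid/key": set_synthesis_parameter,
    "/lemur/scene":     launch_scene,
}

def route(address, *args):
    handler = ROUTES.get(address)
    if handler:
        handler(*args)

route("/monome/grid/key", 3, 4, 1)
route("/lemur/scene", 2)
```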
  1. This performance setup will be used in both the lecture-performance and the concert accompanying this thesis.
  2. In this chapter I’ll be discussing gesture, expression and communication in regard to instrument-performer relations. A detailed description of the findings concerning gesture, expression and communication in the context of performer-audience communication can be found in the ‘Communication’ chapter.
  3. When referring to the computer musician I mean the embodied computer musician, using bodily actions to engage in electronics performance on stage.
  4. Since this closed ecosystem is very much controlled by artistic and aesthetic choices made in regard to the compositions I won’t be discussing the output agent of the electronics performer’s instrument as my compositions presented in accompaniment of this thesis are written with a stereo output setting in mind.
  5. A detailed overview on literature regarding mapping strategies: Hunt A., Wanderley M. M., Kirk R. (2000) Towards a Model for Instrumental Mapping in Expert Musical Interaction. ICMA Vol. 2000
  6. We can easily adopt this notion of ‘designing constraints’ in the whole domain of instrument building.
  7. Expression (…) is accepted as a feature communicated by a performer through the nuance of deviation and deformation of a musical text (Gurevich and Trevino, 2007), inherent in both the physical interaction (Dahl and Friberg, 2007; Sloboda, 1988) and extra-musical gesture (Rodger, 2010; Vines et al., 2006) and subsequently perceived and understood by the spectator (Poepel, 2004) (Fyans, 2015)
  8. www.leapmotion.com (last accessed July 2016)
  9. This was demonstrated during my lecture-performance ‘Kojiki: creating an efficient electronics performance setup’. December 2013, De Singel, Antwerp
  10. Programs such as Max, Pure Data, SuperCollider, Ableton Live + Max for Live and Bitwig, to name a few.
  11. MIDI standard: https://en.wikipedia.org/wiki/MIDI (last accessed July 2016)
  12. Van Esser B. (2012) Controllerism, a minimalistic approach (Trobadrors research project KCB)
  13. https://liine.net/en/products/lemur (last accessed July 2016)
  14. http://monome.org/ (last accessed July 2016)
  15. This concept will play an important role in the chapter ‘Communication’
  16. More recently, Monome released a couple of modular synth modules that are able to interface with the grids directly. This is also the case for the 2013 'aleph', an adaptable sound computer.
  17. The idea of open-source is very strongly embedded in the monome community.
  18. This will prove beneficiary in optimising audience understanding of electronics performance. See chapter ‘Communication’
  19. Cross-overs of these types exist as well (f.i. Audio Mulch).
  20. Coalesce [5] can also be performed as a live electronics solo piece.
  21. Although the lecture-performances including these performances were mainly focused on audience understanding of electronics performance they presented interesting case studies in terms of the instrument as well.
  22. The same video can be found in portfolio/Media/Video/CollidR.mp4.
  23. The maxpatch can be found in portfolio/Software/Max/_CollidR_03.maxpat
  24. This will be showcased during my defence’s lecture-performance
  25. Wanderley et al. (2005). I will elaborate on this topic in the next chapter.
  26. Since the performance required more space than available on both control interfaces I added an interface page in the Lemur template and a monome arc 4.
  27. The proposed instrumental models only concern the modus operandi of the instrument in relation to the performer, without any relationship to the performed composition, its aesthetics or the audience.