[IN]VISIBLE Communication

“Precisely because musical sound is abstract, intangible, and ethereal [...] the visual experience of its production is crucial to both musicians and audience alike for locating and communicating the place of music and musical sound within society and culture. [...] Music, despite its phenomenological sonoric ethereality, is an embodied practice, like dance and theater.” (Leppert, 1993: xx-xxi)

Performer-audience communication can be regarded as one of the most prominent and therefore most debated problems the contemporary computer musician faces in a live performance situation. Discussing the relationship between gesture and sound in any type of live performance, John Croft points out the following:

“[…] simultaneity is also closely linked to two fundamental principles of live performance: first, we expect a meaningful relationship between what we see the performer do and the sound that this action generates; second, as Simon Emmerson points out, ‘we expect a type of behaviour from an instrument that relates to its size, shape, and known performance practice’” (Croft, 2007).

Furthermore, the performer-system bias present in the spectator’s assessment of a performance leads to the following statement:

“… in the context of electronic musical interactions it is commonly assumed that expression is a feature, quality or quantity of an electronic musical device, rather than a holistic product of a performer’s interaction as perceived by a spectator” (Fyans, 2015).

Both quotes reveal two different yet correlated sides of the same problem: gestural behaviour in regard to instrumental expressivity, and the synchronicity of performance actions and sound. Research shows that, in order to create an intelligible communication model between performer and audience, the presence of both aspects is a prerequisite (Fyans, 2015). In this chapter, however, I will move away from the idea that gesture is the predominant factor in performer-audience communication in live electronics performance. Instead, I will advocate a different approach based on luminous performance platforms, using their front panel as a canvas for animations that are synchronous with the audio output and/or the audible process. Since these animations take place on the input device itself, they can subsequently be linked to the different classes of gestures performed to produce the sounds coming from the speakers. At the end of this chapter, a solution will be presented in the form of a suite of modules that allows for easily composing action/sound/animation associations for monome grids in Max.



Gesture, Expression and Simultaneity

Although the topic of gesture and expression in electronics performance has been addressed in a multitude of scholarly articles1, there is a lack of musicological studies taking body gesture into account.

This is due to the fact that the body has never been considered as a support for musical expression. When electric and electronic means of musical production and diffusion eliminated the presence of the musician and his instrument, we had to change the way we experience music. "Traditionally, to attend a music performance is to apprehend through the sight the intention which is loaded in the instrumentalist's gesture. In the mediation of the technological work, this prediction does not work all the time" (Battier, 1992). The symbiotic relation between the player's body and his instrument plays a special role in the comprehension of the musical discourse. For example, a violent gesture produced by the player reinforces the effect of a sudden sound attack, in the same way that the body expression of a singer can lead to a richer phrase articulation (Fyans, 2015).

Although there are many approaches to the analysis of gesture in relation to instrumental performance, I would like to adopt the classification made by François Delalande, who distinguishes three classes of gestures:

Effective gesture - necessary to mechanically produce the sound (e.g. a key press on a piano).

Accompanist gesture - body movements associated with effective gestures (e.g. chest or elbow movements).

Figurative gesture - perceived by the audience but without a clear correspondence to a physical movement (e.g. a pianist phrasing a melody with the ‘other’ hand, but also a trombone player applying a mute) (Cadoz & Wanderley, 2000).

When reviewing the examples of the pianist and the singer above, we can classify both as demonstrating effective as well as accompanist gestures. When transposing this idea to the context of electronics performance, we are presented with a problem. The nature of the computer musician’s instrument seems to be too dynamic for the spectator to form an appropriate mental model2. This can, among other things, be attributed to the fact that the instrument imposes varying effective and accompanist gestural behaviour during a performance as a result of various factors (such as mapping or the ergonomic layout of the input device), thus resulting in a malfunctioning communication model with regard to expressivity. This leaves us with one type of gesture which is completely free from this erroneous communication model: figurative gesture3. Although most performative meaning is derived from effective and accompanist gestures, a solid solution in the context of electronics performance has yet to present itself.

Apart from our specific and cultural understanding of an instrument, the degree of simultaneity between the performer’s actions and the sounds they produce is another important factor in the acquisition of an appropriate mental model of the instrument. Whereas in many acoustic instruments the gesture-sound relationship is guaranteed, in the computer musician’s instrument it is (initially) absent. As described in the chapter ‘The Instrument’, reasons for this discrepancy can, among other things, be found in the split nature of the processing agent of the electronics performer’s instrument. As mentioned before, one-to-one mappings are more likely to create an accurate mental model, whereas one-to-many or many-to-one mappings easily obstruct a clear notion of synchronicity (Wanderley, 2001). This also reveals a correlation between the feasibility of synchronicity and the compositional idiom in which the computer musician operates.
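To make the distinction between these mapping types concrete, the following is a minimal sketch in Python; the control values and parameter names are invented for illustration and are not taken from any of the setups discussed in this chapter.

```python
# Illustrative only: the control values and parameter names are hypothetical.

def one_to_one(fader: float) -> dict:
    # One control -> one parameter: the causal chain stays easy to follow.
    return {"filter_cutoff": fader}

def one_to_many(fader: float) -> dict:
    # One control -> several parameters: the sound still follows the gesture,
    # but the spectator can no longer infer a single cause.
    return {
        "filter_cutoff": fader,
        "reverb_mix": 1.0 - fader,
        "grain_density": fader ** 2,
    }

def many_to_one(fader_a: float, fader_b: float) -> dict:
    # Several controls -> one parameter: individual gestures lose their
    # one-to-one correspondence with the audible result.
    return {"amplitude": (fader_a + fader_b) / 2}

print(one_to_one(0.8))
print(one_to_many(0.8))
print(many_to_one(0.8, 0.2))
```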


Luminous Control Interfaces

Luminous control interfaces have been around since the late 1990s. The luminous aspect of these interfaces mainly serves to inform the performer about the state of performance parameters in the throughput agent of the instrument. Nevertheless, throughout the years some performers have decided to turn their interface towards the public during performances4. In this regard we find two typical yet completely different examples in Daedelus and Madeon5. The former is known for his use of a monome 256 grid in conjunction with MLR, a sample cutting program written especially for the interface by Brian Crabtree. MLR is able to visualise loops by changing the position of a lit LED in a row of buttons one step at a time, in accordance with the playhead position in the buffer the sample is stored in. Basically, the performer launches these loops and then cuts them up during performance6. This can result in a dynamic performance in which the animations displayed on the performance platform are the result of the process taking place in the throughput agent of the instrument, controlled by the performer during the performance.
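To give an idea of the kind of row animation described above, here is a rough sketch of a travelling-playhead LED. It is not Crabtree's MLR code; it only assumes serialosc's documented OSC message /<prefix>/grid/led/set and the python-osc package, and the host, port and prefix are placeholders for a real device.

```python
# Not MLR: a rough sketch of a travelling-playhead LED along one grid row.
import time
from pythonosc import udp_client

GRID_WIDTH = 16                                           # monome 256: 16 x 16 keys
PREFIX = "/monome"
client = udp_client.SimpleUDPClient("127.0.0.1", 14656)   # example serialosc device port

def animate_loop(row: int, loop_length_s: float, cycles: int = 4) -> None:
    step_time = loop_length_s / GRID_WIDTH
    previous = None
    for step in range(GRID_WIDTH * cycles):
        x = step % GRID_WIDTH                              # playhead position -> column
        if previous is not None:
            client.send_message(f"{PREFIX}/grid/led/set", [previous, row, 0])
        client.send_message(f"{PREFIX}/grid/led/set", [x, row, 1])
        previous = x
        time.sleep(step_time)

animate_loop(row=0, loop_length_s=2.0)
```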

A completely different approach can be found when observing Madeon playing his Launchpads7. During performance8, Madeon uses a very straightforward approach with regard to mapping. Three control interfaces are presented to the audience: one for playing notes, one for launching samples and one for manipulating software parameters. The careful observer may notice that there is much less of a process going on that needs real-time control compared to Daedelus’ performance. Furthermore, most of the time the lit LEDs function only as indicators of sample states inside Ableton Live, or display a simple animation when a note is played. Despite the low level of performative involvement there is still a link between the performer’s throughput agent and the audience. Although the examples mentioned above are of a completely different nature, they both exhibit a sense of synchronicity, manifesting itself not so much on the gestural level but on the visual level. This leads to the following question: can luminous control interfaces provoke accurate mental models of the computer musician’s performance setup and thereby improve audience understanding of electronics performance?


Case studies: Coalesce [5], CollidR

[video: Preparing Coalesce05]
In order to get a clear view of the importance and role of gesture and simultaneity with regard to audience understanding of my own luminous performance setup, I set up two case studies. As mentioned in the previous chapter, surveys were conducted during the performances. Although the surveys hold information on both performance setups proposed in the previous chapter, I will only discuss the results produced by the monome grid performances, as these are the ones that inform my eventual performance practice and therefore the feasibility of my hypothesis on audience understanding of electronics performance. In both surveys, the participants (40 in total) were mostly bachelor conservatory students with little or no prior knowledge of ‘advanced’ electronic music performance. The performances were presented to the participants without any introduction to the performance or its structure. In both cases the performance platform was directed towards the audience. In the next section I will lay out and briefly comment upon the results of these case studies9.

When asked whether they experienced a clear link between the musical output and the visual feedback of the control interface, a large majority (29/40) answered that they had experienced a clear to very clear connection. Of both performances, the relationship between output and visual feedback during CollidR was perceived as less transparent than that during Coalesce [5]. This may be attributable to three things: first, the interface of Coalesce [5] was composed especially with a high degree of synchronicity between audio output and visual feedback in mind. Second, the process displayed on the performance interface was rather complex. Third, the performance of CollidR included a number of ‘interface changes’ (figurative gestures, e.g. for toggling between ‘process view’, ‘tempo view’ or ‘velocity view’) necessary to perform the piece.

When asked whether they experienced a clear link between the musical output and the gestural behaviour of the performer, a small majority (22/40) answered that they had experienced a clear to very clear link. There was no noticeable difference in the results between the two performances.

When asked whether or not the performance should be enhanced, a small majority (23/40) answered ‘yes’. There was no noticeable difference in the results between the two performances. When asked how the performance should be enhanced, the participants were presented with three options: exaggerated gestural behaviour, applying visuals or using a different control interface. Although there was no unambiguous answer, the results mildly suggested enhancement through the use of visuals.

Several things became clear from these case studies: with regard to gesture, the proposed performance setup does not accommodate a sufficient level of audience understanding of the performance. At the same time, it became clear that the visual aspect of the performance contributed to a better understanding of the performance. Nevertheless, there seemed to be a limit to this understanding, determined by the complexity of the visual animations and their connection with gesture and music.

The rather disappointing results with regard to gesture did not come as a surprise: since I was clearly not operating in a simple one-to-one mapping situation, this result was to be expected. The most important conclusion drawn from these case studies is that visualisation facilitates an optimisation of the communication model that exists between the electronics performer and the audience. Furthermore, the (half) result on enhancement pushed me in the direction of researching external10 visuals during performance. This will be discussed in the following sections of this chapter.


Multimedia

In electronic music performances, projected visuals are often applied. Although this tactic closes the gap between the audible and the visible element during performance, it brings along a shift in the performative space, more particularly from an instrumental to a multimedial setting. When entering the realm of multimedia we must take into account a different set of rules.

First and foremost, in this medium sound is generally not the main focus of an audience’s attention11 (Chion, 1994). Furthermore, the combination of a sonic and a visual element results in a third, audio-visual element. This can be an abstract aesthetic effect or even a new meaning resulting from two intersecting sounds/images. The presence of added value depends on the nature of the construction of this combination. This implies a shift in focus from the separate audio and visual components to the relationship between them (Grierson, 2005).

[fig. 1: the three modes of multimedia, after Zbikowski, 2003]
In relation to synchronicity (not so much in terms of time but as structural elements), Nicholas Cook distinguishes three modes of multimedia: conformance, complementation and contest. Their occurrence depends on the degree of similarity between the constituent media. When an instance of multimedia (IMM) is comprised of media which are consistent with each other, it can be classified as conformant. On the other side of the spectrum, where contradictory media form the basis of an IMM, it can be classified as contesting. Between these two we find the complementary class, in which the media constituting the IMM are contrary but not contradictory (Zbikowski, 2003) (fig. 1, after Zbikowski, 2003). The previously mentioned notion of added value has the best chances in the ‘complementary’ class.

With regard to the analysis of multimedia, an interesting connection was proposed by Lawrence Zbikowski. In ‘Music Theory, Multimedia, and the Construction of Meaning’, Zbikowski applies the idea of conceptual blending, which originates in the field of linguistics12, to the discipline of musical multimedia.

“In order to study conceptual blends, the rhetorician Mark Turner and the linguist Gilles Fauconnier developed the notion of conceptual integration networks (CINs). Each CIN consists of at least four circumscribed and transitory domains called mental spaces. Mental spaces temporarily recruit structure from more-generic conceptual domains in response to immediate circumstances and are constantly modified as our thought unfolds. […] Turner and Fauconnier use CINs to formalize the relationships between the mental spaces involved in a conceptual blend, to specify what aspects of the input spaces are imported into the blend, and to describe the emergent structure that results from the process of conceptual blending.” (Zbikowski, 2002).

[fig. 2: a generic conceptual integration network, after Zbikowski, 2003]
As posited in this quote, a general example of a CIN (fig. 2, after Zbikowski, 2003) contains at least four interconnected mental spaces: two input spaces, one generic space and one blend space. The input spaces represent the different types of media that are present in an instance of multimedia. The solid double-headed arrow indicates a correlation between the structural elements present in both types of input media. The generic space provides a conceptual framework for each of the input spaces, which are then projected into the blend space in which, due to the correlating input spaces, a new structure emerges. The dashed arrows indicate the direction in which information flows. They are double-headed since in some cases it is possible for structures to be projected from the blend space back to the generic space. It should be noted that mental spaces are dynamic structures, as are the CINs that are built from them.
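Since a CIN is essentially a small network of labelled spaces and mappings, the four-space layout can also be sketched as plain data. The Python rendering below is purely hypothetical: the space names, elements and cross-mappings are invented for illustration and are not Zbikowski's examples.

```python
# Hypothetical rendering of the four-space CIN layout as plain data.
from dataclasses import dataclass, field

@dataclass
class MentalSpace:
    name: str
    elements: list[str] = field(default_factory=list)

@dataclass
class CIN:
    input_1: MentalSpace                       # e.g. the sonic medium
    input_2: MentalSpace                       # e.g. the visual medium
    generic: MentalSpace                       # shared conceptual framework
    blend: MentalSpace                         # emergent structure
    cross_mappings: list[tuple] = field(default_factory=list)

example = CIN(
    input_1=MentalSpace("music", ["slow chord changes", "wave-like dynamics"]),
    input_2=MentalSpace("visuals", ["slow colour changes", "pulsating light"]),
    generic=MentalSpace("generic", ["gradual process", "breathing motion"]),
    blend=MentalSpace("blend", ["meditative audio-visual flow"]),
    cross_mappings=[("wave-like dynamics", "pulsating light")],
)
print(example.blend.elements)
```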


Case study: Harmonium #2 | download the software


In order to determine whether or not a multimedial approach would be feasible with regard to electronics performance and, if so, which mode of multimedia would be suitable with regard to audience understanding, I set up a case study based on four abridged performances13 of Harmonium #2 by James Tenney (1976). These performances were presented on February 2nd 2016 at the Royal Conservatory in Brussels during a lecture-recital entitled “Exploring visualisation strategies in computer music performance”. A survey was conducted during the performances among 20 participants, mostly conservatory students with little or no prior knowledge of ‘advanced’ electronic music performance. For the occasion I created a maxpatch (see portfolio/Software/Max/Harmonium 2(56)/_Harmonium2(56).maxpat) which allows for an electronic, polyphonic (three-voice) rendering of Harmonium #2 using sine tones. Although a strictly theoretical composition on just intonation and the harmonic series, the piece exhibits a strong meditative character. Throughout the piece an initial A major chord slowly and gradually evolves into other ‘just’ chords. The single notes that make up the chords are to be performed in a wave-like manner, gradually becoming louder, keeping the maximum volume for a short time and then slowly diminishing it. Apart from facilitating a pitch-perfect performance, the sound quality of the sine tones in this electronic version only enhances the meditative character of the piece.
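As an indication of what such a sine-tone rendering involves, the sketch below builds one just-intonation chord from three sine voices, each with a wave-like envelope (crescendo, plateau, diminuendo). It is not the maxpatch from the portfolio; the 4:5:6 ratios, fundamental and durations are illustrative and are not taken from Tenney's score.

```python
# A sketch only: one 'just' chord from three sine voices with wave-like envelopes.
import numpy as np

SR = 44100

def wave_envelope(duration_s: float, rise: float = 0.4, hold: float = 0.2) -> np.ndarray:
    n = int(duration_s * SR)
    n_rise, n_hold = int(n * rise), int(n * hold)
    n_fall = n - n_rise - n_hold
    return np.concatenate([
        np.linspace(0.0, 1.0, n_rise),    # gradual crescendo
        np.ones(n_hold),                  # short plateau at maximum volume
        np.linspace(1.0, 0.0, n_fall),    # slow diminuendo
    ])

def just_chord(fundamental_hz: float, ratios, duration_s: float) -> np.ndarray:
    t = np.arange(int(duration_s * SR)) / SR
    env = wave_envelope(duration_s)
    voices = [env * np.sin(2 * np.pi * fundamental_hz * r * t) for r in ratios]
    return sum(voices) / len(voices)

audio = just_chord(110.0, (1.0, 5 / 4, 3 / 2), duration_s=8.0)   # a 'just' major triad on A
```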

The performance setup used for this piece is comprised of a monome 256 grid and Max. Apart from the instrumental mapping and sound engine, the patch was adapted to include three types of visualisation, mindful of the three modes of multimedia:
  1. conformance: 6 VU-meters moving up and down synchronously with the volume of the different voices. A change in pitch is presented as a change in colour.
  2. complementation: an abstract pixellised video in which the volumes of the different voices control the degree of pixellation and the overall visibility of the video.
  3. contest: an abstract video of morphing light flashes that shows no direct link with the music.
The performance platform of the monome 256 grid is constructed in such a way that the different tones can be activated on the right-hand side of the grid while the volume is manipulated and automated on the left-hand side. Both the volume manipulation/automation and the changing of the notes are included in the on-grid visualisation. The first performance was done without any projected visualisation, using only the monome grid, which was oriented towards the audience. In preparation of each performance the audience was presented with the accompanying CIN (fig. 3). In the next section I will lay out and briefly comment upon the results of this case study14.
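The on-grid layout just described (tones on the right-hand side, volume on the left) can be thought of as a region-based key router. The following is a hypothetical Python sketch of that routing, assuming serialosc's /grid/key messages and the python-osc package; the port, prefix, voice assignment and handler bodies are placeholders, not the actual patch.

```python
# Hypothetical region-based routing of grid key presses (left half: volume,
# right half: tone activation); handler bodies are stand-ins for the sound engine.
from pythonosc import dispatcher, osc_server

def set_voice_volume(voice: int, level: float) -> None:
    print(f"voice {voice} volume -> {level:.2f}")

def toggle_tone(voice: int, step: int) -> None:
    print(f"voice {voice} tone step {step} toggled")

def handle_key(address, x, y, state):
    if state != 1:                                           # react to key-down only
        return
    if x < 8:
        set_voice_volume(voice=y % 3, level=(7 - x) / 7)     # left half: volume
    else:
        toggle_tone(voice=y % 3, step=x - 8)                 # right half: tones

disp = dispatcher.Dispatcher()
disp.map("/monome/grid/key", handle_key)                     # serialosc key messages: x y state
osc_server.BlockingOSCUDPServer(("127.0.0.1", 8000), disp).serve_forever()
```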

[fig. 3: the conceptual integration network presented to the audience for Harmonium #2]



When asked to rate the added value of the performance with projected visuals, the complementation performance received a score of 40%, followed by the contest performance (36.6%) and the conformance performance (23.4%). When asked which performance situation assisted the most in understanding the compositional structure of the music, half of the participants (10/20) chose the conformance performance, followed by the complementation performance (4/20). When asked which performance situation best approximated the proposed ‘meditation’ blend, 9 out of 20 participants chose the instrumental performance (without projected visuals). When asked which performance was the most appealing, 9 out of 20 participants chose the instrumental performance (without projected visuals). In both of these last two questions the complementation performance came in second place (7/20 and 6/20 respectively)19.

In conclusion, we find the proposed statements on added value and synchronicity confirmed. Furthermore, we find that projected visuals can have a positive effect on audience understanding of the music’s structure and of the ‘blend space’, an important step towards a better audience understanding of the electronics performance. This case study leads us to conclude that a multimedial approach is capable of conveying a ‘correct’ notion of performative meaning. In this regard, I would like to repeat that the compositional idiom in which the electronics performer operates plays an important role: the structure of the composition proposed in this case study is not that complicated.

In addition to these conclusions, we observe a preference for an instrumental approach not only as a performance concept, but also with regard to the proposed blend space. The latter might be attributed to the meditative character of the composition, which comes across very strongly even without the visualisations. Nevertheless, this result came as a surprise to me, suggesting that the presence of the electronics performer, playing his or her instrument on stage, can be enough to convey a proper notion of performative meaning.



Input = Output²

Similar to the previously discussed projected visuals (or external visuals), the internal projection of animations generated by the throughput agent onto a luminous interface’s platform (or internal visuals) implies a shift in the performative space towards a multimedial setting. This paradigm shift opens up the possibility of coupling gesture to the modes of multimedia created by the intersections of sound and visuals. This in turn implies a shift in expression and performative meaning from gesture towards the blend spaces emerging from the correlations between the audio and visual elements throughout the performance. Expression can subsequently be linked to both gesture (be it effective, accompanist, figurative or any combination thereof) and visual animations. Ultimately, these extra tools can lead to an optimisation of audience understanding of the computer musician’s performance.

In this regard several connections can be observed (a schematic sketch follows the list below):

performance gesture conformance: gesture is conformant (synchronous) with visual animation and sound = effective visualisation.
performance gesture complementation: gesture activates sound (or sound effect) and a visual animation which is complementary (synchronous, yet with added value) in nature = accompanist visualisation.
performance gesture contest: gesture activates sound and contesting (asynchronous, ‘meaningless’) visual animations = figurative visualisation.
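As a schematic illustration of these three couplings, the sketch below lets one key press trigger a sound while the accompanying grid animation is either conformant, complementary or contesting. The sound trigger and LED calls are placeholder functions, not MGCVM code.

```python
# Schematic only: trigger_sound and set_led are stand-ins for real sound/LED output.
import random

def trigger_sound(x: int, y: int) -> None:
    print(f"note for ({x},{y})")

def set_led(x: int, y: int, state: int) -> None:
    print(f"led ({x},{y}) -> {state}")

def on_key(x: int, y: int, mode: str) -> None:
    trigger_sound(x, y)
    if mode == "conformance":            # effective visualisation
        set_led(x, y, 1)                 # light exactly the pressed key
    elif mode == "complementation":      # accompanist visualisation
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            set_led(x + dx, y + dy, 1)   # related but transformed pattern (ripple)
    elif mode == "contest":              # figurative visualisation
        set_led(random.randrange(16), random.randrange(16), 1)   # unrelated pattern

on_key(4, 7, "complementation")
```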


Throughout my latest performances as a computer musician I have applied this performance strategy to a number of compositions15, some of which will be discussed in the next chapter. This led to the development of the Monome Grid Control and Visualisation Modules (MGCVM).





MGCVM | download the software

MGCVM16 is a collection of modules for building custom performance interfaces for monome grids in Max. It takes advantage of monome’s decoupled design to link performance actions to a wide variety of visual animations. Apart from securing a static one-to-one relationship between key presses and LED feedback, or a visual representation of the applied musical process, MGCVM allows for coupling these animations to the sounds produced by the performer’s actions. As a result, MGCVM opens up a myriad of pathways for both monome grid performance practice and audience understanding thereof in a concert situation.
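The decoupling mentioned above means that a key press sends an OSC message but lights no LED by itself; every LED is drawn explicitly by the software, which is what makes arbitrary action-animation couplings possible. Below is a minimal Python sketch of that separation using python-osc and serialosc's documented messages; the ports, prefix and the trivial 'echo' animation are placeholders rather than MGCVM itself.

```python
# Ports and prefix are examples; use the values serialosc reports for your grid.
from pythonosc import dispatcher, osc_server, udp_client

grid_out = udp_client.SimpleUDPClient("127.0.0.1", 14656)    # to the grid, via serialosc

def animation(x: int, y: int, state: int) -> None:
    # Here: a static one-to-one echo of the pressed key. Any other pattern could
    # be drawn instead, since key input and LED output are fully independent.
    grid_out.send_message("/monome/grid/led/set", [x, y, state])

def on_key(address, x, y, state):
    animation(x, y, state)               # a key press only becomes light if we draw it

disp = dispatcher.Dispatcher()
disp.map("/monome/grid/key", on_key)     # serialosc key messages: x y state
osc_server.BlockingOSCUDPServer(("127.0.0.1", 8000), disp).serve_forever()
```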



Although there is a proof-of-concept video present in my portfolio17, it is only during my defence’s lecture-performance and concert that MGCVM will be presented to its full extent. It should be noted that the proposed visualisation modules represent only a fraction of the possibilities and are, for now, closely linked to my personal compositional and performative aesthetics. What follows is a brief description of MGCVM’s current content and modus operandi.


MGCVM base: connect control and visualisation modules to the monome grid, MIDI routing and transport activity options.

helper modules: m4ludp: creates a UDP connection between Ableton Live and MGCVM modules (connects DAW volume, triggers and transport to visualisations).

control modules: Connect left outlet to MGCVM base in order to establish basic controller visualisation (similar to Control.app’s toggle button, momentary button, horizontal and vertical faders, keyboard). Connect right outlet to visualisation modules.

visualisation modules: connect inlet to a control module and outlet to MGCVM base in order to link a controller with a visual animation.
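To make the patching chain above easier to picture, here is a hypothetical Python analogy of the control module → visualisation module → MGCVM base routing. The class and method names are invented; in the actual software these are Max abstractions connected by patch cords.

```python
# Invented names for illustration; the real modules are Max abstractions.
class MGCVMBase:
    """Collects LED instructions and talks to the monome grid."""
    def draw(self, x: int, y: int, state: int) -> None:
        print(f"grid led ({x},{y}) -> {state}")

class VisualisationModule:
    """Turns control events into an animation on the grid."""
    def __init__(self, base: MGCVMBase):
        self.base = base
    def on_event(self, x: int, y: int, value: int) -> None:
        self.base.draw(x, y, 1 if value > 0 else 0)      # e.g. a single-cell animation

class ControlModule:
    """Basic controller: left outlet -> MGCVM base, right outlet -> visualisation."""
    def __init__(self, base: MGCVMBase, visual: VisualisationModule):
        self.base = base
        self.visual = visual
    def key(self, x: int, y: int, state: int) -> None:
        self.base.draw(x, y, state)        # basic controller visualisation
        self.visual.on_event(x, y, state)  # feed the linked animation

base = MGCVMBase()
controller = ControlModule(base, VisualisationModule(base))
controller.key(3, 5, 1)
```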

The included visualisations can be classified as follows18:

effective visualisations: basic visualisations of the controllers (see control modules), VU-meter

accompanist visualisations: vertical freeze, horizontal freeze, glitch, loop, stutter, reverb row, artefact, sonic boom, vertical waves, horizontal waves, pulsating grid, random grid, ripple, camera2grid.

figurative visualisations: vertical waves, horizontal waves, pulsating grid, random grid, camera2grid.

Although the visualisation strategy proposed above provides an original solution for performer-audience communication, the staging of the performance plays an important role. Much of the proposed tactic is based on the visibility of the luminous control platform. In this regard, factors such as the size of the performance instrument, the performance space and the lighting are key to the success of this performance strategy.
  1. Gurevich and Trevino (2007), Dahl and Friberg (2007), Sloboda (1988), Camurri et al. (2004), Dobrian and Koppelman (2006), Fels et al. (2003) to name a few.
  2. see ‘Fyans A.C. (2015) Spectator Understanding of Performative Interaction, The influence of mental models and communities of practice on the perception and judgement of skill and error in electronic music performance ecologies’ for more information on mental models.
  3. We see an application of the latter in many electronics performances in the popular music circuit, presumably due to the fact that there aren’t that many effective and accompanist gestures to be performed.
  4. Whether this decision was made in order to raise a better understanding of the performance or simply to provide the audience with something to look at while listening is unclear.
  5. https://en.wikipedia.org/wiki/Madeon (last accessed July 2016)
  6. example of Daedelus performing: https://youtu.be/Z_zIvFYQWig?t=3s (last accessed July 2016)
  7. Novation Launchpad: https://us.novationmusic.com/launch/launchpad (last accessed July 2016)
  8. example of Madeon performing: https://youtu.be/-rNOiQUL4ik?t=19s (last accessed July 2016)
  9. A more thorough overview of the results will be presented in my defence’s lecture-performance.
  10. … projected on a screen in the vicinity of the performer, as opposed to internal visuals, which take place on the performance instrument itself. More information on this topic will follow in the subchapter ‘Input=Output²’.
  11. In ‘Analysing Musical Multimedia’ Nicholas Cook even posits that music in the abstract doesn't have meaning.
  12. For more information on this topic see Fauconnier, G., and Turner, M. (2002) The Way We Think: Conceptual Blending and the Mind’s Hidden Complexities. New York: Basic Books.
  13. Due to the fixed length of the lecture-performance I had to cut each performance to ca. 4’.
  14. A more thorough overview of the results will be presented in my defence’s lecture-performance.
  15. see portfolio/Media/Video/ExtremelyLoud_excerpts.mp4 for early examples.
  16. A demo maxpatch can be found in portfolio/Software/Max/MGCVM/_MGCVM_04.maxpat
  17. see video above or portfolio/Media/Video/MGCVM_concept_video.mp4
  18. some modules can be classified in multiple categories according to their input options.
  19. The author is aware of the limitations of the result of the case study due to the low number of participants. Nevertheless, this case study can only be executed in a real-life performance environment, which unfortunately was attended by only 20 participants.