[IN]VISIBLE

Introduction

In this chapter, an overview of my personal artistic development is followed by a brief introduction to the chapters which constitute the core of this thesis. Within these introductions, I will formulate my research questions.



Roland E-20
From a young age I was intrigued by electronic music, or at the very least by synthesised sounds. I remember the day in 1988 when my dad brought home a Roland E-20 synthesiser as if it were yesterday. As I had already been playing the piano for a couple of years, the keyboard's interface was familiar to me, and I could easily interact with the synthesiser; it took me no time to learn most of its functions. I vividly remember sitting at the keyboard with my headphones on, striking a chord in the accompaniment section and just listening to the arrangement for several minutes, then changing the chord, again holding it for several minutes, and so on. I liked the arrangements that occurred when hitting major seventh or ninth chords the most, since they had alternating patterns which, in my opinion, resembled a melodic line. Although I was in awe of this instrument, I felt like something about this accompaniment section was wrong. It was the fact that playing one note would result in an instant, full-blown arrangement coming from my headphones, which, of course, was the whole purpose of the instrument. Approaching this element as a pianist made me feel like I was 'cheating'. After some experimenting I decided to leave the synthesiser be and picked up the piano again, only to return to synthesised and electronic music at a later stage of my development as a musician.

After some years that piano became a guitar, then a bass guitar and then a drum kit. I started playing in alternative rock bands, which turned into hardcore bands, which in turn became death metal bands. I was always looking for 'harder' and faster music to listen to, music I would then learn how to play. More than the harsh sounds and the virtuoso aspect of this extreme style of music, it was the 'exotic' rhythmical structures that kept me curious. This quest came to a climax when I discovered Meshuggah, a metal band with a unique sound and style which included virtuoso guitar solos and clever song structures heavily based on rhythmical 'games', such as repeating patterns that at first did not seem to fit the proposed meter but would eventually sync up with it. It was this 'trompe l'oreille' effect that would play a significant role in the years to come.


StabberClock (excerpt)

Benjamin Van Esser | 2004


While studying piano at the Conservatory of Brussels I quickly developed a deep affection for contemporary music. It seemed to be all about the harsh sounds and virtuosity I loved in heavy metal. The dissonance and the structural and rhythmical complexity were right up my alley. I quickly took an interest in György Ligeti's piano studies and Conlon Nancarrow's studies for player piano, which were like an extremely advanced version of the Meshuggah songs (of course this statement should be read the other way around). When I started to compose some years later, I initially attempted to mimic my heroes and set out to write a piano piece which I would be able to perform myself. The piece ended up being too fast and too diffuse in terms of polyrhythms to play live on an acoustic piano. So I returned to the electronic keyboard/workstation, which allowed me to program melodic lines and provided me with tools such as an arpeggiator and a pitch shifter. Contrary to my previous experience with the instrument, I now saw the benefits of these tools. Rather than turning away from them, this time I embraced them and used them to achieve my goal. It was around this time that I enrolled in the music technology course at the Brussels Conservatory. Although this was mainly a course in recording and editing basics, I quickly saw the possibilities it held as a compositional tool and decided to explore it further. After some experimenting I was able to compose some pieces which were mainly based on destroying audio samples with all kinds of effects, creating an 'unreal' realm of sound. In retrospect these soundscapes sounded really cheap, but working on them allowed me to learn (more advanced) DAW functions. It did not take long before I was asked to perform some of these pieces during an arts festival. Although I felt fortunate that I could present my own electronic music, I was somewhat reluctant to accept the invitation, since I had no idea how to perform this kind of music live. I decided to team up with a friend who was also a computer musician, and chose to improvise on a piano on top of the soundscapes I/we created.

This experience led me to expand my frame of reference in electronic and electro-acoustic music and its modes of live performance. One of the most distinctive experiences would soon follow during an electronic music show. When I entered the concert hall, the only thing on stage was a table with a laptop and an audio interface on it. During the performance, the performer would just stare at the screen of his computer and, from time to time, move his mouse. Of course it could easily be argued that this setting was an aesthetic choice, as in acousmatic music, putting the focus on listening to rather than looking at the music. Nevertheless, this static performance, which included neither a visual cue nor any connection to the music coming out of the speakers, seemed incomplete to me. In some way I even felt duped, since I could have had the same thrill at home listening to the record on my sound system. Similar experiences in various idiomatic settings followed. When on stage (which in classical contemporary electro-acoustic music was hardly ever the case), most artists were hiding behind their computer or mixing desk, relying on visuals that were displayed behind or beside them. The same notion of cheating that struck me while playing the Roland keyboard in my teens emerged again. This prompted me to set out and pursue answers to questions such as why and how it can be done in a more proficient way.

In 2010 I joined NorthFaceCollective, a nu-jazz formation consisting of a singer, an electronics musician, a VJ and a sound engineer. After a one-year residency at STUK (Leuven) and some shows, we decided to enter an electronic music contest. Although we made it to the semi-finals, we were disillusioned. Compared to the other acts we were the ones 'not faking it': performing every (possibly) executable element of our music in real time, working with dynamic visuals and a quadraphonic sound system, and therefore performing with a much higher risk factor than the other contestants. The jury's notes were clear: our performance was too static. True, we didn't jump up and down with the beat and we didn't have the audience wave their hands with us. In retrospect one could question whether NorthFaceCollective was competing in the right category. This experience directly led to the start of the research project I'm presenting in this thesis1.



Coalesce

Preliminary research2 led me to formulate a first fundamental question as follows: is a computer musician an artistic performer? One should note that this question stipulates a focus on the computer musician throughout this thesis, who can be described as a musician who uses a computer in order to perform music3. With this question, several others arise, concerning the computer musician's performance practice, the composition of the performance instrument and the performative communication in a concert situation. Yet the first topic to research here is the necessity of the computer musician's presence on stage during performance. In essence this seems to be an aesthetic decision, mostly to be made by the composer. Nevertheless, I decided to create several compositions applying various strategies for electronics performance in order to find an answer to this question.

Eliminating the performer entirely is hardly a desirable outcome, and one that few if any composers in the field would advocate. Their elimination is undesirable, beyond the purely social considerations, because human players understand what music is and how it works and can communicate that understanding to an audience... (Rowe, 1993).

Although reading this quote strengthened my belief in a positive answer to my initial question, I could only take note of the fact that, since 1993, many composers of electro-acoustic music have continued down the path of a disembodied performer, absent from the stage during performance, and have continued writing electronics parts which only come to life through the activation of a computer program or a 'tape'. This activation is often carried out by a person other than the actual performer (mostly the composer), who is usually positioned at the mixing desk at the back of the hall. In other cases the composers are able to create a sense of liveness by leaving the timing of activation up to the acoustic instrumentalist, who controls the software with f.i. a foot pedal. It has to be said that most of the time the composers are also the developers of the accompanying software or tapes and, as stated above, sometimes even the 'performers', thus making them classifiable as computer musicians. Although the choice for a disembodied performance strategy of the electronics part in an electro-acoustic composition can be regarded as a matter of aesthetics, I will examine whether this type of computer musician is an artistic one (see chapter 'Coalesce').

Investigating the paradigm concerning the role of the computer musician in live electro-acoustic performance situations, I present a cycle of compositions called Coalesce in which several modes of performance practice are explored. Throughout this cycle, which comprises six electro-acoustic compositions, the computer musician slowly evolves from a disembodied, invisible entity, only manifesting itself through computer programming, to an embodied performer, playing 'the instrument' in a real-time, on-stage performance environment. Hereby, this cycle not only attempts to demonstrate the advantage of presenting the computer musician on stage during live electro-acoustic performances, it also aims to establish a notion of artistry, originating from and presented by the computer musician. The final stage of the proposed evolution can be regarded as a point of convergence, where the computer musician is present on stage in the capacity of composer, programmer and performer. This brings us to the notion of the computer musician as a multi-threaded performer: a performer who composes the music, the software and the instrument his or her composition is performed on. The latter is open to interpretation, as crafting a new physical instrument is comparable to f.i. programming an already existing control interface. It is here that I position myself throughout this thesis, as a multi-threaded performer, and it is in this form that I will formulate answers to the questions mentioned above.

The Instrument

Currently, musicians who use digital technology have access to a vast array of tools. These tools can be divided into two categories which are both equally important in terms of defining the computer musician: software (audio-programming environments such as Ableton Live, Max and SuperCollider, to name a few) and hardware (control interfaces like the BCF2000, Novation Launchpad, WiiMote, etc.). These tools are all built from the same building blocks: music performance software allows for MIDI mapping, multi-channel audio, etc.; control interfaces allow for controlling software parameters, like toggling an effect or moving a fader. Although some authors (Bongers, 2000; Jordà, 2005) advocate the view that the interface and the sound engine should be considered as one system, the computer musician's instrument will always be characterised by the split between these two. In this thesis I propose an instrumental model based on analyses of the basic constituents of electronics performance. This model will then be applied to my own performance setup to clarify how the above-mentioned dichotomy, which is often described as the Achilles heel of electronics performance practice, can serve as an aid in performance communication.
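
To make this split concrete, the following sketch shows one such building block in SuperCollider, one of the environments mentioned above: a MIDI continuous controller message coming from a hardware fader is mapped onto a single software parameter. This is a minimal illustration, not a definitive setup; the synth and the controller number (81) are hypothetical choices, and the sketch assumes the audio server is booted.

```supercollider
(
// assumes the SuperCollider server is booted (s.boot)
MIDIClient.init;
MIDIIn.connectAll;

// a simple sustained sound whose filter cutoff will act as the mapped parameter
SynthDef(\pad, { |out = 0, cutoff = 800|
    var sig = LPF.ar(Saw.ar([110, 111], 0.2), cutoff.lag(0.1));
    Out.ar(out, sig);
}).add;
)

(
~pad = Synth(\pad);

// map incoming CC values (0-127) onto a cutoff range of 200-8000 Hz
MIDIdef.cc(\faderMap, { |val, num, chan|
    ~pad.set(\cutoff, val.linexp(0, 127, 200, 8000));
}, ccNum: 81); // 81 is an arbitrary, hypothetical controller number
)
```

Moving one fader changes one parameter; the hardware (input) and the sound engine (throughput) remain separate components, joined only by this mapping.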

Furthermore, we see technology advancing at such a high pace that it is often difficult for the users of this technology to master it completely and obtain a degree of virtuosity before the next new thing comes along. A case for this problem can be made by comparing the computer musician's performance practice with that of a pianist. Since the basic form factor of the piano's interface has remained unchanged over the last few centuries, history was able to produce highly skilled pianists. Although the existence of computer music only spans a couple of decades, we witness a constant change of environment; an environment in which all agents in electronics performance practice evolve at a rapid pace. Together with these changes, the affordances and constraints4 of the aforementioned agents are continuously shifting, thus preventing the computer musician from establishing a thoroughly defined performance practice.

Although this rapid evolution in music technology is still ongoing, the careful observer may notice a shift in development towards optimising already established form factors instead of creating new ones. This phenomenon is noticeable in the fields of software and hardware alike. And although this trend mainly exists in commercial music software and hardware, their 'custom-made' counterparts are built upon the same basic principles (see chapter 'Instrument', subchapter 2.3 'Input and Throughput'). This 'evolutionary interphase' creates an opportunity for a deeper analysis of the instrument, thus enabling the computer musician to centralise and calibrate the latter. This concretisation, which allows for a thorough investigation of the affordances and constraints imposed by the instrument, will be advocated throughout this thesis. Based on this consolidation I will present several computer programs which I developed specifically for my own electronics performance practice.

Communication

(…) the term commonly used today for music composed for loudspeakers alone – acousmatic music – is a reference to Pythagoras’s practice of lecturing from behind a screen so that his audience could attend solely to his words. This acousmatic character is often cited as one of the difficulties with the reception of acousmatic music – not, it has to be said, so much because it erases the labour of production, but more often because ‘there is nothing to look at’. Thus there have been various attempts to reintroduce the visual, from video projections to a focus on the person behind the mixing console as ‘diffusion artist’. The former addresses the perceived need to accompany sound with images, without attempting to address the aforementioned de-corporealisation. The latter, in contrast, is borne of the desire to re-incorporate human performance, but it encounters a familiar problem: while there is a body, there is only a generalised mapping of the physical movements of such a body (pressing keys, moving faders, and so on) to the types of energy and gesture present in the music – the music remains, in essence, acousmatic, in the sense that what is known to be the source is visible but remains perceptually detached (Croft, 2007).

John Croft clearly points out one of the biggest problems in an embodied computer musician's performance practice: audience understanding of electronics performance. We know the auditory system has evolved to seek reasons for the soundfield it encounters (Emmerson, 1999). Also, when comparing acoustic instruments to novel electronic interfaces, we find that spectators can form more accurate and detailed mental models of the former than of the latter, due to the non-mechanical, non-linear or disconnected nature of action-sound relationships in electronics performance (Fyans, 2014). This implies that a one-to-one relationship between gesture and sound, as found in most acoustic instruments, has the best chance of creating an understandable performance. In other words, the higher the degree of synchronicity between the audible and visual factors of the performance, the better the understanding of the performance.

In electronics performance we can identify several attempts to solve this problem. On the level of the performer, a solution can be found in the use (and exaggeration) of ancillary or extra-musical gestures while playing the instrument, in order to convey a large amount of information that is somewhat independent of the auditory signal but reinforces its communication (Vines et al., 2006). Although the latter has proven to be an effective carrier of expression in acoustic instrument performances, in electronics performances these extra-musical gestures can easily turn into a superfluous choreography, invoking a degree of opacity during performance. Another solution presents itself through the use of sensor-based and camera-based adaptations. Comparable to the performance practice of the theremin, these interfaces are generally used for mapping a bodily gesture to a musical parameter inside the performance software; a sketch of this approach follows below. Nevertheless, when several gestures are combined, this transparency is easily lost again. A third solution can be found in the use of live visuals which accompany the audible result of the electronics performance, turning a musical performance into a multimedia performance. Although a certain degree of synchronicity between the musical and visual aspects is easily guaranteed, this medium inevitably shifts the notion of meaning from the music to a different level.
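
As an illustration of the sensor-based approach, the sketch below maps one bodily gesture onto one musical parameter, in the spirit of the theremin comparison above. It assumes sensor data (f.i. a WiiMote tilt value, normalised to 0-1) arrives as OSC messages via a bridge application; the address '/wii/accel/x' is a hypothetical example, and the \pad SynthDef from the previous sketch is reused.

```supercollider
(
// one gesture, one parameter: tilt controls the filter cutoff
~drone = Synth(\pad); // reuses the \pad SynthDef defined earlier

OSCdef(\tiltMap, { |msg|
    var tilt = msg[1].clip(0.0, 1.0); // normalised accelerometer value
    ~drone.set(\cutoff, tilt.linexp(0, 1, 200, 8000));
}, '/wii/accel/x'); // hypothetical OSC address, depends on the bridge software
)
```

As long as the mapping stays one-to-one, the causal chain from gesture to sound remains legible; layering several simultaneous mappings onto the same gesture is precisely where the transparency described above starts to erode.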

In order to guarantee a synchronous yet meaningful electronics performance, I propose a performance practice that is based on linking performance gestures to visual animations which manifest themselves on the input agent of the computer musician's instrument. These visual animations are in turn linked to the sonic result instigated by the performer's actions, thus creating a positive feedback loop capable of optimising the communication model that exists between the computer musician and the audience in a concert situation.
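
A minimal sketch of this idea, assuming a grid controller with addressable LEDs such as the Novation Launchpad mentioned earlier: a pad press triggers a sound and simultaneously lights the pressed pad, so that the visible animation on the input agent coincides with the sonic result. The device and port names are assumptions, and on classic Launchpads the LED colour is encoded in the velocity byte of a note-on message.

```supercollider
(
MIDIClient.init;
MIDIIn.connectAll;
~lp = MIDIOut.newByName("Launchpad", "Launchpad"); // assumed device/port names

// a short percussive sound, freeing itself when done
SynthDef(\hit, { |out = 0, freq = 440|
    var env = EnvGen.kr(Env.perc(0.01, 0.4), doneAction: 2);
    Out.ar(out, SinOsc.ar(freq, 0, 0.2 * env) ! 2);
}).add;
)

(
MIDIdef.noteOn(\padOn, { |vel, num|
    Synth(\hit, [\freq, num.midicps]); // the sonic result of the gesture
    ~lp.noteOn(0, num, 60);            // light the pressed pad (velocity encodes colour)
});

MIDIdef.noteOff(\padOff, { |vel, num|
    ~lp.noteOn(0, num, 0);             // switch the LED off on release
});
)
```

Because gesture, light and sound are triggered by the same event, the audible and visible factors of the performance stay synchronous by construction, which is the communicative core of the proposed feedback loop.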

Through this research process I aim to demonstrate that the computer musician is indeed an artistic performer, maintaining a degree of on-stage artistry comparable to that of acoustic instrumentalists. Furthermore, my goal is to create a better audience understanding of the computer musician's performance in electronic and electro-acoustic settings alike.
  1. Although this elaborate introduction might seem superfluous in the context of this thesis, it holds important information on the idiomatic nature of the compositions discussed throughout this thesis and the accompanying lecture-performance and concert.
  2. Van Esser, B. (2012). Controllerism, a minimalistic approach. Trobadrors research project, KCB.
  3. Research on the performance of electronics prior to the introduction of the computer is therefore irrelevant to this thesis.
  4. Gibson, J. J. (1979). The Ecological Approach to Visual Perception. Boston, Massachusetts: Houghton Mifflin.