“Sound Studies: A Discipline?”: Sound Signatures Winter School, Amsterdam, January 2014
Has the ever-nascent field of sound studies finally “grown up”? After years of intellectual development and a constantly growing body of work, including a number of now-classic texts, it has been rapidly establishing an identity of its own, independent of the many “parent” disciplines from which it originated. As with any teenager, this process of maturation comes with a dose of soul-searching and, indeed, some navel-gazing. But are we ready to acknowledge sound studies as a discipline in its own right?
At the first conference of the European Sound Studies Association (ESSA) in Berlin in October 2013, a heated debate followed an otherwise routine announcement. The preliminary title for the second installment of the conference, “Sound Studies: A Discipline?,” was not going to make it to Copenhagen in June 2014. Although the question mark suggested playfulness, many audience members either did not like the idea of an entire conference devoted to a meta-discussion on the pros and cons of interdisciplinarity or were not prepared to consider sound studies a discipline in the first place.
Eventually, the Copenhagen conference was safely re-named “Sound Studies: Mapping the Field.” The discussion in Berlin, however, continued at the opening session of the Sound Signatures Winter School in Amsterdam in early 2014. Co-organizer Mara Mills asked whether the publication of such anthologies as The Sound Studies Reader in 2012 and The Oxford Handbook of Sound Studies in 2013 meant that sound studies was a proper discipline. Is it, she asked, moving away from its roots as an interdisciplinary field consisting of displaced scholars formerly unable to tackle questions of sound within the confines of their traditional disciplines? The ensuing five days of the Winter School answered Mills’s question in a rather fittingly ambiguous way. The question remains: “Sound Studies: A Discipline?” Well, yes and no.
One of the most significant conclusions of the Winter School’s thought-provoking workshops, keynotes, performances and debates was phrased by co-organizer Carolyn Birdsall during the final discussion on Friday afternoon: she had come to realize that sound studies and its older, more distinguished, but often somewhat stale brother musicology are not the adversaries one is often led to believe. A musicologist by training, I have always found sound studies’ habit of explicitly not dealing with music (in conjunction with its sometimes disproportionate focus on sound art) a little tiresome; what these five intensive days in Amsterdam convincingly showed, among other things, was that the older brother and its younger sibling can be rather complementary.
Of course, the traditional objects and methods of the discipline of musicology—in its most dusty and clichéd form, studying black dots written on paper by great men—have long been what sound studies scholars avoided. By the late 1980s, however, musicology had already started moving away from this stereotype by incorporating more critical methodologies and broadening its scope. Moreover, ethnomusicologists and cultural musicologists have been breaking the armor of Eurocentrism in mainstream musicology. Now, with the steady rise of sound studies’ academic momentum, musicology is even giving up its intellectual monopoly on determining what does and does not count as relevant research on music. The highly interdisciplinary body of knowledge developed in this mature sound studies can indeed be very useful in more conventional musicological research; likewise, sound studies benefits from work conducted within the disciplinary confines of musicology.
At the Winter School, a prime example of such an exchange was Julia Kursell’s keynote lecture “Motor Media: On Aural Feedback in the History of Musical Instrument Playing.” Focusing on the experiments of nineteenth-century French pianist and teacher Marie Jaëll, Kursell showed how, prior to the advent of recording technology, musical instruments like the piano offered valuable points of entry into the world of sound and hearing. The piano keyboard, Kursell argued, was not just a site of aesthetic, musical development, but was also employed as an epistemological tool in itself. Moreover, studying such historical cases opens the door for broader questions engaging musicology, sound studies, and science and technology studies. This interdisciplinary overlap allows for discussions of the body politics of music teaching as well as the didactics of a specific aesthetic regime in a particular social milieu.
Other sessions that explicitly dealt with music included Stephen Amico’s lecture combining sound studies, media studies and the “discipline formerly known as ethnomusicology” to discuss ethical difficulties facing ethnographic sound archivists. This discussion about the ownership and right of use of the recordings in such archives was among the most refreshing and timely raised during the week. On a much lighter note, Ashley Burgoyne’s workshop “What Can You Learn from a Music Game?” represented yet another rapidly developing interdisciplinary field of music research: the study of music cognition.
Recently, after returning from the aforementioned ESSA conference in Copenhagen, Marcel Cobussen predicted in a Facebook update that “in 10-15 years from now, musicology will be a subspecies of sound studies.” He might be right, but rather than a subspecies or sub-discipline, why not envision a continuum running from “old-fashioned” musicology, via the broader field of music studies, to the broader-still field of sound studies? As such, sound studies would maintain its interdisciplinary status as a field, rather than a discipline, allowing for engagement with the knowledge that has been and is still being produced in musicology proper and in music studies more generally.
It is up to a new generation, raised as sound studies natives, to further this exchange of scholarship. Judging by the lectures, workshops, performances, and, most tellingly, the student presentations during these five days in Amsterdam, this will undoubtedly happen. Notwithstanding the very broad scope of topics, approaches, backgrounds, and interests among participants and presenters, there was a tacit acknowledgement of commonality in the one thing they all shared: a profound interest in sound in the broadest sense of the word, one that needed very little justification. Initiatives like this Winter School and its upcoming second installment, in the form of a Summer School in Berlin, leave one with an optimistic outlook on the intellectual potential of the young field of sound studies; it forges interdisciplinary connections by virtue of a common interest in an object, sound, that is at once very specific and seemingly endless in its scope of scholarly possibilities.
Perhaps the most telling example of this bright future was the fact that the keynote by Jonathan Sterne, without question the week’s big star and author of one of the founding books in the field, was a fine historical overview of the concept of the “soundscape,” though one offering few new insights or questions. If anything, this unusually low-key performance from a very impressive scholar underlined the most inspiring aspect of the Sound Signatures Winter School: there is still much to be done, and, as this very blog has consistently shown since 2009, a new generation of sound scholars is already doing it. I am therefore looking forward to hearing our next generation of scholars weigh in on the question “Sound Studies: A Discipline?” in the forthcoming discussion in Berlin. With an impressive, diverse, and exciting program, I’m sure I won’t be disappointed.
—
Melle Jan Kromhout is a PhD fellow at the Amsterdam School for Cultural Analysis, University of Amsterdam. His research project, “Noise Identities,” focuses on the revaluation of noise in recorded sound and music, and aims to develop noise identities as a concept for assessing the relation between recording media and musical significance. He has presented his work at conferences around the globe and published several articles, including “‘Over the Ruined Factory There’s a Funny Noise’: Throbbing Gristle and the mediatized roots of noise in/as music” (2011), “As Distant and Close as Can Be. Lo-fi Recording: Site-specificity and (In)authenticity” (2012), “An Exceptional Purity of Sound: Noise Reduction Technology and the Inevitable Noise of Sound Recording” (2014), and “‘Antennas Have Long Since Invaded Our Brains’: Listening to the ‘Other Music’ in Friedrich Kittler” (forthcoming, 2015). More information at www.mellekromhout.nl.
—
Featured image: Carla Müller-Schulzke opening the first ESSA conference in Berlin, October 2013, by Jennifer Stoever, CC BY-SA 3.0
—
REWIND! . . .If you liked this post, you may also dig:
Functional Sound (Studies): The First European Sound Studies Association Meeting— Erik Granly Jensen
“Once the word ‘sound’ was in the title, it opened up a kind of door”: A Conversation with Eric Weisbard— Liana Silva-Ford
“Sound at AMS/SEM/SMT 2012”— Bill Bahng Boyer
Musical Objects, Variability and Live Electronic Performance
This is part two of a three-part series on live electronic music. To review part one, click here.
In the first part of this series, “Toward a Practical Language for Live Electronic Performance,” I established the language of variability as an index for objectively measuring the quality of musical performances. From this we were able to rethink traditional musical instruments as musical objects with variables that could be used to evaluate performer proficiency, both as a universal for the instrument (proficiency on the trumpet) and within genre and style constraints (proficiency as a Shrill C trumpet player). I propose that this language can also describe the performance of electronic music, making it easier to draw parallels with traditional western forms. Having proven useful with traditional instruments, we’ll now see whether this language can describe the musical objects of electronic performance.
We’ll start with a DJ set. While not necessarily an instrument, a performed DJ set is a live musical object composed of a number of variables. First would be the hardware. A vinyl DJ rig consists of at least two turntables, a mixer, and a selection of vinyl. A CDJ rig uses two CD decks and a mixer. Serato and other vinyl-controller software require only one turntable, a mixer, and a laptop. Laptop mixing can be done with or without a controller. One could also do a cassette mix, a reel-to-reel mix, or mixing in some other hardware format. What is critical is a means of combining audio from separate sources into one uniform mix. Some of the other variables involved include selection, transitions, and effects.
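To make this framing concrete, here is a minimal sketch, my own illustration rather than anything from the original series: the vinyl rig just described, expressed as a musical object with named variables. The class name and the descriptions are hypothetical.

```python
# A hedged sketch: a musical object modeled as a named bundle of variables,
# each of which a performance can exercise to a greater or lesser degree.
from dataclasses import dataclass, field


@dataclass
class MusicalObject:
    name: str
    variables: dict[str, str] = field(default_factory=dict)


# The vinyl DJ rig described above, expressed in this vocabulary.
vinyl_rig = MusicalObject(
    name="vinyl DJ set",
    variables={
        "hardware": "two turntables, a mixer, a selection of vinyl",
        "selection": "which recordings are chosen",
        "transitions": "playlist order and the move between songs",
        "effects": "disc manipulation, crossfader, EQ",
    },
)

for variable, description in vinyl_rig.variables.items():
    print(f"{variable}: {description}")
```

Comparing two rigs then amounts to comparing their variable sets, which is all the objective common ground the argument below requires.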
Because DJ sets are expected to be filled with pre-recorded sounds, the selection of sounds available is as broad as all of the sounds ever recorded. Specific styles of DJ sets, like an ambient DJ set, limit the selection to a subset of recorded music. The choice of hardware can limit that even more. An all-vinyl DJ set of ambient music presents more of a challenge, in terms of selection, than a laptop set in the same style, because fewer ambient records are pressed to vinyl than are available in digital formats.
Connected to selection are transitions, which could be said to define a DJ. When thinking of transitions there are two component factors: the playlist and the move from one song to the next. The playlist is obviously directly tied to the selection; however, even if you select the most popular songs for the style, unless they are put into a logical order, the transitions between them can make the set horrible.
One of the transitional keys to keeping a mix flowing is beat matching. In a turntable DJ set the degree of difficulty of beat matching is high, because the tempos have to be matched manually by adjusting the speeds of the two selections on the spinning turntables. When the tempos are synchronized, transitioning from one to the other can be accomplished via a simple crossfade. With digital hardware such as laptop, Serato, and even CDJ setups, there is commonly a way to automatically match beats between selections. This makes the degree of difficulty of beat matching in these formats much lower.
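The arithmetic a digital “sync” function automates is simple; the sketch below is illustrative only, with hypothetical BPM values and a function name of my own choosing.

```python
# A hedged sketch of beat matching: to align an incoming track with the one
# currently playing, adjust its playback rate by the ratio of the two tempos.

def playback_rate(current_bpm: float, incoming_bpm: float) -> float:
    """Rate multiplier that brings the incoming track's tempo in line
    with the track currently playing."""
    return current_bpm / incoming_bpm

# Example: current track at 124 BPM, incoming track at 120 BPM.
rate = playback_rate(124.0, 120.0)
print(f"Play the incoming track at {rate:.3f}x speed")
# On a turntable this corresponds to a pitch-fader setting of roughly
# (rate - 1) * 100 percent -- here about +3.3% -- found by ear and hand.
```

Doing this by ear, nudging a spinning platter while the beats drift, is precisely why the manual version scores so much higher on difficulty.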
Effects, another variable, depend on what’s available through the hardware medium. With the turntable DJ set, the mixer is the primary source of effects, and until recent years those were limited to disc manipulation (e.g. scratching), crossfader work, and EQ effects. Many of the non-vinyl setups, and even some of the vinyl setups, now include a variety of digital effects like delay, reverb, samplers, granular effects, and more.
With these variables so defined, it becomes easier to objectively analyze the expressed variability of a live DJ set. But while the variables themselves are objective, the value placed on them, and even how they are evaluated, is not. The language only provides the common ground for analysis and discussion. So the next time you’re at an event and the person next to you says, “this DJ is a hack!” you can say, “well, they’ve got a pretty diverse selection with rather seamless transitions; maybe you just don’t like the music,” to which they’ll reply, “yeah, I don’t think I like this music,” which is decent progress in the scheme of things. If we really want to talk about live electronic performance, however, we will need to move beyond the DJ set to show how this variable language can accurately describe the other musical objects that appear at a live electronic performance.
Take for example another electronic instrument: the keyboard. The keyboard itself is a challenging instrument to define; in fact, I could argue that the keyboard is not actually an instrument at all but a musical object. It is a component part of a group of instruments commonly referred to as keyboards, but the keyboard itself is not the instrument. It is, rather, one of the earliest examples of controllerism.
On a piano, fingers typically press keys on the keyboard, which trigger hammers that hit the strings and produce sound. The range of the instrument spans just over seven octaves, from A0 to C8, and it can theoretically sound 88-voice polyphony, though in practice that polyphony is limited to the ten fingers. It can play a wide range of dynamics and includes pedals that can be used to modify the sustain of pitches. With a pipe organ, the keyboard controls ranks of wind-driven pipes with completely different timbres, ranges, and dynamics; the polyphony increases, and the foot pedals can perform radically different functions. The differences from the piano grow even greater once we enter the realm where the term “keyboard,” as instrument, is most commonly used: the synthesizer keyboard.
The first glaring difference is that, even if you have encyclopedic knowledge of keyboard synthesizers, when you see a performer with one on stage you simply cannot know, by sight alone, what sounds it will produce. Pressing a key on a synthesizer keyboard can produce a practically infinite number of sounds, which can change not just from song to song, but from second to second and key to key. A performer’s left thumb can produce an entirely different sound than their left index finger. Using a keyboard equipped with a sequencer, the performer’s fingers may not press any keys at all but can still be active in the performance.
When the keyboard synthesizer was first introduced, it was used by traditional piano players in standard band configurations, like a piano or organ, with timbre limited to one per song and the performance aspect limited to fingers pressing keys. Some keyboardists, however, used the instrument more as a bank for sound effects and textures. They may have been playing the same keys, but one wouldn’t necessarily expect to hear a I IV V chord progression. Rather than listening for the physical dexterity of the player’s fingers, the key to listening to a keyboard in this context was evaluating first the sounds produced and then how they were played to fit into the surrounding musical context.
Could one of these performers be seen as more competent than the other? Possibly. The first could be among the most amazing keyboard players in the piano-player sense, but insofar as they aren’t really maximizing the variability potential of the instrument, it could be said they fall short as a keyboard synthesizer performer. The second performer, on the other hand, may not even know what a I IV V chord progression is, and might thus be considered incompetent on the keyboard in the piano-player sense, but the ways in which they exploit the instrument’s variable possibilities show their mastery of the keyboard synthesizer as an instrument.
Well, almost.
While, generally speaking, there isn’t a single set of variables that defines the keyboard synthesizer as an instrument, if we think of the keyboard synthesizer as a group of musical instruments, each individual type of keyboard synthesizer comes with its own set of fixed variables that can be defined. Many of these variables are consistent across the various keyboards, but not always in a standard arrangement.
As such, while the umbrella term “keyboard” persists, it is perhaps more practical to define the instruments and their players individually. There are Juno 60 players, ARP Odyssey players, MiniMoog players, Crumar Spirit players, and more. Naturally, an individual player can be well versed in more than one of these instruments and thus be thought of as a keyboardist, but their ability as a keyboardist would have to be properly contextualized per instrument in their keyboard repertoire. Using the MiniMoog as an example, we can show how its variability as an instrument defines it and plays into how a performance on the instrument can be perceived.
The first variable worth considering when evaluating the MiniMoog is that it is a monophonic instrument. This is radically different from the piano: despite one’s ability to use ten fingers (or another extremity), only one note will sound at a time. The keyboard section of the instrument is only three and a half octaves long, though the range itself is variable. On the left-hand side there is a pitch wheel and a modulation wheel. The pitch wheel can vary the pitch of the currently playing note, while the modulation wheel can alter the actual sound design.
Because the instrument is monophonic, one does not need to have both hands on the keyboard, as only one note will ever sound at a time. This frees a hand to modify the sound being triggered by the keyboard, as exemplified by the pitch and modulation wheels; also available are all of the exposed controls for the sound design. This means that in performance every aspect of the sound design and the triggering can be variable. Of course these changes are limited to what one can do with one’s hands, but the MiniMoog also features a function common in analog synths: a Control Voltage (CV) input. This means that an external source can control aspects of the sound design and/or the triggering of the instrument.
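To make the control-voltage idea concrete, here is a minimal sketch of a step sequencer standing in for the keyboard, assuming the common 1 volt-per-octave convention; the calibration constant, note list, and timing are purely illustrative, and the print calls stand in for actual voltage outputs.

```python
# A hedged sketch: a step sequencer emits a pitch CV and a gate signal per
# step, triggering a monophonic synth with no hands on the keyboard at all.
import time

VOLTS_PER_OCTAVE = 1.0
A4_MIDI, A4_VOLTS = 69, 4.75  # assumed calibration: MIDI note 69 -> 4.75 V


def midi_to_cv(note: int) -> float:
    """Convert a MIDI note number to a pitch CV on the 1 V/oct standard."""
    return A4_VOLTS + (note - A4_MIDI) / 12.0 * VOLTS_PER_OCTAVE


sequence = [57, 60, 64, 69]  # an A minor arpeggio, one note per step
step_seconds = 0.25          # eighth notes at 120 BPM

for note in sequence:
    print(f"gate ON,  pitch CV = {midi_to_cv(note):.3f} V")  # synth sounds
    time.sleep(step_seconds * 0.9)  # gate held for most of the step
    print("gate OFF")               # note released before the next step
    time.sleep(step_seconds * 0.1)
```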
Despite this obvious difference from the piano, playing the MiniMoog does not have to be any less of a physical act. A player using their right hand to play the keyboard while modulating the sound with their left plays with a different kind of dexterity than the piano player. The right and left hands perform different motions: while the right hand’s fingers press keys as the arm moves up and down the keyboard, the left hand can be adjusting the pitch or modulation wheels with a pushing action, or alternately adjusting the knobs with a turning action. Like patting your head and rubbing your belly, controlling a well-timed filter sweep while simultaneously playing a melody is nowhere near as easy as it sounds.
At the same time, playing the MiniMoog doesn’t have to be very physical at all. A sequencer could be responsible for all of the note triggering, leaving both hands free to modulate the sound. Similarly, the performer may not touch the MiniMoog at all, instead playing the sequencer itself as an intermediary between themselves and the sound of the instrument. In this case the MiniMoog is not being used as a keyboard, yet it retains its instrument status, as all of the sounds are generated from it, with the sequencer serving as the controller. Despite not having any physical contact with the instrument itself, the performer can still play it.
Taking it one step further: if a performer were to touch the sequencer only at the start of the performance, to press play, and never touch the instrument, could they still be said to be playing the MiniMoog live? There is little doubt that the MiniMoog is indeed still performing, because it does not have the mechanism to play by itself but requires agency to elicit a sonic response. In this example that agency comes from the sequencer, but that does not eliminate the performer. The sequencer has to be programmed in order to provide the instrument with the proper control voltages, and the instrument has to be set up sonically, with a designed sound receptive to the sequencer’s control. If the performer is not physically manipulating either device, however, they are not performing live; the machines are.
From this we can establish the first dichotomy of electronic performance: the layers of variability in an electronic performance can be isolated into two specific categories, physical variability and sonic variability. While these two aspects are also present in traditional instrument performance, there they are generally inseparable without additional devices. The vibrato of an acoustic guitar is only accomplished by physically modulating the strings to produce the effect. With an electronic instrument, however, vibrato can be performed by an LFO modulating the oscillator’s pitch. That LFO can be controlled physically, but there does not have to be a physical motion (such as a knob turn) associated with it in order for it to be a live modulation or performance. The benefit of it running without physical aid is that it frees up the body to control other things, increasing the variability of the performance.
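A short sketch may help show how such a modulation runs live with no physical gesture at all; the rate and depth values below are illustrative, not drawn from any particular instrument.

```python
# A hedged sketch of LFO vibrato: a slow sine wave nudges the oscillator's
# pitch up and down while the note itself is held unchanged.
import numpy as np

SR = 44100                  # sample rate in Hz
t = np.arange(SR * 2) / SR  # two seconds of time

carrier_hz = 440.0          # the note being held
lfo_hz = 5.0                # vibrato rate
depth_hz = 6.0              # vibrato depth: +/- 6 Hz around the note

# The instantaneous frequency wobbles at the LFO rate; integrating it
# (a cumulative sum of cycles per sample) yields the oscillator's phase.
inst_freq = carrier_hz + depth_hz * np.sin(2 * np.pi * lfo_hz * t)
phase = 2 * np.pi * np.cumsum(inst_freq) / SR
signal = np.sin(phase)      # the vibrato-laden tone, ready to write to disk
```

Nothing here requires a hand on a wheel: once set, the modulation performs itself, which is exactly what frees the body to vary something else.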
In a situation where all aspects of the performance are controlled by electronic functions, the agency in performance shifts from the artist performing live to the artist establishing the parameters by which the machines perform live. Is an artist who calls this a live performance a hack? Absolutely not, but it is important that the context of the performance be understood for it to be evaluated. Like evaluating the monophonic MiniMoog performer by the criteria of the polyphonic pianist, evaluating a machine performance by physical criteria is unfair.
In the evaluation of a machine performance, just as with a physical one, variability still plays an important role. At the most basic level the machine has to actually be performing, and this is best measured by the potential variability of the sound. This gets tricky with digital instruments, as, barring outside influences, it is entirely possible to repeat the exact same performance in the digital domain, so that there is no variation between iterations. But even such cases, with a digital sequencer controlling a digital instrument and no physical interaction, are still machine performances; they just exhibit very little variability. The performance aspect of the machine only disappears when the possibility for variability is completely removed, at which point the machine is no longer a performance instrument but a playback device, as with a CD player playing a backing track. The CD player, if not manipulated physically or by an external control, is not a performance instrument, as all of the sound contained within it can only be heard as one fixed recorded performance, not live. It is only when these fixed performances are manipulated, either physically (i.e., in a DJ set) or by other means, that they go from fixed performances to potentially live ones.
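As a toy illustration of that spectrum, my framing rather than the author’s formal measure: the sketch below compares two renditions of a fixed sequence with two jittered, “performed” ones. The playback device shows zero variation between iterations; even slight timing jitter registers as variability.

```python
# A hedged sketch: measuring variation between two renditions of a sequence.
import random


def render(sequence, performed=False):
    """Return note onset times; a 'performed' rendition drifts slightly."""
    jitter = (lambda: random.uniform(-0.01, 0.01)) if performed else (lambda: 0.0)
    return [t + jitter() for t in sequence]


def variation(a, b):
    """Total absolute timing difference between two renditions."""
    return sum(abs(x - y) for x, y in zip(a, b))


grid = [i * 0.25 for i in range(8)]  # eight evenly spaced steps

fixed_a, fixed_b = render(grid), render(grid)
live_a, live_b = render(grid, performed=True), render(grid, performed=True)

print("playback variation: ", variation(fixed_a, fixed_b))  # 0.0 every time
print("performed variation:", variation(live_a, live_b))    # > 0.0
```

Where the first number can never be anything but zero, we are listening to a playback device; the moment the second kind of number becomes possible, we are back in the territory of performance.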
From all of this we arrive at four basic distinctions for live electronic performances:
• The electro-mechanical manipulation of fixed sonic performances
• The physical manipulation of electronic instruments
• The mechanized manipulation of electronic instruments
• A hybrid of physical and mechanized manipulation of electronic instruments
These help set up the context for evaluating electronic performances, as, before we can determine the quality of a performance, we must first be able to distinguish what type of performance we are observing. So far we’ve only dealt with a monophonic instrument, but even with its limitations we can see how high the potential variability is. As we get into the laptop as a performance instrument, that variability increases exponentially.
This is part two of a three-part series. In the next part we will begin to examine the laptop as a performance instrument, using this language to show the breadth of variability available in electronic performance, and perhaps to show that, where that variability continues to be explored, there is indeed merit to the potential of live electronic music as an extension of jazz.
—
Featured image: Native Frequencies at the Trocadero, 2013, courtesy of Raymond Angelo ©
—
Primus Luta is a husband and father of three. He is a writer, technologist, and artist exploring the intersection of technology and art, and their philosophical implications. He is a regular guest contributor to the Create Digital Music website and maintains his own AvantUrb site. Luta is a regular presenter for the Rhythm Incursions podcast series with his show RIPL. As an artist, he is a founding member of the live electronic music collective Concrète Sound System, which spun off into a record label for the exploratory realms of sound in 2012.
—
REWIND! . . .If you liked this post, you may also dig:
Experiments in Agent-based Sonic Composition–Andreas Pape
Evoking the Object: Physicality in the Digital Age of Music–Primus Luta
Sound as Art as Anti-environment–Steven Hammer