Calling Out To (Anti)Liveness: Recording and the Question of Presence
Editor’s Note: Even though this is officially Osvaldo Oyola‘s final post as an SO! regular–his brilliant dissertation on Latino/a identity and collection cultures is calling–I refuse to say goodbye, perpetually leaving the door open for future encores. He has been a bold and steadfast contributor–peep his extensive back catalogue here–and we cannot thank him enough for bringing such a whipsmart presence to Sounding Out! over the years. Best of luck, OOO, our lighters are up for you!–J. Stoever-Ackerman, Editor-in-Chief
—
As several of my previous Sounding Out! blog posts reveal, I am intrigued by the way popular music seeks to establish its authenticity to the listener. It seems that recorded popular music seeks out ways to overcome its lack of presence as compared to a live performance, where a unified and spontaneous sense of immediacy seems to automatically bestow the aura of the “authentic”—a uniqueness engendered, ironically, by the very unrepeatability of live performance. Throughout my time as a Sounding Out! regular, I have explored how authenticity may be conferred through artists affecting an accent as a form of musical style, comparing their songs to other “less authentic” forms of music through a call to nostalgia, or even by highlighting artificiality through use of auto-tune.
One of the ways that artists and producers get past a potential lack of authenticity when recording is through call outs to “liveness.” I am not referring to concert recordings (though there are ways that they can be used), but elements like counting off at the beginning of songs or introducing some change or movement in a song. There is no practical need to count off “One, two, three, four!” at the beginning of a recording of a song if it is being pieced together through multiple tracks and overdubs. These days a “click track” or adjustment post-recording can keep all the players in time even if not necessarily playing at once; even if a song is being recorded as a kind of studio jam, the count off could be edited out. It is an artifact of the creation, not a sign of creation itself. Instead, the counting can become an accepted and notable part of the song, like Sam the Sham and the Pharaohs performing “Wooly Bully,” giving it an orientation to time—the sense that all these musicians were present together and playing their instruments at once and needed this unique introduction to keep them all in tempo.
Similarly, sometimes artists call out to other musicians, giving instructions when no instructions are needed, assuming that most popular music is recorded in multiple takes using multiple tracks. In Parade’s “Mountains,” Prince commands the Revolution, “guitars and drums on the one!” though clearly they had rehearsed while putting together the song, and ostensibly knew when the drum and guitar breakdown was coming up. Prince, furthermore, joins artists as varied as the Grateful Dead and the Beastie Boys in mixing concert recordings with studio overdubs to capture a “live” sound on songs like “It’s Gonna Be a Beautiful Night” and “Alligator.” Even something as ubiquitous as guitar feedback is a transformation of an artifact of live performance into a sound available for use in recording—something that was purposefully avoided until John Lennon’s happy accident when in the studio to cut “I Feel Fine.” Until then, playing with feedback was a way to demonstrate performance skills through onstage vamping.
These varied calls to liveness provide a sense of authenticity to music made via the recording studio, denoting what I understand as the spontaneous sociability of music. Count-offs and studio shout-outs provide a sense of unified presence to a performance, especially if the performance has actually been constructed piecemeal and over time. This is something of a remnant of an old-fashioned notion that the quality of recorded music is measured against live performance. It’s an idea that hung around both implicitly and explicitly long after bands started experimenting in the studio with effects that ranged from the difficult to the impossible to replicate on stage, and one reinforced through recordings by performers who purposefully referenced their lauded live performances.
For example, James Brown’s “Get Up (I Feel Like Being a) Sex Machine” is built on this conceit. The entire song is a conversation, a call and response between James Brown and his band, the J.B.’s. From the opening line, Brown introduces the song as a moment in time in which he is compelled to do his thing, but he demands both encouragement and cooperation from the band in order to achieve it. When Brown asks Bobby Byrd, “Bobby! Should I take ’em to the bridge?” we as listeners are invited to play along with the idea that it has suddenly come into his head to have the band play the bridge—as it might’ve happened (and thus been practiced) countless times in his legendary live shows. It suggests a form of spontaneity that the reality of recording would otherwise drain from the song. Sure, according to RJ Smith’s The One: The Life and Music of James Brown (2012), “Get Up” was recorded in only two takes–already fairly amazing–but the very nature of the song makes it sound like it was recorded in one, even if it had to be broken up into two sides of a 7-inch. That reality doesn’t matter—what matters when listening is the feeling that we, as listeners, are being allowed to partake in the capturing of what seems like one unique, and continuous, moment.
The question then arises: What about recorded music that does the opposite, that makes a point of highlighting its artificial construct—the impossibility of its spontaneous performance? While there are examples that date back at least to the 1960s, does this shift highlight a difference in aesthetic concerns by the pop music audience? If calls to “liveness” suggest a spontaneous sociability to music, what do the meta references to their songcraft suggest about what is important to music now?

Andre 3000’s “Prototype” from 2003’s The Love Below includes chatter with his sound engineer.
The classic example is Ringo Starr’s bellow, “I GOT BLISTERS ON MY FINGERS!” at the end of the Beatles’ “Helter Skelter,” an exclamation made after umpteen takes of the song recorded on the same day, but there are more contemporary and even more obvious examples. Near the end of Outkast’s “Prototype” (at 4:21), Andre 3000 can be heard talking to his sound engineer John Frye about the ad libs: “Hey, hey John! Are we recording our ad libs? Really? Were we recording just then? Let me hear that, that first one. . .” There is an interesting tension here between the spontaneity of an “ad lib” and listening back to pick the best one or further develop one when re-recording, and Andre in his role as producer decided to keep it in as part of the final product. The recording itself becomes part of the subject of the song as a kind of coda. The banter is actually a brilliant parallel to the content of the song, which undermines the typical “we’ll be together forever” love song trope for one that highlights the reality of serial monogamy common in American culture and the lessons each relationship potentially provides us for the next. Rather than pretend that a romantic relationship is a unique and eternal thing, the song admits the work and changes involved, just as it admits that the seemingly special spontaneity of a song is developed through a process.
Of course, hip hop as a genre, with its frequent use of sampling, tends to make its recording process very evident. While it is possible to play samples “live” using a digital sampler or isolating sections on vinyl via the DJ as band member, the use of pre-recorded fragments means that rap music relies on the vocal dynamics of rap to carry the sense of spontaneity. Yet, in 1993’s “Higher Level,” KRS-One opens with a description of the time and place of the recording—“5 o’clock in the morning” at “D&D Studios,” establishing forever when and where and thus how the recording is happening. Five o’clock in the morning places the creation of the song within a context of working and rocking all through the night to get the album completed. The song may or may not have actually been recorded last, but its placement at the end of Return of the Boom Bap gives it a sense of a last ditch effort to complete the collection of songs. The fact that “5 o’clock in the morning” is likely also among the cheapest available studio times potentially highlights budgetary concerns in the recording itself. This is a rare thing to include in a recording, though the Brand New Heavies cap off the dissolution of their 1994 track “Fake” into pseudo-jazz-messing-around with one of their members chiding, “a thousand dollar a day studio!” This is a different kind of call to authenticity, as a budgetary concern is implicit to a “realness” defined by being non-commercial.
One of my all-time favorite examples is a few years older than “Higher Level”—“Nervous” by Boogie Down Productions: “written, produced and directed by Blastmaster KRS-One,” which includes an attempt to explain how a song is put together on the “48-track board.” Instead of calling instructions to a band, KRS points out that DJ Doc is doing the mixing and instructs him to “break it down, Doc!” just before a beat breakdown (listen at around 1:40). He explains, “Now, here’s what we do on the 48-track board / We look around for the best possible break / And once we find it, we just BREAK,” and then the pre-recorded beat seems to obey his command, breaking down to just the bass drum and a sampled electric piano from Rhythm Heritage’s “The Sky’s the Limit.” Later, he says, “We find track seven, and break it down!” and the music shifts to just the bass guitar and some tinny synth hi-hats.
So how does highlighting the recording circumstances, or just bringing attention to the fact that the song being listened to is the product of a multiple-step process of recording and post-production, benefit the song itself? Is it like I mentioned in my 2011 “Defense of Auto-Tune” post, that this kind of attention re-establishes authenticity by making its constructed nature transparent? I’d say yes, in part, but I also think that–through its violation of the expectation of seamlessness–the stray track or reference to recording within a song is a nod to a different kind of skillfulness. Exhortations such as “Take it to the Bridge” give an ironic nod to the extemporaneous to call attention to the diligent workmanship and dedication demanded by studio songcraft. Traditionally, live audiences may appreciate a flawless or nearly flawless performance and understand a masterful recovery from (and/or incorporation of) error as the signs of a good show, but these moments that call attention to the recording studio situation claim there is something to appreciate in the fact that Ringo Starr endured 18 takes of “Helter Skelter” until he had painful blisters, or that KRS-One and DJ Doc worked out the proper way to “feel around” the mixing board to make a grooving collage of sounds as disparate as the theme from “Rat Patrol” and WAR’s “Galaxy.”
KRS may have once admonished other MCs to “make sure live you is a dope rhyme-sayer,” but clearly he believes liveness—whether implicit or explicit—is not the only measure of musical ability. Rather, the highlighting of labor in the construction of a recording becomes its own kind of (anti-)vamping and demonstration of skill, and of a different kind of sociability in making music, one that these conversational snippets and references to other people in the studio make clear. This kind of attention to group labor is especially important as various recording technologies become increasingly available to the wider public and allow for an isolated pursuit of recording music. Just as calls to liveness in recording engage the listener in ways that suggest participation as a live audience, calls to anti-liveness also engage the listener, but by bringing them across time and space into the studio to witness a different form of great performance.
—
Osvaldo Oyola is a regular contributor to Sounding Out! and a PhD Candidate in English at Binghamton University working on his dissertation, “Collecting Identity: Popular Culture and Narratives of Afro-Latin Self in Transnational America.” He also regularly posts brief(ish) thoughts on music and comics on his blog, The Middle Spaces.
—
REWIND! . . .If you liked this post, you may also dig:
Evoking the Object: Physicality in the Digital Age of Music–Primus Luta
Experiments in Agent-based Sonic Composition–Andreas Pape
Musical Objects, Variability and Live Electronic Performance—Primus Luta
Musical Objects, Variability and Live Electronic Performance
This is part two of a three part series on live electronic music. To review part one, click here.
In the first part of this series, “Toward a Practical Language for Live Electronic Performance,” I established the language of variability as an index for objectively measuring the quality of musical performances. From this we were able to rethink traditional musical instruments as musical objects with variables that could be used to evaluate performer proficiency, both as a universal for the instrument (proficiency on the trumpet) and within genre and style constraints (proficiency as a Shrill C trumpet player). I propose that this language can be used to describe the performance of electronic music, making it easier to parallel with traditional western forms. Having proven useful with traditional instruments, we’ll now see if this language can be used to describe the musical objects of electronic performance.
We’ll start with a DJ set. While not necessarily an instrument, a performed DJ set is a live musical object comprised of a number of variables. First would be the hardware. A vinyl DJ rig consists of at least two turntables, a mixer and a selection of vinyl. A CDJ rig uses two CD decks and a mixer. Serato and other vinyl controller software require only one turntable, a mixer and a laptop. Laptop mixing can be done with or without a controller. One could also do a cassette mix, reel to reel mix, or other hardware format mixing. Critical is a means to combine audio from separate sources into one uniform mix. Some of the other variables involved in this include selection, transitions and effects.

DJ Lush, FORWARD Winter Session 2012, Image by JHG Photography (c)
Because DJ sets are expected to be filled with pre-recorded sounds, the selection of sounds available is as broad as all of the sounds ever recorded. Specific styles of DJ sets, like an ambient DJ set, limit the selection down to a subset of recorded music. The choice of hardware can limit that even more. An all-vinyl DJ set of ambient music presents more of a challenge, in terms of selection, than a laptop set in the same style, because there are fewer ambient records pressed to vinyl than are available in a digital format.
Connected to selections are transitions, which could be said to define a DJ. When thinking of transitions there are two component factors: the playlist and the movement from one song to another. The playlist is obviously directly tied to the selection; however, even if you select the most popular songs for the style, unless they are put into a logical order, the transitions between them could make the set horrible.
One of the transitional keys to keeping a mix flowing is beat matching. In a turntable DJ set the beat matching degree of difficulty is high because all of the tempos have to be matched manually by adjusting the speed of the two selections on the spinning turntables. When the tempos are synchronized, transitioning from one to the other is accomplished via a simple crossfade. With digital hardware such as the laptop, Serato and even CDJ setups, there is commonly a way to automatically match beats between selections. This makes the degree of difficulty to beat match in these formats much lower.
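The arithmetic behind manual beat matching is simple even if the physical skill is not: the pitch control on the turntable has to be offset by the ratio between the two records' tempos. A minimal sketch in Python, with illustrative BPM values (the function name and numbers here are mine, not drawn from any particular set):

```python
def pitch_adjustment(current_bpm: float, incoming_bpm: float) -> float:
    """Percent pitch change needed on the incoming record so its
    tempo matches the record currently playing."""
    return (current_bpm / incoming_bpm - 1.0) * 100.0

# A 126 BPM record brought in over a 128 BPM mix needs roughly +1.6%,
# well within the +/- 8% range of a typical turntable pitch fader.
adjust = pitch_adjustment(128.0, 126.0)
print(f"pitch: {adjust:+.2f}%")
```

Auto-sync features on digital setups perform exactly this calculation (plus phase alignment) continuously, which is why the manual version is the harder-won skill.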
Effects, another variable, rely on what’s available through the hardware medium. With the turntable DJ set, the mixer is the primary source of effects, and those, until recent years, have been limited to disc manipulation (e.g. scratching), crossfader, and EQ effects. Many of the non-vinyl setups and even some of the vinyl setups now include a variety of digital effects like delay, reverb, sampler, granular effects and more.
With these variables so defined it becomes easier to objectively analyze the expressed variability of a live DJ set. But, while the variables themselves are objective, the value placed on them and even how they are evaluated are not. The language only provides the common ground for analysis and discussion. So the next time you’re at an event and the person next to you says, “this DJ is a hack!” you can say, “well they’ve got a pretty diverse selection with rather seamless transitions, maybe you just don’t like the music,” to which they’ll reply, “yeah, I don’t think I like this music,” which is decent progress in the scheme of things. If we really want to talk about live electronic performance, however, we will need to move beyond the DJ set to exemplify how this variable language can work to accurately describe the other musical objects which appear at a live electronic performance.

Joe Nice at Reconstrvct in Brooklyn NY on 2-23-2013, Photo by Kyle Rober, Courtesy of Electrogenic
Take for example another electronic instrument: the keyboard. The keyboard itself is a challenging instrument to define; in fact I could argue that the keyboard is itself not actually an instrument but a musical object. It is a component part of a group of instruments commonly referred to as keyboards, but the keyboard itself is not the instrument. What it is is one of the earliest examples of controllerism.
On a piano, typically fingers are used to press keys on the keyboard, which trigger the hammers to hit the strings and produce sound. The range of the instrument travels seven octaves from A0 to C8, and can theoretically have 88-voice polyphony, though in typical use that polyphony is limited to the ten fingers. It can play a wide range of dynamics and includes pedals which can be used to modify the sustain of pitches. With a pipe organ, the keyboard controls ranks of pipes with completely different timbre, range, and dynamics; the polyphony increases and the foot pedals can perform radically different functions. The differences from the piano grow even more once we enter the realm where the term “keyboard,” as instrument, is most commonly used: the synthesizer keyboard.
The first glaring difference is that, even if you have an encyclopedia of knowledge about keyboard synthesizers, when you see a performer with one on stage you simply cannot know by seeing what sounds it will produce. Pressing the key on a synthesizer keyboard can produce an infinite number of sounds, which can change not just from song to song, but from second to second and key to key. A performer’s left thumb can produce an entirely different sound than their left index finger. Using a keyboard equipped with a sequencer, the performer’s fingers may not press any keys at all but can still be active in the performance.

Minimoog Voyager Electric Blue, Image by Flickr User harald walker
When the keyboard synthesizer was first introduced, it was being used by traditional piano players in standard band configurations, like a piano or organ, with timbres being limited to one during a song and the performance aspect being limited to fingers pressing keys. Some keyboardists however used the instrument more as a bank for sound effects and textures. They may have been playing the same keys, but one wouldn’t necessarily expect to hear a I IV V chord progression. Rather than listening for the physical dexterity of the player’s fingers, the key to listening to a keyboard in this context was evaluating the sounds produced first and then how they were played to fit into the surrounding musical context.
Could one of these performers be seen as more competent than the other? Possibly. The first performer could be said to be one of the most amazing keyboard players in the piano player sense, but since they aren’t really maximizing the variability potential of the instrument, it could be said they fall short as a keyboard synthesizer performer. The second performer on the other hand may not even know what a I IV V chord progression is and thus be considered incompetent on the keyboard in the piano player sense, but the ways in which they exploit the variable possibilities show their mastery of the keyboard synthesizer as an instrument.
Well, almost.
While generally speaking there isn’t a set of variables which define the keyboard synthesizer as an instrument, if we think of the keyboard synthesizer as a group of musical instruments, each of the individual types of keyboard synthesizers come with their own set of fixed variables which can be defined. Many of these variables are consistent across the various keyboards but not always in a standard arrangement.
As such, while the umbrella term “keyboard” persists, it is perhaps more practical to define the instruments and their players individually. There are Juno 60 players, ARP Odyssey players, MiniMoog players, Crumar Spirit players and more. Naturally an individual player can be well versed in more than one of these instruments and thus be thought of as a keyboardist, but their ability as a keyboardist would have to be properly contextualized per instrument in their keyboard repertoire. Using the MiniMoog as an example, we can show how its variability as an instrument defines it and plays into how a performance on the instrument can be perceived.

Minimoog, Image by Flickr User Francesco Romito
The first variable worth considering when evaluating the MiniMoog is that it is a monophonic instrument. This is radically different from the piano; despite one’s ability to use ten fingers (or other extremity) only one note will sound at one time. The keyboard section of the instrument is only three and a half octaves long, though the range is itself variable. On the left-hand side there is a pitch wheel and a modulation wheel. The pitch wheel can vary the pitch of the currently playing note, while the modulation wheel can alter the actual sound design.
As a monophonic instrument, one does not need to have both hands on the keyboard, as only one note will ever sound at a time. This frees the hands to modify the sound being triggered by the keyboard, as exemplified by the pitch and modulation wheels, but also available are all of the exposed controls for the sound design. This means that in performance every aspect of the sound design and the triggering can be variable. Of course these changes are limited to what one can do with their hands, but the MiniMoog also features a common function in analog synths, a Control Voltage input. This means that an external source can control aspects of the sound design and/or the triggering for the instrument.
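Monophony also means the instrument itself must decide what happens when a player holds more than one key. A hypothetical sketch of that logic in Python, using low-note priority (the lowest held key wins), one common scheme on classic monosynths; the class and method names are illustrative, not any real instrument's circuitry:

```python
class MonoVoice:
    """Resolves overlapping key presses for a single-voice synth."""

    def __init__(self):
        self.held = set()  # note numbers of keys currently held down

    def note_on(self, note: int):
        self.held.add(note)
        return self.sounding()

    def note_off(self, note: int):
        self.held.discard(note)
        return self.sounding()

    def sounding(self):
        # Only one note may sound at a time: the lowest held key wins.
        return min(self.held) if self.held else None

voice = MonoVoice()
voice.note_on(60)   # middle C sounds
voice.note_on(64)   # E is held too, but the lower C keeps priority
voice.note_off(60)  # releasing C lets the still-held E sound
```

Releasing down to a still-held key, as in the last line, is what makes legato phrasing on a monosynth feel so different from piano fingering.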
Despite this obvious difference from the piano, playing the MiniMoog does not have to be any less of a physical act. A player using their right hand to play the keyboard while modulating the sound with their left plays with a different level of dexterity than the piano player. The right and left hand are performing different motions; while the right hand uses fingers to press keys as the arm moves it up and down the keyboard, the left hand can be adjusting the pitch or modulation wheels with a pushing action or alternately adjusting the knobs with a turning action. Like patting your head and rubbing your belly, controlling a well-timed filter sweep while simultaneously playing a melody is nowhere near as easy as it sounds.
At the same time playing the MiniMoog doesn’t have to be very physical at all. A sequencer could be responsible for all of the note triggering leaving both hands free to modulate the sound. Similarly the performer may not touch the MiniMoog at all, instead playing the sequencer itself as an intermediary between them and the sound of the instrument. In this case the MiniMoog is not being used as a keyboard, yet it retains its instrument status as all of the sounds are being generated from it, with the sequencer being used as the controller. Despite not having any physical contact with the instrument itself, the performer can still play it.

Minimoog in Live DJ Performance, Image by Flickr Users Huba and Silica
Taking it one step further – if a performer were to only touch a sequencer at the start of the performance to press play and never touch the instrument, could they still be said to be playing the MiniMoog live? There is little doubt that the MiniMoog is indeed still performing, because it does not have the mechanism to play by itself but requires agency to elicit a sonic response. In this example that agency comes from the sequencer, but that does not eliminate the performer. The sequencer itself has to be programmed in order to provide the instrument with the proper control voltages, and the instrument itself has to be set up sonically with a designed sound receptive to the sequencer’s control. If the performer is not physically manipulating either device, however, they are not performing live; the machines are.
From this we can establish the first dichotomy of electronic performance: the layers of variability in an electronic performance can be isolated into two specific categories, physical variability and sonic variability. While these two aspects are also present in traditional instrument performance, there they are generally inseparable without additional devices. The vibrato of an acoustic guitar is only accomplished by physically modulating the strings to produce the effect. With an electronic instrument, however, vibrato can be performed by an LFO controlling the pitch. That LFO can be controlled physically, but there does not have to be a physical motion (such as a knob turn) associated with it in order for it to be a live modulation or performance. The benefit of it running without physical aid is that it frees up the body to control other things, increasing the variability of the performance.
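An LFO is, at bottom, just a slow periodic function applied to some parameter of the sound (pitch for vibrato, amplitude for tremolo), requiring no physical gesture once set in motion. A minimal sketch, with illustrative rate and depth values of my own choosing:

```python
import math

def lfo(rate_hz: float, depth: float, t: float) -> float:
    """Sine LFO value at time t, oscillating between -depth and +depth."""
    return depth * math.sin(2.0 * math.pi * rate_hz * t)

# A 5 Hz vibrato, +/- 3 Hz deep, applied to a 440 Hz tone: the pitch
# wobbles around the base frequency with no hand on any control.
base_freq = 440.0
for i in range(5):
    t = i / 100.0  # sample the modulation every 10 ms
    print(f"t={t:.2f}s freq={base_freq + lfo(5.0, 3.0, t):.2f} Hz")
```

Routing that same function to a filter cutoff or an amplifier instead of pitch is the whole trick: one free-running modulator, many possible targets, and the performer's hands left free for something else.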
In a situation where all of the aspects of the performance are being controlled by electronic functions, the agency in performance shifts from the artist performing live, to the artists establishing the parameters by which the machines perform live. Is the artist calling this a live performance a hack? Absolutely not, but it’s important that the context of the performance is understood for it to be evaluated. Like evaluating the monophonic MiniMoog performer based on the criteria of the polyphonic pianist, evaluating a machine performance based on physical criteria is unfair.

Daniel Carter on horn during Overcast Radio’s set at EPICENTER: 02, Photo courtesy of Raymond Angelo (c)
In the evaluation of a machine performance, just as with a physical one, variability still plays an important role. At the most basic level the machine has to actually be performing, and this is best measured by the potential variability of the sound. This gets tricky with digital instruments, as, barring outside influences, it is completely possible to repeat the exact same performance in the digital domain, so that there is no variation between each iteration. But even such cases with a digital sequencer controlling a digital instrument, with no physical interaction, are still machine performances; they just exhibit very little variability. The performance aspect of the machine only disappears when the possibility for variability is completely removed, at which point the machine is no longer a performance instrument but a playback device, as is the case with a CD player playing a backing track. The CD player, if not being manipulated physically or by an external control, is not a performance instrument, as all of the sound contained within it can only be heard as one fixed recorded performance, not live. It is only when these fixed performances are manipulated either physically (i.e., a DJ set) or by other means that they go from fixed performances to potentially live ones.

Mala and Hatcha in Detroit, Image by Tom Selekta
From all of this we arrive at four basic distinctions for live electronic performances:
• The electro/mechanical manipulation of fixed sonic performances
• The physical manipulation of electronic instruments
• The mechanized manipulation of electronic instruments
• A hybrid of physical and mechanized manipulation of electronic instruments
These help set up the context for evaluating electronic performances, as before we can determine the quality of a performance we must first be able to distinguish what type of performance we are observing. So far we’ve only dealt with a monophonic instrument, but even with its limitations we can see how the potential variability is quite high. As we get into the laptop as a performance instrument, that variability increases exponentially.
This is part two of a three part series. In the next part we will begin to exemplify the laptop as performance instrument, using this language to show the breadth of variability available in electronic performance and perhaps show that indeed, where that variability continues to be explored, there is merit to the potential of live electronic music as an extension of jazz.
—
Native Frequencies at the Trocadero 2013, Featured Image Courtesy of Raymond Angelo (C)
—
Primus Luta is a husband and father of three. He is a writer, technologist and an artist exploring the intersection of technology and art, and their philosophical implications. He is a regular guest contributor to the Create Digital Music website, and maintains his own AvantUrb site. Luta is a regular presenter for the Rhythm Incursions Podcast series with his show RIPL. As an artist, he is a founding member of the live electronic music collective Concrète Sound System, which spun off into a record label for the exploratory realms of sound in 2012.
—
REWIND! . . .If you liked this post, you may also dig:
Experiments in Agent-based Sonic Composition–Andreas Pape
Evoking the Object: Physicality in the Digital Age of Music–Primus Luta
Sound as Art as Anti-environment–Steven Hammer

















