Sound Off! // Comment Klatsch #16: Sound and Pleasure

klatsch \KLAHCH\, noun: A casual gathering of people, esp. for refreshments and informal conversation. [German Klatsch, from klatschen, to gossip, make a sharp noise; of imitative origin.] (Dictionary.com)

Dear Readers: Team SO! thought that we would warm up the dance floor for our upcoming Summer Series on Sound and Pleasure (peep the Call for Posts here. . . pitches are due by 4/15/14). –J. Stoever, Editor-in-Chief

What sounds give you pleasure and why? 

Comment Klatsch logo courtesy of The Infatuated on Flickr.

 

Live Electronic Performance: Theory And Practice

This is the third and final installment of a three-part series on live electronic music. To review part one, “Toward a Practical Language for Live Electronic Performance,” click here. To peep part two, “Musical Objects, Variability and Live Electronic Performance,” click here.

“So often these laptop + controller sets are absolutely boring but this was a real performance – none of that checking your emails on stage BS. Dude rocked some Busta, Madlib, Aphex Twin, Burial and so on…”

This quote, from a blogger describing Flying Lotus’s 2008 Mutek set, speaks volumes about audience expectations of live laptop performances. First, the blogger acknowledges the perception that laptop performances are generally boring, using the “checking your email” adage to drive home the point. He then goes on to express what he perceived to set Lotus’s performance apart from that standard. Oddly enough, it wasn’t the individualism of his sound, but rather Lotus’s particular selection and configuration of other artists’ work into his mix – a trademark of the DJ.

Contrasting this with the review of the 2011 Flying Lotus set that began this series reveals how context and expectations are central to the evaluation of live electronic performance. While the 2008 piece praises Lotus for a DJ-like approach to his live set, the 2011 critique was that the performance was more a DJ set than a live electronic performance. What changed in the years between these two sets was familiarity with the style of performance (from Lotus and the various other artists on the scene with similar approaches), producing a shift in expectations. What both reviews lack, however, is a language to provide the musical context for their praise or critique – a language this series has sought to elucidate.

To put live electronic performances into the proper musical context, one must determine what type of performance is being observed. In this last part of the series, I arrive at four helpful distinctions for comparing and describing live electronic performance, continuing this project of developing a usable aesthetic language and enabling a critical conversation about the artform. The first of the four distinctions concerns the manipulation of fixed pre-recorded sound sources into variable performances. The second concerns the physical manipulation of electronic instruments into variable performances. The third concerns the manipulation of electronic instruments into variable performances through the programming of machines. The last is an integrated category that can be expanded to include any and all combinations of the previous three.

Essential to all categories of live electronic music performance, however, is the performance’s variability, without which the music, and its concomitant listening practices, transforms from a “live” event into a fixed musical object. The trick to any analysis of such performance is to remember that, while these distinctions are easy to maintain in theory, in performance they quickly blur into one another, and often the intensity and pleasure of live electronic music performance comes from their complex combinations.

Flying Lotus at Treasure Island, San Francisco on 10-15-2011, image by Flickr User laviddichterman

For example, an artist who performs a set using solely vinyl, with nothing but two turntables and a manual crossfading mixer, falls into the first distinction. Technically, the turntables and manual crossfading mixer are machines, but they are being controlled manually rather than performing on their own as machines. If the artist includes a drum machine in the set, however, it becomes a hybrid (the fourth distinction), depending on whether the drum machine is being triggered by the performer (physical manipulation), playing sequences (machine manipulation), or both. Furthermore, if the drum machine triggers samples, it becomes machine manipulation (third distinction) of fixed pre-recorded sounds (first distinction). If the drum machine plays back sequences while the artist performs a turntablist routine, the turntable becomes the performance instrument while the drum machine holds as a fixed source. All of these relationships can be realized by a single performer over the course of a single performance, making the whole set of the hybrid variety.

While in practice the hybrid set is perhaps the most common, it’s important to understand the other three distinctions, as each comes with its own set of limitations which define its potential variability. Critical listening to a live performance includes identifying when these shifts happen and how they change the variability of the set. Through combination, their individual limitations can be overcome, increasing the overall variability of the performance. One can see a performer playing the drum machine with pads and correlate that physicality with the sound produced, then see them shift to playing the turntable and know that the drum machine has shifted to a machine performance. In this example the visual cues would be clear indicators, but if one is familiar with the distinctions, the shifts can be noticed from the audio alone.

This blending of physical and mechanical elements in live music performance exposes the modular nature of live electronic performance and its instruments, teaching us that the instruments themselves shouldn’t be looked at as distinction qualifiers, but rather their combination in the live rig and the variability it offers. Where we typically think of an instrument as singular, within live electronic music it is perhaps best to think of the individual components (e.g., turntables and drum machines) as the musical objects of the live rig as instrument.

Flying Lotus at the Echoplex, Los Angeles, Image by Flickr User sunny_J

Percussionists are a close acoustic parallel to the modular musical rig of electronic performers. While there are percussion players who use a single percussive instrument for their performances, others will have a rig of component elements to use at various points throughout a set. The electronic performer inherits such a configuration from keyboardists, who typically have a rig of keyboards, each with different sounds, to be used throughout a set. Availing themselves of a palette of sounds allows keyboardists to break out of the limitations of timbre and verge toward the realm of multi-instrumentalists.  For electronic performers, these limitations in timbre only exist by choice in the way the individual artists configure their rigs.

From the perspective of users of traditional instruments, a multi-instrumentalist is one who goes beyond the standard of single-instrument musicianship: a musician well versed in performing on a number of different instruments, usually of different categories. In the context of electronic performance, the definition of instrument is so changed that it is more practical to think not of multi-instrumentalists but of multi-timbralists. The multi-timbralist can be understood as the standard in electronic performance. This is not to say there are no single-instrument electronic performers; however, it is practical to think of the live electronic musician’s instrument not as a singular musical object, but rather as a group of musical objects (timbres) organized into the live rig. Because these rigs can comprise a nearly infinite number of musical objects, the electronic performer has the opportunity to craft a live rig that is uniquely their own. The choices they make in the configuration of their rig will define not just the sound of their performance, but the degrees of variability they can control.

Because the electronic performer’s instrument is the live rig comprised of multiple musical objects, one of the primary factors in configuring the rig is how the various components interact with one another over the time-dependent course of a performance. In a live tape-loop performance, the musician may use a series of cassette players with an array of effects units and a small mixer. In such a rig, the mixer is the primary means of communication between objects. In this type of rig, however, the communication isn’t direct: the objects cannot communicate with each other; rather, the artist is the mediator. It is the artist who determines when the sound from any particular tape loop is fed to an effect, or at what levels the effects return sound in relation to the loops. While watching a performance such as this, one would expect the performer to be very involved in physically manipulating the various musical objects. We can categorize this as an unsynchronized electronic performance, meaning that the musical objects employed are not locked into the same temporal relations.

Big Tape Loops, Image by Flickr User choffee

The key difference between unsynchronized and synchronized performance rigs is the amount of control over the performance that can be left to the machines. The benefit of synchronized rigs is that they allow for greater complexity, whether in configuration or in control. The value of unsynchronized rigs is that they have a more natural, less mechanized feel, as the timing flows from the performer’s physical body. Neither can be understood as better than the other, but in general they make for separate kinds of listening experiences, which the listener should be aware of in evaluating a performance. Expectations should shift depending upon whether or not a performance rig is synchronized.

This notion of a synchronized performance rig should not only be understood as an inter-machine relationship. With the rise of digital technology, many manufacturers developed workstation-style hardware which could perform multiple functions on multiple sound sources with central synchronized control. The Roland SP-404 is a popular sampling workstation used by many artists in a live setting. Within this modest box you get twelve voices of sample polyphony, which can be organized with the internal sequencer and processed with onboard effects. However, a performer may choose not to utilize the sequencer at all, performing unsynchronized by just triggering the pads. In fact, in recent years there has been a rise of drum-pad players, or finger drummers, who perform using hardware machines without synchronization. Going back to our distinctions, a performance such as this would be a hybrid of the physical manipulation of fixed sources with the physical manipulation of an electronic instrument. From this qualification, we know to look for extensive physical effort in such performances as an indicator of the artist’s agency over the variable performance.

Now that we’ve brought synchronization into the discussion, it makes sense to talk about what is often the main means of communication in the live performance rig – the computer. The ultimate benefit of a computer is the ability to process a large number of calculations per computational cycle. Put another way, it allows users to act on a number of musical variables in single functions. Practically, this means the ability to store, organize, recall, and even perform a number of complex variables. With the advent of digital synthesizers, computers were used in workstations to control everything from sequencing to patch sound-design data. In studios, computers quickly replaced mixing consoles and tape machines (even their digital equivalents, like the ADAT), becoming the nerve center of the recording process. Eventually all of these functions and more fit into the small and portable laptop computer, bringing that processing power to the performance stage in a practical way.

Flying Lotus and his Computer, All Tomorrow’s Parties 2011, Image by Flickr User jaswooduk

A laptop can be understood as a rig in and of itself, comprised of a combination of software and plugins as musical objects, which can be controlled internally or via external controllers. If there were only two software choices and ten plugins available for laptops, there would be over seven million permutations possible. While it is entirely possible (and for many artists practical) for the laptop to be the sole object of a live rig, it is often paired with one or more controllers. The number of controllers available is nowhere near the volume of software on the market, but the possible combinations of hardware controllers take the laptop + controller + software possibilities toward infinity. With both hardware and software there is also the possibility of building custom musical objects that add to the potential uniqueness of a rig.
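The “over seven million” figure checks out if we assume the ordering of the plugins in a signal chain matters: two hosts times the 10! possible orderings of ten plugins. A minimal sketch of that arithmetic (the counts of two software choices and ten plugins are the hypothetical from the paragraph above, not real market figures):

```python
from math import factorial

software_choices = 2   # hypothetical: only two host programs exist
plugins = 10           # hypothetical: only ten plugins exist

# Treat a rig as one host running all ten plugins in a signal chain,
# where the order of the chain matters: 2 * 10! distinct rigs.
rigs = software_choices * factorial(plugins)
print(rigs)  # 7257600 -- "over seven million permutations"
```

If instead each plugin were merely switched on or off (order ignored), the count would collapse to 2 × 2^10 = 2,048, so the seven-million reading depends on chain order being musically meaningful, which for serial effects it is.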

Unfortunately, it is quite often impossible to know exactly what range of tools is being utilized within a laptop strictly by looking at an artist on stage. This leads to probably the biggest misconception about the performing laptop musician. As common as the musical object may look on stage, housed inside it can be the most unique and intricate configurations music (yes, all of music) has ever seen. The reductionist thought that laptop performers aren’t “doing anything but checking email” is directly tied to the acousmatic nature of these objects as instruments. We can hear the sounds, but determining the sources and understanding the processes required to produce them is often shrouded in mystery. Technology has arrived at the point where what one performs live can precisely replicate what one hears in recorded form, making it easy to leap to the conclusion that all laptop musicians do is press play.

Indeed some of them do, but to varying degrees a large number of artists are actively doing more during their live sets. A major reason for this is that one of the leading Digital Audio Workstations (DAWs) of today also doubles as a performance environment. Designed with the intent of taking the DAW to the stage, Ableton Live gives artists an interface that facilitates the translation of electronic concepts from the studio to the stage. There is a world of things possible just by learning the Live basics, but there is also a rabbit hole of advanced functions, all the way to the modular Max for Live environment, which lies on the frontier of discovering new variables for sound manipulation. For many people, however, the software is powerful enough at the basic level of use to create effective live performances.

Sample Screenshot from a performer’s Ableton Live set up for an “experimental and noisy performance” with no prerecorded material, Image by Flickr User Furibond

In its most basic use case, Ableton Live is set up much like a DJ rig, with a selection of pre-existing tracks queued up as clips which the performer blends into a uniform mix, with transitions and effects handled within the software. The possibilities expand out from there: multi-track parts of a song separated into different clips so the performer can take different parts in and out over the course of the performance; a plugin drum machine providing an additional sound source on top of the track(s); or, alternately, the drum machine holding a sequence while track elements are layered on top of it. With the multitude of plugins available, just the combination of multi-track Live clips with a single soft-synth plugin lends itself to near-infinite combinations. The variable possibilities of this type of set, even while not exploiting the full breadth of variability the gear presents, clearly point to the artist’s agency in performance.

Within the context of both the DJ set and the Ableton Live set, synchronization plays a major role in contextualization. Both categories of performance can be either synchronized or unsynchronized. The tightest of unsynchronized sets will sound synchronized, while the loosest of synchronized sets will sound unsynchronized. This plays very much into audience perception of what they are hearing and the performers’ choice of synchronization and tightness can be heavily influenced by those same audience expectations.

A second performance screen capture by the same artist, this time using pre-recorded midi sequences, Image by Flickr User Furibond

A techno set is expected to maintain somewhat of a locked groove, indicative of a synchronized performance. A synchronized rig, whether on the DJ side (Serato utilizing automated beat matching) or on the Ableton side (time stretch and auto BPM detection synced to MIDI), can make this a non-factor for the physical performance, and so in listening to such a performance it would be the variability of other factors that reveals the artist’s control. For the DJ, those factors would include selection, transitions, and effects use. For the Ableton user, they can include all of those as well as control over the individual elements in tracks and potentially other sound sources.

On the unsynchronized end of the spectrum, a vinyl DJ could accomplish the same mix as the synchronized DJ set, but it would physically require more effort to keep all of the selections in time, which might mean limiting control over the other variables. An unsynchronized Live set would utilize the software primarily as a sound source, without MIDI, placing the timing in the hands of the performer. With the human element added to the timing, it would be more difficult to produce the machine-like timing of the other sets. This doesn’t mean it couldn’t be effective, but there would be an audible difference between this type of set and the others.

What we’ve established is that, through the modular nature of the electronic musician’s rig as an instrument, from synthesizer keyboards to Ableton Live, every variable consideration combines to produce infinite possibilities. Where the trumpet is limited in timbre, range, and dynamics, the turntable has infinite timbres; its range is the full spectrum of human hearing; and its dynamics are directly proportional to the output. The limitations of the electronic musician’s instrument appear only in electrical current constraints, processor speed limits, the selection of components, and the limitations of the human body.

Flying Lotus at Electric Zoo, 2010, Image by Flickr User TheMusic.FM

Within these constraints, however, we have only begun to scratch the surface of possibilities. There are combinations that have never happened, variables that haven’t come close to their full potential, and a multitude of variables that have yet to be discovered. One thing the electronic artist can learn from jazz toward maximizing that potential is the notion of play, as epitomized by jazz improvisation. For jazz, improvisation opened up the possibilities of the form, which impacted both performance and composition. I contend that the electronic artist can push the boundaries of variable interaction by incorporating the ability to play into the rig, both in their physical performance and by giving the machine its own sense of play. Within this play lie the variables which I believe can push electronic music into the jazz of tomorrow.

Featured Image by Flickr User Juha van ‘t Zelfde

Primus Luta is a husband and father of three. He is a writer and an artist exploring the intersection of technology and art, and their philosophical implications. In 2014 he will be releasing an expanded version of this series as a book entitled “Toward a Practical Language: Live Electronic Performance”. He is a regular guest contributor to the Create Digital Music website, and maintains his own AvantUrb site. Luta is a regular presenter for the Rhythm Incursions podcast series with his monthly show RIPL. As an artist, he is a founding member of the live electronic music collective Concrète Sound System, which spun off into a record label for the exploratory realms of sound in 2012.

REWIND! . . . If you liked this post, you may also dig:

Evoking the Object: Physicality in the Digital Age of Music–Primus Luta

Experiments in Agent-based Sonic Composition–Andreas Pape

Calling Out To (Anti)Liveness: Recording and the Question of Presence–Osvaldo Oyola

Devil’s Symphony: Orson Welles’s “Hell on Ice” as Eco-Sonic Critique

 

Orson Welles in Mr. Arkadin, 1955.

During our modest publicity blitz leading up to our #WOTW75 project last month, I argued once or twice that we shouldn’t obsess so much over the aftermath of the 1938 invasion radio play — how intense and widespread the panic truly was, how much Welles intended it this way, what it all says about “human nature” and “the power of the media,” etc. — and ought to spend more time unpacking the piece itself. In an incautious moment, I even proposed we ought to think about the play as one of the great works of the 20th century, on par with key films, novels and paintings that get at the structure of modern feeling through aesthetics.

The claim boxed me in. Why? Because, from an aesthetic point of view, “War of the Worlds” may not even belong in the top tier of Welles’s prodigious radio corpus. His role in Archibald MacLeish’s “Fall of the City” is probably more significant in the history of radio aesthetics, and his appearances on Suspense are likely his best work as an actor. Among his principal directed works, I’d argue that plays like “A Passenger to Bali,” “The Pickwick Papers” and “Dracula” are the most exciting. Even more compelling than any of those, meanwhile, is an unusual radio play based on a now-forgotten historical adventure novel about an ill-fated polar voyage — “Hell on Ice,” which radio enthusiasts routinely name as Welles’s best. If it’s true that the essence of Welles’s radio art was his capacity to first create scenes of striking awe and then modulate dramatic pacing, then HOI is surely a minor masterpiece.

Or did I just trap myself again? Judge for yourself, if you like:


Yes, I fear I’m stuck.

While I try to work my way out somehow, read on. In his first post for Sounding Out, and the tenth installment of our Mercury to Mars series (in conjunction with Antenna), Northwestern University Professor Jacob Smith makes the case that, today, HOI is becoming even more resonant, more relevant …

– nv

__

A NASA map of sea ice at the North Pole in September 2007

The Mercury Theater’s broadcast of “War of the Worlds” on Oct. 30, 1938 may forever be remembered as “the Panic Broadcast,” but listening to the Mercury’s first season seventy-five years later, it is another broadcast that seems most in tune with current anxieties about planetary crisis.

On October 9th, the Mercury Theater performed an adaptation of Edward Ellsberg’s Hell On Ice (1938), which depicted a failed attempt by an American expedition to reach the North Pole in 1879. “Hell on Ice” is notable among the Mercury’s radio broadcasts in a number of ways: it marks the debut of the writer Howard Koch, who became a regular on the series, scripting “War of the Worlds” to air three weeks later; and it is the only show to be based on a “stirring adventure of recent history” as opposed to classic literature and drama. “Hell on Ice” also stands out among the Mercury oeuvre as a proto-environmental critique. That is, like “War of the Worlds,” “Hell on Ice” contemplates the catastrophic collapse of human society, but where the October 30th invasion broadcast was a science fiction thriller that tapped into anxiety about the looming war in Europe, the October 9th show used historical fiction to dramatize the error of human attempts to master the globe. That makes it perhaps the best companion to “War of the Worlds,” a play in which the thwarted invader is no alien – it’s us. Listening to the play today, “Hell on Ice” is not only a masterpiece of audio theater (among fans, the most beloved of all Welles’s radio works) but a powerful “eco-sonic” critique as well.

Captain George Washington DeLong of the Jeannette.

In 1879, James Gordon Bennett, the owner of the New York Herald, sponsored an expedition to the North Pole by way of the Bering Strait. Bennett’s ship, christened the Jeannette, was to ride a warm, northerly ocean current to the shores of the mysterious Wrangel Island, which some believed to be the tip of a vast continent that stretched to Greenland. Captain George Washington DeLong and a crew of thirty-one men left San Francisco to great celebration on July 8, 1879, but the voyage did not go as planned: the Jeannette became trapped in the ice on September 6, 1879, and remained stuck there for two years before being crushed by ice floes in June, 1881.

The crew packed into three lifeboats and set a course for Siberia, but one boat was lost at sea with all its passengers and, of the other two, the party led by Captain DeLong froze to death in the Lena Delta.

The Sinking of the Jeannette.

The tragic story of the Jeannette was an inspired choice for the Mercury Theater. The 1930s were a time of intense interest in polar exploration, when Admiral Richard E. Byrd’s two Antarctic expeditions became multimedia events. Ellsberg’s Hell on Ice rode the crest of that wave and, moreover, was well suited to Welles’s “first person” approach to radio narrative, since it drew upon the journals of the Jeannette’s officers. Ellsberg’s book is also surprisingly radiogenic in its vivid descriptions of sound. We read that the “unearthly screeching and horrible groanings” of the ice pack are “like the shrieking of a thousand steamer whistles, the thunder of heavy artillery, the roaring of a hurricane, and the crash of collapsing houses all blended together,” and that the “deep bass” of the ice floes and the “high scream” of the grating icebergs are “a veritable devil’s symphony of hideous sounds” (Hell on Ice, 110, 161). The Mercury Theatre’s adaptation grants considerable airtime to recreating that “devil’s symphony,” with stunning sequences depicting the piercing arctic wind, ice floes that shriek and drum against the ship’s hull, and the ship’s engines straining against the ice:


The frozen world of “Hell on Ice” had many expressive possibilities for the Mercury’s sound effects crew, and was also a wonderful showcase for composer Bernard Herrmann. John Houseman claimed that Herrmann had a repertoire of music for the Mercury broadcasts, one of which was “frozen music,” to be used for “gruesome effects.” Herrmann’s frozen music is first heard when the ship becomes locked in the ice, and it signals a shift in the show’s narrative emphasis to themes of frozen time, stasis, immobility, and deadening routine. The slow, queasy, pendulum-like movements of Herrmann’s score make the perfect accompaniment to Captain DeLong’s June 21st journal entry describing the absolute monotony of “the same faces, the same dogs, the same ice,” read on the broadcast by the actor Ray Collins (The Voyage of the Jeannette, 382-3). Here and elsewhere in the broadcast, Herrmann’s frozen music is a sonic set design that portrays the bleak scene of the frozen north, and provides commentary on the emotional life of the crew, who struggle with the soul-crushing monotony of life on the ice pack.


We should appreciate “Hell on Ice” not just for its aesthetic achievement, however, but also for its social critique. As with other Welles projects, “Hell on Ice” questions America’s passage to an industrial and imperial society (consider, for example, James Naremore’s argument that The Magnificent Ambersons charts a transition from “midland streets” to “grimy highways” [The Magic World of Orson Welles, 89-91]). “Hell on Ice” brings out the ecological dimension of that critique, and in that regard resembles another nineteenth-century first-person tale in which little or nothing happens: Thoreau’s Walden (1854), which initially suggests a narrative of adventure (the individual in the wilderness), but then quickly abandons it for descriptions of everyday life on Walden Pond. Robert B. Ray claims that Thoreau had little gift for narrative, and that “going to Walden appealed to him because there nothing would happen” (Walden X 40, 11). As the narrative interest fades, it is replaced by Thoreau’s poetic descriptive passages and biting social commentary. In a similar re-routing of narrative expectations, Captain DeLong wrote in his journals that, given the “popular idea” that “daily life in the Arctic regions should be vivid, exciting, and full of hair-breadth escapes,” the account of his voyage was sure to be found “dull and weary and unprofitable” (The Voyage of the Jeannette, 409-10). Immobility, routine, and unprofitability were a blessing to Thoreau, who even contrasted his “experiment” on Walden Pond to Arctic explorers like John Franklin and Martin Frobisher: where they had explored the Earth’s higher latitudes, Thoreau implored readers to “explore your own higher latitudes… Explore thyself” (Walden, 213).

The voyage of the Jeannette as depicted on the endpaper of the 1938 edition of Hell on Ice that Welles probably read.

Indeed, “Hell on Ice” and Walden share a certain narrative problem – or, more precisely, a “lack-of-narrative” problem. When Welles adapted DeLong’s journals (via Ellsberg), he responded to that problem in part by recourse to character study. On the Mercury broadcast, the Jeannette’s thwarted mission opens up the possibility for brilliant dramatic scenes: the interaction among engineer George Melville (Welles), DeLong (Collins), John Danenhower (Joseph Cotten), and reporter Jerome Collins (Howard Smith) during the crew’s first Christmas on the ice; Melville’s encounter with the seaman Erikson (Karl Swenson); the escalating tensions between DeLong and Collins; and Melville and DeLong’s final conversation about their chances on the ice.

It may seem pointless to speculate about what Thoreau might have written had he been keeping a journal on board the Jeannette, but by a remarkable coincidence, another icon of American environmentalism nearly did just that. Nature writer and Sierra Club founder John Muir was a passenger on board a government ship sent to look for the missing Jeannette in 1881. Radio fans will take pleasure in the fact that the name of the ship was the “Corwin.” Muir was eager for the chance to study how glaciers had shaped the landscape of the polar region during the last Ice Age. For Muir, the frozen North was vivid and exciting as a natural laboratory and a window into deep time, just as it is for ecological activists today.

If we listen closely, can we hear Muir’s sentiments in Welles’ “Hell on Ice”?

Listening to the show as an ecological critique prompts us to hear the sound effects not only as a showcase of modernist radio technique, but as a means to give voice to nonhuman nature and create dissonant harmonies with human endeavors. This is not to argue that the Mercury group foresaw current concerns, but to testify to the enduring suppleness of their work and inspire eco-sonic productions in the future. Notice how the Bennett expedition is made to seem insignificant by the thunderous sounds of the “endless miles of surging ice” that snap the Jeannette to splinters. Or consider how, during DeLong’s last divine service on the edge of the ice pack, the sound of the men singing a hymn is gradually drowned out by a crescendo of roaring arctic wind.


Last service for the sailors of the Jeannette

In these sequences, the broadcast uses sound to play with spatial scale, performing a kind of auditory zoom that forces us to hear the human in relation to a sense of planet. The conclusion of the show does something similar, but in a temporal register: Melville describes burying DeLong and his men at a desolate spot overlooking the Arctic Ocean, where the winds wail an “eternal dirge.”

A monument to DeLong and his crew.

There is a certain sad irony to this conclusion, which asserts that the wind and ice of the Arctic are timeless, for we have come to understand that the polar climate does indeed have a history, and that humans now shape it in profound ways. “Hell on Ice” thus takes on new meaning in our own era, as temperatures rise in the Arctic, and we are forced to contemplate another kind of polar “hell,” one represented not by an impenetrable wall of ice, but by the thinning and disappearance of the ice pack, with all its intimations of environmental catastrophe. Indeed, it is now Muir’s voice that we should hear, with its deep historical and planetary perspective, when Collins, as DeLong, speaks the line that the Jeannette’s captain wrote on the first day the ship became frozen in the ice: “This is a glorious country to learn patience in” (The Voyage of the Jeannette, 116).

Frame from Citizen Kane (1941)

Jacob Smith is Associate Professor in the Radio-Television-Film Department at Northwestern University. He has written several books on sound (Vocal Tracks: Performance and Sound Media [2008], and Spoken Word: Postwar American Phonograph Cultures [2011], both from the University of California Press), and published articles on media history, sound, and performance.

In order of their appearance, here are the other nine entries in our series From Mercury to Mars: Orson Welles and Radio after 75 Years, which is a joint project with Antenna: Responses to Media and Culture. 

  • Here is “Hello Americans,” Tom McEnaney‘s post on Welles and Latin America
  • Here is Eleanor Patterson’s post on editions of WOTW as “Residual Radio”
  • Here is “Sound Bites,” Debra Rae Cohen‘s post on Welles’s “Dracula”
  • Here is Cynthia B. Meyers on the pleasures and challenges of teaching WOTW in the classroom
  • Here is “‘Welles,’ Bells and Fred Allen’s Sonic Pranks,” Kathleen Battles on parodies of Welles.
  • Here is Shawn VanCour on the second act of War of the Worlds
  • Here is the navigator page for our #WOTW75 collective listening project
  • Here is our podcast of Monteith McCollum‘s amazing WOTW remix
  • Here is Josh Shepperd on WOTW and media studies.

Heard Any Good Games Recently?: Listening to the Sportscape

"Finland vs. Belarus" by Flickr user s. yume, CC-BY-2.0

Sound and music play important roles in shaping our experiences of sports. Every sport has its own characteristic sounds and soundscape; some are nearly silent while others can be dangerously noisy. Barry Truax, in his engagement with R. Murray Schafer’s concept of soundscape in the book Acoustic Communication, states that the listener is always present in a soundscape, not solely as a listener but also as a producer of sound (10). Both Truax and Schafer use the term hi-fi to describe environments where sounds may be heard clearly, while lo-fi, often urban, environments have more overlapping sounds. When an audio environment is well balanced (hi-fi), there is a high degree of information exchange between sound, listeners, and the environment, and the listener is involved in an interactive relationship with the other two components (Truax 57). Truax’s concepts of hi-fi and lo-fi enable a better understanding of the power relations between the key sonic elements of sports: the players, the audience, and the organizer (usually a game DJ), a role that has become increasingly prominent in today’s team sports events due to the permeation of recorded music. Using examples from Finnish soccer, pesäpallo (“Finnish baseball”), and ice hockey, I track how a particular game’s sonic balance can be altered to shape the atmosphere of the event and even influence the game’s outcome.

In Europe, soccer is overwhelmingly associated with crowd chants, as noted by Les Back in his article “Sounds in the Crowd.” Without the sounds from the audience, the soccer soundscape would be more hi-fi, clearly revealing the keynote sounds of the sport: for example, the thud of a kicked ball and individual shouts from both players and spectators. Such clear hi-fi articulation may be desirable at other times, but from a home team’s perspective it does not make for a good soccer atmosphere. And while playing recorded music to engage the crowd before free kicks or corners is not prohibited, it breaks the unwritten rules of the game. Creating a good atmosphere thus becomes the crowd’s responsibility; hence the infamous songs and chants.

“Final!” by Flickr user liikennevalo, CC BY-NC-SA 2.0

The culture of avoiding electronically reproduced music reveals that soccer’s soundscape is as vulnerable to silence as it is to chants, if not more so. Silence often becomes a way of effecting change at the level of the soundscape. A silent, passive crowd can mirror, for example, the team’s performance on the field, or reflect a general lack of interest. Organized supporter groups can also demonstrate their dissatisfaction with something by refusing to sing.


This sound clip demonstrates how the keynote sounds of soccer are exposed while approximately 1,200 people in the audience seem to be “just watching” a very important home game at the end of the 2012 Veikkausliiga season. At the end of the clip the home team, FF Jaro, equalized; they eventually went on to avoid relegation by just one point.

In contrast to soccer, an important part of the pesäpallo experience (Finnish baseball, the national sport of Finland) is actually listening to the continuous communication of the teams. The key to pesäpallo, and the most important difference between it and American baseball, is the vertical pitch. Hitting the ball, as well as controlling its power and direction, is much easier, which gives the offensive game much more variety, speed, and tactical depth than baseball. The fielding team is forced to counter the batter’s choices with defensive schemes, and the game becomes a mental challenge. The continuous chatter of the batting team, standing in a half circle around the dueling batter and pitcher, shapes the pesäpallo soundscape. For a better appreciation of the sport, spectators must carefully tune in to the teams’ communiqués.

“Pesäpallo, Hyvinkää, Finland” by Flickr user Robert Andersson, CC BY-NC-ND 2.0

The men’s pesäpallo team Vimpelin Veto, from the small village of Vimpeli in rural Finland, has a very active crowd with deep knowledge of the sport. The village has only a little over 3,200 inhabitants but averaged 2,087 spectators per game during the 2012 season. In a local newspaper article, Veto’s player Mikko Rantalahti reveals that when the crowd is making lots of noise, the visiting players’ tactical “wrong” shouts (“väärä” in Finnish), signaling for instance that a pitched ball is too low, cannot be heard by the visiting team’s own fielders. The audience’s collective shouting makes the soundscape more lo-fi and the visiting team’s communication difficult.


This tradition of strategic noisemaking was, before the use of headsets, also heard in American football, where crowds made noise to disrupt the visiting team’s vocal communication. According to Matthew Mihalka’s PhD dissertation “From the Hammond Organ to ‘Sweet Caroline’: The Historical Evolution of Baseball’s Sonic Environment,” crowd noise in baseball is viewed as less influential since directions are sent via hand signals (44). Even though the pesäpallo manager leads the offensive play with a multicolored fan and other visual signals, much of the communication is verbal.

(starting point ~16:30)

In this video clip from the 2011 Superpesis final between Vimpelin Veto and Sotkamon Jymy, the audience tries to disturb not only the batter’s focus but also the communication of the visiting team standing in the half circle around the batter. Even the commentators are struck by the crowd noise and note its influence.

“Vimpelin Vedon jokeri Toni Kuusela lyöntivuorossa” by Picasa user Nurmon Jymy, CC BY-NC-ND 3.0

At Veto’s games, the audience creates the sonic atmosphere just as in soccer. When the home team is batting, the audience engages in rhythmic hand clapping, deliberately uncoordinated with the organizers’ music. In 2012, I interviewed managing director J-P Kujala, who is responsible for the music at Veto’s games; he stated that the atmosphere at the team’s home games is so good that “there is no need for musical reinforcements.” He also doubted that the audience would react positively to music played to activate them. At the stadium, music is heard only before the game, during warm-up, and at intermissions. Kujala refrains from playing music when the visiting team is batting, since that can be considered “disturbing. . .we don’t do that here.” From the organizers’ perspective, the teams are treated equally in sonic terms, but if the home audience creates a wall of sound that drowns out the visiting team’s tactical shouts, making the soundscape more “lo-fi,” it is considered a home advantage. In this context, lo-fi is related not to the use of technology and recorded music but to the audience’s own sounds.

However, in contrast to Veto’s home-ground sound culture, more teams are beginning to play music during the actual game, not only when the home team is batting (2:19) but also when the visiting team is batting. DJs often use songs to make jokes at the visiting team’s expense. Whatever the implied interpretation of the music might be, the strategy of playing music in this core situation also alters something very authentic about the pesäpallo experience. In this sound clip from a Koskenkorvan Urheilijat home game, one can hear the visiting team Pattijoen Urheilijat communicating underneath the Finnish hit song Älä tyri nyt (“Don’t mess up now”). Notice that the home crowd, unlike at Veto’s games, is not actively making noise; hence the use of music.


As this clip shows, the increasing use of music in pesäpallo calls attention to the need to develop up-to-date rules for the use of recorded music rather than relying on custom or practice.

“Ice Hockey World Championships Finland-Belarus” by Flickr user Chiva Congelado, CC BY-NC-SA 2.0

When discussing the soundscape of ice hockey, the most popular sport in Finland, the question is no longer whether to play music but which music best suits which situation. As in soccer, the most active fans often get cheaper tickets to fill their own fan sections and sing from the curve behind the goals. Apart from singing along to iconic goal songs or team anthems, the fans very seldom interact with the other music played by the DJ. Moving toward a more mediated sport experience, the ice hockey soundscape is also becoming more lo-fi, and the balance of sound-making has shifted toward the organizers, with many sound events using recorded sound (music, videos, commercials, etc.) to entertain the crowd during breaks in play. This shift from hi-fi to lo-fi can, according to Truax, encourage the feeling of being cut off from the environment and may begin to dramatically shift the audience’s experience of the sport (20).

There is no doubt that supporter groups play an important role as creators of meaningful sounds and a good atmosphere in Finnish ice halls. In that sense it is a paradox that much of the music played “from record” overlaps their activity. John Bale has written that “fully modernized sport will alter the nature of the soundscape of stadiums and arenas […] and that electronically amplified sound will also increase and hence reduce the spontaneity of the crowd’s songs and chants” (141). The hockey example above, with its planned rituals, confirms this statement. Discussing and choosing the right songs for the right moment, in an attempt not only to entertain but also to coordinate the crowd, is of course one way to deal with this schizophonic clash of sounds. An increasingly common way to integrate the fans into the formation of the soundscape is to let them interact with the DJ through, for example, Twitter. This, too, is a way of recognizing the power relations in the soundscape.

“Men’s Bronze Medal: Finland vs. Slovakia” by Flickr user s. yume, CC BY 2.0

The ice hockey team HC TPS, together with a long-time sponsor, recently came up with the idea of “buying silence” and donating the spot to the fans. The sponsor also provided the organizers and fans with radiotelephones so that, when prompted by a text on the video screens in the hall, the fans would know when the spot was being played and could make the most of it. This innovative action alters the balance of the soundscape, allowing other sounds to be produced and heard more clearly. It makes the ice hockey soundscape hi-fi again; the fans’ interaction with the environment improves, showcasing how the balance of the hockey soundscape is now entangled with the use of sound-reproduction technology.

As the examples above highlight, sounds play an important role in the experience of sports. For the audience, making sounds is a way to participate in and interact with the event. As the use of music, at least in Finnish sports, seems to increase, there is also a need to identify the underlying rationale for playing it; the task is not only to find suitable sports music but to ask why music is played and what effects it might have on the soundscape as a whole. In soundscape research there has been a certain romanticization of hi-fi soundscapes, but in the cases I have studied there are no clear dichotomies in which one stands for something negative (lo-fi) and the other for something to strive for (hi-fi). Both hi-fi and lo-fi environments reveal power relations in how they connect to the audience’s motivation and ability to contribute sounds, in addition to the use of technology.

Featured image: “Finland vs. Belarus” by Flickr user s. yume, CC-BY-2.0

Kaj Ahlsved is a PhD student in musicology at Åbo Akademi University in Turku, Finland. His research focuses on the ubiquitous music of our everyday life and especially on how recorded music is used during sport events. He does ethnographic fieldwork in team sports, mainly focusing on Finnish male teams in ice hockey, soccer, pesäpallo (“Finnish baseball”), volleyball, floorball, and basketball. His research is funded by the PhD Program in Popular Culture Studies, and he is a member of the Nordic Research Network for Sound Studies (Norsound). He holds a master’s degree in musicology and a bachelor’s degree in music pedagogy (classical guitar). Kaj is a Finland-Swede living with his wife and three children in the bilingual town of Jakobstad/Pietarsaari. He is, of course, a proud fan of the local soccer team FF Jaro.
