
Episode I: The Greatest Sound in the Galaxy: Sound and Star Wars

Ever tried listening to a Star Wars movie without the sound? –IGN, 1999
Sound is 50 percent of the motion-picture experience. –George Lucas

In the radio dramatization of Return of the Jedi (1996), a hibernation sickness-blinded Han Solo can tell bounty hunter Boba Fett is in the same room with him just by smelling him.  Later this month, Solo:  A Star Wars Story (part of the Anthology films, and as you might expect from the title, a prequel to Han Solo’s first appearance in Star Wars:  A New Hope) may be able to shed some light on how Han developed this particular skill.

Later in that dramatization, we have to presume Han is able to accurately shoot a blaster blind by hearing alone.  Appropriately, then, sound is integral to Star Wars.  For every iconic image in the franchise—from R2D2 to Chewbacca to Darth Vader to X-Wing and TIE-fighters to the Millennium Falcon and the light sabers—there is a correspondingly iconic sound.  In musical terms, too, the franchise is exemplary. John Williams, Star Wars’ composer, won the most awards of his career for his Star Wars (1977) score, including an Oscar, a Golden Globe, a BAFTA, and three Grammys.  Not to mention Star Wars’ equally iconic diegetic music, such as the Mos Eisley Cantina band (officially known as Figrin D’an and the Modal Nodes).

Without sound, there would be no Star Wars.  How else could Charles Ross’ One Man Star Wars Trilogy function?  In One Man Star Wars, Ross performs all the voices, music, and sound effects himself.  He needs no quick costume changes; indeed, in his rapid-fire, verbatim treatment, it is sound (along with a few gestures) that he uses to distinguish between characters.  His one-man show, in fact, echoes C-3PO’s performance of Star Wars to the Ewoks in Return of the Jedi, a story told in narration and sound effects far more than in any visuals.  “Translate the words, tell the story,” says Luke in the radio dramatization of this scene.  That is what sound does in Star Wars. 

I believe that the general viewing public is aware on a subconscious level of Star Wars’ impressive sound achievements, even if this is not always articulated as such.  As Rick Altman noted in 1992 in his four and a half film fallacies, the ontological fallacy of film—while not unchallenged—began life with André Bazin’s “The Ontology of the Photographic Image,” (1960) which argues that film cannot exist without image.  Challenging such an argument not only elevates silent film but also the discipline of film sound generally, so often regarded as an afterthought.  “In virtually all film schools,” Randy Thom wrote in 1999, “sound is taught as if it were simply a tedious and mystifying series of technical operations, a necessary evil on the way to doing the fun stuff.”

Reviewing Star Wars on its original release, film critic Pauline Kael claimed that its defining characteristic was its "loudness," in what Gianluca Sergi terms a "harmful generalization."  Loud sound does not necessarily equal good sound in the movies, a distinction audiences themselves can sometimes blur.  As Thom has argued, "high fidelity recordings of gunshots and explosions, and well fabricated alien creature vocalizations" do not by themselves equal good sound design.  On the contrary, Star Wars' achievements, Sergi posited, married technological invention with an overall sound concept and refined if not defined the work of sound technicians and sound-conscious directors.

Star Wars is so successful aurally because its creator, George Lucas, was invested in sound holistically and cohesively, a commitment that has carried through nearly every iteration of the franchise, and because his original sound designer, Ben Burtt, understood that there was an art as well as a science to creating highly original, aurally "sticky" sounds.  Ontologically, then, Star Wars is a sound-based story, as reflected in the existence of the radio dramatizations (more on them later). This article traces the historical development of sound not only in the Star Wars films (four decades of them!) but also in associated media such as television and video games, and examines aspects of Star Wars' holistic sound design in detail.

A long time ago, in a galaxy far, far away . . .

As Chris Taylor points out, George Lucas "loved cool sounds and sweeping music and the babble of dialogue more than he cared for dialogue itself."  In 1974, Lucas was working on Radioland Murders, a screwball comedy thriller set in the fictional 1930s radio station WKGL.  Radio, indeed, had already made a strong impression on Lucas: legendary "border blaster" DJ Wolfman Jack played an integral part in his film American Graffiti (1973).  As Marcus Hearn picks up the story, Lucas soon realized that Radioland Murders was going nowhere (the film would eventually be made in 1994).  Lucas then turned his sound-conscious sensibilities in a different direction, to "The Star Wars," a project he had been ruminating on since his film school days at the University of Southern California.  Retaining creative control and maintaining a holistic interest in a defined soundworld were two aspects Lucas insisted upon during the development of the project that would become Star Wars.  Lucas had worked with his USC contemporary, sound designer and recordist Walter Murch, on THX 1138 (1971) and American Graffiti, and Murch would go on to provide legendary sound work for The Conversation (1974), The Godfather Part II (1974), and Apocalypse Now (1979). Murch was unavailable for the new project, so Lucas asked producer Gary Kurtz to visit USC to evaluate emerging talent.

Ben Burtt, whose BA was in physics, was pursuing a master's degree in Film Production at USC.  In Burtt, Lucas found a truly innovative approach to film sound, which was the genesis of Star Wars' sonic invention, providing, in Sergi's words, "audiences with a new array of aural pleasures."  Sound is embodied in the narrative of Star Wars.  Not only was Burtt innovative in his meticulous attention to "found sounds" (whereas sound composition for science fiction films had previously relied on electronic sounds), he applied that meticulousness in character terms.  Burtt said that Lucas and Kurtz "just gave me a Nagra recorder and I worked out of my apartment near USC for a year, just going out and collecting sound that might be useful."

Ben Burtt plays the twang of steel guy wires, which formed the basis of the many blaster sounds (re-creating the moment with Miki Hermann for a documentary). Image by Flickr User: Tom Simpson (CC BY-NC-ND 2.0)

Inherent in this was Burtt’s relationship with sound, in the way he was able to construct a sound of an imaginary object from a visual reference, such as the light saber, described in Lucas’ script and also in concept illustrations by Ralph McQuarrie.  “I could kind of hear the sound in my head of the lightsabers even though it was just a painting of a lightsaber,” he said.  “I could really just sort of hear the sound maybe somewhere in my subconscious I had seen a lightsaber before.”  Burtt also shared with Lucas a sonic memory of sound from the Golden Age of Radio:  “I said, `All my life I’ve wanted to see, let alone work on, a film like this.’ I loved Flash Gordon and other serials, and westerns. I immediately saw the potential of what they wanted to do.”

But sir, nobody worries about upsetting a droid

Burtt has described the story of A New Hope as being told from the point of view of the droids (the robots).  While Lucas was inspired by Kurosawa’s The Hidden Fortress (1958) to create the characters of droids R2-D2 (“Artoo”) and C-3PO (“Threepio”), the robots are patently non-human characters.  Yet, it was essential to imbue them with personalities.  There have been cinematic robots since Maria, but Burtt uniquely used sound to convey not only these two robots’ personalities, but many others as well.  As Jeanne Cavelos argues, “Hearing plays a critical role in the functioning of both Threepio and Artoo.  They must understand the orders of their human owners.”  Previous robots had less personality in their voices; for example, Douglas Rain, the voice of HAL in 2001:  A Space Odyssey, spoke each word crisply with pauses. Threepio is a communications expert, with a human-like voice, provided by British actor (and BBC Radio Drama Repertory Company graduate) Anthony Daniels.  According to Hearn, Burtt felt Daniels should use his own voice, but Lucas was unsure, wanting an American used car salesman voice.  Burtt prevailed, creating in Threepio, vocally, “a highly strung, rather neurotic character,” in Daniels’ words, “so I decided to speak in a higher register, at the top of the lungs.”  (Indeed, in the Diné translation of Star Wars [see below], Threepio was voiced by a woman, Geri Hongeva-Camarillo, something that the audience seemed to find hilarious.)

Artoo was altogether a more challenging proposition.  As Cavelos puts it, “Artoo, even without the ability to speak English, manages to convey a clear personality himself, and to express a range of emotions.”  Artoo’s non-speech sounds still convey emotional content.  We know when Artoo is frightened;

when he is curious and friendly;

and when he is being insulting.

(And although subtitled scenes of Artoo are amusing, they are not in the least necessary.)  Artoo’s language was composed and performed by Burtt, derived from the communication of babies:

we started making little vocal sounds between each other to get a feeling for it.  And it dawned on us that the sounds we were making were not actually so bad.  Out of that discussion came the idea that the sounds a baby makes as it learns to walk would be a direction to go; a baby doesn’t form any words, but it can communicate with sounds.

The approach to Artoo’s aural communications became emblematic of all of the sounds made by machines in Star Wars, creating a non-verbal language, as Kris Jacobs calls it, the “exclusive province” of the Star Wars universe.

Powers of observation lie with the mind, Luke, not the eyes

According to Gianluca Sergi, the film soundtrack is composed of sound effects, music, dialogue, and silence, all of which work together with great precision in Star Wars, to a highly memorable degree.  Hayden Christensen, who played Anakin Skywalker in Attack of the Clones (2002) and Revenge of the Sith (2005), noted that when filming light saber battles with Ewan McGregor (Obi-Wan Kenobi), he could not resist vocally making the sound effects associated with these weapons.

This is a good illustration of how iconic the sound effects of Star Wars have become.  As Burtt noted above, he was stimulated by visuals to create the sound effects of the light sabers, though he was also inspired by the motor on a projector in the Department of Cinema at USC.  As Todd Longwell pointed out in Variety, the projector hum was combined with the buzz picked up by a microphone passed in front of an old TV to create the sound.  (It's worth noting that the sounds of weapons were some of the first sound effects created in aural media, as with Wallenstein, the first drama on German radio, in 1924, which featured clanging swords.)

If Burtt gave personality to robots through their aural communications, he created an innovative sound palette for far more than the light sabers in Star Wars.  In modifying and layering found sounds to create sounds corresponding to every aspect of the film world—from laser blasts (the sound of a hammer on an antenna tower guy wire) to the Imperial Walkers from Empire Strikes Back (modifying the sound of a machinist’s punch press combined with the sounds of bicycle chains being dropped on concrete)—he worked as meticulously as a (visual) designer to establish cohesion and impact.

Sergi argues that the sound effects in Star Wars can give subtle clues about the objects with which they are associated.  The sound of Imperial TIE fighters, which “roar” as they hurtle through space, was made from elephant bellows, and the deep and rumbling sound made by the Death Star is achieved through active use of sub-frequencies.  Meanwhile, “the rebel X-wing and Y-wing fighters attacking the Death Star, though small, emit a wider range of frequencies, ranging from the high to the low (piloted as they are by men of different ages and experience).”  One could argue that even here, Burtt has matched personality to machine.  The varied sounds of the Millennium Falcon (jumping into hyperspace, hyperdrive malfunction), created by Burtt by processing sounds made by existing airplanes (along with some groaning water pipes and a dentist’s drill), give it, in the words of Sergi, a much more “grown-up” sound than Luke’s X-Wing fighter or Princess Leia’s ship, the Tantive IV.  Given that, like its pilot Han Solo, the Falcon is weathered and experienced, and Luke and Leia are comparatively young and ingenuous, this sonic shorthand makes sense.

Millions of voices

Michel Chion argues that film has tended to be verbocentric, that is, that film soundtracks are produced around the assumption that dialogue, and indeed the sense of the dialogue rather than the sound, should be paramount and most easily heard by viewers.  Star Wars contradicts this convention in many ways, beginning with the way it uses non-English communication forms, not only the droid languages discussed above but also its plethora of languages for various denizens of the galaxy.  For example, Cavelos points out that Wookiees “have rather inexpressive faces yet reveal emotion through voice and body language.”

While the 1978 Star Wars Holiday Special may have many sins laid at its door, among them must surely be that the only Wookiee who actually sounds like a Wookiee is Chewbacca.  His putative family sound more like tauntauns.  Such a small detail can be quite jarring in a universe as sonically invested as Star Wars. 

While many of the lines in Star Wars are eminently quotable, the vocal performances have perhaps received less attention than they deserve.  As Starr A. Marcello notes, vocal performance can be extremely powerful, capitalizing on the "unique timbre and materiality that belong to a particular voice."  For example, while Lucas originally wanted Japanese actor Toshiro Mifune to play Obi-Wan, Alec Guinness' patrician Standard Neutral English accent clearly became an important part of the character: when (Scottish) actor Ewan McGregor was cast to play the younger version of Obi-Wan, he began voice lessons to reproduce Guinness' voice. Ian McDiarmid (also Scottish), primarily a Shakespearean stage actor, was cast as arch-enemy the Emperor in Return of the Jedi, presumably on the quality of his vocal performance, and as such has portrayed the character in everything from Revenge of the Sith to Angry Birds Star Wars II.

Sergi argues that Harrison Ford as Han Solo performs in a lower pitch but an unstable meter, a characterization explored in the radio dramatizations of A New Hope, Empire Strikes Back, and Return of the Jedi, when Perry King stands in for Ford.  By contrast, Mark Hamill voices Luke in two of the radio dramatizations, refining and intensifying his film performances.  Sergi argues that Hamill’s voice emphasizes youth:  staccato, interrupting/interrupted, high pitch.

And affectionately parodied here:

I would add warmth of tone to this list, perhaps illustrated nowhere better than in Hamill’s performance in episode 1 – “A Wind to Shake the Stars” of the radio dramatization, which depicts much of Luke’s story that never made it onscreen, from Luke’s interaction with his friends in Beggar’s Canyon to a zany remark to a droid (“I know you don’t know, you maniac!”). It will come as no surprise to the listeners of the radio dramatization that Hamill would find acclaim in voice work (receiving multiple nominations and awards).  In the cinematic version, Hamill’s performance is perhaps most gripping during the climactic scene in Empire Strikes Back when Darth Vader tells him:

According to Hamill, “what he was hearing from Vader that day were the words, ‘You don’t know the truth:  Obi-Wan killed your father.’  Vader’s real dialogue would be recorded in postproduction under conditions easier to control.”  More on that (and Vader) shortly.

It has been noted that Carrie Fisher (who was only nineteen when A New Hope was filmed) uses an accent that wavers between Standard North American and Standard Neutral English.  Fisher has explained this as her emulating experienced British star of stage and screen Peter Cushing (playing Grand Moff Tarkin).

However, the accents of Star Wars have remained a contentious if little commented upon topic, with most (if not all) Imperial staff from A New Hope onwards speaking Standard Neutral English (see the exception, stormtroopers, further on).  In production terms, naturally, this has a simple explanation.  In story terms, however, fans have advanced theories regarding the galactic center of the universe, with an allegorical impetus in the form of the American Revolution.  George Lucas, after all, is an American, so the heroic Rebels echo American colonists throwing off British rule in the 18th century, inspired in part by their geographical remove from centers of Imperial rule like London.  Therefore, goes this argument, in Star Wars, worlds like Coruscant are peopled by those speaking Standard Neutral English, while those in the Outer Rim (the majority of our heroes) speak varieties of Standard North American.  Star Wars thus both advances and reinforces the stereotype that the Brits are evil.

It is perhaps appropriate, then, that James Earl Jones’ performance as Darth Vader has been noted for sounding more British than American, though Sergi emphasizes musicality rather than accent, the vocal quality over verbocentricity:

The end product is a fascinating mixture of two opposite aspects:  an extremely captivating, operatic quality (especially the melodic meter with which he delivers the lines) and an evil and cold means of destruction (achieved mainly through echoing and distancing the voice).

It is worth noting that Lucas originally wanted Orson Welles, perhaps the most famous radio voice of all time, to portray Vader, yet feared that Welles would be too recognizable.  That a different voice needed to emanate from behind Vader’s mask than the actor playing his body was evident from British bodybuilder David Prowse’s “thick West Country brogue.”  The effect is parodied in the substitution of a Cockney accent from Snatch (2000) for Jones’ majestic tones:

A Newsweek review of Jones in the 1967 play The Great White Hope argued that Jones had honed his craft through "Fourteen years of good hard acting work, including more Shakespeare than most British actors attempt."  Sergi has characterized Jones' voice as the most famous in Hollywood, in part because, in addition to his prolific theatre back catalogue, Jones took bit parts and voiced commercials—"commercials can be very exciting," he noted.  The two competing forces combined to create a memorable performance, though as others have noted, Jones is the African-American voice to the white actors who portrayed Anakin Skywalker onscreen (Sebastian Shaw and Hayden Christensen).

Brock Peters, also African American and known for his deep voice, played Vader in the radio dramatizations.  Jennifer Stoever notes that in America, the sonic color line "historically contoured, identified, and marked mismatches between 'sounding white' and 'looking black'" (231), whereas the Vader performances "sound black" and "look white." Andrew Howe, in his chapter "Star Wars in Black and White," notes the "tension between black outer visage and white interior identity [. . .] Blackness is thus constructed as a mask of evil that can be both acquired and discarded."

Like many of the most important aspects of Star Wars, Vader's sonic presence is multi-layered, consisting in part of Jones' voice manipulated by Burtt, as well as the sonic indicator of his presence, his mechanized breathing:

The concept for the sound of Darth Vader came about from the first film, and the script described him as some kind of a strange dark being who is in some kind of life support system.  That he was breathing strange, that maybe you heard the sounds of mechanics or motors, he might be part robot, he might be part human, we really didn’t know.  [ . . .] He was almost like some robot in some sense and he made so much noise that we had to sort of cut back on that concept.

On radio, a character cannot be said to exist unless we hear from him or her; whether listening to the radio dramatizations or watching Star Wars with our eyes closed, we can always sense the presence of Vader by the sound of his breathing.  As Kevin L. Ferguson points out, “Is it accidental, then, that cinematic villains, troubling in their behaviour, are also often troubled in their breathing?”  As Kris Jacobs notes, “Darth Vader’s mechanized breathing can’t be written down”—it exists purely in a sonic state.

Your eyes can deceive you; don’t trust them

Music is the final element of Sergi's list of what makes up the soundtrack, and John Williams' enduring musical score is the most obvious of Star Wars' sonic elements. Unlike "classical era" Hollywood film composers like Max Steiner or Erich Korngold who, according to Kathryn Kalinak, "entered the studio ranks with a fair amount of prestige and its attendant power," Williams entered as a contract musician working with "the then giants of the film industry," moving into a "late-romantic idiom" that has come to characterize his work.  This coincided with what Lucas envisioned for Star Wars, influenced as it was by 1930s radio serial culture.

Williams' emotionally pitched music has many elements that Kalinak argues link him with the classical score model: unity; the use of music in the creation of mood and character; the privileging of music in moments of spectacle; and the careful mixing of music and dialogue. This effect is exemplified in the opening of A New Hope, the "Main Title" or, as Dr Lehman has it (see below), "Main/Luke A."  As Sergi notes, "the musical score does not simply fade out to allow the effects in; it is, rather literally, blasted away by an explosion (the only sound clearly indicated in the screenplay)."

As Kalinak points out, it was common in the era of Steiner and Korngold to score music for roughly three-quarters of a film, whereas by the 1970s, it was more likely to be one-quarter.  “Empire runs 127 minutes, and Williams initially marked 117 minutes of it for musical accompaniment”; while he used three themes from A New Hope, “the vast majority of music in The Empire Strikes Back was scored specifically for the film.”

Perhaps Williams' most effective technique is the use of leitmotifs, derived from the work of Richard Wagner and more complex than a simple repetition of themes.  Within leitmotifs, we hear the blending of denotative and connotative associations: as Matthew Bribitzer-Stull notes, "not just a musical labelling of people and things" but also, as Thomas S. Grey puts it, "a matter of musical memory, of recalling things dimly remembered and seeing what sense we can make of them in a new context."  Bribitzer-Stull also notes the complexity of Williams' leitmotif use, in that tonal music is given to both protagonists and antagonists, resisting the then-cliché of reserving atonal music for antagonists.  In Williams' score, atonal music instead accompanies exotic landscapes and fight or action scenes.  As Jonathan Broxton explains,

That’s how it works. It’s how the films maintain musical consistency, it’s how characters’ musical identities are established, and it offers the composer an opportunity to create interesting contrapuntal variations on existing ideas, when they are placed in new situations, or face off against new opponents.

Within the leitmotifs, Williams provides variations and disruptions, such as the harmonic corruption when "the melody remains largely the same, but its harmonization becomes dissonant." One of the most haunting ways in which Williams alters and reworks his leitmotifs is what Bribitzer-Stull calls "change of texture."

Frank Lehman of Harvard has examined Williams’ leitmotifs in detail, cataloguing them based on a variety of meticulous criteria.  He has noted, for example, that some leitmotifs are used often, like “Rebel Fanfare” which has been used in Revenge of the Sith, A New Hope, The Empire Strikes Back, The Force Awakens, The Last Jedi, and Rogue One.  Lehman particularly admires Williams’ skill and restraint, though, in reserving particular leitmotifs for very special occasions.  For example, “Luke & Leia,” first heard in Return of the Jedi (both film and radio dramatization) and not again until The Last Jedi:

While Williams’ use of leitmotifs is successful and evocative, not all of Star Wars’ music consists of leitmotifs, as Lehman points out; single, memorable pieces of music not heard elsewhere are still startlingly effective.

In the upcoming Solo, John Williams will contribute a new leitmotif for Han Solo, while all other material will be written and adapted by John Powell.  Williams has said in interview that “I don’t make a particular distinction between ‘high art’ and ‘low art.’  Music is there for everybody.  It’s a river we can all put our cups into, and drink it, and be sustained by it.”  The sounds of Star Wars have sustained it—and us—and perfectly illustrate George Lucas’ investment in the equal power of sound to vision in the cinematic experience.  I, for one, am looking forward to what new sonic gems may be unleashed as the saga continues.

In the first week of June, Leslie McMurtry will return with Episode II, focusing on shifts in sound in the newer films and multi-media forms of Star Wars, including radio and cartoons–and, if we are lucky, her take on Solo!

Featured Image made here: Enjoy!

 Leslie McMurtry has a PhD in English (radio drama) and an MA in Creative and Media Writing from Swansea University.  Her work on audio drama has been published in The Journal of Popular Culture, The Journal of American Studies in Turkey, and Rádio-Leituras.  Her radio drama The Mesmerist was produced by Camino Real Productions in 2010, and she writes about audio drama at It’s Great to Be a Radio Maniac.

REWIND!…If you liked this post, you may also dig:

The Magical Post-Horn: A Trip to the BBC Archive Centre in Perivale–Leslie McMurtry

Speaking American–Leslie McMurtry

Out of Sync: Gendered Location Sound Work in Bollywood–Priya Jaikumar  


Botanical Rhythms: A Field Guide to Plant Music

Only overhead the sweet nightingale

Ever sang more sweet as the day might fail,

And snatches of its Elysian chant

Were mixed with the dreams of the Sensitive Plant

Percy Shelley, The Sensitive Plant, 1820

 

ROOT: Sounds from the Invisible Plant

Plants are the most abundant life form visible to us. Despite their ubiquitous presence, most of the time we still fail to notice them. The botanists James Wandersee and Elizabeth Schussler call this "plant blindness," an extremely prevalent condition characterized by the inability to see or notice the plants in one's immediate environment. Matthew Hall, author of Plants as Persons, argues that our neglect of plant life is partly influenced by the drive in Western thought towards separation, exclusion, and hierarchy. Our bias towards animals, or zoochauvinism–in particular toward large mammals with forward-facing eyes–has been shown to have negative implications for funding towards plant conservation. Plants are as threatened as mammals according to Kew's global assessment of the status of plant life known to science. Curriculum reforms to increase plant representation and engaging students in active learning and contact with local flora are some of the suggested measures to counter our plant blindness.

Participatory art involving plants might help dissipate their invisibility. Some authors argue that meaningful experiences involving a multiplicity of senses can potentially engage emotional responses and concern towards plant life. In this article, I map out a brief history of the different musical and sound art practices that incorporate plants and discuss the ethics of plant life as a performative participant.

 

 

STEM: Music to grow your plants by 

Flowers grow rhythmically.

Henry Turner Bailey, 1916

"Music for plants" is a small footnote in the history of recorded music. However, it perfectly mirrors many of the misconceptions and mainstream perceptions of plant life. By the late 1950s, reports on the relationship between plants and music started to surface in popular culture, making newspaper headlines for decades to come: Flute Music 'charms' plants into growing bigger, better; Silly-looking plants that listen and really care; Drooping Plants Revived by Soothing East Indian Music. These experiments were later compiled and disseminated by the bestselling book The Secret Life of Plants (1973), which furthered ideas of sentient plants that feel emotions and respond to human thought (what Cleve Backster called primary perception).

Cleve Backster - Primary Perception

Cleve Backster

The book reinforced the music-plant experiments of Dorothy Retallack, who famously claimed that plants exposed to classical and sitar music thrived in comparison to plants exposed to Led Zeppelin and Jimi Hendrix's acid rock. The scientific shortcomings of these experiments are well known. Daniel Chamovitz, author of What a Plant Knows, points out that Retallack's experiments mainly provide a window into the cultural-political climate of the 1960s through the lens of a religious social conservative who believed that rock music was correlated with antisocial behavior among teenagers.  The alleged beneficial plant response to classical music was on many occasions used as an ideological device against youth culture.

Criticizing Music, Dr Max Rafferty, Effects of Rock Music on Plants

Musicians and record companies seized the opportunity to entertain this new potted audience. Records to aid plant growth could be found in many florist stores in the US.  Their labels promised happy, healthy, and fast-growing plants with the help of classical and chamber music standards, electronic tunes, sine waves, and spoken word. For instance, Molly Roth's record Plant Talk (1976) gaudily speaks to several indoor plants (English Ivy, Fern, Philodendron…) while giving advice on plant care.

Molly Roth & Jim Bricker, Plant Talk/Sound Advice, 1976:

 

Dr. George Milstein's record Music to Grow Plants (1970) uses high-pitched tones under a Mantovani-esque orchestration, purportedly to improve the exchange of oxygen and carbon dioxide in the plants' leaves. "The music is sugar coating for the vibrations," explains Milstein; "sound vibrations induce the stomata to remain open wider for longer periods, thus plants take in more nourishment and grow faster and sturdier."

Dr. George Milstein, Music to Grow Plants, 1970:

 

Music to Grow Plants manifested the human perception of plants as passive and isolated recluses of indoor places. Some of these artists' efforts came from the genuine struggle to grow plants in the big metropolis. However, the veiled nature of plants became attached to personal narratives, tastes, and social values. Plants were visible only insofar as they served as a canvas for anthropomorphic projections.

 

LEAF: Green Materialities and the Electrical Plant

Oats… the witching soul of music.

Kate Greenaway, The Language of Flowers, 1884

 

The sounding materiality of plants was appropriated by avant-garde practices interested in amplifying the noises of everyday life. The sounds produced by acts of physical contact with plants became a new ground for musical composition. Contact microphones attached to plants' surfaces amplified their inaudible sonic properties. Two of John Cage's percussion compositions, Child of Tree (1975) and Branches (1976), call for amplified plant materials like cacti and rattles from a Poinciana tree. Plants provide the quality of chance and indeterminacy as they gradually deteriorate during the performance. The amplified cactus became an icon of musical indeterminacy and continues to be plucked by many artists today, including Jeph Jerman, So Percussion, Mark Andre, Adrienne Adar, and Lindsey French. The Portuguese sound artist João Ricardo creates full soundscapes by conducting an orchestra of over twenty cacti (Cactus Workestra) performed by young students who follow his gestural directions on how to rhythmically pluck the cacti needles.

 

 

Creating music through touch and corporal proximity with plant life revitalizes human-plant relationships, generating intimacy and knowledge. John Ryan poses the importance of "reaching out towards plants" to create experiences of embodied appreciation and connectivity. A close connection between body, plants, and music can be found in leaf music (folded leaf whistle, gumleaf music) as practiced by Australian Aboriginal societies, who developed an acute ability to select and differentiate the sonic qualities of plant matter. The scholar Robin Ryan describes how leaf music is an intimate and vital way for Aboriginal societies to reflect upon the nonhuman world, as well as a vehicle of attachment to local "music trees," bushes, and plants. The Serbian film Unplugged (2013), directed by Mladen Kovacevic, follows two leaf players from rural eastern Serbia and an instrument builder trying to learn the art of leaf music. The simplicity of the leaf instrument belies the extent of knowledge and practice necessary to master it.

 

 

Time and intimacy with plant matter are important components of leaf music. Artists like Annea Lockwood (Piano Transplants) and Ross Bolleter (Ruined Pianos) reversed the equation of the effects of music on plant growth and explored the effects of plant growth on musical instruments by abandoning pianos in outdoor fields and gardens. These works disregard human-time and tune in to plant-time. There’s a special acknowledgement of plant life in art works that tap into plants’ otherness.


Annea Lockwood – Piano Garden [1969-70] photo by Chris Ware

In many music performances the role of the plant remains attached to an object-like position tied to the artist's agenda. Musical practices using generative systems stemming from plants' biological information attempt to take a step further into the inner life of plants. Sensors attached to plants' leaves detect bioelectrical potential changes originating from environmental variables like light, humidity, temperature, and touch. These micro-electrical fluctuations are converted into MIDI signals that trigger notes and controls in synthesizers. The interactivity these systems allow between public and plant via sound highlights plant responses to sensory stimuli in real time. The Mexican artist Leslie Garcia sonically demonstrates the sensorial qualities of plants in her project Pulsu(m) Plantae (2012-13) and makes her software available for other artists to use. Similarly, the duo Scenocosme creates interactive gardens where plants act as sensors to human touch, generating cascades of sound. Creative chains linking plants, technology, music, and touch can also be found in site-specific installations by Mileece and Miya Masaoka.
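To make this pipeline more concrete, here is a minimal sketch, in Python with the mido MIDI library, of the kind of mapping described above: a slowly drifting bioelectrical signal is sampled, and whenever it fluctuates past a threshold, a note from a fixed scale is triggered, with larger swings selecting higher and louder notes. This is an illustrative assumption about how such a system might be wired, not the code behind Pulsu(m) Plantae or the MIDI Sprout; the read_voltage() function below merely simulates a sensor.

```python
import random
import time

import mido  # MIDI I/O library; needs a system MIDI output port

PENTATONIC = [60, 62, 65, 67, 70, 72, 74, 77]  # candidate MIDI note numbers
_state = 0.5  # simulated bioelectrical potential

def read_voltage():
    """Stand-in for a real sensor read (electrodes + ADC); here, a random walk."""
    global _state
    _state += random.uniform(-0.03, 0.03)
    return _state

def sonify(threshold=0.02, port_name=None):
    out = mido.open_output(port_name)       # None opens the default MIDI port
    previous = read_voltage()
    while True:
        current = read_voltage()
        delta = current - previous           # fluctuation since the last sample
        if abs(delta) > threshold:           # only sound when the signal "moves"
            idx = min(int(abs(delta) * 200), len(PENTATONIC) - 1)
            velocity = min(40 + int(abs(delta) * 1000), 127)
            out.send(mido.Message('note_on', note=PENTATONIC[idx], velocity=velocity))
            time.sleep(0.15)
            out.send(mido.Message('note_off', note=PENTATONIC[idx]))
        previous = current
        time.sleep(0.05)                     # roughly twenty samples per second

if __name__ == "__main__":
    sonify()
```

Real systems differ in the details (sampling rate, smoothing, scale choice), but the basic gesture is the same: small electrical changes in the leaf become discrete musical events.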

 

 

Plant-based generative music was pioneered by the British architect and artist John Lifton in the early 70s. Lifton created Green Music, an installation in which six plants in an environmental chamber were connected to an analogue computer and fed to a synthesizer. In 1976, the producers of the film adaptation of The Secret Life of Plants brought John Lifton to San Francisco's Golden Gate Park to collaborate with Richard Lowenberg, Tom Zahuranec, and Jim Wiseman. The group developed a several-day media performance in which sonic translations of the brain waves and muscle electrical potentials of six dancers were mixed with plant-based generative music. The film features a few sequences of this performance. However, we can get a better glimpse of Tom Zahuranec's plant-generated music in an interview by Charles Amirkhanian for a KPFA Radio Event. In particular, we can hear audience members' thoughts on plant life and how widespread New Age ideas of primary perception in plants had become.

 

Charles Amirkhanian & audience members react to Tom Zahuranec’s plant music, Radio Event No. 20, KPFA, 1972

 

These early experiments in generative music were a major influence on the artistic collective Data Garden, which has been releasing plant music and creating immersive audio environments controlled by plants since 2011. After a successful Kickstarter campaign, Data Garden launched a biofeedback kit, the MIDI Sprout, that allows users to easily derive music from plants' electrical changes. Joe Patitucci, founder of Data Garden, says that since the campaign they have produced 800 units and have 400 users on their forum experimenting with the technology. In parallel, Patitucci has developed an online stream called Plants.fm that continuously broadcasts music generated by a snake plant and/or a philodendron. Data Garden has also released an app that allows users to plug the MIDI Sprout into their phones and hear their plants triggering the sounds designed by the developers.

 

Robert Aiki Aubrey Lowe, performance with MIDI Sprout, modular synthesizer and voice

 

These methods of generative composition are easing the way for users to creatively relate to plants. However, it’s vital that artists don’t reduce the diversity of plant life into a single aesthetic or into a “music of the spheres” representation. In this respect, the sound ecologist Michael Prime shatters the convention of assigning melodic sounds to plants by creating alien soundscapes generated from the electrical signals of hallucinogenic plants as heard in L-Fields and in One hour as a plant.

Beyond plants' electrical responses, some artists are using alternative parameters to translate the life of plants into sound. For instance, Christine Ödlund collaborated with the Ecological Chemistry Research Group in Stockholm to create an electro-acoustic composition, accompanied by a score, entitled Stress Call of the Stinging Nettle, in which she transposes into tones the chemical signals released by a stinging nettle when attacked by a caterpillar and the plant's communication with its nearby plant kin (the score can be seen in more detail here). The installation "Oxygen Flute," created by Chris Chafe and Greg Niemeyer, reveals plant and human respiration through CO2 concentration readings in a chamber filled with bamboo. The fluctuation of CO2 inside the sealed chamber is translated into bamboo flute music, fostering in the visitor a heightened perception of their own breath. The sonification of these hidden relationships between plant life and animal life calls attention to larger concepts like the greenhouse effect or global warming in a very physical and emotional way. They make graspable what Timothy Morton calls hyperobjects – objects massively distributed in time and space that defy our perception and comprehension.
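As a rough illustration of this kind of parameter mapping, a CO2 reading could be scaled onto a pitch so that more exhaled breath produces a higher tone. The function below is only a sketch: the names, ppm range, and linear scaling are assumptions for illustration, not details of how "Oxygen Flute" actually works.

```python
# Hypothetical mapping from a CO2 concentration (in ppm) to a pitch in hertz.
# Assumed range: ~400 ppm (fresh air) maps to a low A (220 Hz),
# ~2000 ppm (a well-breathed-in chamber) maps to a high A (880 Hz).
def co2_to_pitch(ppm, low_ppm=400.0, high_ppm=2000.0, low_hz=220.0, high_hz=880.0):
    t = (ppm - low_ppm) / (high_ppm - low_ppm)   # normalize the reading
    t = max(0.0, min(1.0, t))                    # clamp to the expected range
    return low_hz + t * (high_hz - low_hz)       # linear map onto the pitch range

print(co2_to_pitch(800))  # -> 385.0 Hz, a mid-range tone
```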

 

FOREST: Plant Bioacoustics and Acoustic Ecology

Sit by the trees – what kind of tree makes what kind of sound?

Pauline Oliveros, Country Meditations, 1988

 

One could argue that the only sound that is ecologically relevant is the sound of the plant itself: the realm of vibrations occurring on plants' surfaces that manifests the plant's own agency and connectivity to its surroundings. In short, plant bioacoustics aims to study plants' adaptive strategies that employ sound. A common example is the process of buzz pollination, in which plants only release pollen when vibrated at a specific frequency by pollinator bees. Plants can also respond selectively to the mechanical vibrations generated by the chewing of insect herbivores, eliciting defensive chemical responses. A study by Monica Gagliano revealed that young roots of corn grow towards the source of continuous tones and respond optimally to frequencies of 200–300 Hz, which is within the frequency range of the clicking sounds the same roots emit themselves. Gagliano and her team have also recently shown that the roots of pea seedlings are able to locate water sources by sensing the vibrations generated by moving water. Gagliano has been one of the forefront voices advocating for more research in plant bioacoustics to understand the ecological significance of sound in plants.

 


 

So far, it is not completely clear how plants use "sound detection," and whether sounds are used as signals or are merely by-products of their physiology. Nevertheless, it is important to recognize them. These sounds have been the focus of some practices that articulate artistic and scientific points of view. Inspired by Gagliano's studies, Sebastian Frisch created the installation Biophonic Garden, which recontextualizes a lab setting: a group of corn seedlings suspended in a water tank grow towards a constant sine tone of 220 Hz. A set of headphones allows one to tune into the roots' acoustic environment, amplified by two hydrophones.


Photo of Biophonic Garden, with Authorization by Sebastian Frisch

Zach Poff's project Pond Station invites us to eavesdrop on the sounds of the underwater plants of a small freshwater pond in Upstate New York. During an artistic residency at Wave Farm, Poff built a floating platform that operates from dawn to evening using solar-charged batteries. The Pond Station uses hydrophones to amplify the sounds of underwater life and broadcasts them via an online web stream. The underwater soundscape goes through cyclical changes according to seasons and time of day. In the mornings, Poff describes a photosynthetic chorus of bubbling as plants begin to produce oxygen. Recently, an invasion of duckweed covered the surface of the freshwater pond, affecting its soundscape. I asked Zach Poff about the sonic consequences of this invasion:

Duckweed taught me a lesson about biophony as an indicator of biodiversity. For an entire year I struggled with rebuilding hydrophones and upgrading electronics, trying to get back the poly-rhythmic diversity that I heard during the first year of listening. Then I realized that the duckweed could reduce oxygen levels enough to cause fish kills, and block sunlight from reaching other aquatic plants.


Photo of Pond Station in the Morning by Zach Poff, Reproduced with Authorization

Poff finds a parallel between the lack of density and variety in the underwater soundscape of the pond and Bernie Krause’s recordings made in California’s Lincoln Meadow before and after selective logging occurred:

From a distance the visual field was unchanged but the biophony was basically gone after the logging. The pond duckweed looks like a benign blanket of green, but all that’s left of the sound is the slow bubbling that I attribute to decaying organics on the pond bottom. It’s jarring.

 

FRUIT: Plant Ethics and Speculative Botany

There are many ways to love a vegetable.

M.F.K. Fisher, How to Cook a Wolf, 1942

The sonification and acoustic amplification of plant life evoke both a sense of connection and the realization of an ontological fracture. The translation and artistic representation of plant otherness into sound or music brings up vital ethical considerations. Michael Marder, author of Plant-Thinking, argues that techniques applied to plants to derive meaningful information from a human standpoint occlude the meaning of the plants themselves. Once we engage with the electronic menagerie, the plant starts to disappear. Marder calls for alternative ways of thinking with and being with plants, specifically artistic practices that vibrate with the self-expressions of vegetal life. In Grafts, the scientist Monica Gagliano states that it is inaccurate and unethical to answer the question "How do plants sound?" by transposing vegetal processes onto musical scales. The concern is the overriding of plants' natural voices with familiar harmonic sounds, in the same way that time-lapse photography strips plants of their own temporality.

The work of the Slovenian bio-artist and researcher Špela Petrič delves into the frontiers of plant otherness and problematizes plant ethics. In 2015, Petrič performed Skotopoiesis, a durational piece in which the artist faced a germinating cress for 19 hours. The artist's figure cast a shadow on the cress, contributing to the etiolation (blanching, whitening) of the plants. Petrič wrote that the 19-hour period of active inactivity was her way of surrendering to the plant. I asked Špela Petrič about her perspective on ethics and performative plants:

I think the reason so many people started asking about the ethics of plant use stems on one hand from an increasing pool of knowledge that suggests plants are very much a complex, sentient beings, and on the other because we find ourselves in a spiraling loop of exploitation of all living beings, which provokes questions like: how did we get here and what can we possibly do to change our cosmology to be conducive of a livable world?

For Špela, plant ethics is not tied to artists' treatment of plants but rather to the kind of story the work tells its audience. Špela confesses:

This part – the way an artwork is perceived – can be tricky and that is why I write about the trap of interfaces. I’ve struggled with it myself; in my best attempt to forefront the relationship between humans and plants I sometimes had to admit to being overpowered by the technology I used. Technology wants to tell its own narrative – the medium is the message – and we should be aware of that.

As to the risk of creating an anthropomorphic experience with plants, Špela sees an opportunity here:

I don’t think anthropomorphic experiences as a point of entry into the plant world should be a priori avoided, we might even say that anthropomorphism is one of our greatest tools for connecting with other species, but the task for artists is one of editing, of observing and of being mindful to what the artwork is saying.

Artistic practices with plants through music or sound can open the hidden territories of vibrant plant matter and an underground mesh of rhythms and patterns. The act of listening to plant life is an act of acknowledgment, a possibility for emotional identification and empathy, rendering plants visible.

Featured image: “Music to Grow Plants By,” compilation by the author

Carlo Patrão is an independent radio artist based in New York City. zeppelinruc.wordpress.com


REWIND! . . .If you liked this post, you may also dig:

Listening to the City of Light: An interview with Sound Recordist Des Coulam – Carlo Patrão

Sounding Out! Podcast #58: The Meaning of Silence – Marcella Ernest

Learning to Listen Beyond Our Ears: Reflecting Upon World Listening Day–Owen Marshall

“This Liquid Dream”: An Interview with Aquaphoneia Composer Navid Navab

Multidisciplinary composer and media alchemist Navid Navab and his team at the Topological Media Lab, based at Concordia University (Montreal), presented Aquaphoneia, a sound installation which transmutes voice into water and water into air, at Biennale Némo in Paris in December 2017 (where it will run until March 2018). I conducted this interview in the context of the first presentation of Aquaphoneia, originally conceptualized for Ars Electronica 2016: RADICAL ATOMS and the alchemists of our time. This version of the piece looked at technology through the lens of living materiality. As Prof. Hiroshi Ishii, of the MIT Media Lab's Tangible Media Group, stated, artists "suggest completely new ways of looking at the role of science in our society and the interplay of technology and nature."

***

EB [Esther Bourdages]: The theme of the 2016 Ars Electronica Festival, RADICAL ATOMS – and the alchemists of our time, is very close to the Topological Media Lab’s mission: transmutation and alchemy on the philosophical and phenomenological level. For Aquaphoneia, can you expand on alchemy and specifically on how this art piece stands out from your past work? How did alchemical thought process and production techniques come up in the process of the piece?

NN [Navid Navab]: When the 2016 theme for Ars Electronica Festival was announced I was happily surprised and thought: finally, things are coming to light at a much larger scale. Yes, please can we reverse the still prominent European Modernism's separations—between the conceptual and the material, the precise and the messy, the sciences and the arts—and go back to the holistic richness of alchemical matter? This transition that we are currently experiencing calls for a shift away from representational technologies: from interfaces to stuff, from objects to fields of matter-in-process, from fixed concepts to processes that enact concepts. For over a decade, we as alchemists have been engaging with "bodies and materials that are always suffused with ethical, vital and material power."

The Topological Media Lab [TML] is occupied by people who are living to fuse and confuse, ready to unlearn the apparent practicality of isolated disciplines, while playfully improvising new pathways to understanding potential futures. The TML hosts an array of projects for thinking-feeling through poetry-infused-matter and breathing life into static forms—which to me is an effortlessly artistic process, and all the while inseparable from a rigorously philosophical or scientific one. Even though it might take decades for the kinds of computational-materials that we are envisioning today to be engineered from ground up at an atomic level, with what is possible today, we explore how the messy stuff of the world could become computationally charged with the potential for play: sounding, dancing, and co-performing new ways of living with or without us.

Aquaphoneia comes out of this rich ecology of experiments. In Aquaphoneia, voice and water become irreversibly fused. The installation listens to the visitors, and transmutes their utterances into aqueous voice, which then is further enriched and purified through alchemical processes.


To fully realize this liquid dream, we went to great lengths in order to fuse the messy behaviour of matter flowing throughout the installation with meticulously correlated and localized sonic behaviour. For example, the temporal texture of boiling liquid in one chamber is perceptually inseparable from the spectral entropy of simmering voices which then evaporate into a cloud of spectral mist. All of this dynamic activity is finely localized: the sounds acoustically emit exactly from where the action occurs, rather than spatially schizophying loudspeakers elsewhere.

On the other hand, our material-computational-centric approach led to a tough yet rewarding meditation on control and process.  As a composer, I had to let go of all desires for immediate control over sounds and surrender important rhythmical and compositional decisions to messy material processes.  In Alchemical Mercury (2009), Karen Pinkus quotes Marcel Duchamp: "alchemy is a kind of philosophy: a kind of thinking that leads to a way of understanding" (159).  For us, in the process of creating Aquaphoneia, essentially what had to be understood and then given up was our attachment to our far-too-human notions of time and tempo.  Instead we embraced and worked within the infinitely rich and pluri-textural tempi of matter.  Technically and compositionally this meant that most of our focus had to be placed on merging the continuous richness of material processes with our computational processes through an array of techniques: temporal pattern following, audio-mosaicing, continuous tracking of fields of activity using computer vision and acoustic sensing techniques in order to synthesize highly correlated sonic morphologies, careful integration of structure-borne sound, etc.  We were able to co-articulate compositions by constraining material processes sculpturally, and then letting the liquid voice and the laws of thermodynamics do their thing.

[EB]: One of the first elements that we notice in the installation is the brass horn connected to an old Edison sound recording machine, which now records into liquid instead of wax cylinders.  In fact, it came from an Edison talking machine.  You repurpose an authentic artifact, but you do not fall into the trap of nostalgia, nor into the role of collector; instead you embrace innovation with a dynamic approach which excavates past media technologies in order to understand or surpass contemporary audio technologies.  Where does the use of the Edison horn come from and how does it speak to your relationship with the superposition of history?

Paris, Biennale Némo, 17 October 2017 – 18 March 2018. Credit: Navid Navab, 2017

[NN]: The history of sound reproduction involves transforming audible pressure patterns or sound energy into solid matter and vice versa. The historic Edison recording machines gathered sound energy to etch pressure patterns onto tinfoil wrapped around a cylindrical drum. Sound waves, focussed at the narrow end of the horn, caused a small diaphragm to vibrate, which in turn caused a miniature steel-blade stylus to move and emboss grooves in the cylinder. The tin foil would later on be replaced by wax cylinders, vinyl disks and eventually digital encoding.

Aquaphoneia engages the intimately recursive relationship between sounding technologies and material transmutations. Our digital audio workstations are in fact an inclusive part of this history, this endless chain of analog transmutation between energy and matter. Under the fiction of the digital there is always the murmur of electrons and of matter-energy fields in physical transmutation. As J. Fargier writes in an early book on Nam June Paik (1989), "The digital is the analog correspondence of the alchemists' formula for gold" (translation by NN). Well, yes. The digital revolution has allowed us to shape, compute, purify, and sculpt sounds like never before… but then often at the hefty cost of a disembodying process, with interfaces that are linked to sounds only through layers upon layers of representation, far detached from resonating bodies and the sexy flux of sounding matter.

Aquaphoneia playfully juxtaposes material-computational histories of talking machines within an imaginary assemblage: sounds are fully materialized and messed with tangibly within an immediate medium very much like clay or water or perhaps more like a yet to be realized alchemico-sonic-matter. This odd assemblage orchestrates liquid sounds leveraging intuitive worldly notions—such as freezing, melting, dripping, swishing, boiling, splashing, whirling, vaporizing—and in the process borrows alchemical tactics expanding across material sciences, applied phenomenology, metaphysics, expanded materiology, and the arts. Aquaphoneia’s alchemical chambers set these materials, metaphors, and forces into play against one another. After the initial ritual of offering one’s voice to the assemblage, the aqueous voice starts performing for and with itself, and human visitors have the opportunity to watch and participate as they would when encountering the unpredictable order of an enchanted forest river.

It is also noteworthy that the horn resembles a black hole. The edge of the horn acts like an event horizon, separating sounds from their source-context. Sounds, once having passed the acousmatic event horizon, cannot return to the world that they once knew. Voices leaving the body of their human or non-human speaker, fall into the narrow depths of the horn, and are squeezed into spatio-temporal infinity. Disembodied voices, are immediately reborn again with a new liquid body that flows though alchemical chambers for sonic and metaphysical purification.

Much of my work deals with the poetics of schizophonia (separation of sound from their sources). Sound reproduction (technologies), from Edison’s talking machines to our current systems, transcode back and forth between the concrete and acousmatic, situated and abstract, materialized and dematerialized, analogue and digital. Often sounds are encoded into a stiff medium which then may be processed with an interface, eventually decoded, and re-manifested again as sound. Aquaphoneia ends this nervous cycle of separation anxiety and re-attachment by synthesizing a sounding medium capable of contemporary computational powers such as memory, and adaptive spectro-temporal modulation and morphing. To adapt Marshall McLuhan, instead of encoding and decoding a presumed message with representational technologies, it enchants the medium.

Image Credit: Topological Media Lab, 2016

[EB]: There is a tendency to think that artworks coming out of media labs are stable and high-tech. Aquaphoneia uses analog and digital technologies with a Do-It-Yourself (DIY) touch in its aesthetic. Since your lab is multidisciplinary and influenced by diverse fields of knowledge, can you expand on the DIY dimension of Aquaphoneia in light of Clint Enns—a cinematographer working in experimental cinema—who writes: “Adopting a DIY methodology means choosing freedom over convenience”?

[NN]: Aquaphoneia is a truly eclectic assemblage lost in time. Its mixed form reflects its extremely fluid, collaborative, and playful creative process. Instead of coming up with a definitive design and executing it industrially, Aquaphoneia’s realization involved a much more open-ended process, in which every aspect of the installation—materials, sounds, software, electronics, etc.—was playfully investigated and messed with. Every little detail matters, and every process, undulating back and forth between conception and execution, is an artistic process. The research-creation processes leading to the works that come out of our lab are as critical to us as the final, fully produced artworks. This was also true for the alchemists who, through their process, were seeking to develop new approaches for understanding the world, relating to matter, and surpassing nature.

Our research-creation activities concern experimenting with the ethico-aesthetics of collective thinking-making: humans, non-humans, machines, and materials enacting and co-articulating the ever-changing material-social networks of relations which shape them. This DIY art-all-the-way approach, while providing a healthy dose of aesthetic freedom, is also an ethical one: we live with and within our designs and grow with them. That being said, we are not attached to a DIY process in the same way that some maker cultures might be. Sometimes we blindly find and repurpose something that does something cool, complicated, and mysterious, and that is fantastic: sort of like philosophy of media meets cyber dumpster diving meets DIY hacker space meets cutting-edge tech research meets miniMax (minimum engineering with maximum impact) meets speculative whatever…

Image Credit: Topological Media Lab, 2016

For example, at some point we decided to gather sonic vapour in a glass dome and condense it back into drops, which were then guided to fall into the bottom of the installation. The purified drop of voice—a sonic “lapis philosophorum”—was to fall into the depths of the earth beneath and shine upward like sonic gold, connecting heaven and earth. We had to execute this opus magnum inside a very small hole in the base of the installation. The water drop needed to be immediately sensed and sonified, with sounds coming out of the same hole, along with synchronized light. You can imagine that if we had been relying on “black-boxed” technologies and ready-made techniques, this task would have seemed like a nightmare to design and fabricate. The water drop was to fall all the way to the bottom of the hole, where it would be acoustically sensed by a small apparatus that had to be acoustically isolated from everything else. Then the result of the sonification had to be pushed through the very same hole with a high degree of intelligibility and in a way that would be seamlessly localized. Meanwhile, light had to shine through this hole in sync with the sounds, but the source of light had to remain hidden.

The solution to this technical puzzle came to us effortlessly while playing around with random stuff. We found a hipster product—a little plastic horn—made for turning your iPod into a gramophone. A speaker was mounted inside this plastic horn in order to focus sounds towards the tip of the horn. The back of the speaker was fully covered with foam and duct tape to stop any sound from escaping anywhere except where we wanted it to appear. A small hole was drilled into the brass pipe in the base of the installation. Our advanced hipster horn-tip-sound-laser-thing was then inserted, allowing crisp sounds to enter the brass hole and emit from it without any visible clue for the perceiver as to where the speaker was hidden. Meanwhile, a similar lighting solution was devised so that, within a very small footprint, we could focus, direct, and bounce enough directional light into the brass pipe without ever getting in the way of the water drops.

We had to engage in this sort of detailed fabrication/composition process throughout the whole installation in order to come up with solutions for sensing the behaviour of the materials and liquids locally and manifesting them sonically and visually, so that there would be no separation between local material behaviours and their computational enchantment. In trying to do so, we discovered that more often than not there was no ready-made solution or technique to rely on, and at the same time we didn’t have months ahead of us to engage in an abstract design and fabrication process. We had limited hours of collective play time to leverage and to come up with innovative techniques that we didn’t even know could exist, and that was really fun.

Image Credit: Topological Media Lab, 2016

Aquaphoneia is a rich sound art piece, a manifesto in itself about innovation and inventiveness. The sound installation demonstrates that its main crafters, Navid Navab and his partner Michael Montanaro, in collaboration with other members of the Topological Media Lab, move easily through multidisciplinary art. They are not afraid to experiment and engage with the material, which results in an interlacing of forms, a mixture of historic references, and an interesting fusion of “low” and “high” technology. I was able to catch some of the build-up of the art piece, and it was fantastic to witness the lab as a playful, messy artistic field, with a small team of scholars fusing their different backgrounds in the marriage of art and science.

Aquaphoneia, a sound installation which transmutes voice into water and water into air, runs at the Biennale Némo in Paris until March 2018.

Aquaphoneia Credits:
    • NAVID NAVAB art direction, sound/installation concept and design, audiovisual composition, programming, behaviour design
    • MICHAEL MONTANARO art direction, visual/installation concept, design and fabrication
    • PETER VAN HAAFTEN electronics, sound, programming
    • consulting assistants: Nima Navab (embedded lighting design), Joseph Thibodeau (electronics)
    • research collaboration: Topological Media Lab

Featured Image: Aquaphoneia, Paris, Biennale Némo, 17 October 2017 – 18 March 2018, Credit: Navid Navab, 2017

Esther Bourdages works in the visual arts and technology art field as a writer, independent curator and scholar. Her curatorial research explores art forms such as site-specific art, installation and sculpture, often in conjunction with sound. She has authored many articles and critical commentaries on contemporary art. As a musician, she performs under the name of Esther B – she plays turntables, handles vinyl records, and records soundscapes. She works and lives in Montreal.

Navid Navab is a Montreal-based media alchemist, multidisciplinary composer, audiovisual sculptor, phono-menologist, and gestureBender. Interested in the poetics of gesture, materiality, and embodiment, his work investigates the transmutation of matter and the enrichment of its inherent performative qualities. Navid uses gestures, rhythms, and vibrations from everyday life as the basis for realtime compositions, resulting in augmented acoustical poetry and painterly light that enchants improvisational and pedestrian movements.

Navab currently co-directs the Topological Media Lab, where he leverages phenomenological studies to inform the creation of computationally augmented performance environments. His works, which take on the form of gestural sound compositions, responsive architecture, site-specific interventions, theatrical interactive installations, kinetic sound sculptures, and multimodal comprovisational performances, have been presented internationally at venues such as the Canadian Centre for Architecture, Festival du Nouveau Cinéma, Ars Electronica Festival Linz, HKW Berlin, Western Front Vancouver, McCord Museum, Musée d’art contemporain de Montréal, Contemporary Arts Museum Houston, International Digital Arts Biennial, Musiikin Aika Finland, and Festival International Montréal/Nouvelles Musiques, among others. www.navidnavab.net

REWIND! . . . If you liked this post, you may also dig:

Playing with Bits, Pieces, and Lightning Bolts: An Interview with Sound Artist Andrea Parkins — Maile Colbert

Sounding Our Utopia: An Interview With Mileece— Maile Colbert

Optophones and Musical Print–Mara Mills

“Happy Homes Have Gramophones” –Gender, Technology, and the Sonic Restaging of Community Before and After the Partition of Bengal

co-edited by Praseeda Gopinath and Monika Mehta

Our listening practices are discursively constructed. In the sonic landscape of India, in particular, the way in which we listen and what we hear are often normative, produced within hegemonic discourses of gender, class, caste, region, and sexuality. . . This forum, Gendered Soundscapes of India, offers snapshots of sound at sites of trans/national production, marketing, filmic and musical texts. Complementing these posts, the accompanying photographs offer glimpses of gendered community formation, homosociality, the pervasiveness of sound technology in India, and the discordant stratified soundscapes of the city. This series opens up for us the question of other contexts in India where sound, gender, and technology might intersect, but, more broadly, it demands that we consider how sound exists differently in Pakistan, Sri Lanka, the Maldives, Bangladesh, Bhutan, Nepal, and Afghanistan. How might we imagine a sonic framework and South Asia from these locations? —Guest Editors Praseeda Gopinath and Monika Mehta

For the full introduction to the forum, click here.

To read all of the posts in the forum, click here.

“She compelled respect at once by refusing on any account to be phonographed: perhaps she thought, amongst other things, that if she committed her soul to a broken piece of wax it might get broken…my subsequent experiences showed that it was only too likely,” wrote the British musicologist A.H. Fox-Strangways in 1910 about Indian female singer Chandra Prabha, while remarking on the harsh reactions to the gramophone in India (90).  Such deep-rooted discomfort with the gramophone speaks to the cognitive, perceptual and experiential challenges faced by a listener/performer when a new auditory technology substitutes familiar terrains of musical production.

In this post, I revisit the decades prior to and following the 1947 Partition of Bengal, a phase singularly volatile not only in India’s political but also its musical and technological histories.  I examine how the introduction of European harmony/polyphony in the aural imaginary of Bengal negotiates ideologies espoused by the nationalists in the (re)constitution of gendered space post-Partition by transforming relations of consumption. The production of gendered domesticity was vitally related to rigid conceptions of physical space and its allocation in colonial Bengal which, further, influenced music reception in ways worth probing.

The auditory regimes prior to the emergence of recording and radio broadcast typified public modes of listening based on live performances that engendered affective flows and presupposed human proximity. This culture of aurality was inextricably tied to communal modes of consumption and performance, be it the high-end salon tradition of the Bengali modern song, the hard-hitting agitprop strains of the Bengal wing of the IPTA (Indian People’s Theatre Association), or even the stylized elite classical genres. The collective nature of musical practice conjures up traditional connotations of masculine spaces, especially in the case of the elite Bengali household, where the gendered ideology of spatial orientation relegated the respectable Bengali woman (bhadramahila) to the interiors of the house (antahpur/andarmahal). The delights of salon music were to be relished by the man of the house (babu).

‘Gramophone – a home entertainer’

Thus, the communitarian character of musical practice often made it inaccessible to respectable women. However, the emergence and subsequent sophistication of auditory technologies ushered in a radical transformation of this dynamic by dissociating music from the human performer. Besides leading to an obvious technological alienation in the listener, the privatization of the listening experience was accompanied by a condition of penetrating solitude and interiority, a state speaking to the voices and sounds emanating from the phonograph. At the sociological level, the entry of recording technology redefined long-held divisions of domestic space and the gendered dynamics thereof, not only democratizing musical consumption but also forging provisional collectivities of listeners that often cut across gender, class, and caste. Moreover, traditional associations of musical genres with specific loci (classical music with the salon/concert space, for instance) gave way to a more fluid conception of domestic space assuming multiple sonic/musical identities depending on what the gramophone played. The phonographic interface thus radically reconfigures listening practices and produces a different paradigm of self, sound, community, and gender.

What is at stake here is not some covert form of linear technological determinism, but a more nuanced detour around auditory technologies, spaces of consumption, and the affordances thereof that calibrate auditory experience along new registers. What merits contemplation is how (if at all) these technological innovations in the commercial arena complement and usher in formal nuances and sonic innovations in the musical works they mediate. The gramophone renders problematic the uncritical conflation of the sonic and visual registers typical of live musical performance and, in the process, sets in motion a unique dynamic of interacting with musical sound. Severed from their visual footholds in live performance, phonographic sounds often provoke the listener to imagine the singing/performing body which, in turn, informs the way the sounds are processed mentally.

Vintage Gramophone spotted in Little India, Serangoon, Singapore, Image by Flickr User Linkway88, (CC BY-NC-ND 2.0)

Indian music has traditionally been based on a single melody which, in its skeletal grammar, is an individual mode of expression, even when performed by a group. The intrinsic form of Indian, and traditional East Asian, music in general exhibits a non-harmonic character. The concept of musical harmony proper is considered a European import. European harmony, polyphony, and counterpoint are in their very essence a set of disparate tonal registers forging a gestalt which impresses on the mind of the listener an overarching unity. At an experiential level, the polyphonic form embodies a distinct sonic ontology and a novel dimension, as it were, and thus cannot be reduced to a mere stylistic import. It induces a new auditory condition, a new register of being-in-listening (the lecture snippet from 57:08–1:02:07 effectively demonstrates the morphing of the basic melody of a song into its polyphonic equivalent). The new auditory condition conjoins the familiarity of the melody with the markedly different yet complementary registers of the polyphony, creating a novel sensation for the uninitiated Bengali listener.

Among the very early records to employ musical polyphony in India were two iconic musical works of the mid-20th century: one devotional in intent, Aham Rudre from Mahishasurmardini (1931), composed by the legendary music director Pankaj Mullick; and the other a professed experiment in introducing polyphony into Bengali music, Shurer Ei Jharna (1958), by the noted composer Salil Chowdhury.

In the current context, it is important to note how the sonic dimension of musical polyphony in Aham Rudre and Shurer Ei Jharna embodies and substitutes for notions of aural community, restaging a communitarian character. Notably, the creation and circulation of these works paralleled the establishment of commercial state radio in India (1930) and the release of the first microgroove record in Kolkata by the Gramophone Company in 1958.

The Gramophone Company in Calcutta marketed its records with the Bengali tagline “Shukhi Grihokon Shobhe Gramophone/Happy Homes Have Gramophones,” projecting the phonograph as the symbolic ideal of the domestic idyll and, in the process, confronting gendered spatial demarcations head-on by invading the auditory horizons of secluded Bengali women. The striking presence of the gramophone in the iconic Gramophone Scene (1:35:17-1:35:28) in Satyajit Ray’s film Ghare Baire, set against the backdrop of the 1905 Partition of Bengal, beautifully illustrates the sorority forged by the gramophone which, notably, draws even the marginalized widow Bouthan within its field of influence.

However, the gramophone superseded its commodity character to serve not only as an object of crass exhibitionism but also as an index of a masculine, elite consumerist culture in which “serious music” and musical connoisseurship often became synonymous with the gramophone and recorded sound. A new breed of “record collectors” came into existence, mainly belonging to the upwardly mobile and elite classes, whose passion for records was their most prominent identity marker in the domestic realm, occasionally outweighing even their professional concerns.

But even as the radio and the phonograph transcended the hitherto gendered character of musical reception by entering the women’s quarters and dissolving time-honored segregations of auditory spaces within the household, they had to contend with a deep-seated psychological discomfort in the listener, a fundamental unease with befriending a technology that substituted for the human. I argue that the newly insulated character of the radiophonic auditory experience was counteracted by significant efforts, conscious or otherwise, at sonically restaging and reclaiming the community lost in technological mediation.

Indian farmers gathered to listen to the Farm Forum programme broadcast by All India Radio in the 1950s, Image Courtesy of Flickr User Public Resources.Org, (CC BY 2.0)

Given pet notions of musical anthropology and the chronological coincidence between the early uses of harmony and the entry and sophistication of technologically mediated music in Bengal, one could, at the risk of slight oversimplification, posit that the import of the harmonic form at this significant juncture sonically compensates for the auditory solitude induced by the radio/phonograph by recreating a modified and idealized Platonic community (Platonic here alludes to the ‘music of the spheres,’ pointing to how musical harmony since medieval times has been associated with an ideal public) and restaging it within the confines of the constitutive plurality of the polyphonic mode. As an aside, the initial introduction of polyphony in Shurer Ei Jharna (1958) garnered flak from a large section of the audience, who heard it as a group of amateur performers ‘singing out of key’ (Salil Chowdhury’s lecture from 30:31-31:15). Over the next few decades, however, this form was trans-culturated and seamlessly assimilated within the sonic vocabulary of the Bengali/Indian masses, so much so that without the regular vocal/instrumental counterpoint, commercial songs nowadays are often felt to be lacking in hue.

The sonic changes that I have been investigating preceded or followed the Partition of Bengal, which informed the gendered patterns of popular musical consumption. It is well known that the exigencies of the Partition proved emancipatory for women in that they were exposed to the vagaries of the workplace, leaving the confines of their quarters. It is with an often uncritical celebratory fervor that the Partition is credited with fashioning the independent, self-reliant, and educated middle-class Bengali working woman, on occasion emerging as the sole bread-earner of the family. Jasodhara Bagchi says that the “partition accelerated the earlier trends of the twentieth century of abolishing the ‘purdah’ that had confined the Bengali bhadramahila to her antahpur (private quarters)… The same stroke that brought this flood of uprooted marginalised women to Calcutta also opened the door to many new opportunities for Bengali middle-class Hindu women. They came out of the private domain of domesticity and child rearing to take up public duties” (8). Uditi Sen, however, in her revisionist reading of the celebratory impulse, argues that “situational aberrations” notwithstanding, the Partition did not lead to “a transformation of social norms or any substantive change of women’s ideal role within the bounds of the family” (16).

In the aftermath of the Second World War, which had also witnessed the entry of women into the professional/public sphere, the USA launched a propaganda war to restore women to the hearth, revivifying the “cult of the housewife” and deploying films and popular music to promote the trope of the ideal housewife. Redefining domestic space as woman’s space was also on the cards for the Indian state post-Partition, an effort governed to a large extent by patterns of popular media consumption. Arguably, the coincidental emergence of musical harmony and the sophistication of private auditory technologies in the years following the Partition contributed to efforts to restore women to their private quarters by compensating for the lost professional community of the self-reliant working woman with the poetic/sonic community embodied by the polyphonic form, in the process enlivening her otherwise insipid quarters. Popular media technologies often employ innovation in content to revivify clichéd formats; musical harmony coupled with sophisticated audio reproduction provides a classic instance of inaugurating a new sonic dimension in popular music, one that offered a powerful and enthralling form of domestic leisure.

Thus, in the context of early 20th-century Bengal, the gramophone was a significant import which reconfigured not only perceptual registers and musical cultures but also listening practices by entering the interiors of elite Bengali households. Besides democratizing the listening experience, which until then had largely been restricted to male constituencies, the gramophone privatized musical consumption. It was through the introduction of musical polyphony, which is intrinsically ‘public/communal’ in its sonic character, that this impulse was counteracted. As mentioned earlier, these technical/musical innovations widened the scope and impact of musical performance and arguably contributed to the reconstitution of gendered domestic space post-Partition, which points to subtle and complex relations among technology, (musical) genre, and gender.

Featured Image: Screen capture from Satyajit Ray’s Ghare Baire by SO! Ed.

Ronit Ghosh is a postgraduate student at the Department of Art and Technology, Aalborg University, Denmark. His research interests include aesthetic philosophy, critical sound studies and the sociology of Indian popular music. He has published articles on sound studies in the International Journal on Stereo and Immersive Media and The Rupkatha Journal and has an article forthcoming in the Journal of Sonic Studies. He is a classical violinist and an aspiring music composer.

REWIND! . . .If you liked this post, you may also dig:

Tape Hiss, Compression, and the Stubborn Materiality of Sonic Diaspora–Chris Chien

Pushing Play: What Makes the Portable Cassette Recorder Interesting?—Gustavus Stadler

Hearing “Media-Capitalism” in Egypt–Ziad Fahmy
