When I performed at the 2012 Computers and Writing Conference in Raleigh, North Carolina, I looked around during my fairly abstract 10-minute improvisation featuring feedback loops, glitches, silences, and circuit-bent instruments, and I noticed the audience’s sometimes visible restlessness, discomfort, and even anxiety. This is a fairly common occurrence when I perform experimental sound art, particularly in contexts in which audiences expect “music” (you can hear my work at 38:30 in the video below). However, for an experimental sound artist to take offense at such reactions is, in my estimation, to miss the point of the exercise. That sound art disrupts, agitates, and even offends is a powerfully reaffirming reminder that sound art transcends music and sound; it is a method of revelation, an act that surpasses logical communication, instead challenging the very nature of sound and perception.
As an artist, scholar, and fan, I am drawn toward sound and music that lures me into a new world, an unfamiliar way of being and knowing. Like Lewis Carroll’s Alice, I learn that the rules of my world no longer apply. This happened when I heard J Dilla’s Donuts, when I heard Madlib’s Medicine Show #3: Beat Konducta in Africa, and when I heard Miles Davis’ Bitches Brew. An artist who continually draws me down the rabbit hole is Walter Gross, an experimental sound/beat artist out of Los Angeles. His work changes the way I usually interact with sonic art, both in his sound and in his approach to physical collage and handcrafted cassette packaging. Gross departs from the comfortable and familiar listening imparted by polished hi-fi 3-minute tracks with definitive beginnings and ends and discernible melodies, instead propelling listeners into very unusual (and pleasantly discomforting) soundscapes that demand attention. Almost counter-intuitively, Gross’s visual representations of his work intensify that experience. Consider his 2010 work, Dopamine:
Dopamine is likely a challenging piece for audiences, at least in its violation of the dominant structures of music. The piece opens with a disorienting use of panning, deliberately obscured and degraded audio, largely indiscernible movements and patterns, and so on. His video work likewise presents a fitting yet relatively unusual juxtaposition of youth and destruction, celebration and danger. In terms of both sound and sight, Gross’s work disrupts dominant musical sensibilities, challenging the very patterns and structures within which we can express ideas. He violates tradition, shakes off the canonical baggage carried by prevailing paradigms of Art and Music, and plunges audiences into unfamiliar sensory experiences that require metacognition, reflection, and examination of what sonic art is, and more importantly, what sonic art can be. Gross, in other words, seems to transcend the musician moniker and reach something else entirely. In what follows, I’d like to explore a (very brief) history of such artists, and begin to think about how to frame sonic art as immersion in what Marshall McLuhan called anti-environments: the unconscious environment as raised to conscious attention.
Sound as Art
There exists a strong tradition of experimental noise and sound art, particularly in 20th-century Western avant-garde movements. Futurists were arguably the first to consider noise as music in the European tradition, and were certainly influential in asking artists and audiences to become more aware of their changing social and sonic surroundings. In his 1913 manifesto-of-sorts titled “The Art of Noises,” Italian Futurist Luigi Russolo proposed an orchestral configuration that more aptly represented the range of sounds available to contemporary listeners, namely those sounds that accompanied industrialization and urbanization. The sounds of the Futurist orchestra would include “rumbles, roars, explosions, and crashes.” Russolo built devices called intonarumori to mechanically achieve and manipulate these sounds. His brother, Antonio Russolo, also enacted this new philosophy of modern found sound and composed Corale and Serenata.
Any inquiry into art as anti-environment would be incomplete without a discussion of the great anti-art movement, Dada. Like the Futurists before them, Dadaists used found sound and technology-as-art to violently disrupt conventions of art, beauty, and authorship within the white avant-garde community. Marcel Duchamp’s famous work, “Fountain,” is likely the most familiar Dadaist artifact to contemporary readers, yet the sound poetry of Kurt Schwitters and other Dadaist and Dada-inspired sound pieces such as Erwin Schulhoff’s 1922 work In Futurum (the middle movement of which contains only a rest and the notation “with feeling,” an undoubted precursor to John Cage’s 4’33”, written 30 years later) created sonic spaces of innovation and strangeness that changed the way audiences listened to both voices and silences. The Russian Cubo-Futurists, especially zaumniks such as Alexei Kruchenykh, made similar ventures into anti-environments. Kruchenykh developed the sound art zaum, which he understood as a transrational language that undercut existing language systems in which the “word [had] been shackled…by its subordination to rational thought” (70). Zaum was a sort of linguistic anti-environment, one rooted in the notion that meaning resided first and foremost in the sound of a word rather than in the denotative symbol system that emerged alongside the proliferation of print/visual culture. One also cannot overstate the work of John Cage, from his prepared piano to his work with organic instruments.
The list of artists, genres, and movements engaged to some extent in the enterprise of anti-environment architecture could go on and be debated indefinitely: Free Jazz, Turntablism/Nu Jazz, Experimental Hip-Hop, Fluxus, Circuit Bending, Prepared Guitar, Proto-Punk, Punk, Post-Punk, New Wave, No Wave. . . in all of these diverse movements, the sonic artists share the tendency to create strange new worlds via sound; worlds that reveal social and technological environments that most people seem unaware of in the moment. This is why media theorist Marshall McLuhan called the artist “indispensable,” because the artist can tell us something about ourselves that we cannot know via ordinary means of perception. Sonic artists expose audiences to auditory phenomena, structures, juxtapositions, etc. that are to various extents hidden, obscured, or ignored as “noise.” The sonic artist is more than just a clever selector and (re)arranger of sound; s/he is a revelatory agent, exposing what is inaudible.
Art as Anti-environment
Anti-environments, however we might define and classify them, are vital not only to artistic communities themselves, but they are also vital to a society of fish in water. In his 1968 text, War and Peace in the Global Village, McLuhan asserts (among other things) that humans remain largely unaware of their new environments, likening them to fish in water: “one thing about which fish know exactly nothing is water, since they have no anti-environment which would enable them to perceive the element they live in” (175). In other words, humans seldom possess or practice a sense of awareness regarding their surroundings because there’s nothing against which surroundings may be contrasted. The “water” to McLuhan represented the various environments (physical, psychological, cultural) shaped by technological innovation, but we can—and should—extend the water metaphor to a range of hegemonic frameworks: constructions of gender, race, ability, and so on.
This essay is certainly not an attempt to generate some sort of evaluative rubric by which to judge artistic or sonic expression objectively. Rather, we might use the concept of anti-environments as a way to frame our subjective experiences and encounters with all sound, and begin listening to unfamiliar sounds as psychedelic (from Greek psyche- “mind” + deloun “reveal”) keys to illuminate the patterns and structures in which listeners exist. We must work to understand our environments and our place in them; if we are to engage critically with our culture, we must first understand existing (yet invisible) patterns and structures that surround us. And we are aided in this effort, in great part, by humanity’s great seekers of pattern recognition, the sonic-psychonautical messengers: the sonic artists.
To return to the sound that inspired this meditation, Walter Gross (among others) is in many ways participating in and propelling the discourse of Leary and McLuhan, Schwitters and Schulhoff, Kruchenykh and Cage, Davis and Sun Ra, Madlib and J Dilla. Gross performs the sonic anti-environment, enacts the revelation of obscured sonic paradigms. For me, Gross can act as a sort of lens through which ordinary sonic patterns and structures become visible. I hear Flying Lotus, Bob Dylan, and The Minutemen differently after Gross. I hear my office, my home, my family’s voices differently after Gross. I hear patterns that weren’t audible before. After Gross, I become aware of how I am continuously trained to expect certain things from the sonic world: compartmentalized units of meaning, clearly stated origins of utterances, linear narratives, repeated/repeatable melodies, and so on.
Likewise, my own sonic art/scholarship uses sound to reveal the inaudible assumptions present in Western frameworks surrounding sonic production. I will conclude with an illustration of my own work and why sonic anti-environments are so central to my philosophy and method. One of my sonic works, “Toward an Object-Oriented Sonic Phenomenology,” was recently part of an exhibition titled Not For Human Consumption, curated by Julian Weaver of CRISAP in London. I recorded the sounds of a high mast lighting pole using contact microphones. Contact microphones do not “hear” like humans typically hear. Typical (dominant) notions of human hearing (and therefore of sound itself) involve the reception and interpretation of vibrations present in air. Contact microphones instead interpret only the vibrations in solid objects.
By listening through an object—through alien “ears,” so to speak—we can begin to critique the ways that we privilege listening via air, a listening that places humans at the center of the universe. We can consider the ways that sound has very real effects on humans with atypical hearing abilities and nonhuman objects. It is difficult to have such conversations if we never explore sonic anti-environments, if we never break through dominant epistemological models, if we never expose the limits of our own environments.
Featured Image: Beatrix*JAR in Dayton, Ohio, September 9, 2009, by Flickr User Vista Vision
Steven Hammer is a Ph.D. candidate in Rhetoric, Writing, and Culture at North Dakota State University in Fargo, ND, USA. His research deals with various aspects of sonic art, from exploring glitch and proto-glitch practices and theories (e.g., circuit bending), to understanding and producing sound from an object-oriented ontology (e.g., contact microphones). He also researches and facilitates trans-Atlantic translation collaborations between American, European, and African universities. He has multimedia publications with Enculturation and Sensory Studies, as well as forthcoming book chapters with Wiley/IEEE Press and IGI Global Publishing, and has performed creative and academic work at several conferences across North America, including the national Computers and Writing Conference and the Council for Programs in Technical and Scientific Communication. He performs experimental circuit-bent and sampler-based music under the moniker “patchbaydoor,” and has constructed and documented a number of hardware modification projects for his own artistic projects and for other artists in the upper Midwest United States. You can read/hear more at stevenrhammer.com
Editor’s Note: WARNING: THE FOLLOWING POST IS INTERACTIVE!!! This week’s post is especially designed by one of our regulars, Andreas Duus Pape, to spark conversation and provoke debate on its comment page. I have directly solicited feedback and commentary from several top sound studies scholars, thinkers, artists, and musicians, who will be posting at various times throughout the day and the week–responding to Andreas, responding to each other, and responding to your comments. Look for commentary by Bill Bahng Boyer (NYU), Maile Colbert (Binaural/Nodar, Faculdade de Belas Artes da Universidade do Porto), Nick Knouf (Cornell University), Primus Luta (AvantUrb, Concrète Sound System), Alejandro L. Madrid (University of Illinois at Chicago), Tara Rodgers (University of Maryland), Jonathan Skinner (ecopoetics), Jonathan Sterne (McGill University), Aaron Trammell (Rutgers University, Sounding Out!) and yours truly (Binghamton University, Sounding Out!). Full bios of our special respondents follow the post. We wholeheartedly wish to entice you this Monday to play. . .and listen. . .and then share your thoughts via the comment page. . .and play again. . .listen again. . .read the comments. . .and share more thoughts. . .yeah, just go ahead and loop that. –JSA, Editor-in-Chief
I’m a musician and an economist. Sometimes you will find me playing acoustic folk rock and blues on guitar, harmonica and voice. And at other times I will be at work, where I apply my expertise in game theory to the computer modeling of social phenomena. I create simulations of people interacting – such as how people decide which way to vote on an issue such as a tax levy, or how people learn to sort objects given to them in an experiment. In these simulations, the user can set up characteristics of the environment, such as the number of people and their individual goals. After things are set up, users watch these interactions unfold. The simulation is a little story, and one need only tweak the inputs to see how the story changes.
As a musician, I was curious if a program that generates social stories could be refashioned to generate musical pieces. I wanted to build a music-generation engine that the listener could tweak in order to get a different piece each time. But not just any tune – a piece with some flow, some story. I like that tension between randomness and structure. On one hand, I want every song to vary in unpredictable ways; on the other hand, I want to create music and not structureless noise.
I created a basic story of predators and prey, whimsically naming the prey “Peters,” represented by rabbits, and the predators “Wolves.” My simulation depicts a plain in the savannah with a green oasis. The prey seek the oasis and the predators seek the prey. Each character has its own goals and the closer they are to achieving them, the happier they are. Both predators and prey want to have stomachs full of food, so naturally they want to be close to their target (be it prey or oasis). As they travel through the savannah, they learn what choices (directions of movement) make them happier, and use this experience to guide them.
So how does this story become music? To this question there are two answers: a technical one and an intuitive one. The intuitive answer is that in real life the story of predators and prey plays out geographically on the savannah, but musically this is a story that plays out over a sonic landscape. To elaborate, I abstracted the movement of the prey and predator on the geography of the plain into the musical geometry of a sonic landscape. The farther north an agent travels, the higher the pitch; the farther east an agent travels, the longer the duration. In other words, as an agent travels to the northeast, she makes longer-lasting tones that are higher pitched. I also mapped happiness to volume, so that happy agents make louder tones. Finally, so that each agent would have a distinct voice as they traveled through this space, I chose different instruments for each agent.
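The position-to-sound mapping described above can be sketched in a few lines. This is only an illustration of the idea, not Pape's actual NetLogo code; the world size, pitch range, and duration range are assumed values chosen for the example.

```python
# Illustrative sketch of the sonification mapping: an agent's position and
# happiness become pitch, duration, and volume. All names and ranges here
# are assumptions for demonstration, not the actual NetLogo implementation.

def agent_to_tone(x, y, happiness,
                  world_size=32, low_midi=36, high_midi=84,
                  min_dur=0.1, max_dur=1.0):
    """Map an agent's state to a (midi_pitch, duration_sec, velocity) tuple."""
    # Farther north (larger y) -> higher pitch.
    pitch = low_midi + round((y / (world_size - 1)) * (high_midi - low_midi))
    # Farther east (larger x) -> longer duration.
    duration = min_dur + (x / (world_size - 1)) * (max_dur - min_dur)
    # Happier agents play louder (MIDI velocity runs 0-127).
    velocity = round(happiness * 127)
    return pitch, duration, velocity

# A fairly happy agent at the far north, far west edge: high, short, loud.
print(agent_to_tone(x=0, y=31, happiness=0.8))  # (84, 0.1, 102)
```

Each agent would then send its tuple to a different instrument, giving every character a distinct voice as it moves.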
In the video below I assigned the “church organ” sound to prey, and the “brass section” sound to predators.
Ultimately, there are some things that I like about this piece and others that I do not.
As a harmonica player, I improvise by creating and resolving tension. I think this piece does that well. The predator will pursue the prey into a quiet, low-pitch corner, creating a distant, rumbling sound – only to watch prey escape to the densely polyphonic northwest corner. There is an ebb and flow to this chase that I recognize from blues harmonica solos. In contrast to my experience as a harmonica player, however, I have found that some of the most compelling parts of the dynamics come from the layering of notes. The addition of notes yields a rich sonic texture, much like adding notes to a chord on an organ.
Unfortunately, for largely technical reasons, the piece lacks coherent rhythm and pacing. The programming platform (agent-based modeling software called NetLogo) is not designed to have the interface proceed in real time. Basically, the overall speed of the piece can change as the processing load increases or decreases. I found that as agents learned more about their surroundings (and more system resources were allocated to this “memory”), they became slower and slower. To fix this, I capped the size of their memory banks so that they would forget their oldest memories. The closest I have come to a rhythmic structure is by ordering the way that the agents play. This technique gives the piece a call-and-response feel. If only the piece had a coherent rhythm, I could imagine playing harmonica along with it.
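The memory cap described above amounts to a fixed-length buffer that discards the oldest experience when a new one arrives. As a rough sketch (the NetLogo original surely differs; the cap size and entries here are invented for illustration):

```python
from collections import deque

# Bounded agent memory: once the buffer is full, appending a new
# experience silently drops the oldest one, keeping processing cost flat.
memory = deque(maxlen=3)  # cap chosen purely for illustration

for experience in ["north:+1", "east:-1", "north:+2", "west:0"]:
    memory.append(experience)  # fourth append evicts "north:+1"

print(list(memory))  # ['east:-1', 'north:+2', 'west:0']
```

Keeping memory constant-sized is what stops the simulation from slowing as it runs, at the cost of agents forgetting their earliest lessons.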
One last comment on pitch: while an earlier version of this piece mapped each step in space to a semitone, things sounded too mechanical. Even though this was the easiest and most intuitive decision from a technical standpoint, it was aesthetically lacking, so I have now integrated traditional musical scales. The minor scale, in my opinion, is the most interesting, as it makes the predator/prey dynamic sound appropriately foreboding.
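The switch from raw semitones to a scale can be sketched as an index into an interval pattern: each spatial step now moves one scale degree rather than one semitone. The natural-minor intervals below are standard music theory; the root note and function names are assumptions for the example, not the piece's actual code.

```python
# Natural minor scale as semitone offsets within one octave.
NATURAL_MINOR = [0, 2, 3, 5, 7, 8, 10]

def step_to_pitch(step, root_midi=57):  # 57 = A3, an assumed root
    """Map the nth spatial step to a MIDI pitch on the minor scale."""
    octave, degree = divmod(step, len(NATURAL_MINOR))
    return root_midi + 12 * octave + NATURAL_MINOR[degree]

# Eight steps north now span a full minor octave instead of eight semitones.
print([step_to_pitch(s) for s in range(8)])
# [57, 59, 60, 62, 64, 65, 67, 69]
```

Because adjacent agents land on scale tones rather than arbitrary chromatic neighbors, simultaneous notes cluster into consonant (if foreboding) harmonies.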
You can play this piece yourself. Simply go to this link with Java enabled in your browser (recommended: Google Chrome). Pressing “Setup” then “Go” will create your own run of the piece. As it is running, you can adjust the slider above the graphic window to change the speed. Press “Go” again to stop the model, adjust any parameters you wish and press “Setup” and “Go” again to see how the piece changes. Here are some parameters to try: instA and instB to change the instruments associated with prey and predators; PlayEveryXSteps to change the pace of the piece (higher results in a slower paced piece); Num-PackAs and Num-PackBs changes the number of prey and predators; the vertical PeterVol and WolfVol adjust the overall volume of prey and predators.
Regarding my version of “Peter and the Wolf,” I have a number of things that I’m curious about.
First, how does this relate to what you think of as music? Do you like listening to it? Which elements do you like and which do you dislike? For example, what do you think about the tension and rhythm – do you agree that the first works and that the second could be improved? Would you listen to this for enjoyment’s sake, and what would it take for this to be more than a novelty? What do you think about the narrative that drives the piece? I chose the predator and prey narrative, admittedly, on a whim. Do you think there might be some other narrative or agent-specific goals that might better drive this piece? Is there any metaphor that might better describe this piece? As a listener, do you enjoy the experience of being able to customize and configure the piece? What would you like to have control over that is missing here? Would you like more interaction with the piece or less?
Finally, and perhaps most importantly, what do you think of the premise? Can simple electronic agents (albeit ones which interact socially) aspire to create music? Is there something promising in this act of simulation? Is music-making necessarily a human activity and is this kind of work destined to be artificial and uncanny?
Thanks for listening. I look forward to your thoughts.
- – -
Andreas Duus Pape is an economist and a musician. As an economist, he studies microeconomic theory and game theory–that is, the analysis of strategy and the construction of models to understand social phenomena–and the theory of individual choice, including how to forecast the behavior of agents who construct models of social phenomena. As a musician, he plays folk in the tradition of Dylan and Guthrie, blues in the tradition of Williamson and McTell, and country in the tradition of Nelson and Cash. Pape is an Assistant Professor in the Department of Economics at Binghamton University and is a faculty member of the Collective Dynamics of Complex Systems (CoCo) Research Group.
- – -
Guest Respondents on the Comment Page (in alphabetical order)
Bill Bahng Boyer is a doctoral candidate in music at New York University who is completing a dissertation on public listening in the New York City subway system.
Maile Colbert is an intermedia artist with a concentration in sound and video, living and working between New York and Portugal. She is an associated artist at Binaural/Nodar.
Nicholas Knouf is a Ph.D. candidate in information science at Cornell University.
Primus Luta is a writer and an artist exploring the intersection of technology and art; he maintains his own AvantUrb site and is a founding member of the live electronic music collective Concrète Sound System.
Alejandro L. Madrid is Associate Professor of Latin American and Latino Studies at the University of Illinois at Chicago and a cultural theorist and music scholar whose research focuses on the intersection of modernity, tradition, globalization, and ethnic identity in popular and art music, dance, and expressive culture from Mexico, the U.S.-Mexico border, and the circum-Caribbean.
Tara Rodgers is an Assistant Professor of Women’s Studies and a faculty fellow in the Digital Cultures & Creativity program at the University of Maryland. As Analog Tara, she has released electronic music on compilations such as the Le Tigre 12″ and Source Records/Germany, and exhibited sound art at venues including Eyebeam (NYC) and the Museum of Contemporary Canadian Art (Toronto).
Jonathan Skinner founded and edits the journal ecopoetics, which features creative-critical intersections between writing and ecology. Skinner also writes ecocriticism on contemporary poetry and poetics.
Jonathan Sterne teaches in the Department of Art History and Communication Studies and the History and Philosophy of Science Program at McGill University. His latest book, MP3: The Meaning of a Format, comes out this fall from Duke University Press.
Jennifer Stoever-Ackerman is co-founder, Editor-in-Chief and Guest Posts Editor for Sounding Out! She is also Assistant Professor of English at Binghamton University and a former Fellow at the Society for the Humanities at Cornell University (2011-2012).
Aaron Trammell is Multimedia Editor of Sounding Out! and a Ph.D. Candidate in Media and Communications at Rutgers University.
I am here today to defend auto-tune. I may be late to the party, but if you watched Lil Wayne’s recent schizophrenic performance on MTV’s VMAs you know that auto-tune isn’t going anywhere. The thoughtful and melodic opening song “How to Love” clashed harshly with the expletive-laden guitar-rocking “John” Weezy followed with. Regardless of how you judge that disjunction, what strikes me about the performance is that auto-tune made Weezy’s range possible. The studio magic transposed onto the live moment dared auto-tune’s many haters to revise their criticisms about the relationship between the live and the recorded. It suggested that this technology actually opens up possibilities, rather than marking a limitation.
Auto-tune is mostly synonymous with the intentionally mechanized vocal distortion effect of singers like T-Pain, but it has actually been used for clandestine pitch correction in the studio for over 15 years. Cher’s voice on 1998’s “Believe” is probably the earliest well-known use of the device to distort rather than correct, though at the time her producers claimed to have used a vocoder pedal, probably in an attempt to hide what was then a trade secret—the Antares Auto-Tune machine is widely used to correct imperfections in studio singing. The corrective function of auto-tune is more difficult to notice than the obvious distortive effect because when used as intended, auto-tuning is an inaudible process. It bends flubbed or off-key notes to the nearest true semitone to create the effect of perfect singing every time. The more off-key a singer is, the harder it is to hide the use of the technology. Furthermore, to make melody out of talking or rapping, the sound has to be pushed to the point of sounding robotic.
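The corrective idea described above, pulling a sung frequency to the nearest true semitone, reduces to a small piece of frequency math. This toy sketch is not how Antares Auto-Tune is implemented (real pitch correction tracks pitch over time and retunes smoothly); it only shows the snapping step, assuming standard A4 = 440 Hz equal temperament.

```python
import math

def snap_to_semitone(freq_hz, a4=440.0):
    """Return the frequency of the nearest equal-tempered note."""
    # Distance from A4 in (fractional) semitones.
    semitones_from_a4 = 12 * math.log2(freq_hz / a4)
    nearest = round(semitones_from_a4)  # nearest true semitone
    return a4 * 2 ** (nearest / 12)

# A note sung about 30 cents flat of A4 gets pulled back up to 440 Hz.
print(round(snap_to_semitone(432.5), 1))  # 440.0
```

This also shows why heavy off-key singing exposes the effect: the farther the input drifts from a true note, the larger (and more audible) the correction becomes.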
The dismissal of auto-tuned acts is usually made in terms of a comparison between the modified recording and what is possible in live performance, like indie folk singer Neko Case’s extended tongue-lashing in Stereogum. Auto-tune makes it so that anyone can sing whether they have talent or not, or so the criticism goes, putting determination of talent before evaluation of the outcome. This simple critique conveniently ignores how recording technology has long shaped our expectations in popular music and for live performance. Do we consider how many takes were required for Patti LaBelle to record “Lady Marmalade” when we listen? Do we speculate on whether spliced tape made up for the effects of a fatiguing day of recording? Chances are that even your favorite and most gifted singer has benefited from some form of technology in recording their work. When someone argues that auto-tune allows anyone to sing, what they are really complaining about is that an illusion of authenticity has been dispelled. My question in response is: So what? Why would it be so bad if anyone could be a singer through Auto-tuning technology? What is really so threatening about its use?
As Walter Benjamin writes in “The Work of Art in the Age of Mechanical Reproduction,” the threat to art presented by mechanical reproduction emerges from the inability for its authenticity to be reproduced—but authenticity is a shibboleth. He explains that what is really threatened is the authority of the original; but how do we determine what is original in a field where the influences of live performance and record artifact are so interwoven? Auto-tune represents just another step forward in undoing the illusion of art’s aura. It is not the quality of art that is endangered by mass access to its creation, but rather the authority of cultural arbiters and the ideological ends they serve.
Auto-tune supposedly obfuscates one of the indicators of authenticity, imperfections in the work of art. However, recording technology already made error less notable as a sign of authenticity to the point where the near perfection of recorded music becomes the sign of authentic talent and the standard to which live performance is compared. We expect the artist to perform the song as we have heard it in countless replays of the single, ignoring that the corrective technologies of recording shaped the contours of our understanding of the song.
In this way, we can think of the audible auto-tune effect as actually re-establishing authenticity by making itself transparent. An auto-tuned song establishes its authority by casting into doubt the ability of any art to be truly authoritative and owning up to that lack. Listen to the auto-tuned hit “Blame It” by Jamie Foxx, featuring T-Pain, and note how their voices are made nearly indistinguishable by the auto-tune effect.
It might be the case that anyone is singing that song, but that doesn’t make it less bumping and less catchy—in fact, I’d argue the slippage makes it catchier. The auto-tuned voice is the sound of a democratic voice. There isn’t much precedent for actors becoming successful singers, but “Blame It” provides evidence of the transcendent power of auto-tune allowing anyone to participate in art and culture making. As Benjamin reminds us, “The fact that the new mode of participation first appeared in a disreputable form must not confuse the spectator.” The fact that “anyone” can do it increases possibilities and casts all-encompassing dismissal of auto-tune as reactionary and elitist.
Mechanical reproduction may “pry an object from its shell” and destroy its aura and authority–demonstrating the democratic possibilities in art as it is repurposed–but I contend that auto-tune goes one step further. It pries singing free from the tyranny of talent and its proscriptive aesthetics. It undermines the authority of the arbiters of talent and lets anyone potentially take part in public musical vocal expression–even someone like Antoine Dodson, whose rant on the local news ended up a catchy internet hit thanks to the Songify project.
Auto-tune represents a democratic impulse in music. It is another step in the increasing access to cultural production, going beyond special classes of people in social or economic position to determine what is worthy. Sure, not everyone can afford the Antares Auto-Tune machine, but recent history has demonstrated that such technologies become increasingly affordable and more widely available. Rather than cold and soulless, the mechanized voice can give direct access to the pathos of melody when used by those whose natural talent is not for singing. Listen to Kanye West’s 808s & Heartbreak, or (again) Lil Wayne’s “How To Love.” These artists aren’t trying to get one over on their listeners, but just the opposite, they want to evoke an earnestness that they feel can only be expressed through the singing voice. Why would you want to resist a world where anyone could sing their hearts out?
Osvaldo Oyola is a regular contributor to Sounding Out! He is also an English PhD student at Binghamton University.
*a companion piece of this research, on electronic sounds as lively individuals, is forthcoming in the American Quarterly special issue on sound, September 2011.
Not long ago, while researching the history of synthesized sound—or taking a break to troll for interesting synthesizers for sale online (activities that, for me, inevitably blend together)—I came across a thriving industry of small companies that offer custom-made wood panels to adorn the sides of old and new synths, like Synthwood, Custom Synths, Analogics, and MPCStuff.
As Trevor Pinch and Frank Trocco note in Analog Days, their history of Moog synthesizers, an “analog revival” is underway: “Today in the digital world, there is a longing to get back to what was lost” (9). The music technology magazine Sound on Sound concurs, documenting a renewed interest among electronic music-makers in modular synthesizers like those popularized by Moog and others in the late-1960s. Yet there seems to be more at play with this proliferation of wood customizations than merely nostalgia for analog synths, Hammond organs, and hi-fi cabinetry. How might we interpret this desire to adorn—lovingly, even obsessively—steel-encased machines that produce sound by electronic means, with various species of wood? What does this realm of audio esoterica reveal about material and social aspects of musical instruments, and the workings of contemporary media cultures more broadly?
On Contingency and Faith: Walnut, Purple Felt, and the True Cross
Pinch and Trocco describe the Minimoog as the first synthesizer to become a “classic,” due to its relative ease of use, widespread availability, portability and compact design (214). In the retrospective imaginations of historians and musicians, a significant feature that established its classic design was the walnut wood case on an early generation of Minimoog models.
However, Bill Hemsath, an engineer who assembled the first Minimoog prototypes in 1969-70, told Pinch and Trocco that these instruments were assembled from “junk I found in the attic” and an assortment of affordable materials cobbled together in the moment (214). Jim Scott, another engineer who worked on developing the Minimoogs, explained in a 1997 interview: “the reason we made it walnut [was] because Moog had gotten a deal someplace and had a whole barnful.” He noted that “the musicians certainly appreciated the fact that it was made out of walnut,” but eventually the designers “ran out of walnut and started buying something else and slapping paint on it to make it look like walnut.” The various kinds of wood used on models from different years, and the exact start and end dates of the coveted walnut models, remain contested matters among Moog enthusiasts.
Hemsath elaborated on this history in a 1998 interview by making an analogy to “classic” piano design: “There’s a similar story from Steinway. Back when they first got started in the U.S. they used to buy their felts from a feltmaker in Paris… And they got a lot of purple felt because [the supplier] used to be the felt maker for Napoleon’s army, and had a lot left over. So the colored cores in the hammers of those old Steinways were purple because of Napoleon’s army. Well, [the supplier] ran out, and [Steinway] said, red’s fine. They started making pianos with red felt, which is what they have today, and people started complaining, saying, it’s not a real Steinway, it’s not purple.” Like the proverbial purple felt on original Steinway pianos, walnut panels on synthesizers became “classic” because of their association with an originary moment, however happenstance, in the history of a particular instrument, and a limited supply and production run that rendered the material in question relatively rare.
So, a contemporary synthesizer enthusiast’s desire to acquire a “classic” walnut Minimoog, or to commemorate its aesthetic with customized wood panels, is in part an effort to establish a material connection to history. Synthesizer history unfolds in the deep time of technoscience which, as Donna Haraway has argued, often “barely secularize[s]” Judeo-Christian narratives of first and last things, of figural anticipation and fulfillment (9-10). The concern among some synthesizer enthusiasts to possess either the actual wood of an early-model Minimoog, or a faithful substitute for it, indeed resonates with Christian material cultures around relics of the True Cross and next-best artifacts with suitable provenance. A historical conjuncture that is contingent on otherwise unremarkable circumstances (e.g., Bob Moog’s good deal on a barnful of walnut in upstate New York) is marked as an originary or otherwise defining moment (the “invention” of a “classic” synthesizer) for a culture that defines itself as proceeding from it; the former is made to anticipate the latter, and the latter comes to fulfill the former.
Taking Stock: Materialities of Instruments, Sounds, Ecosystems
What kind of wood panels live in my studio? The manual to my Jomox XBase 09 drum machine, from 1999, details that its “steel sheet body” is bookended by “varnished side panels made of alder wood.” Wikipedia‘s pop-anthropological roundup of alder’s “use by humans” includes smoking various foods, treating skin inflammations and tumors, and building electric guitars. Fender Stratocasters have been built with alder since the 1950s. Guitar enthusiasts are notoriously fussy about which type of wood comprises the instrument’s body because of its effect on tone. Scientists, meanwhile, have taken to applying medical imaging techniques to Stradivarius violins, trying to “crack the mystery” of their prized tone. (Some say it’s due to the particular density of slow-growing trees in the Little Ice Age; others conclude it must be the varnish.)
Given these interconnected concerns with instrument materials and the composition of tone, one might venture an etymological connection between timbre—which the Oxford English Dictionary describes as the character or quality of a musical sound depending upon the instrument producing it—and timber, which references “the matter or substance of which anything is built up or composed.” Music scholars often characterize timbre as the materiality of sound. Despite longstanding knowledge of the relationships of timber and timbre among instrument builders and musicians, and possible overlaps in historical applications of these words, placing wood panels on the sides of synthesizers surely has no effect on the resulting tone. Or does it? Audiophiles are prone toward occult-like habits, such as placing a single coin on top of a speaker to absorb vibration; and wood panels may well have subtle effects on the overall stability of an electronic instrument, resulting in barely perceptible sonic artifacts.
My Virus B synthesizer from the late-1990s has darker wood side panels than the Jomox, sort of a faux mahogany. Recently I wrote to Access Music, explaining my research on synthesizer history and inquiring what kind of wood they used. They replied that the B series featured stained beech wood (a wood also commonly used and appreciated for smoking German beers and cheeses). Access volunteered that they “in general do not use any kind of tropical wood for our devices.” Using sustainable wood has become a mandate and marketing concern at the Moog company as well; Moog’s wood “comes primarily from Tennessee. Hardwoods in Tennessee are growing faster than they are being harvested… US hardwoods are a world-wide model of sustained forest management.” Among contemporary synthesizer companies, there is often a selective eco-consciousness; as synthesizer designer Jessica Rylan suggested in our interview for Pink Noises: Women on Electronic Music and Sound (Duke: 2010), it is arguably impossible to build a synthesizer that does not incorporate at least some materials that are toxic in stages of manufacturing and/or disposal.
The paradox of dressing up an electronic machine made partly of toxic materials and processes with a sustainable-wood exterior is a fitting metaphor—like a contemporary fig leaf—for how we outwardly express environmentalist concern, despite plenty of contradictions in practice. Wood-adorned electronic devices, in all their glorious contradictions, are especially resonant in this cultural moment; see Asus’s EcoBook, Karvt’s lineup of custom wood skins for MacBooks, and, my favorite, Flashsticks: handmade wood USB “sticks” that combine “the high tech world of computing with the simplicity of the world of nature.” The story of Flashsticks’ handmade creation is a case study in eco-contradiction: the website implies that no trees were harmed in the making of their USB sticks—the company uses locally-sourced, “fallen wood from the previous winter’s storms”—yet we do not hear of the toxic materials that may comprise the drive itself.
Wood panels indeed work to conceal inconvenient truths. As Ruth Schwartz Cowan pointed out, the midcentury aesthetic of hiding household appliances behind wood paneling typified a culture that concealed gendered divisions of domestic labor (205). Lisa Parks has documented the similar recent phenomenon of dressing up cell towers as trees, which obscures the politics of media infrastructure behind a cloak of “nature.”
This is also a story about the mirage of a space between nature and artifice. Retro-culture enthusiasts celebrate that “real cars have fake wood paneling.” Meanwhile, a company called iBackwoods has engineered a “real wood” iPhone case that pays tribute to the “timeless style of a wood panel station wagon.” Moog’s Filtatron application for iPad, a software emulation of the company’s Moogerfooger filter pedal, is rendered authentic by its virtual wood panels. All of these examples reveal the “nature” of wood paneling to be cultural all the way down.
Ultimately, wood paneling might prompt us to recognize the interconnectedness among seemingly divergent materials, environments, and social practices. Consider, as a useful comparison to the climate-forged Stradivarius, the ash baseball bat: cherished by players for its “magical” effects on hitting, and now threatened by a warming climate and a killer beetle in its source forests in Pennsylvania. Every synthesizer likewise holds and explodes into an ecosystem, and sometimes sounds like one too. The composer Mira Calix has suggested that analog synthesizers, with their individual quirks that increase with age, are much like wooden instruments; both seem to breathe like “little creatures” and take on a unique character, like a human voice. Our synthesizers, our kin.