Editors’ note: As an interdisciplinary field, sound studies is unique in its scope—under its purview we find the science of acoustics, cultural representation through the auditory, and, to perhaps mis-paraphrase Donna Haraway, emergent ontologies. Not only are we able to see how sound impacts the physical world, but how that impact plays out in bodies and cultural tropes. Most importantly, we are able to imagine new ways of describing, adapting, and revising the aural into aspirant, liberatory ontologies. The essays in this series all aim to push what we know a bit, to question our own knowledges and see where we might be headed. In this series, co-edited by Airek Beauchamp and Jennifer Stoever you will find new takes on sound and embodiment, cultural expression, and what it means to hear. –AB
In November 2016, my colleague Imani Wadud and I were invited by professor Sherrie Tucker to judge a battle of the bands at the Lawrence Public Library in Kansas. The battle revolved around manipulation of one specific musical technology: the Adaptive Use Musical Instruments (AUMI). Developed by Pauline Oliveros in collaboration with Leaf Miller and released in 2007, the AUMI is camera-based software that enables various forms of instrumentation. It was first created in work with (and through the labor of) children with physical disabilities at the Abilities First School (Poughkeepsie, New York) and designed with the intention of researching its potential as a model for social change.
Our local AUMI initiative KU-AUMI InterArts forms part of the international research network known as the AUMI Consortium. KU-AUMI InterArts has been tasked by the Consortium to focus specifically on interdisciplinary arts and improvisation, which led to the organization’s commitment to community-building “across abilities through creativity.” As KU-AUMI InterArts member and KU professor Nicole Hodges Persley expressed in conversation:
KU-AUMI InterArts seeks to decentralize hierarchies of ability by facilitating events that reveal the limitations of able-bodiedness as a concept altogether. An approach that does not challenge the able-bodied/disabled binary could dangerously contribute to the infantilizing and marginalization of certain bodies over others. Therefore, we must remain invested in understanding that there are scales of mobility that transcend our binary renditions of embodiment and we must continue to question how it is that we account for equality across abilities in our Lawrence community.
Local and international attempts to interpret the AUMI as a technology for the development of radical, improvisational methods are by no means a departure from its creators’ motivations. In line with KU-AUMI InterArts and the AUMI Consortium, my work here is that of naming how communal, mixed-ability interactions in Lawrence have come to disrupt the otherwise ableist communication methods that dominate musical production and performance.
The AUMI is designed to be accessed by those with profound physical disabilities. The AUMI software works using a visual tracking system, represented on-screen with a tiny red dot that begins at the very center. Performers can move the dot’s placement to determine which part of their body and its movement the AUMI should translate into sound. As one moves, so does the dot, and, in effect, the selected sound is produced through the performer’s movement.
Could this curious technology help build radical new coalitions between researchers and disabled populations? Mara Mills’s research examines how the history of communication technology in the United States has advanced through experimentation with disabled populations that have often been positioned as an exemplary pretext for funding, yet are then unable to access the final product and are sometimes even erased entirely from the history of a product’s development in the name of universal communication and capitalist accumulation. The AUMI’s usage beyond the disabled populations first involved in its invention therefore always stands on dubious historical, political, and philosophical ground. Yet, there is no doubt that the AUMI’s challenge to ableist musical production and performance has unexpectedly affected and reshaped communication for performers of different abilities in the Lawrence jam sessions, which speaks to its impressive coalitional potential. Institutional (especially academic) research invested in the AUMI’s potential ought, then, as its perpetual point of departure, to loop back its energies in the service of disabled populations marginalized by ableist musical production and communication.
Facilitators of the library jam sessions, including myself, deliberately avoid exoticizing the AUMI and separating its initial developers and users from its present incarnations. To market the AUMI primarily as a peculiar or fringe musical experience would unnecessarily “Other” both the technology and its users. Instead, we have emphasized the communal practices that, for us, have made the AUMI work as a radically accessible, inclusionary, and democratic social technology. We are mainly invested in how the AUMI invites us to reframe the improvisational aspects of human communication upon a technology that always disorients and reorients what is being shared, how it is being shared, and the relationships between everyone performing. Disorientations reorient when it comes to our Lawrence AUMI community, because a tradition is being co-created around the transformative potential of the AUMI’s response-rate latency and its sporadic visual mode of recognition.
In his work on the AUMI, KU alumnus and sound studies scholar Pete Williams explains how the wide range of mobility typically encouraged in what he calls “standard practice” across theatre, music, and dance is challenged by the AUMI’s tendency to inspire “smaller” movements from performers. While he sees in this affective/physical shift the opportunity for able-bodied performers to encounter “…an embodied understanding of the experience of someone with limited mobility,” my work here focuses less on the software’s potential for able-bodied performers to empathize with “limited” mobility and more on the atypical forms of social interaction and communication the AUMI seems to evoke in mixed-ability settings. An attempt to frame this technology as a disability simulator not only demarcates a troubling departure from its original, intended use by children with severe physical disabilities, but also constitutes a prioritization of able-bodied curiosity that contradicts what I’ve witnessed during mixed-ability AUMI jam sessions in Lawrence.
Sure, some able-bodied performers may come to describe such an experience of simulated “limited” mobility as meaningful, but how we integrate this dynamic into our analyses of the AUMI matters, through and through. What I aim to imply in my read of this technology is that there is no “limited” mobility to experientially empathize with in the first place. If we hold the AUMI’s early history close, then the AUMI is, first and foremost, designed to facilitate musical access for performers with severe physical disabilities. Its structural schematic and even its response-rate latency and sporadic visual mode of recognition ought to be treated as enabling functions rather than limiting ones. From this position, nothing about the AUMI exists for the recreation of disability for able-bodied performers. It is only from this specific position that the collectively disorienting/reorienting modes of communication enabled by the AUMI among mixed-ability groups may be read as resisting the violent history of labor exploitation, erasure, and appropriation Mills warns us about: that is, when AUMI initiatives, no matter how benevolently universal in their reach, act fundamentally as a strategy for the efficacious and responsible unsettling of ableist binaries.
The way the AUMI latches on to unexpected parts of a performer’s body and the “discrepancies” of its body-to-sound response rate are at the core of what sets this technology apart from many other instruments, but it is not the mechanical features alone that accomplish this. Sure, we can find similar dynamics in electronics of all sorts that are “failing,” in one way or another, to respond with the accuracy intended during regular use, or we can emulate similar latencies within most recording software available today. But what I contend sets the AUMI apart goes beyond its clever camera-based visual tracking system and the sheer presence of said “incoherencies” in visual recognition and response rate.
What makes the AUMI a unique improvisational instrument is the tradition currently being co-created around its mechanisms in the Lawrence area, and the way these practices disrupt the borders between able-bodied and disabled musical production, participation, and communication. The most important component of our Lawrence-area AUMI culture is how facilitators engage the instrument’s “discrepancies” as regular functions of the technology and as mechanical dynamics worthy of celebration. At every AUMI library jam session I have participated in, not once have I heard Tucker or other facilitators make announcements about a future “fix” for these functions. Rather, I have witnessed an embrace of these features as intentionally integrated aspects of the AUMI. It comes as no surprise, then, that a “Battle of the Bands” event was organized as a way of leaning even further into what makes the AUMI more than a radically accessible musical instrument––that is, its relationship to orientation.
Perhaps it was the competitive framing of the event––we offered small prizes to every participating band––or the diversity among that day’s participants, or even the numerous times some of the performers had previously used this technology, but our event saw a deliberate and collaborative improvisational method unfold in preparation for the performances. An ensemble mentality began to congeal even before performers entered the studio space, when Tucker first encouraged performers to choose their own fellow band members and come up with a working band name. The two newly-formed bands––Jayhawk Band and The Human Pianos––took turns laying down collaboratively premeditated improvisations with composition (and perhaps even prizes) in mind. iPad AUMIs were installed in a circle on stands, with studio monitor headphones available for each performer.
Jayhawk Band’s eponymous improvisation “Jayhawks,” which brings together stylized steel drums, synthesizers, an 80’s-sounding floor tom, and a plucked woodblock sound, exemplifies this collaborative sensory ethos, unique in the seemingly discontinuous melding of its various sections and the play between its mercurial tessellations and amalgamations:
In “Jayhawks,” the floor tom riffs are set along a rhythmic trajectory defiant of any recognizable time signature, and the player switches suddenly to a wood block/plucking instrument mid-song (00:49). The composition’s lower-pitched instrument, sounding a bit like an electronic bass clarinet, opens the piece and, starting at 00:11, repeats a melodically ascending progression also uninhibited by the temporal strictures of time signature. In fact, all the melodic layers in “Jayhawks” demonstrate a kind of temporally “unhinged” ensemble dynamic present in most of the library jam sessions that I’ve witnessed. Yet unexpected moves and elements ultimately cohere for jam session performers, such as Jayhawk Band’s members, because certain general directions were agreed upon prior to hitting “record,” whether this entails sound bank selections or compositional structure. All that is to say that collective formalities are certainly at play here, despite the song’s fluid temporal/melodic nuances suggesting otherwise.
Five months after the battle of the bands, The Human Pianos and Jayhawk Band reunited at the library for a jam session. This time, performers were given the opportunity to prepare their individual iPad setup prior to entering the studio space. These customized setup selections were then transferred to the iPads inside the studio, where the new supergroup recorded their notoriously polyrhythmic, interspecies, sax-riddled composition “Animal Parade”:
As heard throughout the fascinating and unexpected moments of “Animal Parade,” the AUMI’s sensitivity can be adjusted for even the most minimal physical exertion, and its sound bank spans orchestral instruments, animal sounds, and synthesizers, as well as various percussive instruments, dynamic adjustments, and even prefabricated loops. Yet, no matter how familiar a traditionally trained (and often able-bodied) musician may be with their sound selection, the concepts of rhythmic precision and musical proficiency––as they are understood within dominant understandings of time and consistency––are thoroughly scrambled by the visual tracking system’s sporadic mode of recognition and its inherent latency. As described above, it is structurally guaranteed that the AUMI’s red dot will not remain in its original place during a performance, but will instead latch onto unexpected parts of the body.
Simultaneously, the dot-to-movement response rate is not immediate. My own involvement with “the unexpected” in communal musical production and performance moulds my interpretation of what is socially (and politically) at work in both “Jayhawks” and “Animal Parade.” While participating in AUMI jam sessions I could not help but reminisce on similar experiences with the collective management of orientations/disorientations that, while depending on quite different technological structures, produced similar effects regarding performer communication.
Being a researcher steeped in the L.A. area Salsa, Latin Jazz, and Black Gospel scenes meant that I was immediately drawn to the AUMI’s most disorienting-yet-reorienting qualities. In Timba, the form of contemporary Afrocuban music that I most closely studied back in Los Angeles, disorientations and reorientations are the most prized structural moments in any composition. For example, the 1997 performance of “No Me Mires a los Ojos” (“Don’t Look Me in the Eyes”) by Isaac Delgado’s ensemble––featuring now-legendary performances by Ivan “Melon” Lewis (keyboard), Alain Pérez (bass), and Andrés Cuayo (timbales)––sonically reveals the tradition’s call to disorient and reorient performers and dancers alike through collaborative improvisations:
Video Filmed by Michael Croy.
“No Me Mires a los Ojos” is riddled with moments of improvisational coalition formed rather immediately and then resolved in a return to the song’s basic structure. For listeners disciplined by Western musical training, the piece may seem to traverse several time signatures, even though it is written entirely in 4/4. Timba accomplishes an intense, percussively demanding, melodically multifaceted set of improvisations that happen all at once, with the end goal of making people dance, nodding at the principal tradition it draws its elements from: Afrocuban Rumba. Every performer who is not a horn player or a vocalist articulates patterns specific to their instrument, played in the form of basic rhythms expected at certain sections. These patterns and their variations evolved from similar Rumba drum and bell formats, and the improvisational contributions each musician is expected to integrate into their basic pattern also come from Rumba’s long-standing tradition of formalized improvisation. The formal and the improvisational function as a single communicative practice in Timba. Performers recall format from their embodied knowledge of Rumba and other pertinent influences while disrupting, animating, and transforming pre-written compositions with constant layers of improvisation.
What ultimately interests me the most about the formal registers within the improvisational tradition that is Timba is that these seem to function, on at least one level, as premeditated terms for communal engagement. This kind of communication enables a social set of interactions that, like Jazz, grants every performer the opportunity to improvise at will, insofar as the terms of engagement are seriously considered. As with the AUMI library jam sessions, Timba’s disorientations, too, seem to reorient. What is different, though, is how the AUMI’s sound bank acts in tandem with a performer’s own embodied musical knowledge as an extension of the archive available for improvisation. In Timba, the sound bank and knowledge of form are both entirely embodied, with synthesizers being the only exception.
Timba ensembles and their interpretations of traditional and non-Cuban forms, like the AUMI and its sound bank, use reliable and predictable knowledge bases to break with dominant notions of time and its coherence, only to wrangle performers back to whatever terms of communal engagement were previously decided upon. In this sense, I read the AUMI not as a solitary instrument but as a partial orchestration of sorts, with functions that enable not only an accessible musical experience but also social arrangements that rely deeply on a more responsible management of the unexpected. While the Timba ensemble is required to collaboratively instantiate the potential for disorientations, the AUMI provides an effective and generative incorporation of said potential as a default mechanism of instrumentation itself.
As the AUMI continues on its early trajectory as a free, downloadable software designed to be accessed by performers of mixed abilities, it behooves us to listen deeply to the lessons learned by orchestral traditions older than our own. Timba does not come without its own problems of social inequity––it is often a “boy’s club,” for one––but there is much to learn about how the traditions built around its instruments have managed to centralize the value of unexpected, multilayered, and even complexly simultaneous patterns of communication. There is also something to be said about the necessity of studying the improvisational communication patterns of musical traditions that have not yet been institutionalized or misappropriated within “first world” societies. Timba teaches us that the conga alone will not speak without the support of a community that celebrates difference, the nuances of its organization, and the call to return to difference. It teaches us, in other words, to see the constant need for difference and its reorganization as a singular practice.
The work that started with the AUMI’s earliest users in Poughkeepsie, New York, and the work involving mixed-ability ensembles in Lawrence, Kansas, today are connected through the AUMI Consortium’s commitment to a kind of research aimed at listening closely and deeply to the AUMI’s improvisational potential, interdisciplinarily and undisciplinarily, across various sites. A tech innovation alone will not sustain the work of disrupting the longstanding, rooted forms of ableism ever-present in dominant musical production, performance, and communication, but mixed-ability performer coalitions organized around a radical interrogation of coherence and expectation may have a fighting chance. I hope the technology team never succeeds at working out all of the “discrepancies,” as these are helping us to build traditions that frame the AUMI’s mechanical propensity towards disorientation as the raw core of its democratic potential.
Featured Image: by Ray Mizumura-Pence at The Commons, Spooner Hall, KU, at rehearsals for “(Un)Rolling the Boulder: Improvising New Communities” performance in October 2013.
Caleb Lázaro Moreno is a doctoral student in the Department of American Studies at the University of Kansas. He was born in Trujillo, Peru, and grew up in the Los Angeles area. Lázaro Moreno is currently writing about methodological designs for “the unexpected,” contributing thought and praxis that redistribute agency, narrative development, and social relations within academic research. He is also a multi-instrumentalist, composer, and producer; check out his Soundcloud.
REWIND! . . .If you liked this post, you may also dig:
Introduction to Sound, Ability, and Emergence Forum –Airek Beauchamp
Experiments in Agent-based Sonic Composition — Andreas Duus Pape
[Warning: Spoilers Ahead for Folks Not Caught Up with Season 7, Episode 5!]
In one of the more memorable – and squirm-inducing – scenes of this season of AMC’s Mad Men, brilliant but eccentric copywriter Michael Ginsberg (Ben Feldman) presents his colleague, agency copy chief Peggy Olson (Elisabeth Moss), with his own severed nipple, placed carefully in a gift box. Ginsberg explains to the understandably horrified Peggy that the gift is both a token of his affection and a means of relieving pressure caused by the arrival of Sterling, Cooper & Partners’ (SC&P) newest acquisition: a humming, room-sized IBM System/360 mainframe computer. Explaining his enmity for the machine and his increasingly erratic behavior, Ginsberg tells Peggy that the “waves of data” emanating from the computer were filling him up, and that the only solution was to “remove the pressure” by slicing off his “valve.”
The arrival of the IBM 360 in the idealized 1960s office space inhabited by Mad Men is obviously an unsettling presence – and not only for Ginsberg. Since its debut in Episode 4, commentators (e.g. WaPo’s Andrea Peterson, Slate’s Seth Stevenson) have meditated on the heavy-handed symbolism surrounding the machine – both in terms of its historical significance and its implications for plot and character development. Since the machine is typically cued through noise (or lack thereof), it is worth reflecting upon the role of sound in establishing the computer as a source of disruption. Between the pounding and screeching of installation and the drone of the completed machine’s air conditioner and tape reels, the sonic motifs accompanying the computer underline tensions between (and roiling within) SC&P staffers grappling with the incipient digital age. Likewise, the infernal racket produced by the installation and operation of the IBM 360 adds an important dimension to the tensions resulting from its presence, which can be read as allegories for the complexities and contradictions of our relationship with technology.
The tone of the conflict is set even before we meet the IBM 360 toward the end of Episode 4: The Monolith – a reference to Kubrick’s 1968 classic 2001: A Space Odyssey (Slate’s Forrest Wickman ably discusses the references). Like the unnerving silence used with such great effect in that film, the absence of sound frames our first encounter with the computer – or at least its promise. Early in the episode, Don Draper (Jon Hamm), newly rehabilitated from his forced exile from the agency, arrives one morning at SC&P to find the office deserted. The ghostly sequence is clearly meant to symbolize Draper’s detachment from the firm. But as the episode progresses and tensions mount over the possibility that the IBM 360 will render jobs obsolete, the desolate office suggests a more ominous meaning – a once lively space muted by cold, impersonal automation.
In the following scenes, successive stages of mainframe installation are marked by convergences of conflict and cacophony. First, there is the din of the creative team as they evacuate their beloved lounge – now earmarked as computer space – during which a distraught Ginsberg projects his indignation onto art director Stan Rizzo, who appears more accepting. “They’re trying to erase us!” Ginsberg exclaims bitterly. Later, Draper lounges on his office couch as a clop clopping of hammers outside signifies tangible change. As if this weren’t enough of a distraction, two men in the corridor begin to chat loudly over the noise. Going out to investigate, Draper strikes up a conversation with one of the men, Lloyd Hawley, installation supervisor and founder of a small technology company competing with IBM. “Who’s winning?” Draper asks innocently, “who’s replacing more people?” Clearly irritated by Draper’s tone, Harry Crane – SC&P media director and the computer’s lead cheerleader – offers Draper a condescending apology for the loss of his “lunchroom” and assures him the change was “not symbolic.” “No, it’s quite literal,” Draper retorts. Unabated, the pounding and screeching of construction work emphasizes his point.
For the remainder of the episode, the raucous noise of construction acts as a leitmotif underscoring tensions between characters – between Peggy and Lou Avery (Draper’s priggish replacement at creative director), and between Draper and the interloper Lloyd. Finally, the end of construction is punctuated by a return to silence, as Peggy arrives one morning to see workers glide mainframe components noiselessly into the office.
With this emphasis on technology as a source of symbolic, physical, and sonic disruption, Matthew Weiner and the creators of Mad Men draw upon a rich literary tradition. A relevant example contemporaneous with the show’s “present” is literary critic Leo Marx’s 1964 text The Machine in the Garden, which examines the complicated relationships between a “pastoral ideal” and technological progress within American literature and popular imagination. Marx’s analysis reveals that sound is often used to convey the disruptive presence of technology within the bucolic landscape of the American continent. In Hawthorne’s Sleepy Hollow, for example, it is the interrupting shriek of a locomotive whistle that breaks the author’s harmonious reverie: “Now tension replaces repose: the noise arouses a sense of dislocation, conflict, and anxiety” (15). In the decidedly un-pastoral modern office space, the noise of the computer installation nevertheless signifies a momentous social change and irrevocable loss. Picking out these tensions has always been one of the show’s strengths – whether it is the computer, Draper’s double identity, or the quiet endurance of women in the face of the misogyny of midcentury work and domestic life.
Change, however, has significant consequences for Ginsberg, the young copywriter and Holocaust survivor who, as CBS’s Jessica Firger observes, has been deteriorating psychologically for some time. The proximity of the IBM 360 and the incessant drone of its mind-controlling waves eventually push him over the edge. As Draper and Peggy enter the office early in Episode 5, Ginsberg glowers into the room housing the IBM 360. “Stop humming, you’re not happy!” he explodes. As Peggy attempts to soothe her colleague, our perspective shifts to look out at them from inside the glass-encased computer room. From here, the mainframe’s ambient noise muffles Peggy’s words, suggesting isolation between human and non-human. This play of speech and silence recurs later in the episode as Ginsberg, working alone on a Saturday with tissues wedged in his ears, spies Lou Avery and SC&P partner Jim Cutler inside the computer room, their voices made inaudible by the droning computer in a delicious homage to 2001 (see Vulture’s amusing gif). But the noise is clearly affecting Ginsberg. “It’s that hum at the office! It’s getting to me!” he tells Peggy later that evening. He even claims the computer has affected his sexuality.
Ginsberg’s noise complaints would have resonated in 1969 New York. In November of that year, the New York Times ran a feature on the city’s nerve-shattering noise pollution, calling it a “slow agent of death.” In addition to the myriad construction projects, subways, car horns, jet planes, and standing machinery populating the city soundscape, office workers found scant respite indoors where phones, air conditioners, “computers and typewriters and tabulators” whirred, whined, and clacked throughout the day. The article went on to report that scientists studying the impact of prolonged noise exposure on the human body had concluded a variety of ill effects on the heart and nervous system. Though no connection was made between computers and sexuality (as Ginsberg claimed), the article reported that laboratory rats under prolonged noise exposure had indeed “turned homosexual,” an opinion that underlined deterministic associations between sexuality, psychological disorder, and external stimuli.
As SO! editor Jennifer Stoever-Ackerman has argued, noise in midcentury New York also signified a sonic-racial politics, in which the mainstream “listening ear” recoiled at the “noise” created by Black and Puerto Rican others. In terms of Mad Men’s computer, however, it is technology, economic anxiety, and mental illness, rather than ethnicity, that frame sonic disruption. The bases of these tensions are nonetheless similar, and various interactions with SC&P’s IBM 360 demonstrate, as Stoever-Ackerman writes in SO!, “the ways in which Americans have been disciplined to consider some sounds as natural, normal, and desirable, while deeming alternate ways of listening and sounding as aberrant [and] dangerous.” Though similar, the conflict with technology on Mad Men does not suggest a clear us/them, or us/“it,” binary. The banging of construction may be at first antagonistic, but it’s finite – eventually the computer is normalized within the SC&P office space to the extent that Peggy chides Ginsberg’s exasperation in Episode 5 by insisting “it’s just a computer!” Ginsberg’s reaction is more complex, however, implicating a contradictory relationship with technology: once fully installed, has the droning computer become “natural, normal, and desirable” despite previous ambivalence? Is the keen awareness and anxiety towards technology symbolized through Ginsberg (albeit in an extreme form) suggested as the “aberrant” listening practice, or could it be Peggy’s apparent acceptance?
Like most cultural texts set in the past, Mad Men can be read allegorically, as suggesting a certain ordering of meaning and values. From the perspective of those who have long since domesticated computers, the controversies and tropes activated by SC&P’s IBM 360 might strike us as familiar, even quaint. As the sociologist Bruno Latour has argued, however, we would be wise to consider how technology exerts a kind of social agency that structures and impacts our daily lives. As historical symbolism, the sounds and noises of the IBM 360 on Mad Men should remind us that technological progress is not teleological, but a struggle over meaning in which anxieties (about jobs, mind-control, surveillance, subjectivity, etc.) may be variously accommodated, suppressed, or dismissed as irrational.
Featured image: An IBM 360 Mainframe. Borrowed from Wikimedia Commons CC 2.0
Andrew J. Salvati is a Media Studies Ph.D. candidate at Rutgers University. His interests include the history of television and media technologies, theory and philosophy of history, and representations of history in media contexts. Additional interests include play, authenticity, the sublime, and the absurd. Andrew has co-authored a book chapter with colleague Jonathan Bullinger titled “Selective Authenticity and the Playable Past” in the recent edited volume Playing With the Past (2013), and has written a recent blog post for Play the Past titled “The Play of History.”
REWIND!…If you liked this post, you may also dig:
“DIY Histories: Podcasting the Past” -Andrew J. Salvati
“The Noise of SB 1070: Or Do I Sound Illegal to You?”– Jennifer Stoever-Ackerman
“DIANE… The Personal Voice Recorder in Twin Peaks” -Tom McEnaney
This is the opening salvo in Sounding Out!‘s April Forum on “Sound and Technology.” Every Monday this month, you’ll be hearing new insights on this age-old pairing from the likes of Sounding Out! veterano Primus Luta, along with new voices Andrew Salvati and Owen Marshall. These fast-forward folks will share their thinking about everything from Auto-tune to productivity algorithms. So, program your presets for Sounding Out! and enjoy today’s exhilarating opening think piece from SO! Multimedia Editor Aaron Trammell. —JS, Editor-in-Chief
We drafted a manifesto.
Microsoft Research’s New England Division, a collective of top researchers working in and around new media, hosted a one-day symposium on music technology. Organizers Nancy Baym and Jonathan Sterne invited top scholars from a plethora of interdisciplinary fields to discuss the value, affordances, problems, joys, curiosities, pasts, presents, and futures of Music Technology. It was a formal debrief of the weekend’s Music Tech Fest, a celebration of innovative technology in music. Our hosts christened the day, “What’s Music Tech For?” and told us to make bold, brave statements. A kaleidoscope of kinetic energy and ideas followed. And, at 6PM we crumpled into exhausted chatter over sangria, cocktails, and imported beer at a local tapas restaurant.
The day began with Annette Markham, our timekeeper, offering us some tips on how best to think through what a manifesto is. She went down the list: manifestos are primal, they terminate the past, create new worlds, trigger communities, define us, antagonize others, inspire being, provoke action, crave presence. In short, manifestos are a sort of intellectual world-building. They provide a road map toward an imagined future, but in doing so they also work to produce that very future. Annette’s list made manifestos seem like very focused things, and perhaps they usually are. But, having now worked through the process of creating a manifesto with a collective, I would add one more point – manifestos are sloppy.
Our draft manifesto is a collective vision of the present blind spots of music technology and of what we want its future to look like. And although there is general synergy around all of the points within it, that synergy is somewhat muddied by the polyphonic nature of the contributors. A number of discussions over the course of the day were squelched by the incommensurable perspectives of one or two participants. For instance, two scholars argued about whether or not technical platforms have politics. These moments of disagreement, however, only added a brilliant contour to our group jam. Like the distortion cooked into a Replacements single, they only serve to highlight how superb the moments of harmony and agreement are in contrast. This brilliant and ambivalent fuzziness speaks perfectly to the value of radical interdisciplinarity.
These disagreements were exactly the point. Why else would twenty academics from a variety of interdisciplinary fields have been invited to participate? Like a political summit, there were delegates from Biology, Anthropology, Computer Science, Musicology, Science and Technology Studies, and more. Rotating through the room, we did our introductions (see the complete list of participants at the bottom of this post). Our interests were genuine and stated with earnestness. Nancy Baym declared emphatically that music is “a productive site for radical interdisciplinarity,” while Andrew Dubber, the director of Music Tech Fest, noted the centrality of culture to the dialogue. Both music and technology are culture, he argued. The precarity of musical occupations, the gender divide, and the relationship between algorithm and consumer all had to take a central role in our conversation, an inspired Georgina Born demanded. Bryan Pardo, a computer scientist, announced that he was listening with an open mind for tips on how best to design the platforms of tomorrow. Though collegial, our introductory remarks were all political, loaded with our ambitions and biases.
The day was an amazing, free-form brainstorm. An hour and a half long each, the sessions challenged us to answer two big questions: first, what are the problems of music technology, and second, what are some actions and possibilities for its future? Every fifteen or twenty minutes an alarm would ring and tables would exchange members, the new member sharing ideas from the table they came from. At one point I came to a new table telling stories about how music had the power to sculpt social relations, and was immediately confronted with a dialogue about problems of integration in the STEM fields.
In short, the brainstorms were a hodgepodge of ideas. Some spoke about the centrality of music to many cultural practices. Noting the ways in which humans respond to their environments through music, they questioned whether tonal schema were ultimately a rationalization of the world. Though music was theorized as a means of social control, many questions remained about whether it could or should be operationalized as such. Others considered different conversations entirely, touting sustainability and transduction as key factors in an ideal interdisciplinarity and shunning models that either put one discipline in service of another or simply stacked and combined ideas.
Some of the most productive debates centered on the nature of “open” technology. Cultural Studies scholars challenged engineers on their claim that “open source technology” is an unproblematic good, arguing that barriers to access still run along the invisible lines of race, class, and gender. If open source technology is to be the future of music technology, they argued, much work must still be done to foster a dialogue in which many voices can take part.
We also did our best to think up actionable solutions to these problems, but for many it was difficult to dream big when their means were small in comparison. One group wrote, “we demand money,” on a whiteboard in capital letters and blue marker. Funding is a recurrent and difficult problem for many scholars in the United States and similar places where funding for the arts is particularly scarce. On points like this, we all agreed.
We even considered what new spaces of interactivity should look like. Fostering interaction with public works of art, music, performance, and more could go a long way toward convincing policy makers that these fields are, in fact, worthy of greater funding. Could a university be designed to prioritize this public mode of performance and interactivity? Would it have to abandon the cloistered office systems that often prohibit the serendipitous occasion of interdisciplinary discussion around the arts?
There are still many problems with the dream of our manifesto. To start, although we shared many ideas, the vision of the manifesto is, if anything, disheveled and uneven. And though the radical interdisciplinarity we epitomized as a group led to a million excellent conversations, it is still difficult to get a sense of who “we” really are. If anything, our manifesto will be the embodiment of a collective that existed only for a moment and then dispersed, complete with jagged edges and inconsistencies. This gumbo of ideas, for me, is beautiful. Each and every voice adds a little extra to the overall idea.
Ultimately, “What’s Music Tech For?” really got me thinking. Although I remain skeptical that the United States will see funding for the arts as a worthy endeavor anytime soon, I left the event with a number of provocative questions. Am I, as a scholar, too critical of the value of technology, and blind to the ways it often functions to provoke social good? How can technological development be set apart from the demands of the market, and then used to kindle social progress? How is music itself a technology, and when is it used as a tool of social coercion? And finally, what should a radical mode of listening be? And how can future listeners be empowered to see themselves in new and exciting ways?
What do you think?
Our team, by order of introduction:
Mary Gray (Microsoft Research), Blake Durham (University of Oxford), Mack Hagood (Miami University), Nick Seaver (University of California – Irvine), Tarleton Gillespie (Cornell University), Trevor Pinch (Cornell University), Jeremy Morris (University of Wisconsin-Madison), Deirdre Loughridge (University of California – Berkeley), Georgina Born (Oxford University), Aaron Trammell (Rutgers University), Jessa Lingel (Microsoft Research), Victoria Simon (McGill University), Aram Sinnreich (Rutgers University), Andrew Dubber (Birmingham City University), Norbert Schnell (IRCAM – Centre Pompidou), Bryan Pardo (Northwestern University), Josh McDermott (MIT), Jonathan Sterne (McGill University), Matt Stahl (Western University), Nancy Baym (Microsoft Research), Annette Markham (Aarhus University), and Michela Magas (Music Tech Fest Founder).
Read the Manifesto here and sign on if you dig it. . . http://www.musictechifesto.org/
Aaron Trammell is co-founder and Multimedia Editor of Sounding Out! He is also a Media Studies PhD candidate at Rutgers University. His dissertation explores the fanzines and politics of underground wargame communities in Cold War America. You can learn more about his work at aarontrammell.com.
REWIND! . . .If you liked this post, you may also dig:
Sounding Out! Podcast #15: Listening to the Tuned City of Brussels, The First Night— Felicity Ford and Valeria Merlini
“I’m on my New York s**t”: Jean Grae’s Sonic Claims on the City— Liana Silva-Ford
The podcast is (or, can be) an intimate performance venue on the internet because it allows you to whisper into the ears of your fans. It allows you to grow close to communities of listeners. And podcasts also do one last thing, to be revealed at the end of this piece, after we’ve seen how far I can take you. For now, I will quote podcasters I admire to help explain these ideas in their own words. I also quote these podcasters in an audio format. I have recorded this essay as an episode of the Sounding Out! podcast. You can listen right here, and I suggest you do. Go ahead, press play:
The podcast is what’s happening if you’re listening to these words. Are you? Because remember: my central claim is that a podcast is an intimate performance venue on the internet. Keep that in mind.
I am a podcaster, musician, and assistant professor of Economics. I have released six episodes of my own podcast, The Lion in Tweed.
My podcast is primarily a narrative, with music and sound effects interwoven. It is about a character, The Lion in Tweed, and his experiences as a musician and professor of economics. He is also a lion. The second half of each podcast episode is a references section where I cite my sources and leave the fictional Tweedoverse canon to discuss real things.
I am not the first podcaster to remark on the ability of podcasts to traffic in intimacy. Chris Hardwick, of the Nerdist podcast, has claimed that, due to their intimacy, podcasts are the best medium going. He stressed: “you’re talking directly into the ears of your listeners.” There is no doubt that Hardwick was referring to those white iPod earbuds, which are a primary method of listening to podcasts. Part of a podcast’s intimacy comes from the closeness of the earbud to the membrane of the ear. Listening through earbuds evokes the intimate, physical closeness of someone whispering in your ear. Hardwick’s style of podcast is also intimate: vocal storytelling, mostly three comedians talking about the funniest things that have happened recently. The fact that it is not visual accentuates the intimacy of “whispering in your ears.” Although I would like to argue that the visual always reduces a sense of intimacy, I may simply prefer the sonic over the visual. Nevertheless, I believe that the intimacy to which Hardwick refers is tied to the fact that the medium is only sonic.
Podcasts as performance have a strange kind of liveness: episode-to-episode interactivity. By this I mean that they are not immediate; they lack the urgency of a theater-goer’s applause or a heckler’s retort. Though not immediate, they are still dynamic in their episodic pacing, and, unlike heckling, almost completely positive. This sense of long-term interactivity provides a foundation for understanding a second way that podcasts are intimate: they can cultivate intimate and interconnected communities of listeners.
They [Stop Podcasting Yourself, hosted by Graham Clark and Dave Shumka] have a really good community of people, community interaction: people send them stuff, sometimes people send them stuff unprompted. And they have a phone number [for people to call in messages that they play].
To illustrate how interconnected this community is, let me describe where this quote came from. It is a clip of Dan Sai, recorded by Davin Pavlas at MaxFunCon (the annual convention of the MaximumFun.org podcast label). I know Davin because of our mutual love of MaxFun podcasts. When I brought The Lion in Tweed into the world, I advertised it on podcasts in the MaxFun network. When Davin heard the description, he began to listen to my podcast. Now we are collaborating on an episode of The Lion in Tweed, which will quote these very words when it comes out two weeks from now. Similarly, UK resident Will Owens and I exchanged tweets after he started listening to my podcast; I found out he reviews various narrative media on his website, and now he has written a review of my podcast, which we both promote. Ours is a community in which a feeling of value comes with a sense of connectedness. The podcasts give us a shared culture.
SO IN PODCASTS, WE FIND A MEDIUM that is both sonic and vocal. They provide a platform for intimate and interconnected communities, which are rooted in an alternative kind of interactivity (long-term liveness), to grow. The whisper-in-the-ear quality of podcasts, as well as the feeling of community, all but completely explain why podcasts are so intimate.
Podcasts may be hip and modern, but they are not ironic. Podcasts represent a distillation of what the podcasters genuinely love, and in that they find their authenticity. According to Paul F. Tompkins, a comedian and podcaster:
It’s very freeing to be able to say: “Here are all the things that I like; I’m going to put them all into this [podcast].”
That quote comes at minute 50:32 of episode 33 of the Nerdist podcast, hosted by Chris Hardwick with Paul F. Tompkins as a guest. Tompkins continues:
I can mostly just do things that I am interested in, and so I don’t have to do something that is false to me, and I can let my guiding light be, “Do I like this and think it’s worth doing?”
And we see that authenticity completes the puzzle: podcasts are intimate because they feel so real. In a podcast, it feels like you are listening to a real person because you are listening to the things that a real person loves…and interacting with real people is much more intimate than feeling like you are interacting with a marketing department (as you may when listening to a CD or a radio show).
This is how I construct intimate performance venues: Audio-only, voice/storytelling focused, in which I try to build and exploit supportive, interconnected communities of fans with a shared culture (the podcasts). And, in doing so, I try to remain true to what I truly love. This authenticity, I believe, deepens the intimacy.
In the background of this podcast episode, Andreas plays an instrumental cover of “Bound for Hell” by Love and Rockets.