
DIY Histories: Podcasting the Past


This is article 3.0 in Sounding Out!‘s April Forum on “Sound and Technology.” Every Monday this month, you’ll be hearing new insights on this age-old pairing from the likes of Sounding Out! veteranos Aaron Trammell and Primus Luta along with new voices Andrew Salvati and Owen Marshall. These fast-forward folks will share their thinking about everything from Auto-tune to techie manifestos. Today, Salvati asks if DIY podcasts are allowing ordinary people to remix the historical record. Let’s subscribe and press play. –JS, Editor-in-Chief

 

Was Alexander the Great as bad a person as Adolf Hitler? Will our modern civilization ever fall like civilizations from past eras?

According to Dan Carlin’s website, these are the kinds of speculative “outside-the-box” perspectives one might expect from his long-running Hardcore History podcast. In Carlin’s hands, the podcast is a vehicle for presenting dramatic accounts of human history that are clearly meant to entertain, and that are quite distinct from what we might recognize as academic history. Carlin, a radio commentator and former journalist, would likely agree with this assessment. As he frequently emphasizes, he is a “fan” of history and not a professional. But while there are particularities of training, perspective, and resources that may distinguish professional and popular historians, an oppositional binary between these kinds of historymakers risks overlooking the plurality of historical interpretation. Instead, we might notice how history podcasters like Carlin utilize this new sonic medium to continue a tradition of oral storytelling that in the West goes back to Herodotus, and that has since been a primary means for marginalized and oppressed groups to preserve cultural memory. As a way for hobbyists and amateurs to create and share their own do-it-yourself (DIY) histories, I argue, audio podcasting suggests a democratization of historical inquiry that greatly expands the possibilities for everyone, as Carl Becker once said, to become his or her own historian.

"Modified Podcast Logo with My Headphones Photoshopped On" by Flickr user Colleen AF Venable, CC BY-SA 2.0

“Modified Podcast Logo with My Headphones Photoshopped On” by Flickr user Colleen AF Venable, CC BY-SA 2.0

Frequently listed among iTunes’ top society and culture podcasts, and cited by several history podcasters as the inspiration for their own creations, Hardcore History owes its popularity to Carlin’s unconventional and dramatic recounting of notable (but sometimes obscure) historical topics, in which he often elaborates historical-structural changes through contemporizing metaphors. Connecting the distant past to more immediate analogies of present life is the core of Carlin’s explanatory method. This form of explanation is quite distinct from the output of academic historians, who assiduously avoid this sort of “presentism.” But as the late Roy Rosenzweig (2000) suggested, it is precisely this kind of conscious and practical engagement with the past – and not the litany of facts in dry-as-dust textbooks – that appeals to non-historians. Rosenzweig and David Thelen have found that most Americans perceive a close connection with the past, especially as it relates to the present, through their personal and family life. Using the medium of podcasting to talk about the past is a new way of making the past vital to the present needs and interests of most people. This is how podcasters make sense of history on their own terms. It is DIY insofar as it is distinct from professional discourse, and less encompassing (and expensive) than video methods.

Podcasts can present an alternative model for making sense of the past – one that underscores the historymaker’s interpretive imprints, and which cultivates a sense of liveness and interactivity. Admittedly, Dan Carlin’s own style can be rambling and melodramatic. But to the extent that he practices history as a kind of storytelling, and acknowledges his own interpretive interventions, Hardcore History, like other independently produced history podcasts (I am thinking of a few of my favorites – Revolutions, The History Chicks, and The British History Podcast), gives listeners the sense that history is not something “out there,” distant from us in the present, but part of a living conversation in the present. Podcasters construct a dialogue about history which, when combined with the interactivity offered by website forums, draws the listener into a participatory engagement. As Rosenzweig and Thelen explain, Americans interested in popular history are skeptical of “historical presentations that did not give them credit for their critical abilities – commercialized histories on television or textbook-driven high school classes.” Such analytic skills are precisely what we as historians and teachers aim to develop in our students. Podcasting, when it constructs a collaborative dialogue in which audience and producer explore history together, can be both a valuable supplement to traditional historiography and a way for people to connect with the past that overcomes the abstraction of textbooks and video.

"The histomap - four thousand years of world history" by Flickr user 图表汇, CC BY-SA 2.0

“The histomap – four thousand years of world history” by Flickr user 图表汇, CC BY-SA 2.0

But is the podcast as intellectually freeing as it might seem? Jonathan Sterne and his co-authors (2008) note that podcasting encompasses a range of technologies and practices that do not necessarily liberate content production from the dominance of established institutions and economies of scale. Indeed, many professional historians and media producers have utilized audio (and sometimes video) podcasting to reach a wider audience. While the History Channel has not (yet) entered the field, one can surely imagine the implications of corporate-produced history content that homogenizes local and cultural particularities, or that presents globalized capitalism as a natural or inevitable historical trajectory.

The kinds of podcasts I am concerned with, however, are created by independent producers taking a DIY approach to content production and historical inquiry. While their resources and motivations may differ, podcasts produced on personal computers in the podcaster’s spare time have an intimate, handcrafted feel that I find more appealing than, say, a podcasted lecture. Ideally, what results is an intimate and episodic performance in which podcasters can, to use Andreas Duus Pape’s phrasing from an earlier Sounding Out! post, “whisper in the ears” of listeners. This intimacy is heightened by the means of access – when I download a particular podcast, transfer it to my iPhone, and listen on my commute, I am inviting the podcaster into my personal sonic space.

Complementing this sense of intimacy is a DIY approach to history practiced by podcasters who are neither professional historians nor professional media producers. Relatively cheap and easy to produce (assuming the necessary equipment and leisure time), podcasting presents a low barrier of entry for history fans inspired to use new media technologies to share their passion with other history fans and the general public. Though a few podcasters acknowledge that they have had some university training in history, they are usually proud of their amateur status. The History Chicks, for example, “don’t claim to know it all,” and say that any pretense toward a comprehensive history “would be kinda boring.” Podcasting and historical inquiry are hobbies, and their DIY history projects allow the relative freedom to have fun exploring and talking about their favorite subject matter – without having to conform to fussy disciplinary constraints. For Jamie Jeffers, creator of The British History Podcast, most people are alienated by the way history gets taught in school. However, “almost everyone loves stories,” he says, and podcasting “allows us to reconnect to that ancient tradition of oral histories.” Others justify the hobby more bluntly. For The History Chicks, women in history are “a perfect topic to sit down and chat about.” Talking about history, arguing about it, is something that history fans (and I include myself here) enjoy. Podcasting can broaden this conversation.

Despite my optimistic tone in this post, however, I do not want to suggest uncritically that the democratizing, DIY aspects I have noted (among just a handful of podcasts) comprise the entire potential of the format. Nuancing a common opposition between the bottom-up potential of podcasting and the prevalent top-down (commercial) model of broadcasting, for example, Sterne and others have asserted that rather than constituting a disruptive technology – as Richard Berry has suggested – podcasting realizes “an alternate cultural model of broadcasting.” Referring to earlier models of broadcasting – such as those Susan Douglas (1992) described in her classic study of early amateur radio – Sterne and company assert that analyses of podcasting should focus not on the technology itself, but on practice; not on the challenge podcasting poses to corporate dominance in broadcasting, but rather on how it might offer a pluralistic model that permits both commercial/elite and DIY/amateur productions.

"Podcast in Retro" by Flickr user David Shortle, CC BY-NC 2.0

“Podcast in Retro” by Flickr user David Shortle, CC BY-NC 2.0

Adapting these recommendations, I argue that podcasting can help us conceptualize an alternate cultural model of history – one that invites reconsideration of what counts as historical knowledge and interpretation, and of who is empowered to construct and access historical discourse. Rather than privileging the empirical or objective histories of academic/professional historians, such an expanded model would recognize the cultural legitimacy of diverse forms of historiographical expression. In other words, history is never “just” history, or “just” facts, but is always a contingent and situated form of knowledge; as Keith Jenkins writes, “interpretations at (say) the ‘centre’ of our culture are not there because they are true or methodologically correct … but because they are aligned to the dominant discursive practices: again power/knowledge” (1991/2003, p. 79). But to reiterate the caution of Sterne and his co-authors, such an alternative model would not necessarily mean a role-reversal between professional and DIY histories. Rather, through podcasting, we might discover alternative ways of performing history as a new oral tradition – of becoming, each of us, our own historian.

Andrew J. Salvati is a Media Studies Ph.D. candidate at Rutgers University. His interests include the history of television and media technologies, theory and philosophy of history, and representations of history in media contexts. Additional interests include play, authenticity, the sublime, and the absurd. Andrew has co-authored a book chapter with colleague Jonathan Bullinger titled “Selective Authenticity and the Playable Past” in the recent edited volume Playing With the Past (2013), and has written a recent blog post for Play the Past titled “The Play of History.”

Featured image: “Podcasts anywhere anytime” by Flickr user Francois, CC BY 2.0

REWIND!…If you liked this post, you may also dig:

“Music is not Bread: A Comment on the Economics of Podcasting”-Andreas Duus Pape

“Pushing Record: Labors of Love, and the iTunes Playlist”-Aaron Trammell

“Only the Sound Itself?: Early Radio, Education, and Archives of ‘No-Sound’”-Amanda Keeler

The Blue Notes of Sampling


This is article 2.0 in Sounding Out!‘s April Forum on “Sound and Technology.” Every Monday this month, you’ll be hearing new insights on this age-old pairing from the likes of Sounding Out! veterano Aaron Trammell along with new voices Andrew Salvati and Owen Marshall. These fast-forward folks will share their thinking about everything from Auto-tune to techie manifestos. So, turn on your quantizing for Sounding Out! and enjoy today’s supersonic in-depth look at sampling from SO! Regular Writer Primus Luta. –JS, Editor-in-Chief

My favorite sample-based composition? No question about it: “Stroke of Death” by Ghostface, produced by The RZA.


As the story supposedly goes, RZA was playing records in the studio when he put on the Harlem Underground Band’s album. It is a go-to album in a sample-based composer’s collection because of its open drum breaks. One such break appears in the band’s cover of Bill Withers’s “Ain’t No Sunshine,” notably used by A Tribe Called Quest on “Everything is Fair.”


RZA, a known break beat head, listened as the song approached the open drums, when the unthinkable happened: a scratch in his copy of the record. Suddenly, right before the open drums dropped, the vinyl created its own loop, one that caught RZA’s ear. He recorded it right there and started crafting the beat.

This sample is the only source material for the track. RZA throws a slight turntable backspin in for emphasis, adding to the jarring feel that drives the beat. That backspin provides a pitch shift for the horn that dominates the sample, changing it from a single sound into a three-note melody. RZA also captures some of the open drums so that the track can breathe a bit before coming back to the jarring loop. As accidental as the discovery may have been, it is a very precisely arranged track, tailor-made for the attacking vocals of Ghostface, Solomon Childs, and the RZA himself.

"How to: fix a scratched record" by Flickr user Fred Scharmen, CC BY-NC-SA 2.0

“How to: fix a scratched record” by Flickr user Fred Scharmen, CC BY-NC-SA 2.0

“Stroke of Death” exemplifies how transformative sample-based composition can be. Other than by knowing the source material, the sample is hard to identify. You cannot tell that the original composition is Withers’s “Ain’t No Sunshine” from the one note RZA sampled, especially considering the note has been manipulated into a three-note melody that appears nowhere in either rendition of the composition. It is sample-based, yes, but also completely original.

Classifying a composition like this as a ‘happy accident’ downplays just how important the ear is in sample-based composition, particularly on the transformative end of the spectrum. J Dilla once said that finding the mistakes in a record excited him, and that it was often those mistakes he would try to capture in his production style. Working with vinyl as a source went a long way in that regard, as each piece of vinyl had the potential to have its own physical characteristics that affected what one heard. It’s hard to imagine “Stroke of Death” being inspired by a digital source. While digital files can have their own glitches, one that would create an internal loop on playback would be rare.

*****

"Unpacking of the Proteus 2000" by Flickr user Anders Dahnielson, CC BY-NC-SA 2.0

“Unpacking of the Proteus 2000” by Flickr user Anders Dahnielson, CC BY-NC-SA 2.0

There has been a change in the sound of sampling over the past few decades. It is subtle but still perceptible; one can hear it even if a person does not know what it is they are hearing. It is akin to the difference between hearing a blues man play and hearing a music student play the blues. They technically are both still the blues, but the music student misses all of the blue notes.

The ‘blue notes’ of the blues were those aspects of the music that could not be transcribed yet were directly related to how the song conveyed emotion. It might be the fact that the instrument was not fully in tune, or the way certain notes were bent but others were not; it could even be the way a finger hit the body of a guitar right after the string was strummed. The practice goes back farther than the blues and ultimately is not exclusive to the African American tradition from which the phrase derives; most folk music traditions around the world have parallels. “The Rite of Spring” can be understood as Stravinsky ‘sampling’ the blue notes of Transylvanian folk music. In many regards sample-based composing is a modern folk tradition, so it should come as no surprise that it has its own blue notes.

The sample-based composition work of today is still sampling, but much of it lacks the blue notes that helped define the golden era of the art. I attribute this discrepancy to the evolution of technology over the last two decades. Many of the things that can be understood as the blue notes of sampling were merely ways around the limits of the technology, in the same way that the blue notes of most folk music happened when the emotion went beyond the standards of the instrument (or, alternatively, the limits imposed upon it by the literal analysis of western theory). By looking at how the technology has evolved, we can see how the blue notes of sampling are being lost as key limitations are overcome by “advances.”

First, let’s consider the E-Mu SP-1200, which is still thought to be the most definitive-sounding sampler for hip-hop styled sample-based compositions, particularly for drums. The primary reason for this is its low-resolution sampling and conversion rates. The SP-1200’s Analog-to-Digital (A/D) and Digital-to-Analog (D/A) converters were 12-bit at a sample rate of 26.04 kHz (CD quality is 16-bit at 44.1 kHz). No matter the quality of the source material, there would be a loss in quality once it was sampled into and played out of the SP-1200. This loss proved desirable for drum sounds, particularly when combined with the analog filtering available in the unit, giving them a grit that reflected the environments from which the music was emerging.
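
To make those numbers concrete, here is a minimal sketch (in Python with NumPy; the function name and the naive nearest-neighbor resampling are my own assumptions, not E-Mu’s actual converter design) of how a CD-quality signal degrades at 12-bit/26.04 kHz:

```python
import numpy as np

def sp1200ish(signal, src_rate=44100, dst_rate=26040, bits=12):
    """Roughly emulate SP-1200-style degradation: resample down to
    26.04 kHz, then quantize to 12 bits."""
    # Naive nearest-neighbor decimation; skipping the anti-alias filter
    # keeps some of the aliasing grit described above.
    n_out = int(len(signal) * dst_rate / src_rate)
    idx = (np.arange(n_out) * src_rate / dst_rate).astype(int)
    decimated = signal[idx]
    # Quantize to 2**(bits - 1) levels per polarity across [-1.0, 1.0].
    levels = 2 ** (bits - 1)
    return np.round(decimated * levels) / levels

# Example: one second of a 440 Hz tone at CD quality.
t = np.linspace(0, 1, 44100, endpoint=False)
tone = 0.8 * np.sin(2 * np.pi * 440 * t)
gritty = sp1200ish(tone)
```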

On top of this, individual samples could only be 2.5 seconds long, with a total available sample time of only 10 seconds. While the sample and conversion rates directly affected the sound of the samples, the time limits drove the way composers sampled. Instead of finding loops, beatmakers focused on individual sounds or phrases, using the sequencer to arrange those elements into loops. There were workarounds for the sample time constraints; for example, playing a 33-rpm record at 45 rpm to sample, then pitching it back down post-sampling, was a quick way to increase the sample time. Doing this would further reduce the effective sample rate, but again, that could be sonically appealing.
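
The arithmetic of the 45-rpm workaround is easy to check; a quick back-of-envelope sketch, assuming an exact 45-to-33⅓ speed ratio:

```python
# Playing a 33 1/3 rpm record at 45 rpm speeds it up by 45 / (100/3) = 1.35x,
# so the SP-1200's 2.5-second buffer captures 2.5 * 1.35 = 3.375 seconds of
# source material. Pitching back down restores the duration at the cost of
# effective resolution: 26040 Hz / 1.35 ≈ 19289 Hz.
speedup = 45 / (100 / 3)              # 1.35
captured_seconds = 2.5 * speedup      # 3.375 s of original audio
effective_rate = 26040 / speedup      # ≈ 19289 Hz after pitching down
print(speedup, captured_seconds, round(effective_rate))
```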

An underappreciated limitation of the SP-1200, however, was the visual feedback for editing samples. The display of the SP-1200 was completely alphanumeric; there were no visual representations of the sample other than numbers that were controlled by the faders on the interface. The composer had to find the start and end points of the sample solely by ear. Two producers might edit the exact same kick drum with start times 100 samples (a few milliseconds at the SP-1200’s rate) apart. Were one of the composers to have recorded the kick at 45 rpm and pitched it down, the actual resolution for the start and end times would be different. When played in a sequence, these 100 samples affect the groove, contributing directly to the feel of the composition. The timing of when the sample starts playback combines with the quantization setting and the swing percentage of the sequencer. That difference of 100 samples in the edit further offsets the trigger times, which even with quantization turned off must fit into the machine’s 24-parts-per-quarter grid.
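
For a rough sense of scale, a sketch of what a 100-sample edit difference amounts to against that grid (the 90 BPM tempo below is an arbitrary example):

```python
rate = 26040                      # SP-1200 sample rate in Hz
offset_ms = 100 / rate * 1000     # a 100-sample edit difference ≈ 3.84 ms

bpm = 90                          # arbitrary example tempo
quarter_ms = 60000 / bpm          # ≈ 666.7 ms per quarter note
tick_ms = quarter_ms / 24         # ≈ 27.8 ms per tick at 24 PPQ

# The edit offset is far smaller than one grid tick, so it shifts the felt
# groove beneath the resolution the sequencer can represent or quantize.
print(round(offset_ms, 2), round(tick_ms, 2))
```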

Akai’s MPC-60 was the next evolution in sampling technology. It raised the sample and conversion rates to 16-bit and 40 kHz. Sample time increased to a total of 13.1 seconds (upgradable to 26.2). Sequencing resolution increased to 96 parts per quarter. Gone was the crunch of the SP-1200, but the precision went up both in sampling and in sequencing. The main trademark of the MPC series was the swing and groove that came to Akai from Roger Linn’s Linn Drum. For years shrouded in mystery and considered a myth by many, the timing difference was real: Linn says it was achieved by delaying certain notes by a set number of samples. Combined with the greater PPQ resolution in unquantized mode, this meant that the MPC, even with more precision than the SP-1200, lent itself to capturing user variation.
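
Linn’s delayed-note idea can be illustrated in a few lines. The following is a hypothetical sketch of swing applied to on-grid 16th-note events at 96 PPQ, not Roger Linn’s actual algorithm:

```python
def apply_swing(events, ppq=96, swing_pct=62):
    """Delay off-beat 16th notes, in the spirit of MPC swing.

    events: list of (tick, label) pairs, ticks in PPQ units.
    swing_pct: 50 = straight; the off-beat of each 16th pair lands
    at swing_pct% of the pair's length instead of at the midpoint.
    """
    sixteenth = ppq // 4                 # 24 ticks per 16th at 96 PPQ
    pair = 2 * sixteenth                 # an 8th note = one swing pair
    swung = []
    for tick, label in events:
        if tick % pair == sixteenth:     # this is an off-beat 16th
            tick += round(pair * swing_pct / 100) - sixteenth
        swung.append((tick, label))
    return swung

# Straight 16th-note hi-hats over one quarter note:
hats = [(0, "hh"), (24, "hh"), (48, "hh"), (72, "hh")]
print(apply_swing(hats))   # off-beats land at ticks 30 and 78, not 24 and 72
```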

Despite these technological advances, sample time and editing limitations, combined with the fact that the higher-resolution sampling lacked the character of the SP-1200, kept the MPC from being the complete package sample composers desired. For this reason it was often paired with Akai’s S-950 rack sampler. The S-950 was a 12-bit sampler with a variable sample rate between 7.5 kHz and 40 kHz. The stock memory could hold 750 KB of samples, which at the lowest sample rate could garner upwards of 60 seconds of sampling, and at the higher sample rates around 10 seconds. This was expandable to up to 2.5 MB of sample memory.
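
Those sample-time figures check out on the back of an envelope, assuming 12-bit samples are stored at 1.5 bytes apiece (the S-950’s actual memory layout may differ):

```python
memory_bytes = 750 * 1024          # stock 750 KB of sample memory
bytes_per_sample = 12 / 8          # 12-bit samples at 1.5 bytes each
samples = memory_bytes / bytes_per_sample   # 512,000 samples

for rate in (7500, 40000):
    print(rate, "Hz ->", round(samples / rate, 1), "seconds")
# 7500 Hz  -> 68.3 s  (the "upwards of 60 seconds")
# 40000 Hz -> 12.8 s  (roughly the "around 10 seconds")
```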


Its editing capabilities are what made the S-950 such a powerful sampler. Being able to create internal sample loops, key-map samples to a keyboard, modify envelopes for playback, and take advantage of early time stretching (which would come of age with the S-1000)—not to mention the filter on the unit—helped take sampling deeper into sound design territory. This again increased the variable possibilities from composer to composer, even when working from the same source material. Often paired with the MPC for sequencing, the S-950 gave composers the ultimate sample-based composition workstation.

Today, there are practically no limitations on sampling. Perhaps the subtlest advances have come in the precision with which samples can be edited. With these advances, the biggest shift has been the reduced reliance on ears. Recycle was an early software program that started to replace the ears in the editing process. With Recycle an audio file could be loaded, and the software would chop the sample into component parts by searching for the transients. Utilizing Recycle on the same source, two different composers were far more likely to arrive at a kick sample truncated identically.
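
Recycle’s actual detection method is its own; a crude energy-jump slicer is enough to capture the idea. Everything below (frame size, threshold, function name) is an illustrative assumption:

```python
import numpy as np

def chop_at_transients(signal, frame=512, threshold=4.0):
    """Crude Recycle-style slicing: mark a slice point wherever a frame's
    energy jumps to more than `threshold` times the previous frame's."""
    n_frames = len(signal) // frame
    frames = signal[: n_frames * frame].reshape(n_frames, frame)
    energy = (frames ** 2).mean(axis=1) + 1e-12   # guard against zeros
    slice_points = [0]
    for i in range(1, n_frames):
        if energy[i] / energy[i - 1] > threshold:
            slice_points.append(i * frame)
    slice_points.append(len(signal))
    # Return the audio cut into per-hit segments.
    return [signal[a:b] for a, b in zip(slice_points, slice_points[1:])]
```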

Another factor has been the waveform visualization of samples for editing. Some earlier hardware samplers featured a waveform display for truncating samples, but the graphic resolution within the computer made this even more precise. By looking at the waveform, you are able to edit samples at the point where the waveform crosses the midpoint between the negative and positive sides of the signal, known as the zero-crossing. The advantage of editing at zero-crossings is that it prevents the errors that occur when playback jumps from one side of the zero point to another point in a single sample, which can make the edit point audible as a break in the waveform. The end result of zero-crossing-edited samples is a seamlessness that makes them sound like they naturally fit into a sequence without audible errors. In many audio applications, snap-to settings mean that edits automatically land on zero-crossings—no ears needed to get a “perfect” sounding sample.
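
A minimal sketch of that snap-to behavior (the function name is hypothetical):

```python
import numpy as np

def snap_to_zero_crossing(signal, index):
    """Move an edit point to the nearest sample where the waveform
    changes sign, avoiding an audible click at the cut."""
    sb = np.signbit(signal)
    crossings = np.where(sb[:-1] != sb[1:])[0]   # sign changes
    if len(crossings) == 0:
        return index                             # nothing to snap to
    return int(crossings[np.argmin(np.abs(crossings - index))])

# Example: trim a 220 Hz tone so both edit points sit on zero-crossings.
t = np.linspace(0, 0.1, 4410, endpoint=False)
wave = np.sin(2 * np.pi * 220 * t)
clean = wave[snap_to_zero_crossing(wave, 1000):snap_to_zero_crossing(wave, 4000)]
```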

It is interesting to note that with digital files it’s not about recording the sample, but editing it out of the original file. It is much different from having to put the turntable on 45 rpm to fit a sample into 2.5 seconds. Another differentiation between digital sample sources is the quality of the files: original digital files (CD quality or higher), lossless compression (FLAC), lossy compression (MP3, AAC), or the least desirable though most accessible, transcodes (lossy compression recompressed, such as YouTube rips). These all degrade quality differently than the SP-1200 did. Where the SP-1200’s downsampling often led to fatter sounds, these forms of compression trend toward thinner-sounding samples.


Some producers have created their own sound using thinned-out samples with the same level of sonic intent as The RZA’s on “Stroke of Death.” The lo-fi aesthetic is often an attempt to capture a sound that parallels the golden era of hardware-based sampling. Some software samplers, for example, have an SP-1200 emulation button that reduces the bit depth to 12-bit. Most software sequencers have groove templates that let them emulate feels like the MPC timing.

Perhaps the most important part of the sample-based composition process, however, cannot be emulated: the ear. The ear in this case is not so much about identifying the hot sample. Decades of history should tell us that the hot sample is truly a dime a dozen. It takes a keen composer’s ear to hear how to manipulate those sounds into something uniquely theirs. Being able to listen for that and then create that unique sound—utilizing whatever tools—that is the blue note of sampling. And there is simply no way to automate that process.

Featured image: “Blue note inverted” by Flickr user Tim, CC BY-ND 2.0

Primus Luta is a husband and father of three. He is a writer and an artist exploring the intersection of technology and art, and their philosophical implications. He maintains his own AvantUrb site. Luta was a regular presenter for Rhythm Incursions. As an artist, he is a founding member of the collective Concrète Sound System, which spun off into a record label for the exploratory realms of sound in 2012. Recently Concrète released the second part of their Ultimate Break Beats series for Shocklee.

REWIND!…If you liked this post, you may also dig:

“SO! Reads: Jonathan Sterne’s MP3: The Meaning of a Format”-Aaron Trammell

“Remixing Girl Talk: The Poetics and Aesthetics of Mashups”-Aram Sinnreich

“Sound as Art as Anti-environment”-Steven Hammer

Sound Off! // Comment Klatsch #16: Sound and Pleasure

klatsch \KLAHCH\, noun: A casual gathering of people, esp. for refreshments and informal conversation. [German Klatsch, from klatschen, to gossip, make a sharp noise; of imitative origin.] (Dictionary.com)

Dear Readers:  Team SO! thought that we would warm up the dance floor for our upcoming Summer Series on Sound and Pleasure (peep the Call for Posts here. . .pitches are due by 4/15/14).   –J. Stoever, Editor-in-Chief

What sounds give you pleasure and why? 

Comment Klatsch logo courtesy of The Infatuated on Flickr.

 

Revising the Future of Music Technology


This is the opening salvo in Sounding Out!‘s April Forum on “Sound and Technology.” Every Monday this month, you’ll be hearing new insights on this age-old pairing from the likes of Sounding Out! veterano Primus Luta, along with new voices Andrew Salvati and Owen Marshall. These fast-forward folks will share their thinking about everything from Auto-tune to productivity algorithms. So, program your presets for Sounding Out! and enjoy today’s exhilarating opening think piece from SO! Multimedia Editor Aaron Trammell. –JS, Editor-in-Chief

We drafted a manifesto.

Microsoft Research’s New England Division, a collective of top researchers working in and around new media, hosted a one-day symposium on music technology. Organizers Nancy Baym and Jonathan Sterne invited top scholars from a plethora of interdisciplinary fields to discuss the value, affordances, problems, joys, curiosities, pasts, presents, and futures of Music Technology. It was a formal debrief of the weekend’s Music Tech Fest, a celebration of innovative technology in music. Our hosts christened the day, “What’s Music Tech For?” and told us to make bold, brave statements. A kaleidoscope of kinetic energy and ideas followed. And, at 6PM we crumpled into exhausted chatter over sangria, cocktails, and imported beer at a local tapas restaurant.

The day began with Annette Markham, our timekeeper, offering us some tips on how to best think through what a manifesto is. She went down the list: manifestos are primal, they terminate the past, create new worlds, trigger communities, define us, antagonize others, inspire being, provoke action, crave presence. In short, manifestos are a sort of intellectual world building. They provide a road map toward an imagined future, but in doing so they also work to produce this very future. Annette’s list made manifestos seem to be a very focused thing, and perhaps they usually are. But, having now worked through the process of creating a manifesto with a collective, I would add one more point – manifestos are sloppy.

Our draft manifesto is a collective vision of what the blind spots of music technology are at present, and what we want the future of music technology to look like. And although there is general synergy around all of the points within it, that synergy is somewhat addled by the polyphonic nature of the contributors. A number of discussions over the course of the day were squelched by the incommensurable perspectives of one or two of the participants. For instance, two scholars argued about whether or not technical platforms have politics. These moments of disagreement, however, only added a brilliant contour to our group jam. Like the distortion cooked into a Replacements single, it only serves to highlight how superb the moments of harmony and agreement are in contrast. This brilliant and ambivalent fuzziness speaks perfectly to the value of radical interdisciplinarity.

These disagreements were exactly the point. Why else would twenty academics from a variety of interdisciplinary fields have been invited to participate? Like a political summit, there were delegates from Biology, Anthropology, Computer Science, Musicology, Science and Technology Studies, and more. Rotating through the room, we did our introductions (see the complete list of participants at the bottom of this post). Our interests were genuine and stated with earnestness. Nancy Baym declared emphatically that music is “a productive site for radical interdisciplinarity,” while Andrew Dubber, the director of Music Tech Fest, noted the centrality of culture to the dialogue. Both music and technology are culture, he argued. The precarity of musical occupations, the gender divide, and the relationship between algorithm and consumer all had to take a central role in our conversation, an inspired Georgina Born demanded. Bryan Pardo, a computer scientist, announced that he was listening with an open mind for tips on how to best design the platforms of tomorrow. Though collegial, our introductory remarks were all political, loaded with our ambitions and biases.

The day was an amazing, free-form brainstorm. An hour and a half long each, the sessions challenged us to answer big questions – first, what are the problems of music technology; then, what are some actions and possibilities for its future. Every fifteen or twenty minutes an alarm would ring and tables would exchange members, the new member sharing ideas from the table they came from. At one point I came to a new table telling stories about how music had the power to sculpt social relations, and was immediately confronted with a dialogue about problems of integration in the STEM fields.

In short, the brainstorms were a hodgepodge of ideas. Some spoke about the centrality of music to many cultural practices. Noting the ways in which humans respond to their environments through music, they questioned whether tonal schema were ultimately a rationalization of the world. Though music was theorized as a means of social control, many questions remained about whether it could or should be operationalized as such. Others considered different conversations entirely, jocking sustainability and transduction as key factors in an ideal interdisciplinarity and shunning models that either tried to put one discipline in service of another or simply tried to stack and combine ideas.


Borrowed from Margaret Atwater.

Some of the most productive debates centered on the nature of “open” technology. Engineers were challenged on their claim that “open source technology” is an unproblematic good by Cultural Studies scholars, who argued that barriers to access still run along the invisible lines of race, class, and gender. If open source technology is to be the future of music technology, they argued, much work must still be done to foster a dialogue in which many voices can take part.

We also did our best to think up actionable solutions to these problems, but for many it was difficult to dream big when their means were small in comparison. One group wrote, “we demand money,” on a whiteboard in capital letters and blue marker. Funding is a recurrent and difficult problem for many scholars in the United States and other, similar, locations, where funding for the arts is particularly scarce. On points like this, we all agreed.

We even considered what new spaces of interactivity should look like. Fostering spaces of interaction with public works of art, music, performance and more, could go a long way in convincing policy makers that these fields are, in fact, worthy of greater funding. Could a university be designed so as to prioritize this public mode of performance and interactivity? Would it have to abandon the cloistered office systems, which often prohibit the serendipitous occasion of interdisciplinary discussion around the arts?


Borrowed from bfishadow @Flickr.

 

There are still many problems with the dream of our manifesto. To start, although we shared many ideas, the vision of the manifesto is, if anything, disheveled and uneven. And though the radical interdisciplinarity we epitomized as a group led to a million excellent conversations, it is difficult, still, to get a sense of who “we” really are. If anything, our manifesto will be the embodiment of a collective that existed only for a moment and then dispersed, complete with jagged edges and inconsistencies. This gumbo of ideas, for me, is beautiful. Each and every voice included adds a little extra to the overall idea.

Ultimately, “What’s Music Tech For?” really got me thinking. Although I remain skeptical about the United States seeing funding for the arts as a worthy endeavor anytime soon, I left the event with a number of provocative questions. Am I, as a scholar, too critical about the value of technology, and blind to the ways it does often function to provoke a social good? How can technological development be set apart from the demands of the market, and then used to kindle social progress? How is music itself a technology, and when is it used as a tool of social coercion? And finally, what should a radical mode of listening be? And how can future listeners be empowered to see themselves in new and exciting ways?

What do you think?

Our team, by order of introduction:
Mary Gray (Microsoft Research), Blake Durham (University of Oxford), Mack Hagood (Miami University), Nick Seaver (University of California – Irvine), Tarleton Gillespie (Cornell University), Trevor Pinch (Cornell University), Jeremy Morris (University of Wisconsin-Madison), Deirdre Loughridge (University of California – Berkeley), Georgina Born (University of Oxford), Aaron Trammell (Rutgers University), Jessa Lingel (Microsoft Research), Victoria Simon (McGill University), Aram Sinnreich (Rutgers University), Andrew Dubber (Birmingham City University), Norbert Schnell (IRCAM – Centre Pompidou), Bryan Pardo (Northwestern University), Josh McDermott (MIT), Jonathan Sterne (McGill University), Matt Stahl (Western University), Nancy Baym (Microsoft Research), Annette Markham (Aarhus University), and Michela Magas (Music Tech Fest Founder).

Aaron Trammell is co-founder and Multimedia Editor of Sounding Out! He is also a Media Studies PhD candidate at Rutgers University. His dissertation explores the fanzines and politics of underground wargame communities in Cold War America. You can learn more about his work at aarontrammell.com.

REWIND!…If you liked this post, you may also dig:

Listening to Tinnitus: Roles of Media when Hearing Breaks Down– Mack Hagood

Sounding Out! Podcast #15: Listening to the Tuned City of Brussels, The First Night– Felicity Ford and Valeria Merlini

“I’m on my New York s**t”: Jean Grae’s Sonic Claims on the City– Liana Silva-Ford
