Musical Encounters and Acts of Audiencing: Listening Cultures in the American Antebellum

Editor’s Note: Sound Studies is often accused of being a presentist enterprise, too fascinated with digital technologies and altogether too wed to the history of sound recording. Sounding Out!’s last forum of 2013, “Sound in the Nineteenth Century,” addresses this critique by showcasing the cutting-edge work of three scholars whose diverse, interdisciplinary research is located soundly in the era just before the advent of sound recording: Mary Caton Lingold (Duke), Caitlin Marshall (Berkeley), and Daniel Cavicchi (Rhode Island School of Design). In examining nineteenth-century America’s musical practices, listening habits, and auditory desires through SO!’s digital platform, Lingold, Marshall, and Cavicchi perform the rare task of showcasing how history’s sonics had a striking resonance long past their contemporary vibrations, while performing the power of the digital medium as a tool through which to, as Early Modern scholar Bruce R. Smith dubs it, “unair” past auditory phenomena–all the while sharing unique methodologies that neither rely on recordings nor bemoan their lack. The series began with Mary Caton Lingold’s exploration of the materialities of Solomon Northup’s fiddling as self-represented in 12 Years a Slave. Last week, Caitlin Marshall treated us to a fascinating new take on Harriet Beecher Stowe’s listening practice and dubious rhetorical remixing of black sonic resistance with white conceptions of revolutionary independence. Daniel Cavicchi closes out “Sound in the Nineteenth Century” and 2013 with an excellent meditation on listening as a vibrant and shifting historical entity. Enjoy! —Jennifer Stoever-Ackerman, Editor-in-Chief

 —

“To listen” is a straightforward enough verb, signifying a kind of hearing that is directed or attentive. Add an “er” suffix, however, and “listen” moves into a whole new realm: it is no longer something one does, an attentive response to stimuli, but rather something one is, a sustained role or occupation, even an identity. Everybody listens from time to time, but only some people adopt the distinct social category of “listener.”

And yet listeners have emerged in diverse historical and social contexts. Arnold Hunt, in his recent book The Art of Hearing, for example, points to the congregants of the Church of England in the late sixteenth and early seventeenth centuries, whose sermon-gadding and intense repetitive listening to preachers became a form of popular culture. Shane White and Graham White, in The Sounds of Slavery, argue that early nineteenth-century black slaves adopted listening, or “acting soundly,” as a way of being that gave everyday sounds—conversation, cries of exertion, hymns—multiple layers of meaning and a power unknown to white overseers. Jonathan Sterne, in The Audible Past, describes the post-Civil War culture of sound telegraphy, in which young working class men trained themselves to employ “audile technique” for bureaucratic purposes, rendering their hearing objective, standardized, and networked.

Physical manifestations of the growing standardization of listening, Dodge’s Institute of Telegraphy, circa 1910 – Valparaiso, Indiana, Image by Flickr User Mr. Shook

We might add our own contemporary iPod era to these examples. We live in a time, after all, when it is entirely acceptable to appear alone in public, ears connected to an iPod, head bobbing to the grooves of a vast archive of recorded music. Sampling, playlists, streaming–thanks to playback technologies, the U.S. has become a nation of obsessive listeners, and the power to “capture” a sound and re-hear it, something that began with the phonograph, remains a time-bending drama that can awaken people to their own aurality. Technologized listening, in fact, has spawned many of the icons of music discourse in the past 100 years: Edison’s tone testers in the 1910s, record-collecting jitterbugs in the 1930s, audiophiles of the Hi-Fidelity era in the 1950s, Beatles fans with their bedroom record players in the 1960s, the “chair guy” in Memorex’s famous ad campaign of 1980, dancing listeners silhouetted in iPod posters since 2003.

But I think also that phonograph-centric narratives have obscured earlier, equally powerful cultures of listeners. The focus of my recent research, for example, has been the world of antebellum concert audiences. Between 1830 and 1860, the United States developed concentrated population centers filled with boosters and recent migrants eager to embrace a life based on new kinds of economic opportunity. Shaping much of the urban experience was a growing commercialization of culture that generated new and multiple means of musical performance, including parades, museum exhibitions, pleasure gardens, band performances, and concerts. Together, these performances significantly enhanced the act of listening: for people used to having to make music for themselves in order to hear it, a condition common to most Americans before 1830, access to public performances by others provided an opportunity for working and middle-class whites (women, African Americans, and the poor were another matter) to stop worrying about making music and, with the purchase of a ticket, to solely, and at length, assume an audience role.

A young George Templeton Strong, Image from CUNY Baruch

The odd circumstance of purchasing the experience of listening provided class-striving urbanites with new possibilities for self-transformation. For many young, rural, white men, for example, arriving in the city for the first time to take clerking jobs in burgeoning merchant houses, being able to hear diverse performances of music was associated with a cosmopolitanism that brimmed with social possibility. Thus, for instance, Nathan Beekley, a young clerk recently arrived in Philadelphia in 1849, found himself attending multiple performances of music several nights a week, including more and more appearances at the opera as a way to avoid “rowdies.” In New York City during the 1840s, George Templeton Strong, a young lawyer in Manhattan, derided his own musical abilities and instead attended every public musical event he could find, carefully chronicling his listening experiences and analyzing his reactions in a multi-volume journal. Walt Whitman, a young man on the make in Brooklyn and New York between 1838 and 1853, regularly attended every sound amusement he could, including the Bowery Theatre, dime museums, temperance lectures, political rallies, and opera, writing in Leaves of Grass, “I think I will do nothing for a long time but listen/And accrue what I hear into myself.”

This culture of listening was, in many ways, very much unlike ours. Despite an expanded access to performance, for instance, professional concerts before the mid-1850s were often understood as part of a wider ecology of sound. Very few listened to music in ways that we might expect today–focused on a “work,” in a concert hall, without distraction. Listening, in fact, was as much a matter of local happenstance as personal selection—a passing marching band, echoes of evening choir practice at a nearby church, an impromptu singing performance at a party. Such experiences were marked by the momentary thrill of spontaneity and discovery rather than the studied appreciation of familiarity; in any moment of hearing, it was difficult to know how long the encounter might be, or even what sounds, exactly, were being heard. Cities like Boston and New York were especially rich with such surprise encounters.

Thomas Benecke’s lithograph “Sleighing in New York” from 1855, which, among many other sounds, depicts musicians performing on the balcony of Barnum’s Museum on the corner of Broadway and Ann Street.

Francis Bennett, a young arrival to Boston in 1854, for example, encountered, in his first night in the city, a band concert and the “cries” from a “Negro meeting house,” and within weeks became enamored of fife and drum bands, often leaving work to follow one and then another as far as he dared. Young writer J. T. Trowbridge was more stationary but equally enthusiastic about what he heard from his New York rooming house in 1847: “The throngs of pedestrians mingled below, moving (marvelous to conceive) each to his or her ‘separate business and desire;’ the omnibuses and carriages rumbled and rattled past; while, over all, those strains of sonorous brass built their bridge of music, from the high café balcony to my still higher window ledge, spanning joy and woe, sin and sorrow, past and future….”

Music listeners were also often listeners of other forms of commercial sound, especially theater, oratory, and church services, which, together, comprised a complex sonic culture. This was especially reinforced by the physical spaces in which they shared such diverse aural experiences. In a rapidly-growing society, there often was not time or immediate resources to construct buildings dedicated to specific uses; instead, existing structures–typically a “hall” or “opera house”–served mixed uses.

Metropolitan Hall in New York City, where concert singer Elizabeth Taylor Greenfield debuted in 1853. It also hosted abolitionist meetings, talks on women’s rights, and various other activities.

As historian Jeanne Halgren Kilde noted in When Church Became Theater, evangelists in the Second Great Awakening often rented urban theaters for services; and congregations, in turn, rented churches to drama troupes, ventriloquists, and musicians to raise money. This “mixed-use” of buildings was reinforced by hearers, who often engaged in their own “mixed-use” understandings of what they heard. They evaluated sermons as they would a theatrical performance or found church choirs thrillingly entertaining rather than piously inspirational. Conversely, they listened to symphonic concerts with a religious solemnity.

This culture of antebellum, middle-class urban listeners didn’t last long, succumbing to class sorting by post-Civil War social reformers, who mocked the indiscriminate over-exuberance of antebellum listeners as a kind of “mania” and a form of social disorder. As Lawrence Levine explains in Highbrow Lowbrow, over the course of the nineteenth century, developing a “musical ear” became increasingly paramount, reverence for great works of art shaped audience response, and listening became a specific skill to be learned. Music became something to appreciate, not simply hear. By the 1890s, a true listener was someone who, in the words of critic Henry Edward Krehbiel (in his enormously popular How to Listen to Music, from 1897), “will bring his fancy into union with that of the composer” (51).

“Man With the Musical Ear.” Arthur’s Home Magazine (September 1853): 167.

In many ways, the controlled silent listening favored by reformers directly paved the way for music technologies, like the phonograph, that similarly sought to control and manipulate listening. But it was the urban music listeners of the 1840s and 1850s who were responsible, in the first place, for identifying and accentuating the joys and possibilities of “just listening.”

Featured Image: Etching of Jenny Lind Singing at Castle Garden in New York City, 1851

Daniel Cavicchi is Dean of Liberal Arts and Professor of History, Philosophy, and the Social Sciences at Rhode Island School of Design. He is author of Listening and Longing: Music Lovers in the Age of Barnum and Tramps Like Us: Music and Meaning Among Springsteen Fans, and co-editor of My Music: Explorations of Music in Daily Life. His public work has included Songs of Conscience, Sounds of Freedom, an inaugural exhibit for the Grammy Museum in Los Angeles; the curriculum accompanying Martin Scorsese’s The Blues film series; and other projects with the Public Broadcasting System and the National Park Service. He is currently the editor of the Music/Interview series from Wesleyan University Press and serves on the editorial boards of American Music and Participations: the Journal of Audience Research.

REWIND! . . . If you liked this post, you may also dig:

“Como Now?: Marketing ‘Authentic’ Black Music,” –J. Stoever-Ackerman

Hearing the Tenor of the Vendler/Dove Conversation: Race, Listening, and the “Noise” of Texts –Christina Sharpe

How Svengali Lost His Jewish Accent –Gayle Wald

Live Electronic Performance: Theory And Practice

This is the third and final installment of a three-part series on live electronic music. To review part one, “Toward a Practical Language for Live Electronic Performance,” click here. To peep part two, “Musical Objects, Variability and Live Electronic Performance,” click here.

“So often these laptop + controller sets are absolutely boring but this was a real performance – none of that checking your emails on stage BS. Dude rocked some Busta, Madlib, Aphex Twin, Burial and so on…”

This quote, from a blogger reviewing Flying Lotus’s 2008 Mutek set, speaks volumes about audience expectations of live laptop performances. First, the blogger acknowledges that the perception of laptop performances is that they are generally boring, using the “checking your email” adage to drive home the point. He goes on to express what he perceived set Lotus’s performance apart from that standard. Oddly enough, it wasn’t the individuality of Lotus’s sound; rather, it was his particular selection and configuration of other artists’ work into his mix – a trademark of the DJ.

Contrasting this with the review of the 2011 Flying Lotus set that began this series reveals how context and expectations are very important to the evaluation of live electronic performance. While the 2008 piece praises Lotus for a DJ-like approach to his live set, the 2011 critique was that the performance was more of a DJ set than a live electronic performance. What changed in the years between these two sets was audiences’ familiarity with the style of performance (from Lotus and the various other artists on the scene with similar approaches), which shifted expectations. What both reviews lack, however, is a language to provide the musical context for their praise or critique; a language which this series has sought to elucidate.

To put live electronic performances into the proper musical context, one must determine what type of performance is being observed. In this final part of the series, I arrive at four helpful distinctions for comparing and describing live electronic performance, continuing this project of developing a usable aesthetic language and enabling a critical conversation about the art form. The first of the four distinctions between types of live electronic music performance concerns the manipulation of fixed pre-recorded sound sources into variable performances. The second concerns the physical manipulation of electronic instruments into variable performances. The third demarcates the manipulation of electronic instruments into variable performances by the programming of machines. The last is an integrated category that can be expanded to include any and all combinations of the previous three.

Essential to all categories of live electronic music performance, however, is the performance’s variability, without which music—and its concomitant listening practices—transforms from a “live” event to a fixed musical object. The trick to any analysis of such performance is to remember that, while these distinctions are easy to maintain in theory, in performance they quickly blur one into the other, and often the intensity and pleasure of live electronic music performance comes from their complex combinations.

Flying Lotus at Treasure Island, San Francisco on 10-15-2011, image by Flickr User laviddichterman

For example, an artist who performs a set using solely vinyl, with nothing but two turntables and a manual crossfading mixer, falls under the first distinction of live electronic music performance. Technically, the turntables and manual crossfading mixer are machines, but they are being controlled manually rather than performing on their own as machines. If the artist includes a drum machine in the set, however, it becomes a hybrid (the fourth distinction), depending on whether the drum machine is being triggered by the performer (physical manipulation), playing sequences (machine manipulation), or both. Furthermore, if the drum machine triggers samples, it becomes machine manipulation (third distinction) of fixed pre-recorded sounds (first distinction). If the drum machine is used to play back sequences while the artist performs a turntablist routine, the turntable becomes the performance instrument while the drum machine holds as a fixed source. All of these relationships can be realized by a single performer over the course of a single performance, making the whole set of the hybrid variety.
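The category logic above can be sketched in code. The following Python is illustrative only (the class, function, and component names are my own, not the author's); it models the four distinctions and classifies a rig as hybrid whenever its components span more than one base category:

```python
from enum import Enum

class Distinction(Enum):
    FIXED_SOURCE = 1  # manipulation of fixed pre-recorded sound sources
    PHYSICAL = 2      # physical manipulation of electronic instruments
    MACHINE = 3       # machine (programmed) manipulation
    HYBRID = 4        # any combination of the above

def classify(components):
    """Classify a set from the distinctions its components contribute."""
    found = {d for _, d in components}
    if len(found) > 1:
        return Distinction.HYBRID
    return found.pop()

# The turntables-plus-sequenced-drum-machine rig from the example above:
rig = [
    ("turntables", Distinction.FIXED_SOURCE),        # vinyl, manually controlled
    ("drum machine (sequenced)", Distinction.MACHINE),
]
print(classify(rig))  # Distinction.HYBRID
```

As in the prose, the hybrid category falls out of the combination rather than any single component.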

While in practice the hybrid set is perhaps the most common, it’s important to understand the other three distinctions, as each comes with its own set of limitations that define its potential variability. Critical listening to a live performance includes identifying when these shifts happen and how they change the variability of the set. Through combination, the individual limitations can be overcome, increasing the overall variability of the performance. One can see a performer playing the drum machine with pads, correlate that physicality with the sound produced, and then see them shift to playing the turntable and know that the drum machine has shifted to a machine performance. In this example the visual cues are clear indicators, but if one is familiar with the distinctions, the shifts can be noticed from the audio alone.

This blending of physical and mechanical elements in live music performance exposes the modular nature of live electronic performance and its instruments, teaching us that the instruments themselves shouldn’t be looked at as distinction qualifiers, but rather their combination in the live rig and the variability it offers. Where we typically think of an instrument as singular, within live electronic music it is perhaps best to think of the individual components (e.g., turntables and drum machines) as the musical objects of the live rig-as-instrument.

Flying Lotus at the Echoplex, Los Angeles, Image by Flickr User sunny_J

Percussionists are a close acoustic parallel to the modular musical rig of electronic performers. While there are percussion players who use a single percussive instrument for their performances, others will have a rig of component elements to use at various points throughout a set. The electronic performer inherits such a configuration from keyboardists, who typically have a rig of keyboards, each with different sounds, to be used throughout a set. Availing themselves of a palette of sounds allows keyboardists to break out of the limitations of timbre and verge toward the realm of multi-instrumentalists.  For electronic performers, these limitations in timbre only exist by choice in the way the individual artists configure their rigs.

From the perspective of users of traditional instruments, a multi-instrumentalist is one who goes beyond the standard of single-instrument musicianship, representing a musician well versed in performing on a number of different instruments, usually of different categories. In the context of electronic performance, the definition of instrument is so changed that it is more practical to think not of multi-instrumentalists but of multi-timbralists. The multi-timbralist can be understood as the standard in electronic performance. This is not to say there are no single-instrument electronic performers; however, it is practical to think about the live electronic musician’s instrument not as a singular musical object, but rather as a group of musical objects (timbres) organized into the live rig. Because these rigs can be comprised of a nearly infinite number of musical objects, the electronic performer has the opportunity to craft a live rig that is uniquely their own. The choices they make in the configuration of their rig will define not just the sound of their performance, but the degrees of variability they can control.

Because the electronic performer’s instrument is the live rig comprised of multiple musical objects, one of the primary factors in the configuration of the rig is how the various components interact with one another over the time-dependent course of a performance. In a live tape-loop performance, the musician may use a series of cassette players with an array of effects units and a small mixer. In such a rig, the mixer is the primary means of communication between objects. In this type of rig, however, the communication isn’t direct. The objects cannot communicate with each other directly; rather, the artist is the mediator. It is the artist who determines when the sound from any particular tape loop is fed to an effect, and at what levels the effects return sound in relation to the loops. While watching a performance such as this, one would expect the performer to be very involved in physically manipulating the various musical objects. We can categorize this as an unsynchronized electronic performance, meaning that the musical objects employed are not locked into the same temporal relations.
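The mediated routing described above can be reduced to a toy model. Everything here (the loop names, effect names, and `route` function) is hypothetical; the point it illustrates is that the musical objects never talk to each other directly, and the artist's moment-to-moment choices do the connecting:

```python
# Two tape loops and two effects; the objects themselves are inert.
loops = {"loop_a": "drone", "loop_b": "voice"}
effects = {"delay": lambda s: s + " + delay", "reverb": lambda s: s + " + reverb"}

def route(loop_name, effect_name, send_level):
    """The artist-as-mediator decides what feeds what, and how loud."""
    if send_level <= 0.0:
        return loops[loop_name]  # dry signal only, no effect send
    return effects[effect_name](loops[loop_name])

print(route("loop_a", "delay", 0.8))  # drone + delay
print(route("loop_b", "reverb", 0.0))  # voice
```

Because nothing in the model carries shared timing, every temporal relation between loops exists only in the performer's hands, which is what makes the rig unsynchronized.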

Big Tape Loops, Image by Flickr User choffee

The key difference between unsynchronized and synchronized performance rigs is the amount of control over the performance that can be left to the machines. The benefit of synchronized performance rigs is that they allow for greater complexity in either configuration or control. The value of unsynchronized performance rigs is that they have a more natural, less mechanized feel, as the timing flows from the performer’s physical body. Neither can be understood as better than the other, but in general they make for separate kinds of listening experiences, which the listener should be aware of in evaluating a performance. Expectations should shift depending upon whether or not a performance rig is synchronized.

This notion of a synchronized performance rig should not be understood only as an inter-machine relationship. With the rise of digital technology, many manufacturers developed workstation-style hardware that could perform multiple functions on multiple sound sources with central synchronized control. The Roland SP-404 is a popular sampling workstation used by many artists in a live setting. Within this modest box you get twelve voices of sample polyphony, which can be organized with the internal sequencer and processed with onboard effects. A performer may choose not to utilize the sequencer at all, however, and as such it can be performed unsynchronized, just by triggering the pads. In fact, in recent years there has been a rise of drum-pad players, or finger drummers, who perform using hardware machines without synchronization. Going back to our distinctions, a performance such as this would be a hybrid of the physical manipulation of fixed sources with the physical manipulation of an electronic instrument. From this qualification, we know to look for extensive physical effort in such performances as an indicator of the artist’s agency over the variable performance.

Now that we’ve brought synchronization into the discussion, it makes sense to talk about what is often the main means of communication in the live performance rig – the computer. The ultimate benefit of a computer is the ability to process a large number of calculations per computational cycle. Put another way, it allows users to act on a number of musical variables in single functions. Practically, this means the ability to store, organize, recall, and even perform a number of complex variables. With the advent of digital synthesizers, computers were used in workstations to control everything from sequencing to patch sound-design data. In studios, computers quickly replaced mixing consoles and tape machines (even their digital equivalents, like the ADAT), becoming the nerve center of the recording process. Eventually all of these functions and more were able to fit into the small and portable laptop computer, bringing that processing power in a practical way to the performance stage.

Flying Lotus and his Computer, All Tomorrow’s Parties 2011, Image by Flickr User jaswooduk

A laptop can be understood as a rig in and of itself, comprised of a combination of software and plugins as musical objects, which can be controlled internally or via external controllers. If there were only two software choices and ten plugins available for laptops, there would be over seven million permutations possible. While it is entirely possible (and for many artists practical) for the laptop to be the sole object of a live rig, the laptop is often paired with one or more controllers. The number of controllers available is nowhere near the volume of software on the market, but the possible combinations of hardware controllers take the laptop + controller + software combination possibilities toward infinity. With both hardware and software there is also the possibility of building custom musical objects that add to the potential uniqueness of a rig.
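The arithmetic behind that "over seven million" figure is worth making explicit. One way to reach it is to assume the two software choices are exclusive hosts and that the ordering of all ten plugins in the signal chain matters, giving 2 × 10! (this reading of the figure is my own; the text does not spell out the counting rule):

```python
from math import factorial

software_choices = 2
plugins = 10

# Each software host can run the ten plugins in any order in its chain,
# so the count is (number of hosts) x (orderings of ten plugins).
permutations = software_choices * factorial(plugins)
print(permutations)  # 7257600
```

That is 7,257,600 distinct configurations, comfortably "over seven million" before a single hardware controller is added.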

Unfortunately, it is often impossible to know exactly what range of tools is being utilized within a laptop strictly by looking at an artist on stage. This leads to probably the biggest misconception about the performing laptop musician. As common as the musical object may look on the stage, housed inside of it can be the most unique and intricate configurations music (yes, all of music) has ever seen. The reductionist thought that laptop performers aren’t “doing anything but checking email” is directly tied to the acousmatic nature of the objects as instruments. We can hear the sounds, but determining the sources and understanding the processes required to produce them is often shrouded in mystery. Technology has arrived at the point where what one performs live can precisely replicate what one hears in recorded form, making it easy to leap to the conclusion that all laptop musicians do is press play.

Indeed some of them do, but to varying degrees a large number of artists are actively doing more during their live sets. A major reason for this is that one of the leading Digital Audio Workstations (DAWs) of today also doubles as a performance environment. Designed with the intent of taking the DAW to the stage, Ableton Live gives artists an interface that facilitates the translation of electronic concepts from the studio to the stage. There is a world of things possible just by learning the Live basics, but there’s also a rabbit hole of advanced functions, all the way to the modular Max for Live environment, which lies on the frontier of discovering new variables for sound manipulation. For many people, however, the software is powerful enough at the basic level of use to create effective live performances.

Sample Screenshot from a performer's Ableton Live set up for an "experimental and noisy performance" with no prerecorded material, Image by Flickr User Furibond

Sample Screenshot from a performer’s Ableton Live set up for an “experimental and noisy performance” with no prerecorded material, Image by Flickr User Furibond

In its most basic use case, Ableton Live is set up much like a DJ rig, with a selection of pre-existing tracks queued up as clips which the performer blends into a uniform mix, with transitions and effects handled within the software. The possibilities expand out from there: the multi-track parts of a song separated into different clips, so the performer can take different parts in and out over the course of the performance; a plugin drum machine providing an additional sound source on top of the track(s); or, alternately, the drum machine holding a sequence while track elements are laid on top of it. With the multitude of plugins available, even the combination of multi-track Live clips with a single soft-synth plugin lends itself to near-infinite combinations. The variable possibilities of this type of set, even while not exploiting the full breadth of variability the gear presents, clearly point to the artist’s agency in performance.

Within the context of both the DJ set and the Ableton Live set, synchronization plays a major role in contextualization. Both categories of performance can be either synchronized or unsynchronized. The tightest of unsynchronized sets will sound synchronized, while the loosest of synchronized sets will sound unsynchronized. This plays very much into audience perception of what they are hearing and the performers’ choice of synchronization and tightness can be heavily influenced by those same audience expectations.

A second performance screen capture by the same artist, this time using pre-recorded midi sequences, Image by Flickr User Furibond

A second performance screen capture by the same artist, this time using pre-recorded midi sequences, Image by Flickr User Furibond

A techno set is expected to maintain somewhat of a locked groove, indicative of a synchronized performance. A synchronized rig, either on the DJ side (Serato utilizing automated beat matching) or on the Ableton side (time-stretching and auto BPM detection synced to MIDI), can make this a non-factor for the physical performance, so in listening to such a performance it is the variability of other factors that reveals the artist’s control. For the DJ, those factors would include selection, transitions, and effects use. For the Ableton user, they can include all of those things as well as control over the individual elements in tracks and potentially other sound sources.
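The tempo-locking described here reduces to simple arithmetic: the time-stretch (or playback-rate) ratio is the master tempo divided by a track's native tempo. A minimal sketch, with hypothetical BPM values:

```python
def stretch_ratio(source_bpm: float, target_bpm: float) -> float:
    """Playback-rate multiplier that locks a track to the master tempo."""
    return target_bpm / source_bpm

# Warping a 122 BPM track into a 128 BPM techno set:
print(round(stretch_ratio(122.0, 128.0), 4))  # 1.0492
```

When the rig computes this ratio automatically, beat matching stops being a physical task, which is exactly why the listener must look to other variables for evidence of the artist's control.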

On the unsynchronized end of the spectrum, a vinyl DJ could accomplish the same mix as the synchronized DJ set, but it would physically require more effort on their part to keep all of the selections in time. This might mean limiting the control they exert over the other variables. An unsynchronized Live set would utilize the software primarily as a sound source, without MIDI sync, placing the timing in the hands of the performer. With the human element added to the timing, it would be more difficult to produce the machine-like timing of the other sets. This doesn’t mean it couldn’t be effective, but there would be an audible difference between this type of set and the others.

What we’ve established is that, through the modular nature of the electronic musician’s rig-as-instrument, from synthesizer keyboards to Ableton Live, every variable consideration combines to produce infinite possibilities. Where the trumpet is limited in timbre, range, and dynamics, the turntable has infinite timbres; its range is the full spectrum of human hearing; and its dynamics are directly proportional to the output. The limitations of the electronic musician’s instrument appear only in electrical current constraints, processor speed limits, the selection of components, and the limitations of the human body.

Flying Lotus at Electric Zoo, 2010, Image by Flickr User TheMusic.FM

Flying Lotus at Electric Zoo, 2010, Image by Flickr User TheMusic.FM

Within these constraints, however, we have only begun to scratch the surface of possibilities. There are combinations that have never happened, variables that haven’t come close to their full potential, and a multitude of variables that have yet to be discovered. One thing the electronic artist can learn from jazz toward maximizing that potential is the notion of play, as epitomized by jazz improvisation. For jazz, improvisation opened up the possibilities of the form, which impacted both performance and composition. I contend that the electronic artist can push the boundaries of variable interaction by incorporating the ability to play from the rig, both in their physical performance and by giving the machine its own sense of play. Within this play lie the variables which I believe can push electronic music into the jazz of tomorrow.

Featured Image by Flickr User Juha van ‘t Zelfde

Primus Luta is a husband and father of three. He is a writer and an artist exploring the intersection of technology and art, and their philosophical implications. In 2014 he will be releasing an expanded version of this series as a book entitled “Toward a Practical Language: Live Electronic Performance”. He is a regular guest contributor to the Create Digital Music website, and maintains his own AvantUrb site. Luta is a regular presenter for the Rhythm Incursions podcast series with his monthly show RIPL. As an artist, he is a founding member of the live electronic music collective Concrète Sound System, which spun off into a record label for the exploratory realms of sound in 2012.

REWIND! . . . If you liked this post, you may also dig:

Evoking the Object: Physicality in the Digital Age of Music –Primus Luta

Experiments in Agent-based Sonic Composition–Andreas Pape

Calling Out To (Anti)Liveness: Recording and the Question of Presence –Osvaldo Oyola