Until the nineteenth century, very little medieval music had been rediscovered, and what little was known was familiar mainly to specialists. However, the nineteenth-century fascination with the Middle Ages, the Gothic Revival in art and literature, inspired composers to write symphonic works and operas using medieval stories and themes. By the time twentieth-century musicologists began decoding and publishing music from the medieval past, a general consensus as to “what sounds medieval” had already been established in the minds of educated listeners familiar with European art music (see Annette Kreuziger-Herr’s 2005 article “Imagining Medieval Music: A Short History”). Ideas of the medieval constructed in the nineteenth century via medievalist works of art persisted well into the twentieth, and eventually the imagined medieval sound of Romantic music turned out to be incompatible with the newly discovered historical record.
While Early Music performers in the 1970s and 1980s based their performance styles on historical research, their artistic work was inevitably conceived in relation to existing concert repertoires and standards of performance, as well as ideas about the Middle Ages known by them and by their audiences. A general goal was to create performances that audiences could feel were convincingly accurate to how the music would have sounded in the past. But a sense of authenticity depends not just on the performers’ historical accuracy, but on a complex interaction of experiences and expectations on the part of the audience.
Two films, Monty Python and the Holy Grail and Knightriders, play directly with the clash between romantic interpretation of the Middle Ages and historically informed performances of music, for comedic effect. In Music in Films on the Middle Ages: Authenticity vs. Fantasy, John Haines examines six tropes of music in medievalist films: bells, horn calls and trumpet fanfares, court and dance music, minstrels, chant, and warriors on horseback. Haines mentions Monty Python and the Holy Grail several times, but does not discuss any of its music; he does not discuss Knightriders, likely because of its modern setting.
This essay examines the way these two films used music to highlight issues of the real, the historical, and the existence of competing medievalisms central to the understanding of medieval music in modern times. In the director’s commentary for the film Monty Python and the Holy Grail (1975), writer/performer/director Terry Jones had this to say about the film’s music:
Neil Innes had originally written the music for the entire film, and when we showed it, it didn’t really seem to work… we’d all agreed that Neil should go for a very authentic sound, and authentic instruments. The trouble is, it sounded quaint, and when we went to do the read-up I realized that actually you needed kind of mock-heroic music. But of course at that stage, we couldn’t afford to go and record some more music, so the only thing I could do was to go to a music library – I went to DeWolfe’s [a music library that licenses stock music] in London, and spent sort of weeks going through their discs, and sorting out bits of music to put on to it, to give it that sort of … it was just that we realized that if you had sort of music that sounded slightly quaint, because it was on original instruments, and you had all these silly goings-on, it looked comic, the music, instead of actually looking, um, real….
They wanted an “authentic sound,” but what they got didn’t seem real: here is the problem of conflicting expectations in a nutshell. Jones likely meant that the newly composed music, including the sounds of pre-modern instruments, was consciously reminiscent of older music as it was beginning to be known. The song sung by Sir Robin’s minstrels is likely a survival of Innes’ “authentic” score. The singer – Innes himself – sings a minor-mode melody reminiscent of a Tudor-era tune (think “Greensleeves”), accompanied by recorders, some reed instrument, and drum beats. The terms “original instruments” or “authentic instruments” didn’t just refer to antique instruments (or copies of antique instruments); they were code for a particular approach to the performance of old music, an approach that was becoming very popular in 1970s Britain (for more on the British Early Music revival, see The Art of Re-Enchantment: Making Early Music in the Modern Age by Nick Wilson).
Jones characterized the pre-existing stock music chosen at De Wolfe’s as “mock-heroic,” but there’s really nothing mock about it per se: it’s genuinely heroic-sounding orchestral music, in the romantic post-Wagnerian style of Hollywood film scores, probably written in the 1950s or 1960s. There are brass fanfares when the company sights Camelot in the distance, angelic choirs for the vision of God who speaks to Arthur from the clouds, and, especially, there is the wonderful music used when the company is galloping with coconuts. Here they are, approaching the castle of Gui de Lombard:
The music for the galloping horses is heard several times during the film, and the contrast between its epic majesty and the sight of King Arthur and his knights not riding horses but rather banging two halves of coconuts together to simulate the sound of hoofbeats enhances the sight gag. The cue consists of four repetitions of the same melody, a strong rhythmic fanfare. The melody itself is made up of two almost identical phrases that each rise and then return down. The rhythm both emphasizes the steady pace and, through its mixture of long and short durations (a so-called “dotted rhythm”), suggests the gait of galloping horses. First we hear percussion, the high-frequency snare drums and lower booming timpani, then the low brass play the melody, harmonized. Higher trumpets join in for the second repeat, with flutes doubling the melody at a higher octave for added brilliance. That combination plays the third repeat, with strings joining for the fourth repeat. The timpani play faster, approaching the fanfare’s final cadence, but it becomes an anti-climax as Patsy, played by Terry Gilliam, can’t really play his long herald’s trumpet. It is a very knowing joke, as onscreen Hollywood heralds are frequently accompanied by the sound of modern valved trumpets playing chromatic phrases impossible to play on the visible long trumpets.
[The whole piece can be heard here; the part heard repeatedly in the film begins at 1:27:]
In its instrumentation, especially the use of brass and percussion, and the dotted rhythms (long, short, long, short, etc.) suggesting hoofbeats, this fanfare draws on a long tradition of music written to suggest galloping horses, whether ridden by medieval knights or other hunters or warriors. The most famous example is probably the finale to the “William Tell” Overture by Gioachino Rossini (1829). That galloping music influenced many film scores featuring knights on horseback, and it is likely that Terry Jones recognized the style from films he could have seen since the 1950s, such as MGM’s Knights of the Round Table (1953), starring Mel Ferrer as King Arthur and the very wooden Robert Taylor as Sir Lancelot, with music by Miklós Rózsa (1907-1995), who wrote similar music for both medieval knights and Roman charioteers, including trumpets to suggest fanfares and uneven rhythms to suggest hoofbeats.
The music in my second example deals directly with the clash between then and now. The 1981 film Knightriders was written and directed by George A. Romero, better known for his horror films (he invented the modern zombie film with 1968’s Night of the Living Dead). Knightriders follows the adventures of a traveling Renaissance-faire troupe of actors who stage jousting tournaments while riding motorcycles; the interactions between the actors come to parallel Arthurian stories. The film revolves around the conflict between modern reality and the desire to live one’s life according to (a modern understanding of) medieval chivalric virtues. This scene shows the troupe’s act: the action is straightforward, but the music is complicated, with the score by Donald Rubenstein using five distinct kinds of music in a theoretically sophisticated manner. In order, we hear: live music played on-screen by visible performers, recorded music played onscreen, and then three musically distinct flavors of underscore music. Watch for yet another knowing gag about the technical capabilities of heralds blowing straight trumpets, as well as a cameo appearance by the writer Stephen King as a scornful man eating a sandwich.
The music performed on-screen functions as part of the scene. The painfully amateur band of minstrels plays a tune from Handel’s oratorio “Judas Maccabaeus” (1746), probably known to the squeaky violinist from its use as a Suzuki teaching piece. We can imagine that these musicians are playing the oldest piece of music they know, and in response the audience—both onscreen and offscreen—is led to believe that the rest of the show will be similarly amateurish. The professionally recorded trumpet fanfare demonstrates again how their (and our) musical associations for medieval scenes, especially tournaments, come so strongly from Hollywood. The actors can’t begin to play their instruments, but everyone knows that the heralds and their fanfares are crucial for the scene they’re trying to set.
When King William and Linet take their thrones, we hear underscore music—guitar and fiddle—for the first time; the shift from source music to underscore is done with great subtlety. This music has the informal flavor of folk music, which connects it to the little onscreen band, but now played professionally. The cue is extremely short, but serves as a transition from the real world of the tacky Ren-fest to the imagined world of chivalry and knighthood in some mythical past enacted by the modern-day warriors. The spectacular motorcycle stunts are underscored by more folk-sounding music: again fiddle and guitar, this time playing a fast jig. This music is exciting, as befits the action, but the specific sounds, reminiscent of Irish folk music, do more than just underscore excitement: they begin to move us backwards in time. Folk music reads as traditional, as old, but not ancient. The folk music sounds inhabit an intermediate place, a remembered past between the present and the imagined distant past of the middle ages.
When the individual jousts begin, the underscore music changes again, first to the heroic type of “Hollywood Knights” music, prompting the audience to equate the “real” riders onscreen with their fictional (and filmic) knightly counterparts. This kind of orchestral music, with prominent strings, is used throughout the film for moments of almost magical transformation, suggesting that for believers, such transformation is indeed possible. For the second and third jousts the underscore changes one last time, to trombones playing very archaic-sounding music. Here at last in 1980 is the sound of “authentic instruments” that Terry Jones eschewed back in 1974, and it doesn’t sound quaint at all here, it sounds real. These riders aren’t (just) carnival performers, they are knights at a tournament, risking bodily injury in their quest for personal excellence.
The music in this sequence plays with a basic distinction in film music and film music scholarship: is the music diegetic, that is, part of the film’s story, or nondiegetic, that is, as Robynn J. Stilwell describes in “The Fantastical Gap between Diegetic and Nondiegetic,” in Beyond the Soundtrack, “an element of the cinematic apparatus that represents that world?” (184). The terms were brought into film music studies by Claudia Gorbman in her pioneering study, Unheard Melodies: Narrative Film Music (1987), and have been a major focus of film music scholarship ever since.
In this sequence, the amateur musicians and the taped fanfare count as diegetic music, while everything else is nondiegetic. Stilwell points out that border crossings between these two categories are common, but meaningful; she coined the term “fantastical gap,” the gap between what we see and hear and what we hear without seeing, to discuss the liminal (conceptual) space between the two (185-187). Stilwell describes movement from diegetic to nondiegetic as a “trajectory [that] takes on great narrative and experiential import. These moments do not take place randomly; they are important moments of revelation, of symbolism, and of emotional engagement within the film and without” (200). The Knightriders tournament sequence, already liminal as a “play within the play” of the film, traverses the fantastical gap in a way that dramatizes a central question for the performance of Early Music: what happens when modern performances are historically informed, but emotionally unsatisfying? For George Romero, as for Terry Jones, the answer is to give priority to the emotional, drawing on their and their audiences’ expectations of “Hollywood Knights” soundtrack music. Early Music performers in the 1980s made similar choices, merging familiarity and unfamiliarity in exciting ways that unfortunately left them open to the criticism that their performances were “inauthentic,” a charge they found themselves defending against.
The modern performance of surviving pieces of medieval music, recorded in notation, will always enact a kind of medievalism, involving the creative use of material survivals from the past. But to understand the goal for the performance of medieval music as historical accuracy alone is to miss an important point. Medievalisms, the creative use of historical material, create meanings for modern audiences, meanings that then help to shape further understanding of the historical past.
The decades since these two films were made and released saw great growth both in artistic approaches to Early Music and in the skill of performers in playing old instruments and singing in newer, non-Classical styles. Professional recordings enjoyed real commercial success as a sub-genre in Classical music starting in the 1980s, accelerating audiences’ familiarity with the novel sounds of old music re-vivified. The 1990s saw “new age” music and Celtic folk-music sounds added to the mix, while performers in the 2000s have drawn inspiration from Greek Orthodox chant and continental folk musics. “What sounds medieval” will always be conditioned by what modern performers and listeners imagine “the medieval” to be, and how “the medieval” is understood to be different from later times. As those imaginings are themselves constantly changing, so will their representations in music.
Featured Image: “Monty Python Coconuts” by Flickr User Mark Turnauckus
Elizabeth Randell Upton is Associate Professor of Musicology at UCLA and the author of Music and Performance in the Later Middle Ages (“The New Middle Ages” series, Palgrave Macmillan, 2013), which examines late fourteenth and early fifteenth century vocal music to discover evidence for the experiences of performers and listeners in the medieval past, recorded in surviving musical notation. Her next book will explore mid-twentieth century Early Music revivals in the UK and US, moving beyond the usual focus on musicological scholarship and classical music traditions to examine Early Music’s interactions with both folk music revivals and popular music.
REWIND!…If you liked this post, you may also dig:
Mouthing the Passion: Richard Rolle’s Soundscapes–Christopher Roman
EPISODE LI: Creating New Words from Old Sounds–Marcella Ernest, Candace Gala, Leslie Harper, and Daryn McKenny
I’m fortunate to have quite a few friends with eclectic musical tastes, who continually expose me to some of the best, albeit often obscure, sources for inspiration. They arrive as random selections sent with a simple “you’d appreciate this” note attached. Good friends that they are, they rarely miss the mark. Most intriguing is when a cluster of things from different people carry a similar theme, converging into a need on my part to take some sort of musical action.
A few years back I received a huge dump of gigabytes of audio and video. Within it was some concert footage and performances this friend and I had been discussing; I consumed those quickly in an effort to keep that conversation going. Tucked amidst that dump, however, was a copy of the movie Liquid Sky. I asked the friend about it because the description of the plot–“heroin-pushing aliens invade 80’s New York”–led me to believe it wasn’t really my thing (not a big fan of needles). Although my friend insisted I’d enjoy it, it took me several months if not a whole year before I finally pressed play.
Even though Liquid Sky was not my favorite movie by any measure, it was immediately apparent to my ears why my friend insisted I check it out. The film’s score was performed completely on a Fairlight CMI, capturing the synthesized undercurrent of the early 80’s New York music scene, more popularly seen in the cult classic Downtown 81, starring Jean-Michel Basquiat. While the performances in that movie are perhaps closer to my tastes, none of them compare to one scene from Liquid Sky that I fell in love with, instantly:
The song grabbed me so much, I quickly churned out a cover version.

Primus Luta, “Me & My Rhythm Box (V1)”
While it felt good to make, there remained something less than satisfying about it. The cover had captured my sound, but at a moment of transition. More specifically, the means by which I was trying to achieve my sound at the time had shifted from a DAW-in-the-box aesthetic to a live performance feel, one that I had already begun writing about here on Sounding Out! in 2013. Interestingly, the inspiration to cover the song pushed me back to my in-the-box comfort zone.
It was good, but I knew I could do more.
As I said, these inspirations tend to group around a theme. Prior to receiving the Liquid Sky dump, I had received an email out of the blue from Hank Shocklee, producer and member of the Bomb Squad. I’ve been a longtime fan, and we had the opportunity to meet a few years prior. Since then he’s played a bit of a mentoring role for me. In the email he asked if I wanted to join an experimental electronic jazz project he was pulling together as the drummer.
I was taken aback. Hank Shocklee asking me to be his drummer. Honestly, I was shook.
Not that I didn’t know why he might think to ask me, but immediately I started to question whether I was good enough. Rather than dwell on those feelings, though, I started stepping up my game. While the project itself never came to fruition, Shocklee’s email led me to building my drmcrshr set of digital instruments.
A year or so later, I ran into Shocklee again when he was in Philadelphia for King Britt’s Afrofuturism event with mutual friend artist HPrizm. By this time I had already recorded the “Me and My Rhythm Box” cover. Serendipitously, HPrizm ended up dropping a sample from it in the midst of his set that night. A month or so later, HPrizm and I met up in the studio with longtime collaborator Takuma Kanaiwa to record a live set on which I played my drmcrshr instruments.

Primus Luta x HPrizm x Takuma Kanaiwa – “Excerpt”
Not too long after, I received an email from NYC-based electronic musician Elucid, saying he was digging for samples on this awesome soundtrack. . .Liquid Sky.
The final convergence point had been hanging over my head for a while. Having finished the first part of my “Toward a Practical Language” series on Live Performance, I knew I wanted the next part to focus on electronic instruments, but wasn’t yet sure how to approach it. I had an inkling about a practicum on the actual design and development of an electronic instrument, but I didn’t yet have a project in mind.
As all of these things, people, and sounds came together–Liquid Sky, Shocklee, HPrizm, Elucid–it became clear that I needed to build a rhythm box.
What stands out in Paula Sheppard’s performance from Liquid Sky is the visual itself. She stands in the warehouse performance space surrounded by 80’s scenesters posing, one hand in the air, mic in the other, while strapped to her side is her rhythm box, the Roland CR-78, wires dangling from it to connect to the venue’s sound system. She hits play to start the beat, launching into her ode to the rhythm machine.
Contextually, it’s far more performance art than music performance. There isn’t much evidence from the clip that the CR-78 is any more than a prop, as the synthesizer lines indicate the use of a backing track. The commentary in the lyrics, however, homes in on an intent to present the rhythm box as the perfect musical companion, reminiscent of comments Raymond Scott often made about his desire to make a machine to replace musicians.
My rhythm box is sweet
Never forgets a beat
It does its rule
Do you want to know why?
It is pre-programmed
Rhythm machines such as the CR-78 were originally designed as accompaniment machines, specifically for organ players. They came pre-programmed with a number of traditional rhythm patterns–the standards being rock, swing, waltz and samba–though the CR-78 had many more variations. Such machines were not designed to be instruments themselves; rather, musicians would play other instruments along with them.
In 1978 when the CR-78 was introduced, rhythm machines were becoming quite sophisticated. The CR-78 included automatic fills that could be set to play at set intervals, providing natural breaks for songs. As with a few other machines, selecting multiple rhythms could combine patterns into new rhythms. The CR-78 also had mute buttons and a small mixer, which allowed slight customization of patterns, but what truly set the CR-78 apart was the fact that users could program their own patterns and even save them.
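The combining and muting behavior described above can be pictured in a few lines of code. The step patterns, voice names, and sixteen-step resolution below are invented for illustration; they are not Roland's actual preset data.

```python
# Hypothetical preset rhythms as 16-step trigger patterns per voice.
ROCK  = {"kick":  [1,0,0,0, 0,0,0,0, 1,0,0,0, 0,0,0,0],
         "snare": [0,0,0,0, 1,0,0,0, 0,0,0,0, 1,0,0,0],
         "hat":   [1,0,1,0, 1,0,1,0, 1,0,1,0, 1,0,1,0]}
SAMBA = {"kick":  [1,0,0,1, 0,0,1,0, 1,0,0,1, 0,0,1,0],
         "snare": [0,0,1,0, 0,1,0,0, 0,0,1,0, 0,1,0,0],
         "hat":   [1,1,1,1, 1,1,1,1, 1,1,1,1, 1,1,1,1]}

def combine(*patterns):
    """Pressing two rhythm buttons at once effectively ORs their triggers."""
    voices = patterns[0].keys()
    return {v: [max(steps) for steps in zip(*(p[v] for p in patterns))]
            for v in voices}

def mute(pattern, *voices):
    """Mute buttons silence a voice without altering the stored pattern."""
    return {v: ([0] * len(s) if v in voices else s)
            for v, s in pattern.items()}

hybrid = combine(ROCK, SAMBA)       # a new rhythm from two presets
quiet = mute(hybrid, "hat")         # same rhythm, hi-hat silenced
```

The point of the sketch is how little machinery "new rhythms from old presets" requires: a bitwise merge plus per-voice mutes already yields far more patterns than the preset list suggests.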
By the time it appeared in Liquid Sky, the CR-78 had already been succeeded by other CR lines culminating in the CR-8000. Roland also had the TR series, beginning with the TR-808, released in 1980; the TR-909 would follow in 1983, the year after Liquid Sky premiered.
In 1980, however, Roger Linn’s LM-1 premiered. What distinguished the LM-1 from other drum machines was that it used drum samples–rather than analog sounds–giving it more “real” sounding drum rhythms (for the time). The LM-1 and its successor, the LinnDrum, both had individual drum triggers for their sounds that could be programmed into user sequences or played live. These features in particular marked the shift from rhythm machines to drum machines.
In the post-MIDI decades since, we’ve come to think less and less about rhythm machines. With the rise of in-the-box virtual instruments, the idea of drum programming limitations (such as those found on most rhythm machines) seems absurd or arcane to modern tastes. People love the sounds of these older machines, as evidenced by the tons of analog drum samples and virtual and hardware clones/remakes on the market, but they want the level of control modern technologies have accustomed them to.

Controlling the Roland CR-5000 from an Akai MPC-1000 using a custom built converter
The general assumption is that rhythm machines aren’t playable in the traditional sense and, given how dated their preset rhythms tend to sound, lacking in modern sensibility. My challenge thus became clear: to build a rhythm machine that would challenge this notion while retaining the spirit of the traditional rhythm box.
Challenges and Limitations
At the outset, I wanted to base my rhythm machine on analog circuitry. I had previously built a number of digital drum machines–both sample and synthesis-based–for my Heads collection. Working in the analog arena allowed me to approach the design of my instrument in a way that respected the limitations my rhythm machine predecessors worked with and around.
By this time I had spent a couple of years mentoring with Jeff Blenkinsopp at The Analog Lab in New York, a place devoted to helping people from all over the world gain “further understanding the inner workings of their musical equipment.” I had already designed a rather complex analog signal processor, so I felt comfortable in the format. However, I hadn’t truly honed my skills around instrument design. In many ways, I wanted this project to be the testing ground for my own ability to create instruments, but prior experience taught me that going into such a complex project without the proper skills would be self-defeating. Even more, my true goal was centered more on functionality than on details like circuit board designs for individual sounds.
To avoid those rabbit holes (at least temporarily; I’ve since gone full circuit design on my analog sound projects), I chose to use DIY designs from the modular synth community as the basis for my rhythm box. That said, I limited myself to designs that featured analog sound sources, and only allowed myself to use designs that were available as PCB only. I would source all my own parts, solder all of my boards, and configure them into the rhythm machine of my dreams.
The wonderful thing about the modular synth community is that there is a lot of stuff out there. The difficult thing about the modular synth community is that there’s a lot of stuff out there. If you’ve got enough rack space, you can pretty much put together a modular that will perform whatever functionality you want. How modules patch together fundamentally defines your instrument, making module selection the most essential process. I was aiming to build a more semi-modular configuration, forgoing the patch cables, but that didn’t make my selection any easier. I wanted to have three sound sources (nominally: kick, snare and hi-hat), a sequencer and some sort of filter, which would all flow into a simple monophonic mixer design of my own.
For the sounds I chose a simple kick module from Barton, and the Jupiter Storm unit from Hex Inverter. The sound of the kick module was rooted enough in the classic analog sound while offering enough modulation points to make it mutable. The triple square wave design of the Jupiter Storm really excited me, as it had the range to pull off hi-hat and snare sounds in addition to other percussive and drone sounds; plus it featured two outputs, giving me all three of my voices from just two PCB sets.
Filters are often considered the heart of a modular setup, as the way they shape the sound tends to define its character. In choosing one for my rhythm machine the main thing I wanted was control over multiple frequency bands. Because there would be three different sound sources, I needed to be able to tailor the filter for a wide spectrum of sounds. As such I chose the AM2140 Resonant Filter.
The AM2140 PCB layout, based on the classic E-mu filter
I had no plans to include triggers for the sounds on my rhythm machine, so the sequencer was going to be the heart of the performance, as it would be responsible for any and all triggering of sounds. Needing to control three sounds simultaneously without any stored memory was quite a tall order, but fortunately I found the perfect solution in the amazing Turing Machine modules. With its expansion board, the Turing Machine can put out four different patterns based on its main pattern creator, which can create fully random patterns or patterns that mutate as they progress.
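Stripped to its essence, the Turing Machine module is a looping shift register whose recycled bit can be randomly flipped. The following is a rough software sketch of that idea, not the circuit itself; the class and parameter names are mine.

```python
import random

class TuringSequencer:
    """Software sketch of a Turing Machine-style shift-register sequencer.

    A ring of bits rotates one step per clock tick. A probability control
    sets the chance that the recycled bit is flipped, so the pattern can
    loop forever (0.0), mutate gradually, or turn fully random (1.0).
    """

    def __init__(self, length=16, probability=0.0, seed=None):
        self.rng = random.Random(seed)
        self.bits = [self.rng.randint(0, 1) for _ in range(length)]
        self.probability = probability

    def step(self):
        bit = self.bits.pop(0)
        if self.rng.random() < self.probability:
            bit ^= 1  # mutate the recycled bit before it re-enters the ring
        self.bits.append(bit)
        return bit  # 1 = trigger a sound on this step

seq = TuringSequencer(length=8, probability=0.0, seed=42)
locked = [seq.step() for _ in range(16)]
# with probability 0.0 the 8-step pattern repeats exactly
```

At probability 0.0 the pattern locks into a repeating loop; raising the knob lets the pattern drift step by step, which is how the module balances repetition against mutation without any stored memory.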
I spent a couple of weeks after getting all the PCBs, parts, and hardware together, wiring and rewiring connections until I got comfortable with how all of these parts were interacting with each other. I was fortunate to happen upon a vintage White Instruments box, which formerly housed an attenuation meter, that was perfect for my machine. After testing with cardboard I laid out my own faceplates and put everything in the box. As soon as I plugged it in and started playing, I knew I had succeeded.

Early test of RIDM before it went in the Box
I call it the RIDM Box (Rhythmically Intelligent Drum Machine Box). I’ve been playing it now for over two years, to the point where today I would say it is my primary instrument. Almost immediately afterward I built a companion piece called the Snare Bender, which works both as a standalone and as a controller for the RIDM Box. That one I did from scratch, hand-wired with no layouts.
My current live rig with the RIDM Box and the Snare Bender (on the right)
While this is by no means a standard approach to modern electronic instrument design (if a standard approach even exists), what I learned through the process is really the value of looking back. With so much of modern technology being future-forward in its approach, the assumption is that we’re at better starting positions for innovation than our predecessors. While we have so many more resources at our disposal, I think the limitations of the past were often more conducive to truly innovative approaches. By exploring those limitations with modern eyes a doorway opened up for me, the result of which is an instrument like no other, past or present.
I will probably continue playing the two of these instruments together for a while, but ultimately I’m leaning toward a new original design that takes the learnings from these projects and fully fleshes out the performing instrument aspect of analog design. In the meantime, my process would not be complete if I did not return to the original inspiration. So I’ll leave you with the RIDM Box version of “Me & My Rhythm Box”—available on my library sessions release for the instrument.
Primus Luta is a husband and father of three. He is a writer and an artist exploring the intersection of technology and art, and their philosophical implications.
REWIND!…If you liked this post, you may also dig:
Heads: Reblurring The Lines–Primus Luta
SO! Amplifies. . .a highly-curated, rolling mini-post series by which we editors hip you to cultural makers and organizations doing work we really really dig. You’re welcome!
What is it about the environmental soundscape that makes us ‘tune in’ or ‘tune out’ to particular sounds? Do we as humans tend to seek out quiet zones for our acoustic pleasure, or are there those among us who find urban soundscapes a more comforting prospect? Researchers at Glasgow Caledonian University in the UK have developed a mobile phone application to allow the personalized assessment of such questions regarding environmental soundscapes. I developed the free Think About Sound interactive map–downloadable as a mobile app and viewable online–to allow users to experience various locations in Glasgow by using 3D audio recordings and panoramic visuals.
By using a self-reporting methodology, Think About Sound removes the listener from traditional laboratory-based soundscape evaluation and locates them in real world experiences as they go about their day-to-day activities. The application aims to find out the various types of sound encountered as users understand them, asking how users feel before and after a recorded sound event and enabling them to describe the circumstances in which they heard the sound event. Think About Sound also asks listeners to provide semantic descriptors for the sound, toward the ultimate aim of creating more sophisticated environmental sound maps which communicate both location-specific sound information and the subjective effect of sound upon the listener.
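The kind of self-report the application gathers can be pictured as a simple record. The field names and values below are illustrative guesses, not the project's actual schema.

```python
import json

# Hypothetical shape of a single Think About Sound submission; every
# field name here is an assumption made for illustration only.
submission = {
    "location": {"lat": 55.8642, "lon": -4.2518},  # central Glasgow
    "timestamp": "2015-06-12T08:45:00Z",
    "sound_event": "church bells over traffic",
    "feeling_before": "neutral",
    "feeling_after": "calm",
    "context": "walking to work",
    "descriptors": ["resonant", "bright", "distant"],
}

# Records like this serialize cleanly for upload to the project's map.
payload = json.dumps(submission)
```

Pairing the before/after feelings with free semantic descriptors is what lets a map communicate the subjective effect of a sound, not just its location.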
To further enrich the experience, data sent from the application can be viewed online at http://www.thinkaboutsound.co.uk/ with an accompanying map where the public can view and audition submissions using the familiar Google map format. You will also find links to download the app in multiple formats.
I hope that by collecting data in this way and at this scale, I can obtain and share a greater understanding of how we perceive soundscapes. The next steps for the project include the development of audio technology to analyze sound recordings, automatically predicting annoyance, valence (the emotional value associated with a stimulus), and the arousal features of environmental sounds for particular users.
While locale remains important, this research has far-reaching implications beyond my local region. Submissions on an international level can help us to understand how we perceive our environmental soundscapes, help shape local noise policy, and provide others with an understanding of sounds in their local area. What I want from you, the reader, is your help via contributions to this worldwide soundscape project. Stop for a minute and take in your sonic surroundings. What can you hear? How does it make you feel? Comfort? Anxiety? . . . Stop for a minute, listen, and think about sound.
Adam Craig is a Ph.D researcher studying at Glasgow Caledonian University in the UK within the School of Engineering and Built Environment. After obtaining a first-class honours in his undergraduate Audio Technology degree in 2011, Adam went on to embark on a Ph.D concentrating his research on using advanced audio technology for the creation of environmental sound maps. He is currently a member of the AudioLab Research team at GCU and is a member of the Institute of Acoustics and the Audio Engineering Society. Outwith his academic research, Adam teaches sound engineering to high-school students at a community-based service within his local education authority, and at West College Scotland.
REWIND!…If you liked this post, you may also dig:
SO! Amplifies: Cities and Memory–Stuart Fowkes
SO! Amplifies: #hearmyhome and the Soundscapes of the Everyday–Cassie J. Brownell and Jon M. Wargo