How do digital media and technologies allow for closer critique and analysis of sound? What relationships or representations emerge between aural media (acoustic space, audio recording, editing, sound as artifact, memory and sound, etc.) & digital tools?

  • Transducing Analysis

    by Casey Boyle — University of Texas-Austin

    Like Kyle, I too have a slight allergy to the prompt’s prompting of critical “analysis,” as well as to the idea of getting “closer” to sound. If sound is anything, it’s not the kind of object that you can get closer to but an altogether different kind of experience. I am informed here in part by David Cecchetto’s description of sound as a kind of non-object in Humanesis. Cecchetto starts by building on Aden Evens’s claim that:

    “‘…to hear is to experience air pressure changing… . One does not hear air pressure, but one hears it change over time [such that] to hear a pitch that does not change is to hear as constant something that is nothing but change. To hear is to hear difference’” (qtd. in Cecchetto 2).

    From this, Cecchetto writes that sound “calls us to think of it as a particular object that has no substance, as a kind of ideal object that nonetheless has real material effects” and that “sound resists being placed at all and is in this sense as much relational as it is differential” (2). Thus, to get closer to something might be to miss altogether what sound does. So, I do not want to rephrase the question as much as I want to engage it differentially. Instead of implicitly considering sound an object that we can get closer to, perhaps we can propose instead that sound is a differential event in which digital media can help us participate.

    So, I’ll play along for a moment.

    Certainly, digital media offer a range of “new” ways to prolong and remediate sonic events in ways that are useful for traditional analysis. On one hand, the lowering costs of high-quality digital recording devices, alongside the ease with which those devices can be deployed and operated, allow for robust inductive sonic inquiry. The ability to capture a great number of recordings and instantly share them leads to really interesting projects. I think immediately of Cities & Memory, a sound mapping project whose reach is the entire globe due largely—if not solely—to digital mediation. On the other hand, similar affordability and ease-of-use dynamics also make editing and analysis tools more accessible through Digital Audio Workstations and other applications for visualizing sounds from a distance. This kind of perspective complements the first in that it lends itself to deductive analysis. These kinds of projects approach sonic experience visually, or at some remove from which sound at scale can be analyzed. Taken together, digital media offer occasions for an increased number of sounds as well as a distanced overall perspective on sonic events.
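
    As a rough illustration of that second, distanced mode, here is a minimal sketch (my own, not a tool named above) that summarizes a folder of recordings into a few coarse descriptors so they can be compared at scale; it assumes the librosa library and a hypothetical local folder of .wav files.

    ```python
    # A minimal sketch of "distant" sonic analysis: summarize a folder of field
    # recordings so they can be compared at scale. The "recordings" folder and
    # the choice of descriptors are illustrative assumptions, not a named project.
    from pathlib import Path

    import librosa
    import numpy as np

    def summarize(path):
        """Return a few coarse descriptors for one recording."""
        y, sr = librosa.load(path, sr=None, mono=True)
        centroid = librosa.feature.spectral_centroid(y=y, sr=sr)
        rms = librosa.feature.rms(y=y)
        return {
            "file": path.name,
            "duration_s": len(y) / sr,
            "mean_centroid_hz": float(np.mean(centroid)),
            "mean_rms": float(np.mean(rms)),
        }

    if __name__ == "__main__":
        for wav in sorted(Path("recordings").glob("*.wav")):  # hypothetical folder
            print(summarize(wav))
    ```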

    What intrigues me most about exploring (notice the shift here from analyzing) sonic experience in and through digital media—in both research and teaching—is how such encounters lend themselves to differential experiences that, as Evens and Cecchetto claim, are constitutive of sound itself. I don’t want to oppose or even oscillate between inductive and deductive modes of analysis as a process of experiencing sound but to suggest that processes of differential experience with sound are what make digital media most useful in exploring sound. Thus, without negating induction or deduction, I want to turn to transduction as the mode of analysis offered (again) by digital media.

    Transduction refers to the process by which energy transforms across media. The quick and easy way to think about this is, of course, the moment when someone speaks and records that voice: an energy traverses media that include the biological, physical, digital, and electrical. This differential is not just the frequency of a sound operating as oscillating air waves but also the media through which that energy travels. Writing about transduction in Keywords in Sound, Stefan Helmreich proposes that “Transduction names how sound changes as it traverses media, as it undergoes transformations in its energetic substrate (from electrical to mechanical, for example), as it goes through transubstantiations that modulate both its matter and meaning” (222) and concludes that “Even the most basic description of sound (as ‘traveling’—that is, as transduced) may be cross-contaminated, crosscut with leading questions as sound cuts across spaces, materials, and infrastructures. We should think, then, not with transduction, but across it” (229). It is easy here to notice a resonance between Helmreich’s urging to think across sound and Cecchetto’s proposal about sound as differential, in that both avoid locating sound anywhere at all, treating it instead as a movement of encounters.
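
    To make the chain of crossings concrete in a schematic way, here is a toy sketch (my illustration, not Helmreich’s or an example from the essay): a crude microphone model and an analog-to-digital converter stand in for two media crossings, and the signal is measurably altered at each one. The models and numbers are assumptions for demonstration only.

    ```python
    # A schematic sketch of transduction as a chain of media crossings: an
    # "acoustic" tone is filtered by a crude microphone model and then quantized
    # by an ADC. Each crossing alters the signal it carries.
    import numpy as np

    sr = 48_000
    t = np.arange(sr) / sr
    acoustic = np.sin(2 * np.pi * 440 * t)            # idealized air-pressure variation

    # "Electrical": a one-pole low-pass standing in for a microphone's response.
    alpha = 0.2
    electrical = np.zeros_like(acoustic)
    for i in range(1, len(acoustic)):
        electrical[i] = electrical[i - 1] + alpha * (acoustic[i] - electrical[i - 1])

    # "Digital": 8-bit quantization standing in for analog-to-digital conversion.
    digital = np.round(electrical * 127) / 127

    print("mean difference introduced at each crossing:",
          float(np.mean(np.abs(acoustic - electrical))),
          float(np.mean(np.abs(electrical - digital))))
    ```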

    In many senses, though, digital media offer nothing new for sonic study, since transduction is the way that any sound moves, digitally inflected or not. What digital media most offer for a study of sound is an intense collection of ways, many-multiple ways, in which sound can be differentially experienced all at once. They make the process quicker–and ever closer in differentials–as sound crosses devices, interfaces, software, and outputs. By encountering sound (and/as sights in the space of a DAW interface) in many and multiple ways across various media, we are afforded the opportunity to transduce across it as well.

     

  • Analysis vs. Experience; or, Another Written Piece about Sound

    by Kyle Stedman — Rockford University

    I’m not sure that I’m the right person to answer this question.

    I mean, look at how the first part of the question focuses on how technology allows “closer critique and analysis of sound.” Yes, sometimes we do need that kind of “closer critique,” as proven by the other posts on this page. (Voice and gender in videogames, analyzed visually with digital tools!? Droumeva, you’re amazing.)

    But if I’m honest, I find myself wanting to question the question. I keep saying, “I value sound more for its effects on my body and mind than for what I can learn by analyzing it.” I like the second half of the question, with that beautifully messy word relationships, more than the first.

    What’s my problem? I’m thinking of Steph Ceraso’s point that “alongside and in addition to semiotic approaches to multimodality, it is necessary to address the affective and embodied, lived experience of multimodality in more explicit ways” (104).

    And I’m thinking of Steven B. Katz, whose book The Epistemic Music of Rhetoric argues for “a phonocentric rather than a logocentric theory of response,” which develops “aural, temporal modes of experiencing and reasoning” (2).

    And this, from David Burrows: “Musical significance doesn't depend, as verbal does, on a lexicon of referents… . The fact remains that living the sound itself, in all its sectors and over all its time spans, is the root of its musical meaning” (89).

    Putting those together: I don’t know how much I want to analyze sound, using technology or not. Or at least, I never want to analyze it in a way that erases Ceraso’s and Katz’s phonocentric theories, where “experiencing” is at least as important as “reasoning.” I never want to analyze sound in a way that erases what Burrows calls “the root of its musical meaning”—that is, “living the sound itself.” I don’t want to try to understand something I’ve never let myself feel.

    I know, I know. Analysis and experience often go together—maybe especially often when we’re studying sound and music. And I know that sometimes my experiences of sounds are heightened when I’ve taken the time to understand what I’m hearing. Sander M. Goldberg makes this point about understanding and experiencing Haydn’s music when he writes, “The pleasure taken in following (or anticipating) a pattern, in catching an allusion, or in associating a present experience with past experiences may be an emotional response, but it rests upon the informed—that is, intellectual—response that makes them possible. Art yields more of its secrets to the knowledgeable ear than to the ignorant one” (55). Yep. Sure. I’ve felt that. I’ve thought that.

    And yet. Don’t some of us scholars overemphasize the analysis? Don’t we write words about sound (like those four printed pieces about sound I just quoted, but also here, look at this page, which you’re experiencing on a screen, a screen with multimedia capabilities built in, your browser is literally built for this stuff)—don’t we write about it and sometimes maybe often forget to let the power of sound affect us, slam into our bodies, tickle the hairs in our ears, remind us of everything we’ve ever loved?

    That’s why I’m perhaps not the right person to answer the question. Not because it’s a bad question, but because I’m worried about my own writing over the years, how at times I’ve overemphasized the analytical over the embodied and the affective. Increasingly, I’d rather create experiences for people (like a DJ) than explain their experiences to them (like a lecturer). But here I am, another person arguing in print about the importance of sound, like Plato furiously scribbling his frustrations about writing, in writing, ignoring the drumbeats and the echoing declamations that were surely not too far from his open window.

     

    Works Cited

     

    Burrows, David. Sound, Speech, and Music. U of Massachusetts P, 1990.

     

    Ceraso, Steph. “(Re)Educating the Senses: Multimodal Listening, Bodily Learning, and the Composition of Sonic Experience.” College English, vol. 77, no. 2, 2014, pp. 102-23.

     

    Goldberg, Sander M. “Performing Theory: Variations on a Theme by Quintilian.” Haydn and the Performance of Rhetoric, edited by Tom Beghin and Sander M. Goldberg, U of Chicago P, 2007, pp. 39-60.

     

    Katz, Steven B. The Epistemic Music of Rhetoric: Toward the Temporal Dimension of Affect in Reader Response and Writing. Southern Illinois UP, 1996.

  • Applying the Enhanced Hearing Abilities of Digital Media to More Deeply Understand the Relational

    by Suzanne Thorpe

    Technologies aren’t new to sound, or to music-making, as those who have crafted our beloved guitars, pianos, and drums, to name a few instruments, know. But digital technologies as instruments of expression now offer vast possibilities of contribution to our musicking abilities, the expanse of which we are just beginning to discover. One attribute that digital media notably contribute to my research and practice of sound art is their ability to hear. Digital technologies are extremely effective at sensing sound and its impacts, and at relaying the gathered information. With various sensors, such as transducers or electrophysiological monitors, and their accompanying analysis tools, such as spectral analysis software, we have access to a wealth of detail regarding the presence and character of sound, its impact on, and its interactions with, acoustic space and the heterogeneous bodies therein.

     

    An enhanced ability to hear has particularly strong applications for those of us working in the practice of site-oriented sound art. Site-oriented sound composition and performance is by nature dialogic, as it originates in situ, arising out of interaction with elements in a place. Listening is, of course, one half of the equation of dialogue, and digital tools bring to the site-oriented sound artist an increased capacity to hear, which in turn contributes to our listening state. With a microphone of the appropriate sensitivity, I can analyze the response of an environment to a sound impulse and adjust my compositional response accordingly. For example, we can sense that the material make-up of a surface might have changed (perhaps due to weather, decay, or overgrowth) because its resonant response has shifted. With this new knowledge, I may change the frequency, duration, or amplitude of my own sonic output, and thus a composition arises. An example of this process is Listening Is As Listening Does, a piece I composed for Caramoor Center for Music and the Arts’ inaugural sound art exhibit, In the Garden of Sonic Delights, Katonah, NY, 2013. Listening Is As Listening Does simulates sonar, a system of navigation that determines the location of objects via reflected sound. In a Spanish-style courtyard at Caramoor, I carefully placed microphones to listen for the echoes of specific impulses generated by software. Using the same software, I was able to analyze qualities of the echo, which changed often due to the shifting content and nature of the outdoor environment, and produce musical responses accordingly, in real time.
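
    For readers who want a concrete sense of this impulse-and-listen loop, here is a minimal sketch of one possible version of it, assuming the python-sounddevice and numpy libraries and a connected audio interface; the probe signal, the descriptors, and the echo-to-pitch mapping are my own illustrative choices, not the software used at Caramoor.

    ```python
    # A minimal impulse-and-listen loop: send a short click into a space, record
    # the room's response, and derive a simple compositional decision from it.
    # The echo-to-pitch mapping below is illustrative, not Thorpe's actual patch.
    import numpy as np
    import sounddevice as sd

    sr = 44_100
    impulse = np.zeros(sr // 2, dtype="float32")
    impulse[0] = 1.0                                   # a single click as the probe

    response = sd.playrec(impulse, samplerate=sr, channels=1)
    sd.wait()
    response = response[:, 0]

    # Characterize the echo: overall energy and spectral centroid of the response.
    spectrum = np.abs(np.fft.rfft(response))
    freqs = np.fft.rfftfreq(len(response), 1 / sr)
    centroid = float(np.sum(freqs * spectrum) / np.sum(spectrum))
    energy = float(np.sqrt(np.mean(response ** 2)))

    # Answer the space: a brighter, more resonant response gets a lower, softer reply.
    reply_freq = max(110.0, centroid / 4)
    reply_amp = min(0.5, 0.05 / (energy + 1e-6))
    print(f"reply at {reply_freq:.1f} Hz, amplitude {reply_amp:.2f}")
    ```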

     

    With “Listening Is As Listening Does” we have an example of how the feedback provided by digital tools can help us craft sonic entanglements with environments that are dynamic and generative, especially when these processes are engaged in real time. And, with this example, we can also propose that, with the hearing power of digital technologies, our musicking, much like our living, is a real-time assemblage of many agential actors, not just sound producers or composers. With this in mind, I posit that this type of sound art can be framed as a non-anthropocentric mode of musicking, one that decentralizes the human and features the agency and vitality of nonhuman constituents. Site-oriented sound art produced in an environmentally dialogic mode has an expanded ability to offer humans active affirmations of our interdependent, generative processes. This reveals what critical theorist Rosi Braidotti calls a “non-unitary” form of subjectivity, which rests on an ethics of becoming.1 A non-unitary view of subjectivity proposes “an enlarged sense of inter-connection between self and others, including the non-human or earth others, by removing the obstacle of self-centered individualism.”2 With the enhanced ability to hear that digital technologies offer us, we can listen to the call and response among resonant bodies and experience the sonic materializing of an inter-material world, relationally co-constructed amidst its own ecology. Thus, listening deeply, with the assistance of digital tools, has the potential to destabilize the habits of human-centeredness among us.

    1. Rosi Braidotti, The Posthuman (Cambridge: Polity Press, 2013), 49.

    2. Ibid., 49.

     

     

  • Night Listening: Play, Pause, Seek

    by Jared Wiercinski — Concordia University

    I’ve always gravitated towards some form of auditory purism, or the idea that there are unmistakable benefits to listening in a focused way while minimizing or removing other sensory input. Listening to music in the dark is a good example. The first time I attended an electroacoustic musical performance, the audience was surrounded by the 16 speakers required for multi-channel playback. When the musicians turned out the lights just before the performance began, I thought: “Ah… they get it!”

    As a result of this bias I’ve long been fascinated by a question Jason Camlot posed while working on his SpokenWeb poetry project: “What do we look at while we listen?” It’s a question rich for varied interpretation, but I’ve typically understood it to mean “What should we look at while we listen?” and “How does what we look at while we listen affect the listening experience?”1 Camlot raised the question while the project team worked to design a web interface for the playback and critical analysis of poetry recordings. Although my knee-jerk response was “ideally, nothing - just listen,” I also realized this wouldn’t work, nor would it be desirable in all contexts - certainly not for a web interface designed for recorded poetry readings. The web is first and foremost a visual medium. You can make some headway on the web without your speakers, but you always need your screen.

    I also reminded myself that in the natural world, multimodal sensory experience is the norm: a forest-dwelling animal looks while it listens. Except, of course, when it can’t. At night, for instance, when light is low or non-existent, and sounds become sharper, brighter, and more important.

    With this context in mind, I propose that audio interfaces (web-based or otherwise) should offer a “night listening” mode: a minimal, basic interface that simply allows the user to initiate and stop playback, and not much more. UbuWeb Sound is a good example: no waveform, no spectrogram, no transcription, no user-generated content. Just: play, pause, and seek. With less to look at, the interface does more to promote immersed, focused listening and to facilitate emotional and aesthetic receptivity to aural content. In situations where a more intellectual or analytical approach may be necessary, the interface can be layered so that more complex features can be added, but these features or layers should also be removable.
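
    One way to picture that layering, purely as a design sketch of my own rather than any existing player, is a core object whose only surface is play, pause, and seek, with visual layers that can be attached for analysis and stripped away again for night listening.

    ```python
    # A design sketch of the layered interface idea: a minimal core player plus
    # optional, removable visual layers. Names and behavior are hypothetical.
    from dataclasses import dataclass, field
    from typing import Callable, Dict

    @dataclass
    class NightListeningPlayer:
        duration: float                      # length of the recording in seconds
        position: float = 0.0
        playing: bool = False
        layers: Dict[str, Callable[[float], str]] = field(default_factory=dict)

        # The minimal "night listening" core: play, pause, seek, nothing more.
        def play(self) -> None:
            self.playing = True

        def pause(self) -> None:
            self.playing = False

        def seek(self, seconds: float) -> None:
            self.position = max(0.0, min(seconds, self.duration))

        # Optional layers are additive and removable, never part of the core.
        def add_layer(self, name: str, render: Callable[[float], str]) -> None:
            self.layers[name] = render

        def remove_layer(self, name: str) -> None:
            self.layers.pop(name, None)

        def render(self) -> str:
            if not self.layers:              # night mode: nothing to look at
                return ""
            return "\n".join(render(self.position) for render in self.layers.values())

    # Usage: start minimal, layer in a waveform view only when analysis calls for it.
    player = NightListeningPlayer(duration=180.0)
    player.add_layer("waveform", lambda t: f"[waveform around {t:.1f}s]")
    player.remove_layer("waveform")          # back to night listening
    ```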

    Listening is a relational process; the listening experience arises from aural content refracting through the listener’s emotional, aesthetic, and intellectual lenses. The listening experience, then, depends as much on the listener’s state of mind and body as it does on the content, which is delivered through the interface. But the interface itself can and does influence a listener’s state and should therefore be designed purposefully. It shouldn’t be a distraction, and in most cases, minimal interfaces allow for the most vibrant relational listening experiences, bringing us as close as possible to direct contact with sound.

    Part of the justification for this recommendation comes from my own listening experience. While listening to Spotify introspectively (in William James’ sense), I’ve noticed that strictly listening to a song versus listening while also reading or looking (using Spotify’s “Behind the Lyrics” service, which superimposes lyrics and song facts over album art) are two different experiences. When strictly listening, the song has more of an impact, and I feel more absorbed in the experience. Reading while listening made for shallower listening and tangential intellectualization, even when I was reading the lyrics in time with playback.

    When designing audio interfaces, creating audio installations, hosting listening parties, or building other audio-based experiences, I suggest offering a “night listening” mode that foregrounds listening and adds visual elements cautiously and purposefully.

     

    1. Annie Murray and I explored this question in “Looking at archival sound: enhancing the listening experience in a spoken word archive.”

  • Sound, affect and digital performance

    by Marco Donnarumma — Berlin University of the Arts

    The relation between sound and technology has always been one of closeness, interdependence, and mutual influence. Musical instrument design and musical performance are perhaps two of the fields where this relationship is most evident and complex. With the advent of digital media, both fields have been rapidly ramifying into multiple, specialised areas of study,1 where digital technologies often become means of fragmentation, amplification, collectivization, or networked distribution of sound experiences. Technological development influences not only how musical instruments are created, but also how they are performed and, by extension, how the resulting music is experienced by audiences. Looking through the lens of embodiment, it is safe to say that the development of new musical technologies influences how sound and music, in the form of acoustic vibrations, affect human bodies.

    One particular discipline within new music performance may be of particular interest when analysing novel types of relations between body and sound, and how they are enabled by digital technologies. It is called biophysical music, and it refers to live music based on a combination of physiological technology - biosensors, computers, and the related software - and markedly physical, gestural performance. In these works, the physical and physiological properties of the performers' bodies are interlaced with the material and computational qualities of a particular electronic musical instrument, with varying degrees of mutual influence. Musical expression thus arises from an intimate and often not fully predictable negotiation among human bodies, instruments, and programmatic musical ideas.

    In a piece of biophysical music, specific properties of the performer's body and those of the instrument are interlaced, reciprocally affecting one another. The particular gesture vocabulary, sound processing, time structure, and composition of a musical piece can be progressively shaped, live, through the performer's effort in mediating physiological processes and the instrument's reactions to, and influence on, that mediation. While the music being played may or may not be digitally generated by the instrument, the musical parameters cannot be fully controlled by the performer. Through the articulation of sound and digital technology, biophysical music blurs the notion of control by the player over the instrument, establishing a different relationship between them, one in which performer and instrument form a single, sounding body.
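
    To give a schematic sense of that interlacing, here is a toy sketch, not a reconstruction of any of Donnarumma's actual instruments: a simulated muscle-activity trace is smoothed into an envelope and mapped onto the loudness and pitch of a synthesized tone, so that the sounding result follows bodily effort only loosely.

    ```python
    # A toy biophysical-music loop: a simulated physiological signal is enveloped
    # and mapped onto the amplitude and pitch of a tone. Signal, mapping, and
    # constants are illustrative assumptions only.
    import numpy as np

    sr = 44_100
    seconds = 2.0
    t = np.arange(int(sr * seconds)) / sr

    # Stand-in for a biosensor stream: noisy bursts of "muscle" activity.
    rng = np.random.default_rng(0)
    effort = np.clip(np.sin(2 * np.pi * 0.5 * t), 0, None) * rng.normal(1.0, 0.2, t.size)

    # Envelope follower: smooth the rectified signal into a control curve.
    alpha = 0.001
    envelope = np.zeros_like(effort)
    for i in range(1, effort.size):
        envelope[i] = envelope[i - 1] + alpha * (abs(effort[i]) - envelope[i - 1])

    # Map effort to sound: more effort raises both loudness and pitch.
    freq = 110.0 + 220.0 * envelope
    phase = 2 * np.pi * np.cumsum(freq) / sr
    tone = envelope * np.sin(phase)
    print("peak output level:", float(np.max(np.abs(tone))))
    ```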

     

    1. Such as sound and music computing, new interfaces for musical expression, and movement and sound, to cite only a few.

  • Sound and Silence in AMC's Better Call Saul

    by Jennifer Hartshorn — Old Dominion University

    The advent of television brought stories with images and sound into our living rooms, and our consumption of media has never been the same. In recent years, however, television has had to compete with laptops, tablets, and phones, making the viewing experience qualitatively different. Dialogue is equated with plot; explosions, gunshots, and the sounds of fighting with action. Without these elements, other screens may draw our attention away. At the same time, shows that push the artistic boundaries have experimented with episodes and storylines that are silent, or that have long stretches with very little dialogue (including “Hush” on Buffy the Vampire Slayer, or sections of “Fly” on Breaking Bad).

    The AMC series Better Call Saul focuses primarily on the titular lawyer (here known as Jimmy McGill), a more or less reformed con man who has turned his talents to the law. His gift is language, and he uses it to manipulate and shape the people around him. However, the show also uses montages with musical accompaniment, and scenes with little or no dialogue, to great effect. The secondary protagonist, Mike Ehrmantraut, is an ex-cop turned reluctant hit man, and his storyline features some of the show's most powerful scenes, many of which have little to no dialogue. An absence of dialogue, however, does not mean an absence of sound, or a lack of skillful manipulation.

    In season three, in the episode “Sunk Costs,” Mike’s goal is to get a truck stopped and searched for drugs at the border between Mexico and New Mexico. To do this, he uses sound to manipulate the perceptions of the truck’s driver and passenger. This crucial scene plays out without any dialogue in English, and any viewer who equates dialogue with story, whose attention wanders to one of their other screens, will miss out on this part of the plot. In Theory of the Film, Béla Balázs states, “A silent glance can speak volumes; its soundlessness makes it more expressive because the facial movements of a silent figure may explain the reason for the silence, make us feel its weight, its menace, its tension.” Put another way, Breaking Bad/Better Call Saul editor Chris McCaleb states, “I think in the best shows or movies … it finds the truth in those silences and doesn't bombard you with noise. If you are just getting bombarded, it ceases to mean anything. You stop connecting to it.” These scenes with little dialogue require the viewer's full attention — something that is not easy to capture, but which rewards the viewer many times over on a show like Better Call Saul. Careful observation of this scene demonstrates that as much of a master manipulator as Saul Goodman can be with words, Mike Ehrmantraut is his equal when it comes to sound.

    From Better Call Saul, Season 3 Episode 3, "Sunk Costs"
  • Voice and Gender in Video Games: What can we do with spectrograms?

    by Milena Droumeva — Simon Fraser University

    I'd like to talk a bit about one of my latest research projects, concerning sound, voice, and gender in video games. I've been involved in sound research for over 13 years, coming at it mostly from acoustic ecology and media studies, where we look at a soundscape as a system of elements. The political, socio-cultural, critical aspect has traditionally been missing from that type of analysis, so looking at sound in video games in terms of gender united two of my passions nicely. As a system of elements, game soundscapes have fascinating roots in electronic music; later, when the technological constraints of chip space were solved, sound effects developed along cinematic models of montage and synchresis, combined with the functional elements of early computerized sound notification systems: that is, incorporating confirmatory, action-based, and affect-driven sound layers, combining foley sounds, a variety of signals, and music. Avatar voices, specifically, have had an interesting progression from text-to-speech adaptations to synthesized vocalizations (battle cries) and, more recently, professional voice acting and overdubbing, with cinematic-quality cutscenes. With the rising fidelity and “resolution” of game narrative spaces, voice has inherited many of the classic gendered tropes of broadcasting and cinema: this is where my work began. In an attempt to compare the historical development of several female character typologies in games, my team and I looked at classic fighting and adventure RPGs (role-playing games), and we counted the ratio of female-to-male battle cries specifically in combat sequences.

    What that content analysis revealed was an increase in female vocalizations over time, but we noticed a marked quality shift as well - a move from synthetic grunts and shouts to more hyper-real, dramatized, and feminized vocalizations. This is where spectrographic analysis, as a subset of digital audio tools, offered a rich perspective from which to compare and qualify female voices in games. Using a combination of SpectraFoo (for real-time analysis), Sonic Visualiser, and Adobe Audition, I did a kind of ‘close reading’ of several game instances. You can see in the annotated screenshots that the female vocalizations are longer and literally take up more sonic space even if they are equal in frequency to the male battle cries. More importantly, they are much more intoned and inflected, with a dynamic envelope and pitch profile, and often - particularly at the end of fighting sequences - feature added reverberation with a very long tail. In combination with reflective accounts of player experience, digital audio tools and visualizations allow us to access gendered sonic tropes in novel ways. Looking at the spectral gestures and textures of character voices in the game soundscape raises questions about how we think male and female characters (as well as agender, atypical, or non-human ones) ought to sound, and why. In that, such tools allow us to pinpoint persistent stereotypes encoded in the very design of games and, by extension, other popular media texts. I’m excited to see where research and analysis will be able to go with the help of emergent multimodal toolsets and novel digital methodologies.
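
    For readers who want to attempt a comparable measurement themselves, here is a minimal sketch, assuming the librosa library and two short clips exported from gameplay capture; the file names, thresholds, and descriptors are illustrative stand-ins, not the project's actual data or workflow.

    ```python
    # Compare two battle-cry clips by duration, pitch movement, and rough
    # reverberant tail length. File names are placeholders, not real data.
    import librosa
    import numpy as np

    def describe(path):
        y, sr = librosa.load(path, sr=None, mono=True)
        f0, voiced, _ = librosa.pyin(y, fmin=80, fmax=800, sr=sr)
        f0 = f0[~np.isnan(f0)]
        rms = librosa.feature.rms(y=y)[0]
        # Rough "tail" length: time the level stays within 40 dB of the peak.
        db = librosa.amplitude_to_db(rms, ref=np.max)
        tail_frames = int(np.sum(db > -40))
        return {
            "duration_s": len(y) / sr,
            "pitch_range_hz": float(f0.max() - f0.min()) if f0.size else 0.0,
            "tail_s": tail_frames * 512 / sr,        # librosa's default hop is 512
        }

    print("female cry:", describe("female_battle_cry.wav"))
    print("male cry:  ", describe("male_battle_cry.wav"))
    ```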

    Image credits, from top to bottom: Street Fighter II (Honda vs. Chun Li); Soul Calibur V (featuring Ivy); Tomb Raider 2013 (Lara Croft, opening scene).