How do digital media and technologies allow for closer critique and analysis of sound? What relationships or representations emerge between aural media (acoustic space, audio recording, editing, sound as artifact, memory and sound, etc.) and new digital media?

  • Sound, affect and digital performance

    by Marco Donnarumma — Berlin University of the Arts

    The relation between sound and technology has always been one of closeness, interdependence and mutual influence. Musical instrument design and musical performance are perhaps the two fields where this relationship is most evident and complex. With the advent of digital media, both fields have rapidly ramified into multiple specialised areas of study,1 where digital technologies often become means of fragmentation, amplification, collectivization or networked distribution of sound experiences. Technological development influences not only how musical instruments are created, but also how they are performed; and, by extension, how the resulting music is experienced by audiences. Looking through the lens of embodiment, it is safe to say that the development of new musical technologies influences how sound and music, in the form of acoustic vibrations, affect human bodies.

    One discipline within new music performance is of particular interest when analysing novel types of relations between body and sound, and how they are enabled by digital technologies. It is called biophysical music, and it refers to live music based on a combination of physiological technology (biosensors, computers and the related software) and markedly physical, gestural performance. In these works, the physical and physiological properties of the performers' bodies are interlaced with the material and computational qualities of a particular electronic musical instrument, with varying degrees of mutual influence. Musical expression thus arises from an intimate and often not fully predictable negotiation among human bodies, instruments and programmatic musical ideas.

    In a piece of biophysical music, specific properties of the performer's body and those of the instrument are interlaced, reciprocally affecting one another. The particular gesture vocabulary, sound processing, time structure and composition of a musical piece can be progressively shaped, live, through the performer's effort in mediating physiological processes and the instrument's reactions to, and influence on, that mediation. While the music being played may or may not be digitally generated by the instrument, the musical parameters cannot be fully controlled by the performer. Through the articulation of sound and digital technology, biophysical music blurs the notion of the player's control over the instrument, establishing a different relationship between them, one in which performer and instrument form a single, sounding body. A minimal, hypothetical sketch of such a physiological mapping appears below, after the footnote.


    1 Such as sound and music computing, new interfaces for musical expression, and movement and sound, to cite only a few.
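
    To make the kind of mapping described above more concrete, here is a minimal, hypothetical sketch (in Python, using only NumPy) of how a physiological signal might be routed into sound synthesis. It is not Donnarumma's actual system: the simulated muscle-activation envelope, the nonlinear pitch mapping and all names are illustrative assumptions, meant only to show how physiological fluctuation can shape sonic output in ways the performer does not fully control.

```python
# Hypothetical sketch of a biophysical mapping: a (simulated) biosensor
# envelope drives both the loudness and the pitch of a simple oscillator.
# None of this reproduces an actual biophysical music instrument.

import numpy as np

SR = 44100            # audio sample rate in Hz
DUR = 2.0             # seconds of audio to render
N = int(SR * DUR)

def simulated_emg_envelope(n):
    """Stand-in for a smoothed muscle-activation (EMG) signal."""
    t = np.linspace(0.0, DUR, n)
    effort = 0.5 + 0.5 * np.sin(2 * np.pi * 0.7 * t)   # slow, deliberate gesture
    tremor = 0.05 * np.random.randn(n)                  # involuntary fluctuation
    return np.clip(effort + tremor, 0.0, 1.0)

def render(envelope):
    """Map the envelope nonlinearly to pitch and linearly to loudness."""
    freq = 110.0 * (1.0 + 3.0 * envelope ** 2)          # effort bends the pitch
    phase = 2 * np.pi * np.cumsum(freq) / SR
    return envelope * np.sin(phase)

audio = render(simulated_emg_envelope(N))
print(f"rendered {len(audio)} samples, peak amplitude {np.abs(audio).max():.2f}")
```

    Because the tremor term is stochastic, repeated renderings differ slightly, which loosely mirrors the "not fully predictable negotiation" of body and instrument described above.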

  • Sound and Silence in AMC's Better Call Saul

    by Jennifer Hartshorn — Old Dominion University

    The advent of television brought stories with images and sound into our living rooms, and our consumption of media has never been the same. In recent years, however, television has had to compete with laptops, tablets, and phones, making the viewing experience qualitatively different. Dialogue is equated with plot; explosions, gunshots, and the sounds of fighting with action. Without these elements, other screens may draw our attention away. At the same time, shows that push artistic boundaries have experimented with episodes and storylines that are silent, or that have long stretches with very little dialogue (including “Hush” on Buffy the Vampire Slayer and sections of “Fly” on Breaking Bad).

    The AMC series Better Call Saul focuses primarily on the titular lawyer (here known as Jimmy McGill), a more or less reformed con man who has turned his talents to the law. His gift is language, and he uses it to manipulate and shape the people around him. However, the show also uses montages with musical accompaniment, and scenes with little or no dialogue, to great effect. The secondary protagonist, Mike Ehrmantraut, is an ex-cop turned reluctant hit man, and his storyline features some of the show's most powerful scenes, many of which have little to no dialogue. An absence of dialogue, however, does not mean an absence of sound, or a lack of skillful manipulation.

    In the season three episode “Sunk Costs,” Mike’s goal is to get a truck stopped at the border between Mexico and New Mexico and searched for drugs. To do this, he uses sound to manipulate the perceptions of the truck’s driver and passenger. This crucial scene plays out without any dialogue in English, and any viewer who equates dialogue with story, and whose attention wanders to one of their other screens, will miss out on this part of the plot. In Theory of the Film, Béla Balázs states, “A silent glance can speak volumes; its soundlessness makes it more expressive because the facial movements of a silent figure may explain the reason for the silence, make us feel its weight, its menace, its tension.” Put another way, Breaking Bad/Better Call Saul editor Chris McCaleb states, "I think in the best shows or movies … it finds the truth in those silences and doesn't bombard you with noise. If you are just getting bombarded, it ceases to mean anything. You stop connecting to it." These scenes with little dialogue require the viewer's full attention — something that is not easy to capture, but which rewards the viewer many times over on a show like Better Call Saul. Careful observation of this scene demonstrates that as much of a master manipulator as Saul Goodman can be with words, Mike Ehrmantraut is his equal when it comes to sound.

    From Better Call Saul, Season 3 Episode 3, "Sunk Costs"
  • Voice and Gender in Video Games: What can we do with spectrograms?

    by Milena Droumeva — Simon Fraser University

    I'd like to talk a bit about one of my latest research projects, concerning sound, voice and gender in video games. I've been involved in sound research for over 13 years, coming at it mostly from acoustic ecology and media studies, where we look at a soundscape as a system of elements. The political and socio-cultural critical aspects have traditionally been missing from that type of analysis, so looking at sound in video games in terms of gender united two of my passions nicely. As a system of elements, game soundscapes have fascinating roots in electronic music; later, once the technological constraints of chip space were solved, sound effects developed along cinematic models of montage and synchresis, combined with the functional elements of early computerized sound notification systems: that is, incorporating confirmatory, action-based, and affect-driven sound layers that mix foley sounds, a variety of signals, and music. Avatar voices, specifically, have had an interesting progression from text-to-speech adaptations and synthesized vocalizations (battle cries) to, more recently, professional voice acting and overdubbing with cinematic-quality cutscenes. With the rising fidelity and “resolution” of game narrative spaces, voice has inherited many of the classic gendered tropes of broadcasting and cinema: this is where my work began. In an attempt to compare the historical development of several female character typologies in games, my team and I looked at classic fighting RPGs (role-playing games) and adventure RPGs, and we counted the ratio of female-to-male battle cries specifically in combat sequences.

    What that content analysis revealed was an increase in female vocalizations over time, but we also noticed a marked quality shift: a move from synthetic grunts and shouts to more hyper-real, dramatized and feminized vocalizations. This is where spectrographic analysis, as a subset of digital audio tools, offered a rich perspective from which to compare and qualify female voices in games. Using a combination of SpectraFoo (for real-time analysis), Sonic Visualiser, and Adobe Audition, I did a kind of ‘close reading’ of several game instances. As you can see in the annotated screenshots, the female vocalizations are longer and literally take up more sonic space, even when they are equal in frequency to the male battle cries. More importantly, they are much more intoned and inflected, with a dynamic envelope and pitch profile, and often, particularly at the end of fighting sequences, feature added reverberation with a very long tail. In combination with reflective accounts of player experience, digital audio tools and visualizations allow us to access gendered sonic tropes in novel ways. Looking at the spectral gestures and textures of character voices in the game soundscape raises questions about how we think male and female characters (as well as agender, atypical or non-human ones) ought to sound, and why. In that way, such tools allow us to pinpoint persistent stereotypes encoded in the very design of games and, by extension, other popular media texts. I’m excited to see where research and analysis will be able to go with the help of emergent multimodal toolsets and novel digital methodologies; a minimal scripted sketch of the kinds of measurements involved appears below, after the image credits.

    Image credits, from top to bottom: Street Fighter II (Honda vs Chun Li); Soul Calibur V (featuring Ivy); Tomb Raider 2013 (Lara Croft, opening scene).
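
    As a rough, scripted analogue to the spectrographic 'close reading' described above, the sketch below (Python with SciPy) computes a spectrogram for a short vocalization clip and derives two of the kinds of measures the essay points to: how long a cry remains audible and where its spectral energy sits. The WAV file names are hypothetical placeholders for extracted battle cries; the actual analysis was done with SpectraFoo, Sonic Visualiser and Adobe Audition, not with this code.

```python
# Hypothetical sketch: compare two extracted battle-cry clips by audible
# duration and mean spectral centroid. File names are placeholders.

import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

def describe_cry(path):
    """Return seconds above a -40 dB threshold and mean spectral centroid (Hz)."""
    sr, x = wavfile.read(path)
    if x.ndim > 1:                       # mix stereo clips down to mono
        x = x.mean(axis=1)
    x = x / (np.abs(x).max() + 1e-12)    # normalise amplitude

    f, t, S = spectrogram(x, fs=sr, nperseg=1024)
    frame_db = 10 * np.log10(S.sum(axis=0) + 1e-12)
    active = frame_db > frame_db.max() - 40           # frames above threshold
    duration = active.sum() * (t[1] - t[0])            # audible length in seconds

    centroid = (f[:, None] * S).sum(axis=0) / (S.sum(axis=0) + 1e-12)
    return duration, centroid[active].mean()

for label, path in [("female cry", "chun_li_cry.wav"), ("male cry", "honda_cry.wav")]:
    dur, cent = describe_cry(path)
    print(f"{label}: {dur:.2f} s audible, mean spectral centroid {cent:.0f} Hz")
```

    Comparing such numbers across female and male clips is one way to quantify the observation that the female vocalizations last longer and take up more sonic space, though it is a complement to, not a substitute for, listening alongside the annotated spectrograms.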