“What Steven Wants: Gestural Computing, Digital Manual Labor, and the Boom! Moment”
by Lisa Nakamura — University of Illinois, Urbana-Champaign
March 11, 2008 – 03:10
“Vision” is a huge part of Apple’s marketing strategy, and this clip from the Apple iPhone demo at Macworld 2007 demonstrates a moment of shared spectatorial pleasure in digital interface use. As Apple CEO and head visionary Steve Jobs slides his finger across the iPhone screen to the ecstatic reaction of Apple fans, he reaches into the interface—“I just take my finger and slide it across”—to achieve a “boom” moment. His emphasis on the manual (“I just take my finger, and scroll”; “You can see what I’m doing with my finger”) augments the visual and tactile appeal of the device, which he introduces as “something wonderful in your hand.” The “boom” moment signals the coming together of the manual and the visual, as the interface becomes a thing you feel with your hand as well as consume with your eye. Device demonstrations strive towards these “boom” moments, and this one is almost absurdly successful: as “Contegni” writes in a YouTube response to the clip, “It was really funny when the ‘Slide to unlock’ was demonstrated and everybody clapped and cheered as if Jesus were present.” As iPhone users know, this “boom” moment wears off—the manual work of interface manipulation soon becomes laborious, just like all the other interface interventions required of us for work and entertainment.

Nonetheless, Steve Jobs knows what he wants, as does another messianic Steven. In the DVD production supplement to his 2002 film Minority Report, Steven Spielberg is continually referenced as the source of the film’s interfaced “visions.” Both Stevens are “precogs,” or precognitives of a sort—able to envision interfaces that meld gesture and vision, as well as ways to make them pay. Making work seem like play is a staple of the digital interface industries—yet Minority Report ultimately defines these “boom” moments as untrustworthy. In what ways do we need to interrogate them in other devices and interfaces, like the iPhone’s?
In what way does the disappearance of “keyboards and mice” in these gestural interfaces both free up and tightly bind our own bodies and visions?