Recent Comments

November 23, 2015 - 21:43

Sean, I’d like to elaborate on the final bullet point you graciously attributed to me in more elegant and concise words than I originally used. The question of whether algorithms need to attune themselves to the way “language gets woven into the environment and becomes inextricable at the level of ambient attunement” is one I’m taking seriously. As I dig deeper into one particular algorithm — Google DeepMind’s algorithm that has been programmed to learn to play and win Atari 2600 video games without pre-programmed knowledge of specific game mechanics, other than an awareness of pixel positions in gameplay frames and the “greedy” desire to earn points — I recognize that algorithms are being programmed as perpetually self-teaching learning machines. DeepMind’s algorithms are about artificial intelligence, and that intelligence, while artificial, is making decisions about how to succeed. It’s important to remember that “algorithm writers are at high risk of perpetuating this particularly destructive metaphorical tendency,” but it’s also important to consider whether the algorithms themselves need to be cautioned to avoid network metaphors when constructing their (admittedly limited) realities.

As DeepMind’s algorithm teaches itself to play and win Pong using the memory and processing of deep neural networks and the reward system of deep reinforcement learning, what are its creation and learning metaphors? Will it seek to learn within the framing confines of network metaphors? Should we ask it to push against those metaphors and become attuned to the intersection of language and environment? In the case of the DeepMind algorithm learning to play an Atari game, I sense attunement would result in breakthrough knowledge — an understanding of game outcomes that transcends the iterative procedural activity and learning encoded in the language of the algorithm.
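
For readers curious what that reward-driven, iterative loop looks like at the level of code, here is a minimal sketch of epsilon-greedy Q-learning, the general reinforcement-learning technique behind systems like DeepMind's. It is a toy illustration, not DeepMind's actual program: the real system learns from raw pixel frames with deep neural networks, while the states, actions, and rewards below are invented stand-ins for the example.

```python
# A minimal, hypothetical sketch of an epsilon-greedy Q-learning loop.
# Everything here (states, actions, the toy environment, the reward) is
# invented for illustration; it is not DeepMind's code.
import numpy as np

n_states, n_actions = 16, 4            # toy stand-ins for game situations and moves
alpha, gamma, epsilon = 0.1, 0.99, 0.1 # learning rate, discount, exploration rate
Q = np.zeros((n_states, n_actions))    # the agent's learned value estimates

def step(state, action):
    """Toy environment: returns (next_state, reward). Purely illustrative."""
    next_state = (state + action) % n_states
    reward = 1.0 if next_state == 0 else 0.0  # the "points" the agent is greedy for
    return next_state, reward

state = np.random.randint(n_states)
for t in range(10_000):                        # the iterative procedural activity
    if np.random.rand() < epsilon:             # occasional exploration...
        action = np.random.randint(n_actions)
    else:                                      # ...otherwise the "greedy" choice
        action = int(np.argmax(Q[state]))
    next_state, reward = step(state, action)
    # Nudge the value estimate toward the reward plus discounted future value.
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print(Q.round(2))  # the policy the loop has taught itself
```

Even in this toy form, the "greedy" pull toward points and the purely procedural, frame-by-frame iteration described above are visible: the agent never understands the game; it only adjusts its value estimates toward whatever earned reward.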

The DeepMind algorithm is able only to follow its iterative coding within the confines of its prescribed memory, processing, and frame limits. Were the algorithm to attune itself through self-taught iterative processes to achieve shortcuts in learning, to find pathways that break out of iterative procedures and achieve exponentially advanced outcomes, our first inclination might be to believe that Skynet had become sentient. And maybe that’s true. But it might also suggest a breakthrough in neural networking that could attune itself to its ambient rhetorical environment. What would that mean for our understanding of the role algorithms can and/or should play in learning? In teaching?

These are questions I’m barely able to formulate at this point, much less begin answering. And missing from all this is the ethics of algorithmic development, production, and function. If algorithms can achieve some level of attunement, what are the ethical ramifications of such attunement? Can algorithms achieve some type of conscience, wherein via attunement an algorithm might recognize the nuances of falsehood and deceit?

November 23, 2015 - 12:08

Very interesting piece, Stephanie. Especially fascinating to me is the introduction to the term 'customer churn,' which I think is especially poignant when we consider how one of the criticisms levied against casual games is the 'grinding' that they require for success.

Your discussion about the algorithmic tracking of player movements and choices in casual games has me thinking about the hacks that players come up with to circumvent or confuse the tracking algorithms embedded in the games. I spoke with a colleague a few weeks ago who is an avid Candy Crush player (he is currently sitting at the end of the map and waiting for new level releases), and he described how he would open multiple tabs/instances of the game in his browser before playing a new level. That way, he can play the level in one tab and, if he fails, simply close it and move to the next one, which has already cached his progress as having the full number of lives. This prevents him from having to wait on the timer to earn extra lives or pay money for power-ups to finish the level.

Of course, one might argue that the game isn't particularly concerned about this type of player. After all, someone who is willing to invest that much effort and energy into circumventing the game's barriers in order to play it probably isn't as much of an attrition risk as someone who plays by the rules and might run into all of the dead ends and fail states that come with doing so.

November 23, 2015 - 09:43

Bill, thank you for this excellent response. I really agree, especially with your closing lines: "This will only happen, though, if we play an active role in the design of algorithmic tools and focus their use on assistive applications rather than seeking to replace human thinking." The work you all are doing is so, so important. The future appears to be one where more technologies are used to (presumably) save time and make things easier, and I am all for that. However, I want to ensure that these technologies reflect the value systems of our particular discourse communities—that is, writing technologies such as the ones you explore here should reflect the values of writing faculty, not what corporations outside of our community believe we want or need.

You're incredibly lucky to be at an institution that supports that kind of work through WIDE; do you have suggestions for others interested in getting started in similar work? Especially if they're at a smaller school or one with fewer resources—how can we help support the work of "creat[ing] systems that give humans a chance to focus more on how we might improve as writers and communicators"?

November 23, 2015 - 09:39

Carl, great points here, and I especially nodded along when I got to your line "These algorithms are not evil. They are us, or they are working with us, or they are working us." That line made me think of Michael Wesch's The Machine is Us/ing Us, but your point is well taken—algorithms are programmed by people, and therefore are embedded with the values, mores, and ideologies of people.

As someone who has done some work critiquing the "promises" of automation that Turnitin in particular has offered us, I'm also interested in seeing how responses to your piece here shape up. I know that there are potential positives for that particular technology, but because of my own fierce aversion to using their tech and supporting the company (and its values), I find it difficult to explore "the pluses and minuses they each hold." In other words, my biases against the company and its values make it hard for me to step back and "consider the algorithms and how they are bundled"; I can't separate the algorithm from the company's ideological frame. Do you find ways to make this kind of approach possible? I know you have argued elsewhere against the tradition of rejection, but in cases like Turnitin, I feel that rejection is a way to make my (and some others') concerns heard.

November 23, 2015 - 09:02

Isn't the Internet of Things just crazy? Of course, that's what I'd say in 2015. Ask me again in a few years. If the pundits are right, it won't be long before we can't remember or imagine an offline world of things. The challenge, I think, is to reclaim an appreciation for our interdependence and deep relationality, not just among people but within a broader ecology of things, plants, animals, etc., without relying on networked, digital connections. This is an ethical move directed toward sustainability in a world where all our actions have consequences for others, if by others we include things that aren't human and consequences that are indirect. Algorithms, in a way, make these connections and consequences more visible, though of course not visible enough that their black-boxed rules are obvious. As pedagogy goes, then, my impulse would be to start with post-human moves to identify our offline interdependence within a broader ecology and then start looking at its more circuited (okay, wireless) connections in a digital one.

November 23, 2015 - 08:56

Great questions, Stephanie. I certainly don't have definitive answers here. The rhetor and audience, I suppose, depend on which perspective one chooses to take. If you're a post-humanist, you'd have to be careful about framing the question without posing a false choice. If you're more traditional, it would depend on what perspective you're taking: that of the algorithm/device (which, after all, "listens" to what your body "says"), or that of the wearer (who, of course, wears the device precisely because s/he intends to "listen" to what it "says"). As best I can tell, there's a mutually constitutive rhetoric here, almost a dialogue, such that both agents, device and wearer, are brought into rhetorical relation with one another in practice.

November 23, 2015 - 08:51

This is a great example, Daniel, of the ways algorithmic technologies that are already part of our daily lives can be leveraged, if accessed, to serve different ends. In this case, as you mention, physicians using Open mHealth would need explicit permission to access your monitored health data, presumably checking all the necessary moral precaution boxes in doing so. But I'm suspicious of the outsourcing of surveillance to anyone, particularly to for-profit enterprises (even those ostensibly directed toward our health).

As for the feedback loop, I see the Open mHealth example less as a breach of that loop than as an illustration of the loop's recurrence within a larger actor network — a sort of ripple in a pond, whereby we don't quite know what the pebble is. 

November 18, 2015 - 22:33

These are fantastic prompts to include in classroom discussion, Stephanie. Speaking of Facebook coders, I've been interested (for some time) to learn how the ALS ice bucket challenge received so much attention within the space at a time when the unfortunate death of Michael Brown in Ferguson, MO, provided a touchstone for #blacklivesmatter on Twitter. It would be fascinating (as a class project) to reach out to Facebook representatives to learn (in whatever ways the company will publicly share) how the site's algorithms prioritize certain trends over others. Thank you for the reply. This has got me thinking!

November 18, 2015 - 22:29

As I read your post, Chris, I immediately connected rhetorical algorithmic agency with the Internet of Things movement to place sensor technologies within common household items. The Internet of Things movement is big business: the international consulting firm McKinsey & Company projects a $4 to $11 trillion economic impact by 2025 due to the movement.

With such an unfathomable impact, I think it's important to consider the rhetorical effects of algorithms in people's everyday lives. Just as your post illuminates, discussing agency in connection with algorithms will be an increasingly important topic to explore. At the same time, as I think about rhetorical effects, I wonder about public education on algorithmic agency. How do rhetoricians teach the public about this concept (if at all)?

November 18, 2015 - 20:28

Estee, I'm so glad you commented that "each time I ask students to get online and click around, I am forced to think about their digital data trail." I wonder about this sometimes when I ask students to participate in digital spaces: What will happen as a result of my request? Will I have the time and knowledge to share with them some of the potential side effects or consequences of their participation? I even think about the thousands of dead websites littering the web, a few created by my own past students—sites still lingering, no longer updated, but still part of the vast and searchable expanse of the online world.

I don't know that we're doing enough to think about sustainability when we ask students to participate online. And I don't think we're asking them, as you suggest here, to do enough activist work related to the legal and social effects of algorithmic discrimination. It's a fantastic idea. For example, I was just speaking with someone recently about how the Facebook photo tool to support Paris by turning your profile picture blue-white-and-red is of course potentially discriminatory—what about other causes that don't have an easy change-your-profile-picture button? Why not support Syria? Why not Libya? Why not Beirut? As you note here, "people write and program algorithms; thus, the complex equations are not free of bias or human influence." These technological systems of course reflect the politics of the individuals who helped code them, but you're calling for us to do more to speak up against the racism, classism, sexism, ableism, and so on that can be coded into the interfaces and algorithms that surround us.

It's an excellent point, and I agree. A timely assignment right now would be to ask Facebook coders how decisions are made to support certain causes (such as Digital India or Celebrate Pride) and not others. To ask what the constraints are for something like Facebook Safety Check, turned on when a natural disaster strikes (but which natural disasters, when and where, and of what kind?). To ask when the designers decide to co-opt or modify something like the natural viral spread of an Internet meme, such as the Human Rights Campaign meme. Asking about how and when and why those decisions get made would not only be an amazing learning experience but could also have actual impact (as Facebook has made changes in the past to its interface when enough users complained). And that's a fantastic opportunity for students to see rhetoric at work in the world.