Carl, great points here, and I especially nodded along when I got to your line "These algorithms are not evil. They are us, or they are working with us, or they are working us." That line made me think of Michael Wesch's The Machine is Us/ing Us, but your point is well taken—algorithms are programmed by people, and therefore are embedded with the values, mores, and ideologies of people.
As someone who has done some work critiquing the "promises" of automation that Turnitin in particular has offered us, I'm also interested in seeing how responses to your piece here shape up. I know that there are potential positives for that particular technology, but because of my own fierce opposition to using their tech and supporting the company (and its values), I find it difficult to explore "the pluses and minuses they each hold." In other words, my biases against the company and its values make it hard for me to step back and "consider the algorithms and how they are bundled"; I can't separate the algorithm from the company's ideological frame. Do you find ways to make this kind of approach possible? I know you have argued elsewhere against the tradition of rejection, but in cases like Turnitin, I feel that rejection is a way to make my (and some others') concerns heard.
Isn't the Internet of Things just crazy? Of course, that's what I'd say in 2015. Ask me again in a few years. If the pundits are right, it won't be long before we can't remember or imagine an offline world of things. The challenge, I think, is to reclaim an appreciation for our deep relationality and interdependence, not just among people but within a broader ecology of things, plants, and animals, without relying on networked, digital connections. This is an ethical move directed toward sustainability in a world where all our actions have consequences for others, if by "others" we include indirect consequences and things that aren't human. Algorithms, in a way, make these connections and consequences more visible, though of course not so visible that their black-boxed rules are easy to see. As pedagogy goes, then, my impulse would be to start with post-human moves to identify our offline interdependence within a broader ecology and then to trace its more circuited (okay, wireless) connections in a digital one.
Great questions, Stephanie. I certainly don't have definitive answers here. The rhetor and audience, I suppose, depend on which perspective one chooses to take. If you're a post-humanist, you'd have to be careful about framing the question without posing a false choice. If you're more traditional, it would depend on whose perspective you adopt: that of the algorithm/device (which, after all, "listens" to what your body "says"), or that of the wearer (who, of course, wears the device precisely because s/he intends to "listen" to what it "says"). As best I can tell, there's a mutually constitutive rhetoric here, almost a dialogue, such that both agents, device and wearer, are brought into rhetorical relation with one another in practice.
This is a great example, Daniel, of the ways algorithmic technologies that are already part of our daily lives can be leveraged, if accessed, to serve different ends. In this case, as you mention, physicians using Open mHealth would need explicit permission to access your monitored health data, presumably checking all the necessary moral precaution boxes in doing so. But I'm suspicious of the outsourcing of surveillance to anyone, particularly to for-profit enterprises (even those ostensibly directed toward our health).
As for the feedback loop, I see the Open mHealth example less as a breach of that loop than as an illustration of the loop's recurrence within a larger actor network — a sort of ripple in a pond, whereby we don't quite know what the pebble is.
These are fantastic prompts to include in classroom discussion, Stephanie. Speaking of Facebook coders, I've been interested (for some time) to learn how the ALS ice bucket challenge received so much attention within the space at a time when the unfortunate death of Michael Brown in Ferguson, MO provided a touchstone for #blacklivesmatter on Twitter. It would be fascinating (as a class project) to reach out to Facebook representatives to learn (in whatever ways the company will publicly share) how the site's algorithms prioritize certain trends over others. Thank you for the reply. This has got me thinking!
As I read your post, Chris, I immediately connected rhetorical algorithmic agency with the Internet of Things movement to place sensor technologies within common household items. The Internet of Things is big business: the international consulting firm McKinsey & Company projects a $4 trillion to $11 trillion economic impact by 2025 from the movement.
With such an unfathomable impact, it's important to consider the rhetorical effects of algorithms in people's everyday lives. As your post illuminates, agency in connection with algorithms will be an increasingly important topic to explore. At the same time, thinking about rhetorical effects makes me wonder about public education on algorithmic agency. How do rhetoricians teach the public about this concept (if at all)?
Estee, I'm so glad you commented that "each time I ask students to get online and click around, I am forced to think about their digital data trail." I wonder about this sometimes when I ask students to participate in digital spaces: What will happen as a result of my request? Will I have the time and knowledge to share with them some of the potential side effects or consequences of their participation? I even think about the thousands of dead websites littering the web, a few created by my own past students—sites still lingering, no longer updated, but still part of the vast and searchable expanse of the online world.
I don't know that we're doing enough to think about sustainability when we ask students to participate online. And I don't think we're asking them, as you suggest here, to do enough activist work related to the legal and social effects of algorithmic discrimination. It's a fantastic idea. For example, I was just speaking with someone recently about how the Facebook photo tool to support Paris by turning your profile picture blue, white, and red is of course potentially discriminatory: what about other causes that don't have an easy change-your-profile-picture button? Why not support Syria? Why not Libya? Why not Beirut? As you note here, "people write and program algorithms; thus, the complex equations are not free of bias or human influence." These technological systems of course reflect the politics of the individuals who helped code them, but you're calling for us to do more to speak up against the racism, classism, sexism, ableism, and so on that can be coded into the interfaces and algorithms that surround us.
It's an excellent point, and I agree. A timely assignment right now would be to ask Facebook coders how decisions are made to support certain causes (such as Digital India or Celebrate Pride) and not others. To ask what the constraints are for something like Facebook Safety Check, turned on when a natural disaster strikes (but which disasters, where, and of what kind?). To ask when the designers decide to co-opt or modify the natural viral spread of an Internet meme like the Human Rights Campaign meme. Asking how and when and why those decisions get made would not only be an amazing learning experience, but it could have actual impact (Facebook has made changes to its interface in the past when enough users complained). And that's a fantastic opportunity for students to see rhetoric at work in the world.
Chris, I was excited to read your piece because I have, within the last couple of years, taken up running and have just recently bought a Garmin Forerunner 225 GPS watch, using it to track my morning runs, my sleep patterns, my calorie burns, and more. It also functions as a pedometer and counts my daily steps. My husband bought one at the same time I did, and when he began noticing his steps (the watch automatically sets a goal for you and adjusts it up or down depending on your previous activity), he started saying things like, "Oh, do you want to go take a walk to [nearby store] after dinner? I need to make my steps today."
I found this compelling in light of the discussion of actors and agents you bring up above. Why did the watch compel him to feel like he needed to "make his steps"? Intriguingly, I routinely ignore mine. Sometimes I "make my steps" above and beyond if I ran that morning or had lots of errands to do; other times I'm far below the number the watch suggests (those are writing days, usually). But I don't feel like I need to walk around the block just to make the watch congratulate me for hitting the step goal.
So when you talk about "between rhetor and audience, between capacity and effect" here with regard to a fitness tracker, I wonder—who is the rhetor and who the audience? Who, exactly, does my husband worry will care if he does or does not make his steps? Why are some people compelled to walk further simply because their smart watch suggests they do so, while others ignore the suggestions?
One area where algorithmic rhetorical agency is expanding is health care. The data collected by your Fitbit or Apple Watch can be standardized, stored in the cloud, integrated with other health data, shared with physicians, and used by patients and physicians alike to monitor treatment efficacy in real time. That infinite feedback loop picks up a few additional actors along the way: an additional cloud server and its connective and supporting networks; an algorithm that rewrites the collected data into a standard format for readability and portability; the larger health care network, including its shared data across corporate and industry lines; and your physician and those inside and outside the office who support the office itself, including pharmaceutical reps and big pharma.
Take a look at Open mHealth, a start-up co-founded by Deborah Estrin, Professor of Computer Science at Cornell Tech. Open mHealth does everything I just mentioned, with the goal of providing physicians and their patients real-time monitoring of health-related data for advanced diagnostic and palliative care. In a 2014 presentation, Estrin described the data collection and sharing functions of Open mHealth as a means of improving the timeliness and efficacy of treatment. Consider the following scenario: a patient visits a physician with a particular set of symptoms that the physician seeks to diagnose and treat. The treatment is a prescription medication the patient is to take daily for two weeks; after two weeks, the patient returns so the physician can evaluate the treatment's effectiveness. If the patient is symptom free, the original diagnosis was accurate and the treatment appropriate to the symptoms; if the symptoms remain, or other symptoms have developed, the original diagnosis was inaccurate, in part or in whole, and a new or revised treatment plan is required.
Open mHealth inserts itself by collecting and standardizing the data we're already gathering with our Fitbits, smartphones, and watches, and sharing that data (with our explicit permission, of course) with the physician, who can use Open mHealth to poll and visualize it to determine whether the prescribed treatment is, for lack of a more clinical term, working. In short, a physician who accesses our shared data can know within hours or days of prescribing a treatment whether that treatment is working. The physician can also uncover whether underlying causes or symptoms not immediately visible or shared during the office visit, like lack of sleep, sleep apnea, or high blood pressure, might be reducing the treatment's efficacy. The result, according to Estrin, is more effective, proactive, and palliative health care delivered using data we're already collecting, at a fraction of the time, effort, and money now spent trying to treat illness without adequate or timely feedback from patient to physician.
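To make the "standardizing" step concrete for readers less familiar with data plumbing, here is a toy sketch of what such an algorithm might do. This is not Open mHealth's actual API or schema; every field name, source label, and function here is hypothetical, meant only to illustrate how vendor-specific readings could be rewritten into one shared, portable format that a physician's dashboard could poll uniformly.

```python
# Hypothetical illustration (NOT Open mHealth's real schema): rewriting
# vendor-specific wearable readings into one common record format.
# All keys and source names below are invented for this sketch.

def normalize(reading: dict) -> dict:
    """Map a device-specific heart-rate reading to a shared format."""
    if reading["source"] == "fitbit":
        return {
            "measure": "heart_rate",
            "value": reading["bpm"],          # Fitbit-style field (assumed)
            "unit": "beats/min",
            "timestamp": reading["time"],
        }
    if reading["source"] == "apple_watch":
        return {
            "measure": "heart_rate",
            "value": reading["heartRate"],    # Apple-style field (assumed)
            "unit": "beats/min",
            "timestamp": reading["ts"],
        }
    raise ValueError(f"unknown source: {reading['source']}")

# Two readings arrive in incompatible vendor formats...
readings = [
    {"source": "fitbit", "bpm": 72, "time": "2015-11-01T08:00:00Z"},
    {"source": "apple_watch", "heartRate": 75, "ts": "2015-11-01T08:05:00Z"},
]

# ...and leave in one shape, regardless of which device produced them.
standardized = [normalize(r) for r in readings]
```

The rhetorical point sits in that small act of translation: the algorithm decides which fields count, what the units are, and what gets discarded, and those decisions are invisible to both patient and physician by the time the chart is drawn.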
Is this a good use of an algorithmic rhetorical feedback loop, where patient and physician communicate via algorithm-created and initiated data transfers? Where does agency reside when we expand the actor-networks or creative entanglements or rhizomatic systems to include so many more actors and agents?
I appreciate your survey response: it raises a number of interesting and significant questions, and it makes me realize that I haven't given enough thought to algorithms.
This point really stood out to me: "whether one adheres to or deviates from the data set, these become the defining criteria — not according to GLBTQ communities, members, individuals, participants, allies, etc., but according to the collection of algorithms of so-called 'contextual' or 'associative' search engines."
It has me thinking broadly about the way people are shaped by even the most mundane interactions – with people, with environments, with tools. While these interactions may go unnoticed (algorithms seem so invisible, just working in the background of our online activities), these interactions can shape perceptions of one another, of the self. If these systems operate according to an established binary “of conformity or deviation from a norm,” then any interaction with this system shapes the user’s perception of normativity. That’s significant!
So we are influenced by the algorithm (we’re shaped by our interaction with it), but who influences the algorithm?
As demonstrated by your example, these algorithms are limited: they operate according to a specific model, and affected parties have little (to no) room to influence it. Even though your journal followed the “correct” citation method, even though you contacted them to correct the error, JFK remains your co-author. “It produces rather than identifies” normative values, potentially taking a level of control away from communities who would otherwise do the work of defining and identifying themselves.
As a result, algorithms don’t appear to be very “democratizing and liberating.” Instead, they seem to constrain online behaviors and quietly influence our sense of self. It makes me uncomfortable. I wonder what JFK would say.