What opportunities are available to influence the way algorithms are programmed, written, executed, and trusted?

  • Curator’s Reflection in a Tag Cloud

by Daniel Hocutt — Old Dominion University

I’m drawn to tag clouds as tools for visualizing patterns in texts. Selfe and Selfe (2013) offer a useful heuristic for employing tag clouds to map a field, which I’ve used here to reflect on the contributions to this MediaCommons Field Guide Survey. As recommended by Selfe and Selfe, I’ve identified the question I hope to answer using the tag cloud: What do scholars (as represented in this collection) have to say about algorithms?

It’s fitting to use an algorithm-driven tool to analyze the contributions to this Field Guide on algorithms, both as a practical matter and from a theoretical perspective. In their heuristic, Selfe and Selfe remind us that algorithm-driven digital tools like tag cloud generators remain mediating tools whose use requires thoughtful, careful consideration as what DePew (2015) calls “applied rhetoric” (p. 440).

    Thinking of text clouds as wholly determined by computers, however, can mask a number of important issues involved in generating a text cloud and much of the work that must be done to make text clouds useful to a particular audience. To make good use of computerized text-cloud generators, you need to make certain decisions about the rules that structure the terms within the cloud. (Selfe & Selfe, 2013, p. 29).

I used Tag Crowd (www.tagcrowd.com) to create the tag cloud. It’s a basic tag cloud creator, but its ability to group related English words and to list word frequencies (rather than relying on text size alone) makes it a useful tool for illustrating trends and patterns.

I set the corpus for analysis as the entire text of all 11 contributions plus my own introduction and copied it into the Tag Crowd interface to generate a tag cloud of the responses. I set the minimum frequency to 15, meaning a word had to appear at least 15 times in the corpus to be included in the cloud. I arrived at this threshold through multiple attempts, testing minimum frequencies from 5 to 20. Raising the minimum to 20 reduced the number of terms too much, while lowering it to 10 or fewer crowded the tag cloud too densely. With the minimum frequency set at 15, the maximum-word setting became irrelevant, but I set it to 100 words as a failsafe against too crowded a cloud.

    Tag Crowd offers the opportunity to group similar words so that the combined frequency of similar words like learn, learned, and learning is reported under a single term, learn. Tag Crowd also enables users to omit terms in order to focus attention on specific aspects of the cloud. For this cloud, I omitted the terms algorithms, comments, help, login, picture, post, and register. All but one of these terms appears on the page as part of the MediaCommons user, post, and comment management process (comments, help, login, picture, post, and register). I omitted the term algorithms to focus attention beyond the stated focus of the Field Guide Survey question.
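For readers curious about what a tool like this is doing behind the interface, the following minimal sketch mimics the counting, grouping, and filtering steps described above. The suffix-stripping rule and the file name are illustrative assumptions, not Tag Crowd’s actual implementation.

```python
import re
from collections import Counter

MIN_FREQUENCY = 15   # a word must appear at least this often to make the cloud
MAX_WORDS = 100      # failsafe cap on cloud size
OMITTED = {"algorithms", "comments", "help", "login", "picture", "post", "register"}

def group(word):
    """Crude grouping so that learn, learned, and learning count as one term."""
    for suffix in ("ing", "ed", "s"):
        if word.endswith(suffix) and len(word) - len(suffix) >= 4:
            return word[: -len(suffix)]
    return word

def tag_cloud(corpus_text):
    words = re.findall(r"[a-z]+", corpus_text.lower())
    counts = Counter(
        group(w) for w in words
        if w not in OMITTED and group(w) not in OMITTED
    )
    return [(term, n) for term, n in counts.most_common(MAX_WORDS) if n >= MIN_FREQUENCY]

# Example use with the combined text of the contributions (hypothetical file name):
# print(tag_cloud(open("field_guide_responses.txt").read()))
```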

    The resulting tag cloud, showing 35 of 1,825 possible words, appears below.

    This word cloud shows points of intersection between human experience and algorithmic existence. Many who responded to this survey question are teachers, and this word cloud seems to point toward the intersection of pedagogy and algorithms in the way students use, characterize, recognize, analyze, and get mediated by algorithms and their functions. Algorithms seem to push us toward ways of engaging with one another: through social experiences, games, writing, play, rhetoric, reading, and courses. Algorithms work in the realm of data, networks, technology, and software. Maybe we look to algorithms for ways to score or predict the unpredictable, like recent terrorist attacks in Paris or human activity in general. And perhaps we place algorithms on a continuum between human and machine, then seek to question whether algorithms can and should be expected to act ethically, to be rhetorical, to be social, to learn, to play games, to be like humans. And maybe we even question whether humans ought to engage with more data, to use networks and scoring to be more like algorithms.

    Thanks to all who have participated, and all who will continue to participate, in this survey. The collection is made richer by the growing number of voices joining the conversation, and I hope you’ll take a few minutes to read some of the selections and offer your own reaction.

    References

    DePew, K. (2015). Preparing for the rhetoricity of OWI. In B. L. Hewett & K. E. DePew (Eds.), Foundational practices of online writing instruction (pp. 439-468). Anderson, SC: Parlor Press.

Selfe, R. J., & Selfe, C. L. (2013). What are the boundaries, artifacts, and identities of technical communication? In J. Johnson-Eilola & S. A. Selber (Eds.), Solving problems in technical communication (pp. 19-48). Chicago, IL: University of Chicago Press.

  • Imagine the Odds

by Bryan Carter — University of Arizona

Curator’s Introduction: Bryan and I chatted at length back in October about his contribution to the Field Guide Survey. He planned for students in his African Americans in Paris class, during their Thanksgiving field trip to Paris, to post their experiences and reflections to Twitter using a course hashtag. Upon returning at the end of the break, Bryan planned to search the hashtag on Twitter and to use his post to reflect on why and how Twitter’s search algorithm selected and organized the “Top” posts the way it did. (Consider, for example, the results of searching #ParisClimateConference on Twitter: On December 2, the first item on the “Top” results page was not about Paris at all, but was a post from Slate titled Indonesia’s Fires Are an Environmental Catastrophe and a Climate Nightmare. On December 3, the first item on the “Top” results page was a tweeted collection of four photos from China Xinhua News. Why is that? How does that happen?)

    The terrorist attacks in Paris on November 13, 2015, canceled the field trip to Paris and sent Bryan, already on location preparing for students’ arrival, back to Arizona. The canceled trip resulted in no Twitter posts in Paris, no hashtags, no Twitter algorithm at work on class hashtag search results, and no opportunity to reflect on why and how those results were what they were. Instead, Bryan contributed the following reflection reminding us that, despite our best plans and attempts, our algorithmically mediated lives sometimes succumb to the inexplicable and unpredictable.

    Each year as I begin a new semester and its courses, I always wonder about the sorts of students I will encounter. Will they be bright? Enthusiastic? Engaged? Or, will some of my best students from previous semesters choose to take one of my courses again? I’m always pleasantly surprised when I see familiar faces among those staring back at me. Imagine my surprise when, at the beginning of this fall term, there were not one or two, but seven of my best students from the previous semester enrolled in my African Americans in Paris course. I was even more delighted when an additional three volunteered to be Preceptors for the course.

It was almost like a family reunion on the first day, and I couldn’t wait to have these students take the lead intellectually and technologically. I had already witnessed remarkable growth in all of these students just one semester earlier, and I couldn’t believe the odds of so many being in the follow-on class. Imagine the odds of their not necessarily wanting to be in the same collaborative groups, choosing instead to mix with the newbies in class and using past experience and familiarity with my teaching style to explain to their peers assignments such as the Experiential Writing Assignment or VoiceThreads.

And if this were not enough to encourage me to skip to class every Tuesday and Thursday afternoon, imagine my sheer delight when over half the class decided to participate in our optional field trip. This course focuses on the African American expatriate presence in Paris from just after the turn of the 20th century through the 1960s. We study not only the expatriate experience but also the French interactions with their colonies and the people living there, along with the notion of negrophilia in France during the period. Our field trip, taken annually around Thanksgiving, is extremely exciting for students, particularly those who have never traveled outside of the country or imagined themselves participating in a study abroad experience. The activities planned while in Paris support and reinforce that which we study throughout the term, and we visit locations where African American writers, artists, entertainers and educators lived, wrote, created and performed.

Along with these experiential activities, students design and complete a number of digitally oriented projects. These include using Augmented Reality to “augment” aspects of their experience while in Paris: statues, doorways, monuments and more. Additionally, students complete a mini-documentary video of their collective experiences while walking around the city or on one or more of the variety of culturally oriented tours that we take while there. The students also have an opportunity to interact and collaborate with students from the University of Paris IV-Sorbonne, where I am a visiting professor. This collaboration includes a wonderful Thanksgiving meal with one another.

It’s during these experiences that I wonder at the odds — odds-making being a statistical, if not an algorithmic, process — of such an enthusiastic and engaged number of students all enrolling in the same class and deciding to take the optional, self-financed field trip to Paris towards the end of our semester. Imagine the planning that goes into such an adventure: coordinating accommodations, hoping that the perfect number of male and female students decide to go so that an even number can be assigned to each of the shared, reserved apartments. We began the term meeting once per week to discuss practical, programmatic and digital aspects of our trip. Two of the students had been to Paris in the past, three spoke French, two had video editing experience, one was a computer science major who wanted to help program or otherwise put his skills to creative use, one was a spoken word artist, and one was a musician. What a talented mix of students! Imagine the odds.

    We began meeting via Google Hangouts, including some of the students from the Sorbonne, during our weekly meetings. During one meeting, the conversation shifted to where students lived. Most of my American students live on or near campus here at the University of Arizona, whereas nearly all of the students from the Sorbonne live in the suburbs. “What’s it like in the suburbs?” one of my students asked, probably envisioning a stereotypical U.S. suburb. What my students heard silenced the room for a moment. They heard of hour-plus train commutes, double-digit unemployment, cramped living conditions, crime, lack of opportunities, education that is not on par with schools in Paris, and other social problems. But, it’s less expensive and sometimes all that students in Paris can afford.

“So what do people do about that?” another of my students asked. This time, it was silent for a moment on the Sorbonne end of the wire. “It’s tough here for some,” one of the Sorbonne students said. Others chimed in and related how some turn to crime because it’s the only way to survive, others to drugs. Still others turn to things far worse, one of the French students related. You don’t see them for a while, and later when you do, they are different. “But everybody is not like that,” another French student quickly said. These conversations enlightened both sides on a number of issues that, prior to these brief interactions, they had only experienced through mediated versions of each other’s cultures. These are the kinds of experiences of which faculty members dream. Imagine the odds.

I watched as these students planned projects with one another, showed one another how to use various applications, and decided where video shoots or live broadcasts back to the States would occur, where 360 photospheres would be taken, and where they would socialize with one another. I watched friendships emerge and some understanding of one another’s cultures take root. Imagine the odds that, exactly one week before my Arizona students were to travel to Paris, the most horrific attack would occur there, with gunmen and suicide bombers killing dozens of people at a concert hall and at nearby cafes. Imagine the odds that these tragic events would set off a wave of effects that ultimately touched our field trip: State Department warnings, President Hollande’s announcements restricting school groups from traveling in the country, the raids in the suburbs and in Belgium, the second suicide bombing and shootout in Saint-Denis, and more. I arrived in Paris the day after the first attack occurred and witnessed a city in shock yet trying to get on with its daily life. Mournful, yet out and vigilant. Resistant yet contemplative about how and why these events happened. Students on both sides expressed the same ideas, some even sprinkled in a bit of anger or frustration that there are some who are so intolerant of the beliefs of others.

I was conflicted. First, because I would never want anything to happen to any of my students, French or American. One can never predict or guarantee where something tragic will or will not happen, of course, and the thought of any of my students socializing in a cafe when something like this occurred mortifies me. On the other hand, I wanted my students to witness this city’s resistance to such ideologies. I wanted them to carry out their collaborations, meet their colleagues on the other end, and experience the City of Light. I wanted to see those lights shine bright inside them.

    Imagine the odds, after planning for almost a year, that the night before departure, my university would cancel the trip for the students. I was relieved, disappointed, angry, frustrated and thankful. I’m not a mathematician, nor a statistician, but I believe that the odds of predicting all these events occurring in such a way, and in such an order and magnitude, are unimaginable.

• Algorithms & Pedagogy: Or, Why Is It Difficult to Use Big Pedagogical Data as an Individual Instructor?

by Rochelle (Shelley) Rodrigo — University of Arizona

Photo of two individuals in a dark room with multiple different colored lights from different directions. The individuals are only visible as dark shadows.

It is no secret that upper level administrators in higher education are continuously prompted to participate in “data driven decision making” (e.g., Stratford, 2015; Sander, 2014). As their budgets shrink, various stakeholders want to know both what data is informing a decision as well as what data will be collected to measure a return on investment (ROI). This pressure trickles down to individual program administrators as well as individual instructors.

Time and energy are probably the biggest obstacles to individual faculty implementing data-driven pedagogical decisions. If I were to assess the usefulness of low-stakes scaffolding activities in my writing courses (e.g., “Does this specific peer review prompt help students make higher-order, or global, as well as lower-order, or local, revisions in their writing project?” or “Does this particular synthesis activity help students develop more nuanced arguments?”), it would mean that I have carefully constructed the course with learning outcomes for the course, the major units/projects, and every scaffolded activity. I would need to use something like Wiggins and McTighe’s (2008) backwards instructional design method. It would mean I have a carefully designed course map that can track course, major project/assessment, and individual activity learning outcomes up and down, back and forth across the course curriculum. Once I had tested my scaffolded assignments, trusting that they functioned in the curriculum as I intended, I could then use students’ work on those assignments to potentially forecast their project and/or course grades.

Time and energy aside, there are two other reasons it is extremely difficult for individual instructors to use algorithmic and/or big data to inform their pedagogy. First, it is technologically and/or mathematically difficult (that is, most instructors have not been taught how to write code, even if it is “simply” intermediate to advanced calculation formulas in spreadsheet applications like Microsoft Excel or Google Sheets). Second, and a much more slippery issue, individual faculty do not necessarily have access to all of the data that helps construct a more fleshed-out picture of the individual student outside of an individual course.

First, I must have a well-designed curriculum to adequately assess and then forecast student success. Second, I must then have the technologies to help me run the numbers. Again, with smaller classes, a single instructor might develop formulas in spreadsheets to look at how students did on certain activities and project how they are doing in the course. If faculty struggle to develop the spreadsheets and/or to process large numbers of students, they might use applications like the Google Sheets add-ons Flubaroo and Goobric. Many learning management systems (LMSs) also provide this type of gradesheet analytics: some provide detailed charts and graphs of students, and some simply add color to each student’s row (for example, if Desire2Learn believes a student is doing well in the course, that student is assigned cooler colors like green and blue, while students doing less well are assigned warmer warning colors like red). On the one hand, this can be useful for an instructor, as she can quickly look at the calculated projections and then design an intervention to help the student reverse course and succeed in the class.
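As a rough illustration of the kind of projection an instructor might build herself, here is a minimal sketch; the activities, weights, and grading scale are hypothetical. Notice that when the instructor writes the formula, every assumption in the forecast is visible to her.

```python
# The activities, weights, and 0-100 scale below are hypothetical.
scaffolded_scores = {            # one student's early, low-stakes activity scores
    "peer_review_1": 85,
    "synthesis_activity": 70,
    "draft_1": 78,
}

weights = {                      # how much each activity is assumed to predict
    "peer_review_1": 0.25,       # the project grade, based on past sections
    "synthesis_activity": 0.35,
    "draft_1": 0.40,
}

projected_project_grade = sum(scaffolded_scores[a] * weights[a] for a in weights)
print(f"Projected project grade: {projected_project_grade:.0f}")  # roughly 77 here
```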

The problem, however, emerges when the instructor tries to design an intervention; how can she if she does not know how the forecast is being calculated? Some retention scholarship heavily emphasizes student attributes prior to and/or outside the course (Bean & Metzner, 1985); other work emphasizes the student’s social integration within the institution (Tinto, 2012). Both types of data are difficult for an individual instructor to come by. If this data is collected, it is usually at an institutional level. Similarly, most LMSs track when, for how long, and how often students log on to the system. This is data an LMS can incorporate into its predictive analytics.

As with most social science research, the predictive analytics problem is context. The algorithm may know whether or not a student is the first in her family to go to college, works full time, or has taken an online course before. But the algorithm doesn’t know whether that student has a friend and mentor who has gone to college, how many hours over 40 she usually works, or the obsessive nature of her time management skills. Even if the LMS tracks student access, the predictive algorithm does not know how the course is designed (how many deadlines per week does this instructor require? If only one, students may log on only once a week). Similarly, some students like to print everything; they may have spent a chunk of hours earlier in the course printing the course materials and then may not spend a lot of time in the LMS later in the term. The formula used to assess whether or not a student is sufficiently “active” in a course would privilege one course design or type of student over another.
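A deliberately naive sketch makes the point: the threshold below is invented, and the same student can look “at risk” or “fine” depending on how the course is designed rather than on how she is actually doing.

```python
def flag_as_inactive(logins_this_week, expected_logins_per_week=3):
    """Flag a student whose login count falls below an arbitrary norm."""
    return logins_this_week < expected_logins_per_week

# A course with one deadline per week may only require one login, and a student
# who printed everything in week one may rarely log in at all.
print(flag_as_inactive(logins_this_week=1))  # True: flagged, though possibly thriving
print(flag_as_inactive(logins_this_week=4))  # False: "active," though possibly struggling
```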

Recognizing that individual instructors might not have the time, out-of-course information, and/or technological/mathematical know-how, we already see more institutions of higher education prompting faculty to more carefully incorporate LMS analytics into their teaching processes. We also see an increasing number of companies like Civitas Learning selling predictive analytics packages to institutions, with products like Inspire, that prompt individual faculty to work within the analytics environment. In both cases, the predictive formula is like Colonel Sanders’s herbs and spices for Kentucky Fried Chicken—secret. I appreciate the help in trying to run “big data” style analytics on my individual courses; however, if I don’t understand what data is being used, and how it is being weighted, how can I possibly make meaning from the resulting analytics? With this data, all I see is the shadowed outline of the student cast by a variety of different lights, unclear of the significance of each light and the depth and weight of the shadow.

I would like to thank my colleagues in the field of writing studies who have already tackled this MediaCommons prompt as it specifically relates to writing, rhetoric, and composition (another concern of mine that they are better qualified to address). I would also like to thank Catrina Mitchum for studying a topic I have only dabbled in, online student retention, and for helping me to learn more.

    Creative Commons licensed image posted at Flickr by Paint

  • Experiments in Ethics: The Ethical Programs of Self-Driving Cars

by Jim Brown — Rutgers University-Camden

    In a recent study, “Autonomous Vehicles Need Experimental Ethics: Are We Ready for Utilitarian Cars?” Jean-Francois Bonnefon, Azim Shariff, and Iyad Rahwan surveyed people regarding their feelings about autonomous vehicles (AVs) and what kinds of decisions they would prefer those machines to make. Unsurprisingly, participants thought that AVs should make decisions that save the most lives, unless of course they themselves were the collateral damage.

    This study is not the only experiment in ethics that is currently underway. The moment we put these vehicles on the roadways, releasing code-driven automobiles into the wild, we began a massive experiment in ethics. We often think of ethics as a philosophical discourse, removed from everyday life. In fact, this is the assumption that undergirds the study conducted by Bonnefon, Shariff, and Rahwan: they posed a number of hypotheticals and then drew broader conclusions about cultural anxieties surrounding AVs. But the construction of AVs and the writing of the code that determines how they operate is itself an experiment in ethics, one that writes an ethical program directly into computational machines.

Brett Rose of TechCrunch argues that the approach of Bonnefon, Shariff, and Rahwan is limited, and I would tend to agree, but not for the same reasons. Rose sees the discussion of ethical algorithms as a waste of finite resources:

    “Don’t get me wrong, these hypotheticals are fascinating thought experiments, and may well invoke fond memories of Ethics 101’s trolley experiment. But before we allow the fear of such slippery-slope dilemmas to de-rail mankind’s progress, we need to critically examine the assumptions such models make, and ask some crucial questions about their practical value.”

    For Rose, these models assume that we consciously weigh all of the possibilities prior to making a decision. From everyday experience (and especially behind the wheel of a car), we know this isn’t the case. Rose’s solution? Physics and hard data should always trump the squishiness of ethics:

    “Even if a situation did arise in which an AV had to decide between potential alternatives, it should do so not based on an analysis of the costs of each potential choice, information that cannot be known, but rather based on a more objective determination of physical expediency.

    This should be done by leveraging the computing power of the vehicle to consider a vast expanse of physical variables unavailable to human faculties, ultimately executing the maneuver that will minimize catastrophe as dictated by the firm laws of physics, not the flexible recommendations of ethics. After all, if there is time to make some decision, there is time to mitigate the damage of the incident, if not avoid it entirely.” (emphasis mine)

    This attempt to separate physics from ethics is built on a dream (one that we are especially likely to find on sites like TechCrunch) that computational machines allow for an escape from the sloppy inefficiencies of the human (and, perhaps, even the humanities).

    In Ethical Programs: Hospitality and the Rhetorics of Software, I argue that our dealings with software happen at multiple levels of rhetorical exchange. We argue about software, discussing how it can or should function. But we also argue with software, by using computation to make arguments, and in software, becoming immersed in computational environments that shape and constrain possibilities. The paper by Bonnefon, Shariff, and Rahwan sits primarily at the level of arguing about software, and it risks treating ethics as something rational and deliberative, which we know leaves out most of the ethical decisions we make each day. We decide in the face of the undecidable; we execute ethical programs (computational or otherwise) without the luxury of walking through all of the possibilities.

    Rose's argument is that arguing about software is a waste of resources and that we should instead focus our time and energy on “mankind's progress.” Such an approach ignores that an AV’s algorithm will have to make decisions about what data to include or exclude (and how to prioritize that data) as it determines whether to accelerate, brake, or turn. Rose argues that AV algorithms should be focused on “physical expediency” rather than the endless regress of ethics. But his very use of the word “should” is a signal that this approach is not confined to the “firm laws of physics” but is instead already caught up in ethical questions.
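A deliberately simplified sketch makes the point concrete. Every name and number below is hypothetical, but even this “physics-first” maneuver selector has to decide how much each kind of harm counts, and that weighting is an ethical program, not a law of physics.

```python
OCCUPANT_WEIGHT = 1.0     # how much harm to the car's occupants "counts"
PEDESTRIAN_WEIGHT = 1.0   # how much harm to people outside the car "counts"

def choose_maneuver(options):
    """Pick the maneuver with the lowest weighted expected-harm score."""
    def harm(option):
        return (OCCUPANT_WEIGHT * option["p_occupant_injury"]
                + PEDESTRIAN_WEIGHT * option["p_pedestrian_injury"])
    return min(options, key=harm)

options = [
    {"name": "brake hard",  "p_occupant_injury": 0.10, "p_pedestrian_injury": 0.40},
    {"name": "swerve left", "p_occupant_injury": 0.30, "p_pedestrian_injury": 0.05},
]
# Changing the two weights changes the "right" maneuver; that choice is not physics.
print(choose_maneuver(options)["name"])   # "swerve left" under equal weights
```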

  • Crowdsourcing Out the Sophistic Algorithms: An Ancient View

by Walt Stevenson — University of Richmond

While only very recently have scholars begun to address the complex rhetorical strategies behind algorithms (see Ingraham, 2014), to discuss them I would like to pull one of the oldest rhetorical tricks out of the bag, the dissoi logoi. Most electronic media users, I suspect, will think of algorithms as a vehicle for marketing, but in opposition I would like to describe another model, a crowd-sourced algorithm for linguistic and literary study.

    The marriage of big data with sophisticated algorithms has spawned an electronic Big Brother that few of us have been able to ignore as he continuously looks over our shoulder. The press have even celebrated algorithms that subtly observe our electronic behavior, often triangulate it with data on similar users, crunch a vast record of credit card purchases, and come to accurate conclusions: perhaps most celebrated, the Algorithm will know when a woman is pregnant before her family will (see Hill, 2012). Suffice it to say here, Aristotle would easily understand that an enterprise cleverly attaining insight into individual motivations (e.g., instinct to shop for a crib – Rhetoric 2.2-11) in order to deliberately craft a sales pitch (Target famously had its algorithm mix non-pregnant-woman ads into fliers to disguise their real purpose – Rhetoric 2.12-17) that will persuade someone to a specific course of action (buying crib, car seat, diapers, etc., at Target – Aristotle is reserved on this part, but see Rhetoric 3.19) is fundamentally rhetorical. The algorithm, in good rhetorical style, learns all it can about the audience, ponders the best way to persuade it, and then makes its well-researched pitch. And as with the majority of good rhetoric over the ages (pace Plato and Aristotle), it performs all the stages of persuasion not only without any complicit consent of the audience, but under the well-rewarded intention that the audience should remain as unconscious of the whole procedure as possible. This algorithmic rhetoric should be called in blunt terms sophistic. Such forms of persuasion are so ubiquitous, and the goal of selling car seats so benign, that most audiences happily direct their attention away from this form of persuasion.

Voluntarily crowd-sourced algorithms do the opposite. Let’s take the classical example of dependency treebanking as used in the award-winning Perseus Project, an algorithm that recruits expert users to mark up grammatical information in a text with the goal of creating scientifically accurate dictionaries, grammars, and even tables of references and allusions (see The Ancient Greek and Latin Dependency Treebank and Bamman and Crane, 2011). So when Target’s algorithm peeks down a dark alley to figure out what you are buying, the treebank calls you over to volunteer to help paint a mural on the wall. When Target asks your acquaintances behind your back what you really like to buy, the treebank tells you that your friends are going to double-check your work for accuracy. When Target analyzes the information it snuck from you to create a flyer that disguises its real purpose, the treebank analyzes the information it elicited from your hard work to illuminate new meanings in the text for you.
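For readers unfamiliar with treebanking, here is a toy illustration of the kind of markup volunteers contribute, using a simple Latin sentence; the field names are generic stand-ins, not the actual schema of the Ancient Greek and Latin Dependency Treebank.

```python
sentence = "Puella rosam amat"   # "The girl loves the rose"

annotation = [
    # each token records its lemma, part of speech, syntactic head, and relation
    {"id": 1, "form": "Puella", "lemma": "puella", "pos": "noun", "head": 3, "relation": "subject"},
    {"id": 2, "form": "rosam",  "lemma": "rosa",   "pos": "noun", "head": 3, "relation": "object"},
    {"id": 3, "form": "amat",   "lemma": "amo",    "pos": "verb", "head": 0, "relation": "root"},
]

# Aggregated over thousands of volunteer-annotated sentences, records like these
# can be queried to build dictionaries, grammars, and usage statistics.
print([t["form"] for t in annotation if t["head"] == 3])   # the words governed by "amat"
```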

So much for the first part of the dissoi logoi, the stark opposition. The next step needs to be an effort at synthesis. On the one hand, it should come as no surprise that voluntary crowdsourcing algorithms have become suddenly popular in the corporate world. Microsoft, for instance, has made a recent and concerted push in this direction, and we won’t be surprised if these algorithms start to get sophistic and make lots of money. On the other hand, very few teachers of ancient Greek and Latin literature will resist for long a sophistic algorithm that would, for instance, surreptitiously observe students reading an electronic version of the Iliad, triangulate their habits with the immense data on reading habits we have from English-language treebank projects, and eventually find covert ways to seduce them into reading more effectively and pleasurably (see, for instance, R. Levy, K. Bicknell, T. Slattery and K. Rayner, 2009, where Penn Treebank data is synthesized with eye movement data to come to new understandings of reading strategies). By and by, the dazzling efficiency and capabilities of contemporary corporate algorithms will be welcomed and celebrated even in the most humanistic disciplines. Let’s just hope that the authors of all these algorithms, corporate and humanistic, assume a rhetorical bent that respects the volition of their audience.

  • How are we tracked once we press play? Algorithmic data mining in casual video games

by Stephanie Vie — University of Central Florida

Casual video games are played by millions, and the Entertainment Software Association (ESA) offers research showing that casual and social games are among the most frequently played games today (played by 47% of those who play games). Social games are those that are played within social networking sites, such as Facebook, or as applications on mobile devices that connect to the user’s social network. Many social games can also be considered casual games: those that are easy to play as well as simple to pick up, play for a while in short bursts, then put down.

Casual social games like Fruit Ninja, Candy Crush Soda Saga, and Plants vs. Zombies are popular even amongst those who might not normally label themselves “gamers.” However, players of these games frequently fail to consider the convergence of networked game play and game algorithms that mine both their own data and the data of individuals in their networks. Already we see our gameplay options being shaped by this surveillance of casual game play, and these algorithms bring up compelling issues regarding user privacy and data mining that digital humanities scholars are well poised to address.

    Online social networks thrive on the constellations of strong and weak ties that make up our personal networks of friends, family members, casual acquaintances, work buddies, friends of friends, and so on. As Jesper Juul (2010) has explored in his book A Casual Revolution: Reinventing Video Games and Their Players, because of their connections to these massive webs of strong and weak ties, socially networked games are embedded within highly complex networks. And as I have explored elsewhere, casual game players who play these socially networked games leave digital traces of their actions in their wake, “their game play activities … visible to corporate entities … as well as the player’s own social networks.”

Surveillance, of course, is not unique to casual social games. From security cameras at schools and stores to facial recognition software, keystroke loggers, and license-plate tracking programs, the United States has long used technological tools to gather information about its citizens. However, the intersection of social networks and casual gameplay—a space where players generate massive amounts of data by the minute—is particularly intriguing for digital humanities scholars. For example, Runge, Gao, Garcin, and Faltings (2014) analyze when casual game players of Diamond Dash and Monster World quit playing the game; this moment in time, often called “customer churn” or attrition rate, can be predicted via gaming algorithms. Furthermore, in games where customer churn can be predicted via algorithmic analysis, players can then be retained with targeted email and social media campaigns or notifications as well as the implementation of free in-game currency or similar benefits. In other words, if the game algorithms predict that you as a player are likely to leave soon (and these predictions may be based on elements like time and date of last login, accuracy rates of your gameplay, the country you’re from, the number of other players in your network who are active, etc.), then the game may “reward” you with free elements like in-game currency, extra lives, or a new level to hook you in and keep you actively playing. Meanwhile, the player is frequently unaware that these games have been logging all the facets of their game play activities and that the algorithms are busy breaking down patterns and making predictions about the “stickiness” of the game play.
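A hypothetical sketch of this churn-and-reward loop might look like the following; the features, weights, and threshold are invented for illustration rather than drawn from any actual game’s analytics.

```python
def churn_risk(player):
    """Estimate how likely a player is to stop playing soon (0 to 1)."""
    risk = 0.1
    if player["days_since_last_login"] > 3:
        risk += 0.3
    if player["recent_accuracy"] < 0.5:           # losing streaks push players away
        risk += 0.2
    if player["active_friends_in_network"] < 2:   # weak ties keep players coming back
        risk += 0.3
    return min(risk, 1.0)

def retention_action(player):
    """If the model predicts churn, 'reward' the player to keep them playing."""
    if churn_risk(player) > 0.5:
        return "grant free in-game currency and send a re-engagement notification"
    return "no action"

player = {"days_since_last_login": 5, "recent_accuracy": 0.42, "active_friends_in_network": 1}
print(retention_action(player))   # the player never sees the scoring behind the "gift"
```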

Greater awareness of this constant surveillance in casual social games is the first step in influencing humanities research into such algorithms. We should be paying greater attention to casual games, an area of study too easily dismissed as frivolous and unacademic; however, the seeming simplicity of these games belies the importance of the algorithmic analysis occurring by the second underneath the surface.

  • algorithms at the seam: machines reading humans +/-

by Carl Whithaus — UC Davis

    Mapping the seam where humans and software interact and evaluate written language is a challenge for scholars in the humanities and the social sciences as much as it is a challenge for researchers in computer science and computational linguistics.

    Automated Essay Scoring (AES) software is driven by algorithms that attempt to codify the complexities of written language. AES packages have been developed and marketed by educational content and assessment firms such as ETS, Pearson, and Vantage Learning; these efforts have been publicly critiqued with over 4,000 signatures on the petition Professionals Against Machine Scoring of Student Essays in High-Stakes Assessment. In addition, algorithms act as readers in TurnItIn’s new auto-grading service. Based on human readers’ scoring of sample essays, the Turnitin Scoring Engine “identifies patterns to grade new writing like your own instructors would.” These patterns are based on algorithms that attempt to map what the human readers have done on the sample essays and create a statistical model that the software can apply to “an unlimited number of new essays.” Speed and reliability are the promised benefits of this algorithmic reading.

    This post sketches some of the history, challenges, and dynamics around having software algorithms score and respond to written language. Understanding how different algorithms and software packages work is essential for entering into debates about not only the reliability of AES but also its validity. Like human readers, each algorithmic reading engine emphasizes different aspects of a piece of writing, has its own quirks, its own biases, if you will. These algorithmic-readerly biases have something to do with history. In this case, it is not a reader’s biographical history but the software’s developmental history that shapes the reading approach encapsulated in the automated essay scoring and response engines.

The core engine of ETS’s e-rater was developed by ETS researchers during the 1990s. It constructs an ideal model essay for a task based on up to twelve features. When it reads, it reports its comparative analysis back through different areas of analysis, such as style; errors in grammar, usage, and mechanics; identification of organizational segments, such as the thesis statement; and vocabulary content. Its writing construct and model of the reading process is informed primarily by Natural Language Processing (NLP) theories (Attali & Burstein, 2006; Leacock & Chodorow, 2003; Burstein, 2003).
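As a rough illustration of a feature-and-weights approach in this spirit, consider the sketch below; the three features and their weights are invented placeholders and do not reproduce e-rater’s actual model.

```python
import re

def extract_features(essay):
    """A few crude proxy features of the kind feature-based engines rely on."""
    words = re.findall(r"[A-Za-z']+", essay)
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    return {
        "word_count": len(words),
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "vocabulary_diversity": len({w.lower() for w in words}) / max(len(words), 1),
    }

# Weights of this kind would normally be fit against human-scored sample essays;
# these numbers are placeholders.
WEIGHTS = {"word_count": 0.005, "avg_sentence_length": 0.05, "vocabulary_diversity": 2.0}

def score(essay):
    features = extract_features(essay)
    return sum(WEIGHTS[name] * value for name, value in features.items())

print(round(score("Algorithms read essays. They do not read them the way people do."), 2))
```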

Cengage/Vantage Learning’s Intellimetric® was also developed with Natural Language Processing (NLP) theories in mind. This algorithmic model draws on up to 500 component proxy features and organizes them into a few selected clusters. These clusters include elements such as content, word variety, grammar, text complexity, and sentence variety. Intellimetric supplements its NLP-based approach with semantic wordnets that attempt to determine how close the written response is to the vocabulary used in model pieces. Intellimetric also integrates grammar and text complexity assessments into its formulas. It aims to predict the expert human scores and is then optimized to produce a final predicted score (Elliot, 2003).

Pearson’s Intelligent Essay Assessor (IEA) is delivered through the Write to Learn! platform. Originally it was developed by Thomas Landauer and Peter Foltz. It uses Latent Semantic Analysis (LSA) to measure how well an essay matches the model content for a piece of writing. In contrast to e-rater, mechanics, style, and organization are not fixed features but rather are constructed as a function of the domains assessed in the rating rubric. The weights for the proxy variables associated with these domains are predicted based on sample human readings (rating scores). These can then be combined by the software with the score calculated by the LSA-based algorithm for the piece’s semantic content (Landauer, Foltz, & Laham, 1998; Landauer, Laham, & Foltz, 2003).
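Here is a minimal sketch of the general LSA technique using scikit-learn, reducing tf-idf vectors to a small latent space and measuring how close a new essay sits to model essays; this illustrates the method, not Pearson’s proprietary implementation, and the example texts are invented.

```python
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

model_essays = [
    "Photosynthesis converts light energy into chemical energy in plants.",
    "Plants use sunlight, water, and carbon dioxide to produce glucose and oxygen.",
    "Chlorophyll in leaf cells absorbs light to drive the production of sugars.",
]
new_essay = "Plants turn sunlight and carbon dioxide into sugar through photosynthesis."

# Build tf-idf vectors, then project them into a small latent "semantic" space.
tfidf = TfidfVectorizer().fit_transform(model_essays + [new_essay])
latent = TruncatedSVD(n_components=2).fit_transform(tfidf)

# How close the new essay sits to the centroid of the model essays stands in for
# "how well the essay matches the model content."
centroid = latent[:-1].mean(axis=0, keepdims=True)
print(cosine_similarity(latent[-1:], centroid)[0][0])
```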

    TurnItIn’s Scoring Engine (sometimes called Revision Assistant) builds on the algorithms developed by Elijah Mayfield at LightSide Labs. This scoring engine attempts to read and respond to student writing based on the elements that human readers would value in the same writing and reading contexts. Samples of graded student writing are input into the algorithms to help the scoring engine learn; this machine learning approach differs from the algorithms used in the NLP- and LSA-based systems. This set of algorithms can point out both strengths and weaknesses in a sample of writing. This algorithmic reader’s history and practice might be as much about feedback and revision as it is about scoring and placement.

These four sketches of how algorithms have been bundled in AES software only scratch the surface of the ways in which algorithms are being used at the seam where humans and machines interact — where humans and machines read and write each other. These algorithmic readers are only a few of the pieces of software being developed within the AES field, or what is beginning to be called Automated Writing Evaluation (AWE). (For a more comprehensive review, see Elliot et al., 2013). This field itself is only a sliver of the ways in which algorithms are being used to read and manage written texts as data.

    In some ways, the iterative feedback possible with these emerging scoring engines is the logical evolution of the intimate green or red squiggly lines already in Microsoft Word, Gmail, and Google Docs. These pieces of AWE software, these bundles of algorithms, are shaping how we think about our words, our writing.

They are intervening in our writing processes, becoming our intimates. They may be closer to our thoughts than our lovers are. What does it mean when we text a love note, and a software agent autocorrects our spelling before our lover reads it? What does it mean when students are writing essays and a bundle of algorithms pushes and shapes their language before a peer or a teacher even sees it?

    These algorithms are not evil. They are us, or they are working with us, or they are working us.

    As a foray into machine reading practices, this post should not end as a dystopia. Rather I want it to end by asking that we go back, that we consider the algorithms and how they are bundled. E-rater is not Intellimetric, and IEA is not TurnItIn’s Revision Assistant. We need to pick at the seam where algorithms read our writing and where we write into the deep well of natural language processing, semantic webs, and machine learning.

    We need to pick at these threads not to undo them, but to come to understand the pluses and minuses they each hold. Knowing how these algorithmic readers work, knowing the threads that bundle these software packages together is a vital task for humanists as well as for computational linguists, for teachers and writers as well as for software developers. I’m curious to see how this discussion plays out.

    References

    Attali, Y., & Burstein, J. (2006). Automated Essay Scoring With e-rater V.2. Journal of Technology, Learning, and Assessment, 4(3). Available from http://www.jtla.org

    Burstein, J. (2003). The E-rater scoring engine: Automated essay scoring with natural language processing. In M. D. Shermis & J. Burstein (Eds.), Automated essay scoring: A cross-disciplinary perspective (pp. 113-122). Mahwah, NJ: Erlbaum.

    Elliot, S. (2003). Intellimetric: From here to validity. In M. D. Shermis & J. Burstein (Eds.), Automated essay scoring: A cross-disciplinary perspective (pp. 71-86). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.

Elliot, N., Gere, A. R., Gibson, G., Toth, C., Whithaus, C., & Presswood, A. (2013). Uses and limitations of automated writing evaluation software. WPA-CompPile Research Bibliographies, No. 23. http://comppile.org/wpa/bibliographies/Bib23/AutoWritingEvaluation.pdf

    Landauer, T. K., Foltz, P. W., & Laham, D. (1998). Introduction to latent semantic analysis. Discourse Processes, 25, 259-284.

Landauer, T. K., Laham, D., & Foltz, P. W. (2003). Automated scoring and annotation of essays with the Intelligent Essay Assessor. In M. D. Shermis & J. Burstein (Eds.), Automated essay scoring: A cross-disciplinary perspective (pp. 87-112). Mahwah, NJ: Erlbaum.

    Leacock, C., & Chodorow, M. (2003). C-rater: Scoring of short-answer questions. Computers and the Humanities, 37(4), 389-405.

  • How will near future writing technologies influence teaching and learning in writing?

by Bill Hart-Davidson — Michigan State University

    I want to begin with this: using algorithms to evaluate writing is nothing new. A holistic scoring rubric and procedure for, say, calibrating human raters of essays…that’s an algorithmic approach. The algorithm is the procedure, after all, and the motive to use it lies in the goal of systematizing something that is typically a complex interpretive task. Whether we do it with humans or hand over the interpretive task to machines, we do so at the cost of much of the nuance that an individual reader might bring to the task.

    Thankfully, Les Perelman at MIT has been working hard to demonstrate that teaching students to write for an algorithmic scoring process is a bad idea. And as Perelman recently said in an interview, it is a bad idea to let computers execute algorithms for holistic evaluation for a straightforward reason: it doesn’t work. You can ask students to produce text that satisfies the rules of the algorithm and is still bad writing. With human readers, the problem can be overcome, but only if the readers are free to go beyond what the algorithm specifies.

    But there is another reason to be skeptical of algorithmic approaches that may be, in the long run, even more compelling. They shift the focus away from giving students formative feedback. They teach humans to behave more like computers. MIT Media Lab director Joi Ito puts it bluntly:

    “The paradox is that at the same time we've developed machines that behave more and more like humans, we've developed educational systems that push children to think like computers and behave like robots.”

    But Ito is not anti-technology. And neither am I. We can imagine a different sort of role for technology to play in teaching and learning if we work at it. Rather than trying to create machines that read (or write) like humans, we can instead create systems that give humans a chance to focus more on how we might improve as writers and communicators.

    What would these look like?

At the Writing, Information, and Digital Experience (WIDE) research center, we’ve been working on assistive technologies that can improve written communication. We’ve used a number of algorithmic approaches to do this. In our peer learning service Eli Review, for instance, we track activity – the feedback that reviewers and instructors give to writers, as well as what writers do with that feedback – to evaluate the “helpfulness” of a review. We do this as a way to help students learn to become better reviewers, something which has itself been shown to improve writing performance.
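One hypothetical way to summarize such activity data is sketched below; this is an illustration of the kind of signal a service could compute, not Eli Review’s actual metric, and the field names are invented.

```python
def helpfulness(comments):
    """Share of a reviewer's comments that the writer endorsed or acted on."""
    if not comments:
        return 0.0
    used = sum(1 for c in comments if c["rated_helpful"] or c["added_to_revision_plan"])
    return used / len(comments)

one_review = [
    {"rated_helpful": True,  "added_to_revision_plan": True},
    {"rated_helpful": False, "added_to_revision_plan": True},
    {"rated_helpful": False, "added_to_revision_plan": False},
]
print(round(helpfulness(one_review), 2))   # 0.67: feedback for the reviewer, not a grade
```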

Another WIDE project uses a machine learning algorithm to help online discussion facilitators visualize and moderate comments in online forums. We worked with science educators at the Museum of Life & Science in Durham, NC, to develop an application called the Faciloscope, which gives the museum staff a way to see when contributors to a thread are interacting in ways that are likely to be productive, and when they are not. The facilitator can then choose to intervene, perhaps by asking a question to get the discussion back on track, or perhaps by banning participants engaged in unwanted or unacceptable behavior. The Faciloscope doesn’t automate any of the work, but it does help facilitators, who may have many simultaneous threads to attend to, know when one could use their attention.
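A generic supervised text classifier of the sort that could flag discussion moves might look like the following; the categories and training examples are invented, and this is not the Faciloscope’s actual model.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

comments = [
    "That's a great question, has anyone measured this at home?",
    "Here's a link to the experiment we tried last week.",
    "This whole thread is pointless and you are all wrong.",
    "Stop posting, nobody cares what you think.",
]
labels = ["productive", "productive", "unproductive", "unproductive"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(comments, labels)

new_comment = ["Could you share how you set up that experiment?"]
print(classifier.predict(new_comment))   # the facilitator, not the model, decides what to do
```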

    So, how will near future writing technologies influence teaching and learning in writing? It is up to us. Algorithms can have a positive role to play, Joi Ito eloquently argues, in making our world more human. This will only happen, though, if we play an active role in the design of algorithmic tools and focus their use on assistive applications rather than seeking to replace human thinking.

  • Toward Ambient Algorithms

by Sean M. Conrey — Syracuse University

In spite of recent work attempting to complicate the concept, the metaphor of the “network” as a mechanistic descriptor for how data connects us to people, places and things online persists. A common critique claims that thinking of networks this way implies an information ecology where explicit and obvious connections between “links” are most valuable because they can be tracked, marketed to, and mined for greater means of connection with “users” later. This is inadequate, the thinking goes, because living organisms and the ecologies they inhabit are simply not machines (or “not simply machines”). Even the notion of the network as an “information ecology” typically conceives of its world as too closed and too human to foster viable, holistic care for all of the people, places and things involved in it.

In common industry parlance, “network” still has these mechanistic connotations in spite of landmark work like Mark C. Taylor’s The Moment of Complexity (2001) that attempted to recover the term from this history of use. In A Counter-History of Composition (2007), Byron Hawk claims that Taylor’s take on network complexity is an outgrowth of “complex vitalism,” an attempt to articulate the intricate relationships in digital technology, for example, as a living system. Thomas Rickert counters in Ambient Rhetoric (2013) that Taylor’s theories (and even Hawk’s reconsideration of them) have a difficult time developing a theoretical language adequate for ecological concerns largely because they emphasize the explicit and overt and thereby fail to articulate ways to attune to the more implicit and covert ambient background that shapes the context from which language and conscious thought arise (99-107). Rickert claims that if we attune to ambience we will start to “push against the metaphors of node, connection, and web,” and arrive first at “metaphors of environment, place, and surroundings and second to metaphors of meshing, osmosis, and blending” (105). Taking these metaphors as a way of thinking through our immersive connection to the environments we inhabit, Rickert claims that language gets woven into the environment and becomes inextricable at the level of ambient attunement. This means that “language and environment presuppose each other or become mutually entangled and constitutive,” and this “opens us to forms of ‘connection’ that are not driven solely by links.” The implications of this are profound, knowing that this guiding metaphor of the “network” plays such a significant role in how we write online, both in algorithmic code and at the more obvious, interface level that most “users” only see.

As key participants in the construction and maintenance of digital environments, algorithm writers are at high risk of perpetuating this particularly destructive metaphorical tendency. Tarleton Gillespie claims in “The Relevance of Algorithms” (2014) that algorithms produce and certify knowledge, and this has political implications. Through Rickert we might extend this to say that when the language of an algorithm becomes presupposed, its driving metaphors do as well. Gillespie’s model is a good starting point, but his concern is largely for the human public. His ideas do not do enough to look to the larger, nonhuman ecological matters with which algorithms interact. Without great care, such algorithms risk describing people, but also places and things, as mere quantities, as purified, aloof or otherwise violently abstracted nouns whose role in the lives of “users” (itself a violently abstracted noun) is simplified to, say, “marketability,” or “function,” or any other violative descriptor. They risk reifying this violence into the lives of the people, places and things that come into contact with their code.

    If these risks are legitimate, then a few key questions come to mind:

    • How do we avoid treating the quantities (be they people, places, or things) that play an essential role in algorithms as mere (violently abstracted) objects so that, rather than becoming connotatively denigrated, they are invited to responsibly participate in whole ecologies of people, places and things?
• Rather than sweeping out these holistic connections for the sake of simplicity, marketing or other “uses,” what moral imperatives could replace this normalized thinking? How would this thinking integrate a deeper ecological sense to be equally concerned with person-to-person ethics and those of nonhuman interaction?
    • There is also the interesting chicken-and-egg metaphysical question (offered by Daniel Hocutt) of whether it is only algorithm writers who must attune to their respective environments in order to write morally sound algorithms, or whether the algorithms themselves are not, in ways, seeking attunement. 

  • Algorithmic Discrimination in Online Spaces

by Estee Beck — UT-Arlington

    Silicon Valley engineers and programmers create the flow of information people engage with online. Whether it is a curated newsfeed or timeline on social media, personalized search results or recommendations from large online retailers, websites and apps collect a lot of data about people’s habits, values, and actions online. The collection of big data is a multi-billion dollar industry. It’s becoming commonplace to use such data in connection with employment, health care, policing, purchases, and even housing.

    And, it’s not human beings who are routinely making quick decisions on whether to extend credit to an individual or to hire a person based on their social media profile. Instead, it’s computers. More specifically, computer algorithms.

    But, are algorithms objective purveyors of truth? Can algorithms accurately predict future outcomes based on previous trends, without bias?

There is a common understanding among people that algorithms are neutral or objective. Perhaps this is due, in part, to the mathematical properties of computer algorithms. However, people write and program algorithms; thus, the complex equations are not free of bias or human influence.

This means computer algorithms can discriminate and effect real changes in people’s everyday lives.
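A deliberately simple, hypothetical example shows how that influence enters: the thresholds and variables below are choices made by whoever wrote the rule, and a proxy like zip code can quietly stand in for race or class.

```python
HIGH_RISK_ZIP_CODES = {"00001", "00002"}   # hypothetical list chosen by the rule's author

def approve_credit(applicant):
    """A toy screening rule; every threshold and variable here is a human choice."""
    if applicant["zip_code"] in HIGH_RISK_ZIP_CODES:   # zip code can proxy for race or class
        return False
    return applicant["credit_score"] >= 650

applicant = {"zip_code": "00001", "credit_score": 720}
print(approve_credit(applicant))   # False: denied by geography, regardless of credit history
```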

    What’s worse is the blurring of social and legal boundaries when algorithms discriminate, because there is little regulatory oversight or legal protection for citizens when algorithms do discriminate.

    As a digital rhetoric and writing/media studies educator, each time I ask students to get online and click around, I am forced to think about their digital data trail. When students Google or use collaborative document sharing, I wonder about how their data is tracked — and sold to advertisers.

    More importantly, I reflect upon the best educational practices for teaching students about algorithms, tracking technologies, and algorithmic discrimination.

    Because computer algorithms are exceedingly complex, I’m not necessarily inclined to teach students a literacy for algorithmic calculations.

Instead, I am more inclined to integrate activism into coursework, encouraging students to speak up and out about the legal and social effects of algorithmic discrimination and to seek regulatory and legal protections.

    However, this is only one model for integrating discussions about algorithms in classrooms. What models might work for you, within your own classroom/department/institution? What assignments and recommendations might offer students opportunity to learn about algorithms? How will your guidance help prepare students for the shifting online information economy?

  • The Essential Context: Theorizing the Coming Out Narrative as a Set of (Big) Data

by Marc Ouellette — Old Dominion University

    Considering the impact of “big data,” particularly predictive or anticipatory technologies, in terms of their ability to organize assumptions about gender and sexual identities reveals the primacy of (a) coming out narrative as (an) a priori, regardless of the perspective taken. Moreover, these technologies presume a gender normative or heterosexual subject because the accumulation and construction of the necessary algorithm(s) and data rely on the binary structure of conformity or deviation from a norm. An example: Google Scholar lists JFK as my co-author for a paper for which I used a quotation as an epigraph (see Figure 1). Their crawler reads it that way. When contacted, their response was to make the journal change its citation method to conform to their preferred format. Thing is, the journal already does, but the Google algorithm only recognizes things that agree with its normative values: those that don’t must change.

    Figure 1. Google Scholar screen capture. November 14, 2015

    Indeed, tracing the roots of the unstated construction and/or assumption of the coming out narrative also reveals a tendency in the scholarship to assume that this cultural construct applies primarily to GLBTQ youth (Hammack & Cohler, 2011; Waitt & Gorman-Murray, 2011). Similarly, the scholarship also presumes that GLBTQ youth will be the ones most likely to be seeking out digital technologies (Cover & Prosser, 2013). In this way, the coming out narrative becomes both rationale and outcome whether one considers the mass customization potential of big data as democratizing and liberating, scrutinizing and stereotyping, or necessarily intrusive but manipulable. The principal driving force remains the ability to identify and to predict consumers (consider, for example, the sale of predictive data for rape victims). Not only does this place GLBTQ users into the position of consumers instead of citizens, it produces rather than identifies (a) “queer” data set(s), whose derivation depends on deviations from the (heteronorm), against which all others will be assessed. This data set essentially comprises the (new) coming out, rendering even the most “strategic outness” nearly impossible to manage once digital technologies come into play (Orne, 2011).

    Therefore, whether one adheres to or deviates from the data set, these become the defining criteria — not according to GLBTQ communities, members, individuals, participants, allies, etc., but according to the collection of algorithms of so-called “contextual” or “associative” search engines. These often take the form of meta-search engines that glean data from portals, click-throughs, and the more familiar web search engines such as Google and Yahoo. As Search Engine Watch puts it, the ultimate goal for these search engines is “precise results and advertisements,” particularly for mobile users (J. Slegg, 19 Feb. 2014). As well, these users are presumed to be heavily skewed towards a youthful demographic, both in terms of spending and in mobile usage.

    In light of these algorithmic identifications, I offer this question to explore the role algorithms have in coming out narratives: What are the multiple and simultaneous modes of understanding the alternatively liberatory and constraining potentialities for the big data provided by contextual or associative search engines? I’m especially interested in their ability to determine and to instantiate what can only be described as “control” data, in all senses of that term.

    As we address this question, I am reminded of Alex Doty’s axiom that any text can be a site for a queer reading, an axiom that helps locate such an understanding of these data accumulation technologies. Our investigation affords an opportunity to evaluate and elucidate the critical (and commercial) bias towards a youthful audience, especially in terms of constructions and conceptions of GLBTQ identities.

    References and Additional Resources

    Anderson, Eric. “Updating the Outcome: Gay Athletes, Straight Teams, and Coming Out in Educationally Based Sport Teams.” Gender & Society 25 (2011): 250-68.

    Bracke, Sarah. “From ‘Saving Women’ to ‘Saving Gays’: Rescue Narratives and Their Dis/continuities.” European Journal of Women’s Studies 19.2 (2012): 237-52.

    Cover, Rob and Rosslyn Prosser. “Memorial Accounts: Queer Young Men, Identity and Contemporary Coming Out Narratives Online.” Australian Feminist Studies 28 (2013): 81-94.

    Craig, Shelley L. and Lauren McInroy. “You Can Form a Part of Yourself Online: The Influence of New Media on Identity Development and Coming Out for LGBTQ Youth.” Journal of Gay & Lesbian Mental Health 18 (2014): 95–109.

    Hammack, Phillip L. and Bertram J. Cohler. “Narrative, Identity, and the Politics of Exclusion: Social Change and the Gay and Lesbian Life Course.” Sexuality Research and Social Policy 8.3 (2011): 162-82.

    Kahne, Joseph, Ellen Middaugh, and Danielle Allen. “Youth, New Media, and the Rise of Participatory Politics.” Youth, New Media and Citizenship (2014).

    Orne, Jason. “‘You Will Always Have to “Out” Yourself’: Reconsidering Coming Out through Strategic Outness.” Sexualities 14.6 (2011): 681–703.

    Taylor, Yvette, Emily Falconer, and Ria Snowdon. “Queer Youth, Facebook and Faith: Facebook Methodologies and Online Identities.” New Media & Society (2014): 1-16.

    Vivienne, Sonja and Jean Burgess. “The Digital Storyteller’s Stage: Queer Everyday Activists Negotiating Privacy and Publicness.” Journal of Broadcasting & Electronic Media 56.3 (2012): 362-377.

    Waitt, Gordon and Andrew Gorman-Murray. “‘It’s About Time You Came Out’: Sexualities, Mobility and Home.” Antipode 43.4 (2011): 1380–1403.

    Zuckerman, Ethan. “New Media, New Civics?” Policy & Internet 6.2 (2014): 151–168.

  • Algorithms and Rhetorical Agency

    by Chris Ingraham — North Carolina State University

    Recently, I’ve been thinking about the fitness monitors that more and more people are wearing to monitor their health. Of course, it’s less accurate to say that people wear fitness trackers to monitor their health than that they do so to monitor the monitoring of their health by the media technologies they’ve allowed access to their bodies. That difference matters. And it matters particularly because algorithms, on which these technologies run, are making it necessary to rethink the idea of agency.

    Fitness monitors work first through sensors that measure input from our bodies, principally measurements of movement, and then through algorithms that interpret that data to generate an output feedback. It’s the output feedback that we monitor when we use these devices to track our health. So a Fitbit wristband or Apple Watch tells us our heart rate, or how many steps we’ve walked, or how fast we’ve run, or how soundly we slept, and so forth, and we respond (or not) to that feedback with the belief that it offers an insightful guide to help us toward a healthier life.
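    As a rough sketch of that pipeline, the logic might look something like the following Python fragment; the step-detection threshold, the 10,000-step goal, and the wording of the prompt are illustrative assumptions rather than any vendor’s actual algorithm:

      # Illustrative only: a toy version of the sensor -> algorithm -> feedback loop.
      def count_steps(acceleration_samples, threshold=1.2):
          """Count a step each time the acceleration magnitude rises above the threshold."""
          steps, above = 0, False
          for magnitude in acceleration_samples:
              if magnitude > threshold and not above:
                  steps += 1
                  above = True
              elif magnitude <= threshold:
                  above = False
          return steps

      def feedback(steps, daily_goal=10000):
          """Turn the interpreted measurements back into a prompt aimed at the wearer."""
          if steps >= daily_goal:
              return "Goal reached. Nice work!"
          return "{} steps to go. Time for a walk around the room?".format(daily_goal - steps)

      # Simulated sensor readings stand in for the body's movement feeding the device.
      readings = [0.9, 1.4, 0.8, 1.5, 1.0, 1.6, 0.7]
      print(feedback(count_steps(readings)))

    The sketch underscores where the loop’s influence appears: the device never measures “health” directly; it measures movement, applies a human-chosen rule, and answers in language designed to move us again.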

    Insofar as the feedback offered in response to a fitness tracker’s measurements influences human action—e.g., actually prompts you out of bed to walk around the room until you log the day’s 10,000th step before sleeping—these devices can be understood to act rhetorically. If the algorithms driving fitness monitors can influence people, and if the process of exerting that influence can be understood as rhetorical, then in some ways these algorithms must exhibit a kind of rhetorical agency. The question is, what kind?

    Carolyn Miller’s 2007 piece, “What Can Automation Tell Us About Agency?,” is our best hope for an answer. Miller is the first rhetorician to recognize that automated technologies driven by algorithms challenge us to rethink what agency is and where it functions. For her, rather than “locate” rhetorical agency in a capacity to act rhetorically or in the effectivity of rhetorical action, as many have proposed we do, we should instead think of agency as a kind of kinetic energy attributed to the relationship between the two: between rhetor and audience, between capacity and effect.

    Understanding rhetorical agency as kinetic energy might seem to "fit" fitness monitors perfectly, considering that these monitors inspire movement by measuring it. They're fundamentally kinetic. But if movement is both the necessary input for these algorithms to operate and that which the algorithmic output endeavors to inspire, then what agent function can be attributed to move-ability as such? In other words, movement needs to originate somewhere before the monitors and their algorithms can measure it, and hence before the algorithms can influence further movement. Is there not, then, a kind of rhetorical agency in potential energy as well?

    Even if we follow Latour and attribute an agency function to an "actor network" (or, as I'm more inclined to do, follow Tim Ingold and call it a "creative entanglement"), the problem remains that we are not accounting for any originary sense of agency: for move-ability in the case of fitness trackers, but more generally for that ur-agency that sends agency's distributed capacities into motion to begin with. In a feedback system whereby human motion feeds a device’s sensors, which then feed output results back to the human, so that the human then moves or doesn’t move in response, thereby providing further input for the sensors, on and on, how can agency exist in any one node or in-between any of them?

    An infinite loop has no start. Infinity has no middle. There is no in-between. Rhetorical agency, then, at least in this case, may well be what initiates or terminates the loop itself. In a world of algorithmic rhetoric, where algorithms play an increasingly prevalent role in mediating and influencing human affairs, that’s a powerful thought to remember.

  • Curator Introduction: Organisms in a World of Algorithms

    by Daniel Hocutt — Old Dominion University

    Although it’s not standard protocol for the MediaCommons Field Guide, I’d like to contribute a brief editorial perspective on developing the topic and survey question and requesting contributions from among colleagues old and new. First and most important, I offer sincere gratitude to all who responded to my email invitations to contribute. So many whose work I consider instrumental in shaping my research agenda replied warmly to my request, either to agree enthusiastically to contribute or to decline regretfully while suggesting other colleagues I might (and did) invite. Such responses encouraged and strengthened my resolve to continue exploring algorithms as applied rhetoric (Ingraham, 2014), encouragement and strength I needed.

    My interest in algorithms originates in my profession as a web manager on a continuing higher education marketing team. My interest was piqued at the intersection of a practical and a theoretical approach to algorithms. The practical approach involved the function of algorithms in Google Analytics, a tool with which I have become intimately familiar. I completed a brief study of Google Analytics as an object of analysis during a Theories of Networks class taught by Julia Romberger and Shelley Rodrigo; the study highlighted the remarkable levels of control over data and information flow integral to the function of Google Analytics in tracking website traffic. The theoretical approach involved a more convoluted pathway via game studies. A doctoral colleague with whom I’ve collaborated on research into the use of Google Drive in the composition classroom, Maury Brown, introduced me to the use of algorithms as organizational patterns for rhetorical choices in game design, a reference that turned me toward Ian Bogost (2007) and procedural rhetoric and, eventually, to Ingraham’s chapter “Toward an Algorithmic Rhetoric.” I am deeply indebted to these colleagues, and especially to Ingraham and Rodrigo for their contributions to this collection.

    Since these encounters, I see algorithms at work everywhere. For example, I attended a BBC Future event called the World Changing Ideas Summit in October 2014, where leading-edge thinkers and inventors shared what they believed would be the best upcoming ideas in education, the social sciences, medicine, and more. Here I heard the first inklings of Google’s forays into the intersection of education and artificial intelligence from Alfred Spector, Vice President of Research at Google — powered by algorithms. I also learned about the use of drones to automatically conduct military operations and to deliver goods without direct human guidance — powered by algorithms. And I learned about the implications of self-driving vehicles for human experience, the use of virtual reality in journalism, and the arrival of social robots — all powered by algorithms. Algorithms circumscribe our daily experiences, from the ads we’re exposed to on myriad media (even that digital roadside billboard responds to the amount of traffic on the road) to the results of our searches on Google, Facebook, YouTube, Bing, Yahoo, and more. Our computers, tablets, and smartphones are, at their core, collections of deeply encoded algorithmic procedures running in response to geolocation, wired and wireless signals, and gesture, voice, and text inputs. Traffic control systems, food service cashier terminals, delivery logistics systems, learning management systems, content management systems — these and more represent intersections of daily lived experience among algorithms.

    This field guide seeks to identify and question these points of intersection, to explore algorithms and the worlds of code, function, structure, and outcomes they inhabit as another of Cargile Cook’s (2002) layered literacies. As curator of this collection, I invite you to read these contributions and the responses they generate, and to respond in kind, to become part of a conversation that seeks to understand what it means to be an algorithm in a world of organisms — and to be an organism in a world of algorithms.

    References

    Bogost, I. (2007). Persuasive games: The expressive power of videogames. Cambridge, MA: The MIT Press.

    Cargile Cook, K. (2002). Layered literacies: A theoretical frame for technical communication pedagogy. Technical Communication Quarterly, 11(1), 5-29. http://dx.doi.org/10.1207/s15427625tcq1101_1

    Ingraham, C. (2014). Toward an algorithmic rhetoric. In G. Verhulsdonck & M. Limbu (Eds.), Digital rhetoric and global literacies: Communication modes and digital practices in the networked world (pp. 62-79). Hershey, PA: IGI Global.