Recent Comments

February 9, 2016 - 02:13

Technology works really well, but it isn't cheap. Also, if you want a good-quality product, you need to tailor it to your language model, language field, and your specifications. YouTube is a free demo, as is Google Translate, so basically these are models that are being trained with input from users. As you mention, the number of languages and their diversity makes a "universal" free machine something of a utopia. A decent budget, a defined set of languages, and a restricted semantic field will offer great results.

As to the reason why closed captions are so different from … what? Are the closed captions produced in real time? There are several modalities for closed captioning, and they directly affect the result. If done by stenotype, what you get is not closed captioning but a court report, almost verbatim, though with no annotations for sound. If they are produced by a respeaker, you get a lag between text and image, plus heavy editing of the original dialogue. Finally, if they are done not in real time but in the old-fashioned way, with someone typing and editing, you get really nice closed captions, sometimes with colours, emoticons, and so on.

February 8, 2016 - 14:27

Automatic tools to monitor quality and to endorse labels are not yet fully developed, with legislation still unresolved at the European level and at national levels. Agencies to independently monitor quality would also have to be established, and in this new scenario the translator's working profile will shift slightly from first-hand producer of assets to editor and monitor of quality.

This is such a salient point, and I have been thinking a lot about it. I know that several automated translators exist, but they look a bit like YouTube's closed captions: cultural translation and context remain outside of these current automation systems. However, with the volunteer force that many translation and scanlation communities provide, humans could easily quality-check the computers. Not to always refer back to YouTube, but its ability to identify copyrighted (or in this instance region-coded) material seems to way-too-conservatively favor the copyright holders. Will these algorithms be able to handle the cultural understandings that go into the many facets of translation? I would love to hear more about the models that are already being suggested.

I wonder, also, if this software will align with what volunteers want. Many translation volunteers are either interacting with a source material they love or they are using these crowdsourcing communities to develop language skills. Does automation software remove what makes these communities attractive to their members?  

Thanks for your response! What a great opening to the conversation. 

February 8, 2016 - 09:12

It's really interesting that you brought up the ideas of Creative Commons, copyright, watermarks, and distribution within this post because of what happened recently with the Fine Brothers. Long story short, they are a YouTube channel that recently faced a lot of backlash because they tried to trademark a term/format that had been around a long time. But seeing how technology is constantly changing, as are translations across different countries and languages (which was the reason for their trademark bid, as they wanted to create something the entire world could get in on), there has been a lot of confusion surrounding it, especially in terms of what they were trying to accomplish, which shows that the relationship between language and technology is something that should continuously be considered.

Though this still brings up my question: with all of the advances in technology we have, how is it that some closed captioning is so much weirder than the actual dialogue?

December 5, 2015 - 07:54

Thanks for this excellent response, David. Here are some (disconnected?) thoughts:

"there is a machinic quality to thought experiments"

Yes, this is exactly what I'm thinking. There is also a machinic quality to any ethical decision, even if those machinations are not entirely understandable. We often associate "machinic" with "knowable" or "predictable." But I'm always reminded of Turing's quote: "Machines take me by surprise with great frequency." So, the idea is to attempt to understand how these procedures work, even though those attempts will always be experimental and partial. Further, those experiments would have to grant that there is no final answer to these questions of responsibility, especially if we grant that we're talking about response-ability…the ability to respond, the exposedness to others that exposes us before we get to decide what we are or are not "required" to do. Which gets us to this…

"For example, IBAD would not be responsible for providing CPR or counseling or even alerting those who might provide those services. IBAD would not be required to provide shelter, clothing, food, health care, or an education to any of those whom IBAD has “saved” or whom IBAD has not let die. We are hospitable, but there are limits."

As you note, there are limits to responsibility and hospitality, and those limits are exactly what point back to the infinite question of hospitality. Am I (as the IBAD) required to provide CPR, food, clothing, shelter, etc.? Perhaps not, but I am respons-able to these responsibilities. This means that I am affected by these exigencies, even though I will inevitably draw lines at some point. In my book, I label the identification of these limits "ethical programs": procedures for writing the laws of hospitality in the face of the infinite Law of hospitality (which welcomes others before I ever get to choose).

But as you note, even saying that the IBAD writes these ethical programs doesn't quite work, because the ethical situation involves so many other machines (the IBAD is just one machine…or one component in the machine):

"Rather than using TE to identify the coordinates for producing and substituting narratives/scenes, then layering those coordinates onto other situational planes, I wonder if there is a way to map or follow the connections and see how and where the thought experiments emerge and with what they do or don’t connect."

Yes. Perfect. Ethical programs are both computational and linguistic; they involve humans and machines, and they emerge in specific rhetorical situations. This means mapping the specific actors and networks without falling back on any easily identifiable, generalizable program.

 

All of this suggests to me that Autonomous Vehicles will never really exist as "autonomous," because there will always be a massive assemblage of people, objects, machines, etc., at the scenes of ethical decision. The dream that AVs will solve all of these problems is perhaps associated with the dream of a perfectly theorizable ethical program, one that solves all of the problems ahead of time. Such a dream sees ethics as arhetorical, and I think it is bound to fail.

December 3, 2015 - 09:57

Hi, Jim: I’m intrigued by your statement that the construction of AVs and the writing of the code are experiments in ethics. And I appreciate your prompting us to reflect on the quality of those experiments.

Thought Experiments (TEs)—like the Trolley Problem referenced in Bonnefon, Shariff, and Rahwan’s work—I’m a big fan. With the help of student clicker technology, TEs have supported attempts to make large lectures more interactive: if you were the trolley driver whose vehicle’s brakes have failed, would you mow down 5 people on track A or mow down 1 person on track B? Insert input from audience…. What if you were not the trolley driver but a bystander next to the track switch? Insert input from audience….

In the right hands, TEs make the How-to-Be-We almost magical. It’s as if the philosopher—following the example of Bullwinkle Moose—pulls concepts (duty, responsibility, rights…) unexpectedly out of her hat—which, in this case, is the audience engaged in the experiment.

Your post prompts me also to think that there is a machinic quality to thought experiments, which is well worth exploring. Thought experiments generate procedures, and—as philosophers like Foot and Thomson use/develop the Trolley Experiment—they are tools (if not situational surrogates) that help a certain kind of philosopher to track down intuitions and thematize principles that support those intuitions. TEs promise that there is some WE out/in there and that this WE is more or less in agreement with itself once we have properly identified the principles that inform our (principled) behavior. These principles can then be collected/grouped as concepts, and/or used to test principles generated by other means. And one TE generates and is substituted by others: Foot, for example, pairs the Trolley Problem with a Transplant Problem (1 person has organs; 5 people need them).

I do have concerns about TEs as a mode of ethical inquiry: when connections are only mapped as principles, it’s too easy to exchange the scene of the trolley with the scene of the hospital with the scene of the driver with the scene of the manufacturer. We then wonder whether a consumer-self would buy this AV. Would a consumer-self be willing to pay for an AV that might well decide that it’s better to effect the death of the 1 consumer-self behind the wheel rather than hitting 5 pedestrians? Well, there’s the marketing pitch: “Buy inFinitude®’s new AV. It’s for the Greater Good.”

The basic, if unspoken, action of the Trolley Experiment is that there are conditions under which we can/should (or most people think we can/should) limit our responsibility to others. In particular, the trolley problem is set up to determine whether or when we think our responsibilities to more others outweigh our responsibility to fewer others. Even when some Individual Burdened with that Awful Decision (IBAD) has made the choice to redirect or not redirect the trolley, there are still limits to this person’s responsibility to these others. For example, IBAD would not be responsible for providing CPR or counseling or even alerting those who might provide those services. IBAD would not be required to provide shelter, clothing, food, health care, or an education to any of those whom IBAD has “saved” or whom IBAD has not let die. We are hospitable, but there are limits.

So, let’s try to map more than IBAD’s non-responsiveness. That’s one of the many helpful paths your post invokes for me. I think you're asking us to chart more: for example, chart the machines that are responsive to, are sometimes arguing with, AV. Rather than using TE to identify the coordinates for producing and substituting narratives/scenes, then layering those coordinates onto other situational planes, I wonder if there is a way to map or follow the connections and see how and where the thought experiments emerge and with what they do or don’t connect.

If we did, I suspect, we would see other machines at work, producing th-oughts. And I'm not certain there would be much of a place for us to identify with one of the characters or agency positions in a TE. We would become part of the work. When the machinic opens onto a more inclusive mapping of the potentiality for machinic response, the work is both the input and the output for the experiment, as David Hockney said of Tim’s Vermeer.

Do you think the machinic orients us toward a new way of mapping the US that is more than a chart of IBAD’s non-responsiveness?

 

Bonnefon, Jean-François, Azim Shariff, and Iyad Rahwan. “Autonomous Vehicles Need Experimental Ethics: Are We Ready for Utilitarian Cars?” http://arxiv.org/abs/1510.03346

Foot, Philippa. “The Problem of Abortion and the Doctrine of the Double Effect.” Oxford Review 5 (1967): 5-15.

Thomson, Judith Jarvis. “The Trolley Problem.” The Yale Law Journal 94.6 (1985): 1395-1415.

November 29, 2015 - 21:21

Walt, I see hope infusing your post, hope that infuses several of the responses to this field guide survey. There’s doubt, too. Hope that algorithms will not become sentient beings that seek the destruction of the human race. Hope that developers, engineers, and programmers will write algorithms that treat the human-generated data they collect and aggregate with dignity, respect, and individualism. And doubt that commercial interests funding research and development into advanced, self-teaching algorithms will, as you write, “respect the volition of their audience.” Inherent in both the doubt and hope I’ve seen woven into several responses is a call to programmers, developers, and engineers to write algorithms with a humanistic perspective. I wonder whether that call could also be turned around — are humanities professionals called to write humanities from an algorithmic perspective? Might digital humanists, public historians, writers, scholars, and other humanities professionals learn to program machines, to write complex humanities algorithms? Can humanities professionals compete in the marketplace with the deep-pocketed funders of research and development into algorithms by building humane algorithms and procedures that respect audience volition? To date, large-scale algorithms as encoded symbolic procedures have remained the near-exclusive purview of mathematicians and computer scientists, often funded by multinational conglomerates. Should we, as humanities scholars, seek to challenge that monopoly?

Your reference to treebanking projects may represent one way that the power of algorithms is not simply harnessed by digital humanists, but is researched and developed by humanities scholars using the power of the very audience these algorithms seek to serve. Maybe the key to the call you’ve highlighted is the audience. In true rhetorical fashion, our role is first to analyze and authentically represent the audience in transparent ways in the algorithms we propose, develop, and deploy.

November 23, 2015 - 21:43

Sean, I’d like to elaborate on the final bullet point you graciously attributed to me in more elegant and concise words than I originally used. The question of whether algorithms need to attune themselves to the way “language gets woven into the environment and becomes inextricable at the level of ambient attunement” is one I’m taking seriously. As I dig deeper into one particular algorithm — Google Deepmind’s algorithm that has been programmed to learn to play and win Atari 2600 video games without pre-programmed knowledge of specific game mechanics other than an awareness of pixel positions in game play frames and the “greedy” desire to earn points — I recognize that algorithms are being programmed as perpetually self-teaching learning machines. Deepmind’s algorithms are about artificial intelligence, and that intelligence, while artificial, is making decisions about how to succeed. It’s important to remember that “algorithm writers are at high risk of perpetuating this particularly destructive metaphorical tendency,” but it’s also important to consider whether the algorithms themselves need to be cautioned to avoid network metaphors when constructing their (admittedly limited) realities.
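To make that "greedy" reward-seeking concrete, here is a minimal sketch of the kind of learning loop being described. This is not DeepMind's actual DQN (no deep network, no Atari frames), just a toy epsilon-greedy Q-learning agent in an invented one-dimensional "keep the paddle under the ball" environment; the environment, names, and numbers are all assumptions made for illustration. Only the structure mirrors what the comment describes: observe a state, act mostly greedily, collect points, and update value estimates from the reward.

    # Toy sketch of reward-driven, (epsilon-)greedy learning, in the spirit of the
    # comment's description. Nothing here is DeepMind's code; the environment and
    # parameters are invented for illustration.
    import random
    from collections import defaultdict

    ACTIONS = [-1, 0, 1]  # move paddle left, stay, move right

    def step(state, action):
        """Hypothetical environment: state is the paddle's offset from the ball
        (-2..2); the agent scores a point whenever the offset reaches 0."""
        new_state = max(-2, min(2, state + action))
        reward = 1.0 if new_state == 0 else 0.0
        return new_state, reward

    q = defaultdict(float)              # Q(state, action) estimates, start at 0
    alpha, gamma, epsilon = 0.1, 0.9, 0.1

    state = random.choice(range(-2, 3))
    for _ in range(10000):
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])

        next_state, reward = step(state, action)

        # One-step Q-learning update: nudge the estimate toward the reward plus
        # the discounted value of the best next action.
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

    # The learned greedy policy: which action the agent now prefers in each state.
    print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(-2, 3)})

Even in this toy, the agent ends up steering the paddle toward the ball without ever being told the rules of the game, which is the self-teaching quality the comment points to, scaled down from pixels and neural networks to a five-state table.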

As Deepmind’s algorithm teaches itself to play and win Pong using the memory and processing of deep neural networks and the reward system of deep reinforcement learning, what are its creation and learning metaphors? Will it seek to learn within the framing confines of network metaphors? Should we ask it to push against those metaphors and become attuned to the intersection of language and environment? In the case of the Deepmind algorithm learning to play an Atari game, I sense attunement would result in breakthrough knowledge — an understanding of game outcomes that transcends the iterative procedural activity and learning encoded in the language of the algorithm.

The Deepmind algorithm is able only to follow its iterative coding within the confines of its prescribed memory, processing, and frame limits. Were the algorithm to attune itself through self-taught iterative processes to achieve shortcuts in learning, to find pathways that break out of iterative procedures and achieve exponentially advanced outcomes, our first inclination might be to believe that Skynet had become sentient. And maybe that’s true. But it might also suggest a breakthrough in neural networking that could attune itself to its ambient rhetorical environment. What would that mean for our understanding of the role algorithms can and/or should play in learning? In teaching?

These are questions I’m barely able to formulate at this point, much less begin answering. And missing from all this is the ethics of algorithmic development, production, and function. If algorithms can achieve some level of attunement, what are the ethical ramifications of such attunement? Can algorithms achieve some type of conscience, wherein via attunement an algorithm might recognize the nuances of falsehood and deceit?

November 23, 2015 - 12:08

Very interesting piece, Stephanie. Especially fascinating to me is the introduction to the term 'customer churn,' which I think is especially poignant when we consider how one of the criticisms leveled against casual games is the 'grinding' that they require for success.

Your discussion about the algorithmic tracking of player movements and choices in casual games has me thinking about the hacks that players come up with to circumvent or confuse the tracking algorithms embedded in the games. I spoke with a colleague a few weeks ago who is an avid Candy Crush player (he is currently sitting at the end of the map and waiting for new level releases), and he described how he would open multiple tabs/instances of the game in his browser before playing a new level. That way, he can play the level in one tab and, if he fails, simply close it and move to the next one, which has already cached his progress as having the full number of lives. This prevents him from having to wait on the timer to earn extra lives or pay money for powerups to finish the level.

Of course, one might argue that the game isn't particularly concerned about this type of player. After all, someone who is willing to invest that much effort and energy into circumventing the game's barriers in order to play it probably isn't as much of an attrition risk as someone who plays by the rules and might run into all of the dead ends and fail states that come with doing so.

November 23, 2015 - 09:43

Bill, thank you for this excellent response. I really agree, especially with your closing lines: "This will only happen, though, if we play an active role in the design of algorithmic tools and focus their use on assistive applications rather than seeking to replace human thinking." The work you all are doing is so, so important. The future appears to be one where more technologies are used to (presumably) save time and make things easier, and I am all for that. However, I want to ensure that these technologies reflect the value systems of our particular discourse communities—that is, writing technologies such as the ones you explore here should reflect the values of writing faculty, not what corporations outside of our community believe we want or need.

You're incredibly lucky to be at an institution that supports that kind of work through WIDE; do you have suggestions for others interested in getting started in similar work? Especially if they're at a smaller school or one with fewer resources—how can we help support the work of "creat[ing] systems that give humans a chance to focus more on how we might improve as writers and communicators"?

November 23, 2015 - 09:39

Carl, great points here, and I especially nodded along when I got to your line "These algorithms are not evil. They are us, or they are working with us, or they are working us." That line made me think of Michael Wesch's The Machine is Us/ing Us, but your point is well taken—algorithms are programmed by people, and therefore are embedded with the values, mores, and ideologies of people.

As someone who has done some work critiquing the "promises" of automation that Turnitin in particular has offered us, I'm also interested in seeing how responses to your piece here shape up. I know that there are potential positives for that particular technology, but because of my own fierce aversion to using their tech and supporting the company (and its values), I find it difficult to explore "the pluses and minuses they each hold." In other words, my biases toward the company and its values make it hard for me to step back and "consider the algorithms and how they are bundled"—I can't separate the algorithm from the company's ideological frame. Do you find ways to make this kind of approach possible? I know you have argued elsewhere against the tradition of rejection, but in cases like Turnitin, I feel that rejection is a way to vocally make my (and some others') concerns heard.