How can we better use data and/or research visualization in the humanities?

  • Visual (il)Literacy

    by Almila Akdag Salah — University of Amsterdam, Department of New Media

    As an (odd) art historian, I am fond of computers, algorithms, computational methodologies, and visualization (which I perceive as a very useful tool). I do not think I belong to the majority when I say: “instead of looking at each painting of an artist, or of a plethora of artists, and comparing their work, I’d rather look at feature sets derived from their works, visualized as a set of ‘fingerprints’, and try to get an overview of their relations via these fingerprints”.

    Let me briefly explain what I mean by a ‘feature set’ and ‘fingerprints’. Feature extraction is a commonly used method in image processing, where an image is represented by a set of features derived from it. These features can be very basic, such as the hue, brightness, or saturation of an image, or more complex, like the number of edges and corners in the image, or the presence of high-frequency or low-frequency information. They can also be based on the human visual system, such as the distribution of saliency over an image, obtained by a computational attention system.
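
    To make this concrete, here is a minimal, illustrative sketch of such a feature extractor in Python, using Pillow and NumPy. The particular features (mean hue, saturation, brightness, and a crude edge-density measure) are my own choices for illustration; they are not the feature sets used in the studies discussed below.

    ```python
    # Illustrative only: a toy feature extractor of the kind described above,
    # not the pipeline used in the studies cited here.
    import numpy as np
    from PIL import Image

    def extract_features(path):
        """Return a small feature vector: mean hue, saturation, brightness, edge density."""
        img = Image.open(path).convert("RGB")
        hsv = np.asarray(img.convert("HSV"), dtype=float) / 255.0
        gray = np.asarray(img.convert("L"), dtype=float)

        # Global color statistics over all pixels.
        mean_hue, mean_sat, mean_val = hsv.reshape(-1, 3).mean(axis=0)

        # A crude proxy for the "amount of edges": mean gradient magnitude.
        gy, gx = np.gradient(gray)
        edge_density = np.mean(np.hypot(gx, gy))

        return np.array([mean_hue, mean_sat, mean_val, edge_density])

    # One feature vector per artwork gives the raw material for a "fingerprint":
    # fingerprints = np.vstack([extract_features(p) for p in image_paths])
    ```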

    Feature extraction already belongs to the jargon of computer science, but ‘fingerprint’ is a term I have chosen to use in this context to denote a specific abstraction. Whenever I apply image processing to a given artist’s oeuvre, I look at the resulting visualizations (and there are usually many different ones, depending on which feature sets you use), and think of each visualization as a distinct fingerprint of that artist.

    Here, for example, is a figure that compares sampled artworks from three different artists (all members of the social networking site deviantArt, which is dedicated to sharing user-generated art). In this figure, each line represents the feature values of one artwork, and each color (green, blue, and red) represents an artist; the bundle of same-colored lines is the fingerprint of that artist.
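
    A figure of this kind can be sketched with matplotlib along the following lines. This is an illustration of the idea, not the original plotting code, and it assumes a feature matrix and per-artwork artist labels such as those produced by the extractor above.

    ```python
    # Illustrative only: draw each artwork as one line over its feature values,
    # colored by artist, so that the bundle of same-colored lines forms that
    # artist's "fingerprint". Not the original plotting code.
    import numpy as np
    import matplotlib.pyplot as plt

    def plot_fingerprints(features, labels, colors=("green", "blue", "red")):
        """features: (n_artworks, n_features) array; labels: one artist id per artwork."""
        labels = np.asarray(labels)
        x = np.arange(features.shape[1])
        for artist, color in zip(sorted(set(labels)), colors):
            for row in features[labels == artist]:
                plt.plot(x, row, color=color, alpha=0.3, linewidth=0.8)
        plt.xlabel("feature index")
        plt.ylabel("feature value")
        plt.show()
    ```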

    Instead of trying to explain to an art historian what feature extraction does, and what each line in this figure means, you can simply show him/her a visualization in which these features are explained by the visualization itself. Trained in spotting the visual language of artworks (composition, color, balance, etc.), an art historian would understand such a visualization much better than a scatterplot. Compare, for example, these two figures (1) of another deviantArt member:

    On the left-hand side, we have all artworks of a deviantArt member in a scatterplot visualization. Each dot represents a painting; the points are arranged according to the mean and standard deviation of the pixel values of each painting. On the right-hand side, we have the same scatterplot, but this time each painting is represented by a thumbnail of the painting itself. The first plot is readable only by the expert, whereas the latter translates what the mean and standard deviation of pixel values mean for a layman. This approach belongs to what Lev Manovich calls 'cultural analytics'. He proposes a quick solution to the challenge of “how to read a graph”. The answer is simple: replace the points in a graph with the ‘real thing,’ i.e. the thumbnails of the pictures.
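
    The thumbnail-scatterplot idea can also be sketched in a few lines of matplotlib. The following is an illustration of the approach, not the VisualCulture software itself; the pixel mean and standard deviation supply the coordinates, and the thumbnail stands in for the dot.

    ```python
    # Illustrative only: place each image at (mean, standard deviation) of its
    # pixel values and draw the thumbnail itself instead of a point.
    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib.offsetbox import OffsetImage, AnnotationBbox
    from PIL import Image

    def thumbnail_scatter(image_paths, zoom=0.1):
        fig, ax = plt.subplots()
        for path in image_paths:
            gray = np.asarray(Image.open(path).convert("L"), dtype=float)
            x, y = gray.mean(), gray.std()      # coordinates from pixel statistics
            thumb = OffsetImage(plt.imread(path), zoom=zoom)
            ax.add_artist(AnnotationBbox(thumb, (x, y), frameon=False))
            ax.plot(x, y, alpha=0)              # invisible point keeps axes scaled to the data
        ax.set_xlabel("mean pixel brightness")
        ax.set_ylabel("standard deviation of pixel brightness")
        plt.show()
    ```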

    We have developed a similar tool to study a collection of artworks from deviantArt (2). The main difference between this tool and the toolset Lev Manovich uses is its built-in recommendation component. Given two artists to compare, the tool uses machine learning methods to select the features that maximally separate the galleries of these artists. In a sense, the tool gives the art historian a direction by indicating, in computational terms, what kinds of similarities two paintings or artists share. When combined with the social network information that exposes links between artists, it becomes possible to test (under some assumptions) whether an algorithm works well in spotting stylistic similarities and differences between artworks.
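
    The core of such a recommendation step can be illustrated with scikit-learn. This sketch is a simplified stand-in (a univariate ANOVA F-score ranking) for the machine learning methods mentioned above, not the actual component of the deviantArt tool, and the feature names in the example call are hypothetical.

    ```python
    # Illustrative only: rank the features that best separate two artists' galleries.
    import numpy as np
    from sklearn.feature_selection import f_classif

    def most_separating_features(features_a, features_b, feature_names, k=3):
        """Return the k features that best distinguish artist A from artist B."""
        X = np.vstack([features_a, features_b])
        y = np.array([0] * len(features_a) + [1] * len(features_b))
        scores, _ = f_classif(X, y)                 # ANOVA F-score per feature
        top = np.argsort(scores)[::-1][:k]
        return [(feature_names[i], float(scores[i])) for i in top]

    # Example call (hypothetical names matching the toy extractor above):
    # most_separating_features(fingerprints_a, fingerprints_b,
    #                          ["hue", "saturation", "brightness", "edge density"])
    ```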

    In a recent study with Lev Manovich, we looked at the ways image processing can be used directly to discover relations between artworks, and found that this is an incredibly challenging problem when tackled generically (3). At the moment it is not possible to come up with an algorithm that can tell us whether two images share some common component, unless that component is made explicit. It makes sense to fill this gap by using social network information.

    If we can develop an algorithm that can successfully compare two artists, or two artworks, then we can apply it to a huge dataset of (historical) artworks about which there is little historical information. For many artworks, important details are not known; often we do not even know the artist's name, or the place or time of production. Such artworks could be analyzed by these algorithms and compared to classical examples of their time, and perhaps relations that had not been thought of before could come to light.

    At the moment, we are far from that point, not only computationally but also mentally. And I think the latter is more problematic than the former:

    One question I never get when talking about the deviantArt project in front of a humanities audience is “what do the x and y axes stand for; what are your parameters?” For scientists, on the other hand, this is almost always the first question, and for them it is crucial to interpreting the graph. This clearly shows that humanities scholars lack the skills to properly interpret such visualizations: they do not understand what it means to project data onto different subspaces spanned by different features, and they do not think critically when it comes to visualizations.

    So my two cents in the debate on how humanities scholars can benefit from data and research visualization are very humble: either the scholar has to become visually literate, or the tools developed for (digital) humanities scholars should compensate for this lack of visual literacy by making the representation incredibly intuitive.

    1 These visualizations are prepared with VisualCulture, a program developed by Lev Manovich's Software Lab. This program is not supported/distributed anymore. Please check this page to see the latest software used by the group. 

    2 Buter, B., N. Dijkshoorn, D. Modolo, Q. Nguyen, S. van Noort, B. van de Poel, A.A. Akdağ Salah, A.A. Salah, "Explorative visualization and analysis of a social network for arts: The case of deviantArt," Journal of Convergence, vol. 2, no. 2, pp. 87-94, 2011.

    3 Akdag Salah, A.A., L. Manovich, A.A. Salah and J. Chow, "Combining Cultural Analytics and Networks Analysis: Studying a Social Network Site with User-Generated Content," Journal of Broadcasting and Electronic Media, vol. 57, no. 3, pp. 409-426, 2013. The high-resolution images can be downloaded here: 1, 2, 3.

  • R.I.P. to the Cartographic Priesthood

    by Thomas Chapman — Old Dominion University

    As a geographer, I make generous use of one type of visual that is perhaps the most ubiquitous of all: the map. One cannot go through life without being exposed to thousands upon thousands of these spatial images. From paper maps to GPS apps to advertising, maps are the cultural artifacts of society that tell us where we are, how to get where we are going, and where we have been.

    There was a time, not long ago, when the map was the purveyor of ‘truth.’ Few questioned the authoritative nature of the map as a visual representation of the human or physical landscape. A ‘useful’ map was one authored by the “cartographic priesthood”[i]: those with professional training in cartography in an academic environment. Maps were made for public consumption, of course, but if they didn’t float down from the heavens as the final word on whatever was being mapped, they just weren’t deemed ‘good maps.’ I remember my own cartographic training as an undergraduate: if one violated one of “the rules” of cartography, it just didn’t pass muster. I don’t know how many times I heard the instructor say in a loud voice, PUT A SCALE BAR ON YOUR MAP, DAMN IT! The implication was that the narrative of the map wasn’t as important as extraneous map elements such as a frame, a north arrow, or a scale bar.

    But then something strange happened that began to topple the priesthood. Coinciding with the postmodern ‘spatial turn’ in the humanities and social sciences, the high priests saw their ‘masterpieces’ examined under the microscope. Meta-narratives about the world were challenged; multiple ‘truths’ were advanced; critical theory took center stage. Now the sacred map visual itself was open to interpretation.

    I love all manner of visuals, spatial and non-spatial, digital and non-digital. Not only does visualization have a future in every academic discipline, but its ability to tell a good story, to inform, to educate, and to incite has already transformed the way we know the world; and it has allowed us to know so much more about the world.

    I advocate using visualization techniques whenever and however you can. But also recognize that whatever is produced…be it a video, a photograph, a graph, a 3-D model, a simulation, or a map:

    1. Represents a model of a complex reality that is greatly simplified and generalized. This is its greatest strength, and its greatest weakness.

    2. Is a snapshot that is fixed in time and space. The potential for misrepresenting ‘the truth’ or leaving out multiple truths is great.   

    3. Always comes with a reality that is inadvertently omitted. In other words, what is not represented in your visual can be as important as (or more important than) what is represented.

    4. Is not the purview of a ‘professionalized’ class of trained software geeks. Some of the best maps (and other visuals) I’ve ever seen are those created on-the-fly with a simple online app or freeware. If your visual gets your point across or tells your story well, and those seeing it can comprehend it, then it is an effective visual.

    5. Is always dripping with social constructions of power. If a visual is represented as “objective fact,” be suspicious.



    [i] See Mark Monmonier, How to Lie with Maps (Chicago: University of Chicago Press, 1996).

    Image on front page by Nicholas Nova and available on Flickr.

  • Visualizing Library-Scale Media Collections

    by Rob Turknett — Texas Advanced Computing Center (TACC)

     

    One way we can better use data and/or research visualization in the humanities is by taking advantage of the advanced computation and visualization resources developed for science and industry, bringing these tools to bear on humanistic research questions. Many digital humanists don't realize that high performance computing and advanced visualization resources are available to them through the nation's Extreme Science and Engineering Discovery Environment (XSEDE) network.

    Our 1000 Words project at the Texas Advanced Computing Center (TACC) seeks to improve access to these resources by developing the software tools, skills, and knowledge base that allow humanities researchers to use visualization, specifically on high-resolution displays powered by supercomputers, to perform novel research.

    Now you might be thinking, "If we humanists are just beginning to dip our toes into the waters of visualization, do we really need advanced visualization resources? What can we possibly do with supercomputers and ultra high-resolution displays?"

    Surely, in many cases, a laptop will be more than enough to achieve the goals of a visualization project. But ask yourself: How many millions of documents can you store on your hard drive? How long will it take to download them? How long to search through them? How many photographs, documents, or images of cultural artifacts can fit on a single laptop screen? How much information can you overlay on a map or a timeline without running out of space?

    Dr. Jason Baldridge interacting with a visualization of Civil War archives in his office and at the TACC Vislab

    These are the kinds of "big data" issues that are beginning to confront the humanities, thanks to our society's exponentially increasing production of and access to digital texts, images, music and video. As the humanities liaison for a major U.S. supercomputing center and visualization lab that serves the nation's science community, I am uniquely situated to help with these problems.

    Scholars like Lev Manovich, whose "cultural analytics" research explores massive cultural data sets of visual material, and Franco Moretti, whose Literary Lab at Stanford is developing new techniques of "distant reading" to study the history of the novel, are pioneers in the field.

    The first fruit of our 1000 Words project at TACC is the Massive Pixel Environment (MPE), whose initial development was funded by the National Endowment for the Humanities. MPE makes it possible for us to quickly create interactive visualizations for ultra-high resolution display walls in collaboration with humanities scholars.

    While creating visualizations with MPE does require a bit of coding, it leverages the Processing programming language, which was designed at the MIT Media Lab for teaching visual designers how to code. Coding visualizations in Processing can be learned in a semester, rather than in four years.

    This brings me to the second point I'd like to make: to make better use of visualization we need to create more institutional opportunities for students and researchers in the humanities to gain experience working across disciplines. Good visualization requires not only scholarly domain expertise, but also technical/coding expertise and information design expertise (itself a multidisciplinary field). Our siloed departments and traditional publication-based incentive structures limit the potential of students and scholars to gain the kinds of experience they need to creatively apply visualization techniques in their explorations.

    Please let me know your take on these issues in the comments below, and if you are a humanist working on a project that requires more CPU cycles, more memory, or more pixels than you have at your disposal, I'd love to hear from you!

     

    Photo Credits:

    Figure 2. Dr. Tanya Clement, using ProseVis in her office and at the TACC Vislab

    Figure 3. Dr. Jason Baldridge interacting with a visualization of Civil War archives in his office and at the TACC Vislab

  • Visualization Approaches: Histories of Databases and Mapping Science Controversies

    by Rahul Mukherjee — University of California, Santa Barbara

     

    I want to propose two approaches to thinking about questions of data and research visualization in the humanities. One is concerned with the historical study of database technologies and the construction of the incessantly contested categories of data and information. The second involves putting science studies and digital humanities approaches in conversation with one another. 

    From the early seventies, experts increasingly emphasized that data was too enormous and unwieldy to be handled by common users, and advised users to concentrate instead on working with information, understood as data processed by database management systems. This divide between data and information also produced other categorical divisions: programmers (who worked with data) vs. non-programmers (who were advised to work with information). As academics in the humanities engage with “Big Data,” histories of databases could help clarify a pertinent question: how have users been encouraged to think about, and work with, their data?

    In 1981, an advertisement for the Sequitur database system in BYTE magazine reads “A Database System that thinks like you do” and shows a sketch of a man with a thought bubble reading “$1/rong/wrong/p” – signifying that he is thinking in a computer programming language – while the computer has a thought bubble saying, “Couldn’t we be a little friendlier.” In a conversation published in the Softtalk September 1980 issue, John Couch mentioned that his father had encountered problems managing health spa data on a microcomputer. Couch proposed that machines should have “specification languages” that would allow people to solve problems “by specifying what they wanted in terms of inputs, outputs, relationships etc.” (Apple Orchard) The term “datagrammer” was proposed for those who would work only with specification languages, without having to program in algorithmic languages. A diagram (see figure 1) suggested that before 1980 only programmers could work on data, but that after 1980 data would have to evolve into “data defined via specification language,” so that datagrammers could work on it. This visualization, recommending how data visualizations (or practices of data usage) should look from the 1980s onwards, could itself be part of humanities research involving the “digital.” Several such anecdotes gathered from old computer magazines can be found in this article.

    Zooming into and digging through the layers of a database can help us comprehend their acts/effects of interfacing (intra-facing), facilitating data usage, and abstraction. How would a history of databases contribute to media archaeology studies, and in what ways can it challenge notions of “raw data”? (On how data are variously cooked, see Gitelman and Bowker’s essays; on “data are capta, taken not given,” refer to Drucker's paper.)

    Figure 1: Datagramming, John Couch, Apple Orchard, 1981/1982

    From histories of data visualizations, I now want to turn to some of the modes used for visualizing science controversies and the sociality of texts. Bruno Latour and Michel Callon’s work within science studies (and Actor-Network Theory (ANT)) has called for extensive tracing of the concatenations of the various actors involved in networks. Inspired by ANT, Tommaso Venturini and the médialab at Sciences Po have been part of a number of projects related to mapping controversies: EMAPS and MACOSPOL. Science controversies are never just “science” controversies; they are “socio-technical” debates involving issues, and it is these issues that then gather around them the actors who are affected by them. The animated cartography of the London 2012 Olympic stadium controversy has dynamically reconfiguring actors and relations. As somebody in the humanities who studies media’s role in science controversies, I find the circulation (and sociality) of media texts critical in sustaining, expanding, bending, and contracting publics around an ensuing controversy.

    In recent years, Rita Felski has found in Latour’s work a way to methodologically conceptualize a text’s sociability: a way to understand how a particular text (written by a specific author) comes to our attention (and solicits our attachment) through the work of numerous co-actors such as publishers, textbooks, syllabi, prize committees, and book clubs, among others. Alan Liu, in a similar vein, has asked for the coupling of social science (social modeling) and digital humanities (study of online discourse) methods so as to study the “integral field of social expression.” (One also has to think here of sociality within a text, the interrelationships between the characters of a story, and interactive narratives.) These share resonances with the work of media theorists adopting approaches such as “Media Ecology” and “Mediation” to understand the material-discursive characteristics of the circulatory media environment.

    Venturini provides a set of toolkits for designing the timelines, graphs, and web connections required for representing controversies, and Alan Liu has curated a comprehensive list of DH tools, including ones related to network analysis and visualization. Gephi is a popular graph visualization platform among both science studies and digital humanities scholars. Alongside the technicalities of building visualizations, there is broad agreement that aesthetic sensibilities and theoretical approaches are crucial (with the caveat that aesthetic sensibilities can often be embedded within visualization tools). Besides ANT, there are other relational philosophies, such as Karen Barad’s “agential realism” and ecological perspectives inspired by Deleuze. How would visualizing science controversies shift if we think of the reconfigurations of an unfolding controversy in the form of “iteratively intra-active” components of experimental set-ups where both matter and identities come to matter? Given the certainty that data often conveys, how can data visualizations of intra-actions, or of emerging interactions as visualized through data, retain uncertainties, sparking playful interpretations and a sense of wonder (re-enchantment)?
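
    As a minimal, hypothetical illustration of the kind of actor-issue mapping described above, a small controversy network can be assembled with NetworkX and exported for layout and styling in Gephi. The actors and issues below are invented placeholders, not data from any of the projects mentioned.

    ```python
    # Hypothetical sketch: a tiny actor-issue network exported for Gephi.
    import networkx as nx

    G = nx.Graph()
    issues = ["stadium siting", "public funding"]
    actors = ["residents' association", "city council", "developer", "local press"]

    G.add_nodes_from(issues, kind="issue")
    G.add_nodes_from(actors, kind="actor")

    # Edges link actors to the issues that gather them.
    G.add_edges_from([
        ("residents' association", "stadium siting"),
        ("city council", "stadium siting"),
        ("city council", "public funding"),
        ("developer", "public funding"),
        ("local press", "stadium siting"),
    ])

    nx.write_gexf(G, "controversy.gexf")  # open in Gephi for layout and styling
    ```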

  • An Ends or a Means? Notes on the Future of Humanities Visualization

    by Trevor Owens — The Library of Congress

    Will visualization become a mainstream part of humanities scholarship? If so, how will it become part of the mainstream? At this point there are a lot of tools, a lot of techniques & plenty of guidance for how to generate and create a range of different visualizations; everything from word clouds to sophisticated graphs and networks can be created with a minimal amount of training and expertise. With that said, it often feels like visualization is a solution in search of a problem. If it’s ever to become a substantive part of the humanities, I think it needs to begin to justify itself in service to one of two core objectives.

    Visualization can serve as an end in itself; that is, visualization can be a genre of scholarly product, like a journal article or an academic book. Alternatively, visualization can be a means: a suite of analytic techniques that function as part of the hermeneutic process of making sense of primary sources.

    Visualization as Scholarly Product

    In many ways, I think the humanities is still trying to catch up to some of the vision that historian David Staley put forward in Computers, Visualization and History more than a decade ago. In one of the most provocative sections of his book, he argues for the value that could come from historians beginning to develop “visual secondary sources,” which I might instead call visualization as scholarship.

    In this case, I think the key questions that the field needs to answer are now less about examples of what can be done and more about how that work should be evaluated and counted.

    • How to evaluate visualizations as scholarship?
    • How to count visualizations as scholarship?

    Visualization as a Hermeneutic

    The second genre of visualization is, I think, actually far more likely to take off. In this case, visualization is a set of tactics, techniques, and processes for making sense of primary sources. It is a means, not an end. Fred Gibbs got into some of this in the hermeneutics of data, and we suggested a lot of the weaknesses in existing tools for this kind of use in building better digital humanities tools.

    This vision is largely in line with Jessop’s ideas about Visualization as Scholarly Activity, and Drucker’s notion of Graphesis, wherein visualization is understood as “generative and iterative, capable of producing new knowledge through aesthetic provocation.” I think this is also very much what Moretti is talking about in Graphs, Maps, and Trees. In this case, a visualization itself becomes a kind of block quote, something that a scholar can then pick apart and make sense of for their reader.

    The biggest needs I see here are about developing a body of methodological literature on how to use very simple tools to do this sort of work. We need a lot of examples of how to use simple tools against the kinds of primary sources that humanists work from and with.

    • In what situations does it make sense to use what particular visualization tool/technique toward what particular analytic end?
    • What kinds of inferences do different tools help us make?
    • What kinds of issues arise when you are using a particular visualization technique against a particular kind of source?

    A good number of these questions are actually about translating work done in a more scientific mindset into a hermeneutic one, and about taking a set of tools generally developed to work with contemporary data and transitioning them into working with historical primary sources.

    In short, I think there is still considerable promise in the future of visualization for humanities scholarship. I've offered questions along these two lines of thought to try to continue the conversation. I think we have a lot of neat tools and techniques; however, at this point the hard work to be done is largely in figuring out exactly how visualization fits directly into either the ends or the means of scholarship.

    This post extends some thoughts I shared about visualization for communication or discovery a few years back. 

    Image on front page by Shawn Allen and available on Flickr.

  • Rhetorics of Design and Visualization

    by Kenneth FitzGerald — Old Dominion University

    The Humanities can best use visualization by keeping in mind, and being articulate in, the rhetoric of design. Form makes a claim: every graphic representation embodies some rhetoric. "Data" is just as susceptible to "style," and to unintended and alternate readings, as any other representation.

     

    The concern is more pronounced in such research-related expressions because it is commonly thought that there exists an "undesigned," neutral design that is purged of subjective meanings. "Information design" derives primarily from the high Modernist "Swiss International Style," which is premised upon intellectual principles that have largely been debunked. Before getting to a question of "better usage," we can reasonably ask whether what is commonly done meets a minimal standard of accurately and efficiently reflecting the presenter's intention.

     

     

  • The Best Practices of Memorably Visualizing Data

    by James Eric Sentell — Southeast Missouri State University

     

    Although we are deluged with information, visualizing data and research “cuts through the noise, helping us quickly understand, navigate and find meaning in a complex world.” It reveals patterns and connections submerged in “the seas of data around us” and often becomes “memorable, impactful, enduring” (McCandless 41). Given these strengths, the humanities could use data visualization to communicate its scholarship to new audiences and/or with greater impact. This proposition raises the question: how can we better use data and/or research visualization in the humanities? Or, more precisely, what are the best practices for representing data?

    This essay cannot explore every best practice, but Edward Tufte’s influential ideas provide a good start. He defines “graphical excellence” as communicating complex ideas with “clarity, precision, and efficiency.” In other words, effective visuals give the audience “the greatest number of ideas in the shortest time with the least ink in the smallest space” (51). Design principles like contrast, repetition, alignment, and proximity, if used well, can achieve Tufte’s ideal. By winnowing data down to its essentials and clearly organizing those essentials, well-designed visuals enable readers to comprehend complex ideas better, faster, and easier.

    Ease-of-processing is crucial. It is how data visualization “cuts through the noise” surrounding us. People gravitate to concise, well-organized visuals because they facilitate fast learning. Moreover, the easier something can be understood, the more likely it will be effectively encoded into long-term memory, where it can have a lasting presence in one’s knowledge, attitudes, or beliefs. The more memorable a visual, the more effective its communication.

    But visuals need more than ease-of-processing to be memorable. Clarity, precision, and efficiency aid memorability, as explained above, but they do not necessarily create it. Since our attention, or lack thereof, influences how much and how well we remember, data and research visualization can be more memorable – more effective – if it uses design principles to attract, direct, and allocate a viewer’s attention. But to fully engage that attention in encoding the information into memory, the data visualization also needs the qualities of a good narrative.

    As leading visual designer David McCandless asserts, “The art of visualisation is combining and harmonising design thinking, statistical rigour and storytelling without compromising any element” (41). Using design principles can guide attention. Consider, for example, the use of contrast. Since people naturally notice distinctive stimuli, bolding certain data in a table draws attention to that data, signaling its importance and making it more memorable both visually and semantically. Statistical rigor facilitates winnowing data to its essence. And storytelling can contextualize data and give it the memorable qualities of a narrative.

    This data need not be limited to numbers. In an art education article, Robert Sweeny presents a list of canonical paintings in a visual continuum of dates, styles, and influences (223, see Figure 3). He uses contrast, repetition, alignment, and proximity to clearly organize the information and to create an obvious cluster within a certain date range. This visualization tells a story: many of the most well-known and influential artworks were created around the same time. Sweeny could have conveyed this knowledge in writing and/or a chronological list, but then it would not have been as noticeable, impactful, or memorable.

    As this example demonstrates, the humanities can use the best practices of design and storytelling to visualize either quantitative or qualitative data more clearly and memorably, increasing its communicative power and broadening its potential audience. Though this essay cannot explore every best practice of representing data, hopefully it will inspire further investigations into design’s contributions to our data’s stories and how the stories our data tell influence our design.

    References

    McCandless, David. “Who Doesn’t like a Good Data Visualisation?” Creative Review 34.1 (Jan. 2014): 41-4. Print.

    Tufte, Edward. The Visual Display of Quantitative Information. Cheshire, CT: Graphics Press, 1983. Print.

    Sweeny, Robert. “Complex Digital Visual Systems.” Studies in Art Education: A Journal of Issues and Research 54.3 (2013): 216-231. Print. 

    Image on front page by Ran Yaniv Harstein and available on Flickr

  • Neatline and visualization as interpretation

    by Bethany Nowviskie — University of Virginia Library

    Neatline, a digital storytelling tool from the Scholars’ Lab at the University of Virginia Library, anticipates this week’s MediaCommons discussion question in three clear ways. But before I get to that, let me tell you what Neatline is.

    It’s a geotemporal exhibit-builder that allows you to create beautiful, complex maps, image annotations, and narrative sequences from collections of documents and artifacts, and to connect your maps and narratives with timelines that are more-than-usually sensitive to ambiguity and nuance. Neatline (which is free and open source) lets you make hand-crafted, interactive stories as interpretive expressions of a single document or a whole archival or cultural heritage collection.

    Now, let me tell you what Neatline isn’t.

    It’s not a Google Map. If you simply want to drop pins on modern landscapes and provide a bit of annotation, Neatline is obvious overkill – but stick around.

    How does Neatline respond to the MediaCommons question of the week?

    1)   First, as an add-on to Omeka, the most stable and well-supported open source content management system designed specifically for cultural heritage data, Neatline understands libraries, archives and museums as the data-stores of the humanities. Scholars are able either to build new digital collections for Neatline annotation and storytelling in Omeka themselves, or to capitalize on existing, robust, professionally-produced humanities metadata by using other plug-ins to import records from another system. These could range from robust digital repositories (FedoraConnector) to archival finding aids (EADimporter) to structured data of any sort, gleaned from sources like spreadsheets, XML documents, and APIs (CSVimport, OAI-PMH Harvester, Shared Shelf Link etc.).

    2)   Second, Neatline was carefully designed by humanities scholars and DH practitioners to emphasize what we found most humanistic about interpretive scholarship, and most compelling about small data in a big data world. Its timelines and drawing tools are respectful of ambiguity, uncertainty, and subjectivity, and allow for multiple aesthetics to emerge and be expressed. The platform itself is architected so as to allow multiple, complementary or even wholly conflicting interpretations to be layered over the same, core set of humanities data. This data is understood to be unstable (in the best sense of the term) – extensible, never fixed or complete – and able to be enriched, enhanced, and altered by the activity of the scholar or curator.

    3)   Finally, Neatline sees visualization itself as part of the interpretive process of humanities scholarship – not as an algorithmically-generated, push-button result or a macro-view for distant reading – but as something created minutely, manually, and iteratively, to draw our attention to small things and unfold it there. Neatline sees humanities visualization not as a result but as a process: as an interpretive act that will itself – inevitably – be changed by its own particular and unique course of creation.  Knowing that every algorithmic data visualization process is inherently interpretive is different from feeling it, as a productive resistance in the materials of digital data visualization. So users of Neatline are prompted to formulate their arguments by drawing them. They draw across landscapes (real or imaginary, photographed by today’s satellites or plotted by cartographers of years gone by), across timelines that allow for imprecision, across the gloss and grain of images of various kinds, and with and over printed or manuscript texts.

  • Intro: Visualizing Research

    by Jamie Henthorn — Old Dominion University

    As scholars in the humanities develop skills in quantitative and technical methodologies, the ways that we represent our research have changed. One branch of the field of digital humanities has worked to theorize about, and provide better ways to visualize, the types of research that the field has come to value. This boom in research and development in the digital humanities has led to a wealth of data and research visualization software. One no longer needs knowledge of SPSS to create graphs and charts for presentations. Additionally, there are now better ways to express qualitative data.

    New visualization applications include (but are by no means limited to) DHumanities, Neatline, and a wealth of visualizations built into qualitative analysis software like NVivo. These applications give humanities and social science scholars the chance to share their research in new ways with new audiences and to use these visualizations in their own analysis of research. Discussions and guidelines on the applicability of, and standards for, best practices in data visualization would be beneficial as a resource for the greater humanities community.

     

    Below are the responders to our question of visualizing research.

    Thomas Chapman Old Dominion University

    Bethany Nowviskie University of Virginia

    Eric Sentell Southeast Missouri State University and Old Dominion University

    Kenneth Fitzgerald Old Dominion University

    Trevor Owens George Mason University

    Rahul Mukherjee University of California-Santa Barbara

    Rob Turknett University of Texas-Austin

    Almila Akdag University of Amsterdam