Project Reviews

As part of our second-week assignment we were given the task of reviewing three projects from a provided list. The list is interesting because it groups projects that have been around for a while with more recent contributions. The three that I chose belong to the former category; I thought it would be refreshing to re-view the familiar. I apologize in advance if what I say below echoes criticism and praise already directed at these projects.

Zotero

I’ve been a user of Zotero for a while. When I discovered it a few years ago it changed the way I collected my research materials. For three years now I’ve been compiling a large bibliography of Aimé Césaire primary and secondary sources. I must confess here that I use Zotero somewhat off the beaten path, as a content management system of sorts: I keep transcriptions for many of my items in the notes section. This allows me to search throughout all my items and have the matching bibliographic records returned. I’m aware that there are better ways to search through a corpus (Solr, for example), but I like the fact that I can keep all my research in a single workspace. The tags and ‘related’ features allow me to record connections between primary and secondary sources that an algorithmic solution might not discover. Being able to organize the database by different categories can also be a great way of visualizing a collection. My collection at this point is enormous, and it is true that the task of adding transcriptions and making connections can be overwhelming as the collection grows, but I figure there is no hurry. As the community continues to develop around the Zotero API, I’m hoping that I will eventually be able to share my bibliographic work in the form of an online publication. It is true that a collection can be offered through the Zotero website, but this does not allow for a personalized design around particular collections. As you will hear often from me, “analytic” tools should always move in the direction of publication. I’m happy to see Zotero laying the groundwork for these kinds of projects.
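For the curious, here is a rough sketch of the kind of thing the Zotero web API already makes possible: pulling a library’s items out of Zotero so they can be republished with a design of one’s own. This is only an illustration, not anything running on my own collection; the user ID is a placeholder, and the JSON fields assume a recent version of the web API.

```python
# Minimal sketch: fetch the most recent top-level items from a public
# Zotero library over the web API and print their titles.
# USER_ID is a placeholder; the JSON structure assumes API version 3.
import requests

USER_ID = "123456"  # placeholder: a numeric Zotero user ID
BASE = f"https://api.zotero.org/users/{USER_ID}"

resp = requests.get(
    f"{BASE}/items/top",
    params={"format": "json", "limit": 10, "sort": "dateAdded"},
    headers={"Zotero-API-Version": "3"},
)
resp.raise_for_status()

for item in resp.json():
    data = item.get("data", {})
    print(data.get("title", "(untitled)"), "-", data.get("itemType", ""))
```

From output like this, a personalized publication is mostly a matter of templating the returned records into a site of one’s own design.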

Whitman Archive

In my mind the Whitman Archive has always been part of an imaginary triumvirate alongside The Rossetti Archive and The Blake Archive. Like the other two, it has always suffered from the “tree” structure: in order to get to a text you must click through narrowing categories, and once you do, you have arrived at… a text. True, you can use the search box, but it relies on the Google algorithm, and the logic of the Google algorithm is not necessarily the logic I would like to use for an archive. The archive is the work of Ed Folsom and Ken Price, and at all levels of the work we find their imprint, criticism included. There is nothing inherently wrong with featuring the work of the principal investigators, but an archive of this scope and prominence should strive to feature the work of other scholars as well. Another question, which is extremely important to me, is the presentation of texts. The typical Whitman Archive edition of Leaves of Grass, for example, will include images and lightly formatted text. The design of the pages is not very conducive to reading online. Although the edition remains a great resource for the study of Whitman, it still does not provide an aesthetic experience comparable to a Penguin edition; I believe the two should not be mutually exclusive. Finally, there is the question of what you can do with the texts. So far, the Archive has partnered with TokenX to allow users to do some textual processing with the materials. You can also download the XML files to use with other tools. What I would like to see in the archives of the future is more play within the space of the archive. With a bit of cleverness and resources, design can invite you in to play, and play can lead to better connections, which brings me to…
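To make the point about the downloadable XML concrete, here is a rough sketch of the sort of downstream work those files invite: stripping an edition down to plain text for use with other tools. The filename is a placeholder, and I am assuming standard TEI markup rather than describing the Archive’s actual encoding.

```python
# Sketch: reduce a TEI-encoded XML file to plain text so it can be fed
# to other text-analysis tools. Filename and namespace are assumptions.
import xml.etree.ElementTree as ET

TEI_NS = "{http://www.tei-c.org/ns/1.0}"  # standard TEI namespace

tree = ET.parse("leaves_of_grass_1855.xml")  # placeholder filename
body = tree.getroot().find(f".//{TEI_NS}body")
if body is None:
    raise SystemExit("No <body> element found; check the namespace/markup.")

# Join every text node inside the body into one plain-text string.
plain_text = " ".join(body.itertext())
print(plain_text[:500])
```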

TAPoR

I first started using TAPoR through Voyeur Tools, and it wasn’t until this assignment that I began using their web interface directly, which, if I’m not mistaken, is a recent offering. I discovered that the tools do not necessarily overlap, and that in general the set of tools in Voyeur Tools is more complete. Voyeur Tools also seems to be designed in such a way that the tools connect with each other, something TAPoR does not do very well. That said, TAPoR is a great tool for beginners who want to experience what textual processing can reveal about their texts. Much of the debate I have heard about the use of statistical tools to examine texts has been directed precisely at the sorts of things that TAPoR and Voyeur Tools do well. I believe that criticism to be a result of poor usage rather than a flaw in the tools: the conceptual claims of collocation and distribution graphs are never overstated by these projects. In this sense, the tools deliver what they promise. That said, there is plenty of room for misuse, false assumptions, and ‘naive’ evidence. In short, use with caution.
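As a footnote, here is a toy sketch of what frequency and collocation counting look like under the hood. The input file is a placeholder, and real tools like TAPoR and Voyeur Tools layer tokenization rules, stopword lists, and visualization on top of counts like these.

```python
# Toy illustration of word-frequency and simple collocation counts.
from collections import Counter
import re

# Placeholder input: any plain-text file will do.
text = open("whitman_plain.txt", encoding="utf-8").read().lower()
tokens = re.findall(r"[a-z']+", text)

# Frequency distribution: the raw material of a word cloud or trend graph.
freq = Counter(tokens)
print(freq.most_common(20))

# Collocation as simple co-occurrence: which words follow "grass"?
following = Counter(b for a, b in zip(tokens, tokens[1:]) if a == "grass")
print(following.most_common(10))
```

Counts like these are only evidence when the questions asked of them are modest, which is precisely the caution above.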

Cite this post: Alex Gil. “Project Reviews”. Published September 05, 2011. https://scholarslab.lib.virginia.edu/blog/project-reviews/.