This post is in response to Claire’s thoughtful writeup on Crowdsourcing, which I think raises, and tries to answer, some genuinely salient questions. Originally, I think the intent was to write a simple “con” piece, but since Claire – and Brandon in reply – have taken such nuanced and sophisticated positions, I suppose that I’ll have to do the same lest I appear a rube (don’t be mean, Cecilia).
I’ll switch up the order to keep things interesting.
Does crowdsourcing turn people into dehumanized cogs?
We need to first break down what crowdsourcing means. “Crowd,” as used here, is one of those weasel words my students reach for when they don’t want to get into specifics – like “the people”, or (occasionally) “the rabble”. Crowdsourcing is the delegation of tasks to an arbitrary collection of individuals. Typically, this is done to harness the one quality which they, as a faceless wall of flesh, share in common: being humans instead of computers. The problems where being human is useful are therefore generally those that are impossible (maybe in one of the two Turing senses) or infeasible for computers to solve. Definitionally, if we care about who the people are, it’s not crowdsourcing. It’s just, you know, sourcing.
Crowdsourcing strips away the individual, as a feature. Does that mean that it dehumanizes? I guess that depends. In the Mario Savio New Left sense, I suppose it would. But in social science disciplines, where the individual is less important than the aggregate study of humanity, this is clearly not the case at all. Which is why, I suppose, crowdsourcing platforms like Amazon Mechanical Turk have been so widely used for experiments in those fields.
Will a crowdsourcing project produce useful information for academic pursuits?
I don’t know.
But I think that trying to approach this question agnostic to discipline is a mistake. Interdisciplinarity is a laudable goal, but in the end, it’s hard to deny that there are substantial differences not just in approach but in purpose that divide the various humanities. Crowdsourcing is a tool; it makes as much sense to consider its worth to “humanities” as to determine the value of a cyclotron to “science”.
In my own field of history, as my fellows have heard endlessly, I feel that crowdsourcing is not very useful for research because of the inescapable fact that obtaining any kind of data from “the crowd” happens in the present and not in the past. This is the same kind of inescapable statement as “cyclotrons aren’t useful for biology because biological interactions don’t happen at the subatomic scale.” Clearly, these may not be concerns for disciplines that aren’t history or biology. But that’s the point, I suppose.
Can a DH crowdsourcing project really reach beyond the walls of the academy?
Do we mean that the crowd is outside of the ivory tower or that the users are?
For the former, I think there are certainly many interesting crowdsourced transcription projects (Bentham, Old Weather) that have found success and wide appeal. One really interesting academic (though not humanities) use of crowdsourcing that’s gotten a lot of attention of late is the protein-folding game FoldIt (http://fold.it/portal/). FoldIt – and, I guess, Ender’s Game – really illuminates the power of gamification in attracting an active and broad audience for such esoteric subjects as viral pathology and interstellar genocide.
Cecilia and I actually had a brief discussion about returning personal statistics in Prism, such as calculating a particular user’s distance from the mean, or showing which users sit nearest to and farthest from it. That’s one step toward, if not gamification, then individualized feedback. Of course, this discussion naturally led to talk of using this metric for online dating.
“Hey baby, I see that we both highlighted the same sentence in The Raven…”
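For the curious, that distance-from-the-mean statistic is simple to sketch. What follows is a hypothetical illustration, not Prism’s actual data model or code: I’m assuming each user’s highlights can be flattened into a 0/1 vector over the sentences of a text, and that plain Euclidean distance is a reasonable yardstick.

```python
# Hypothetical sketch of a "distance from the mean" metric for a
# Prism-like highlighting tool. None of these names come from Prism
# itself; highlights are assumed to be 0/1 vectors over sentences.
import math

def distance_from_mean(highlights, user):
    """Euclidean distance between one user's highlight vector and
    the mean highlight vector across all users."""
    vectors = list(highlights.values())
    n = len(vectors)
    mean = [sum(col) / n for col in zip(*vectors)]
    vec = highlights[user]
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(vec, mean)))

def extremes(highlights):
    """Return the (nearest, farthest) users relative to the mean."""
    dists = {u: distance_from_mean(highlights, u) for u in highlights}
    return min(dists, key=dists.get), max(dists, key=dists.get)

# Three readers, five sentences; 1 = highlighted.
marks = {
    "alice": [1, 0, 1, 0, 0],
    "bob":   [1, 0, 1, 0, 1],
    "carol": [0, 1, 0, 1, 1],
}
nearest, farthest = extremes(marks)
# Here bob reads closest to the crowd's consensus; carol farthest.
```

The same distances, computed pairwise between users instead of against the mean, would give you the matchmaking version.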