Discussing design in the grad lounge continues to raise questions both programmatic and philosophical. Today we revisited the question of whether users should be able to see results or visualizations before they highlight a text themselves. Most of us grad students were concerned about this possibility, especially since we have agreed to make the first version of Prism for pedagogical use. We worried that students would either copy what others have marked, or that they would simply be influenced by others’ markings unintentionally. Jeremy and Wayne suggested that might be just fine; maybe what we want to find out is how a group interprets as a group, not as a collection of isolated individuals. This raises a major critical and methodological question for us. Are we “crowdsourcing interpretation” by collecting many individual interpretations or by creating a space for collaborative interpretation? Do we want many separate interpretations that we can compare and contrast visually, or do we want one interpretation that is the work of many minds together?
I’m surprised we haven’t considered this question as a group before now. If we remain committed to the pedagogical use of the first version of Prism, I do think we should keep the results hidden from students until after they have highlighted a text. But if we want to start thinking more broadly, I’m sure there will be use cases in which collaboration would be favored over the collection and collation of individual interpretations. Perhaps we can have it both ways further down the line, giving researchers who upload texts options about what is visible to which users. Our “original” idea of “crowdsourcing interpretation” seems to emphasize difference, but the collaborative version would seem to move toward a singular, unified, or at least somehow standardized interpretation of an object. Wayne also pointed out that this issue has major implications for those writing the code, so here again what we thought might be a simple discussion of wireframing turned into a conversation about the methodological approach and assumptions of the tool we’re building. We still need a stronger foundation to hold up the frame.