Peer Review for Visual Aids?

How frustrating is this: you sit down to take in some piece of scholarly work (a book, an article, or a talk) and find yourself increasingly confused by a bombardment of graphs, figures, and maps that make no sense because they contain too much or too little information, or because that information is poorly labeled (if labeled at all).  Or worse: you are the one writing the book or article or giving the talk, and instead of fielding questions about your scholarly process, you find yourself repeatedly explaining to the audience what your visual aids actually represent.

A picture may be worth a thousand words, but if it is not a language your audience speaks, where have your efforts gotten you?
Typically, when I read a scholarly article, my first read-through goes as follows: I read the abstract, I study each of the figures, maps, tables, and graphs along with their annotations, and I read the conclusion.  It's not until the second read-through that I examine the bulk of the text.  Words sometimes have the unfortunate tendency to obfuscate the true findings of research, and, truth be told, I like to find out whether I draw the same conclusions from the provided data as the author(s) do.  My process stumbles when I encounter articles whose figures, graphs, or maps contain either a glut or a dearth of information, making them non-intuitive to the uninitiated reader.

Some highlights: a map of a state containing rivers, waterbodies, and watershed boundaries (the focus of that particular article) AND all of the major roads and highways (NOT the focus of the article).  All in gray-scale.  Add in the point locations and names of the state's twelve most populous cities, and cram it into a box three inches tall by five inches wide.  The article was about modeling and delineating the major and minor watersheds of the area in order to develop best management practices for cooperating water districts; needless to say, that point was lost in the shuffle.  Another, all-too-common example: a graph depicting change over time in ten or more constituents using various dotted, dashed, and solid lines of variable thickness.  With that much information crammed into a single visual aid, the results are simply buried.

We have writing clinics and public-speaking critique sessions; why don't we have a peer-evaluation system for visual aids?  I think that many people (myself included) fall into the habit of having our material critiqued solely by our close working group.  While this is certainly a necessary step in the writing process (the people most familiar with our work are the ones most likely to catch its esoteric flaws), many scholars neglect to seek peer review from individuals tangential to, or completely outside of, their small fields.  I would say that one of our main objectives as scholars is to use our work to excite interest from members of the scholarly community both inside and outside our focused area.  In my opinion, an important step toward this goal is making our visual aids more accessible to the curious non-expert.

I would like to see our scholarly community develop this kind of peer-review network, where we can draw on the human resources around us to improve our intellectual contributions to our respective fields.  We could have minds from a variety of disciplines working collaboratively to improve the accessibility (and therefore the use) of our collective body of knowledge.  I think the concept has amazing potential.

Wendy Robertson is a graduate student in the Department of Environmental Studies at the University of Virginia.

1 Comment

  1. Of course, Edward Tufte is the guru for this field, but I have another suggestion that I have found incredibly useful. Frank Sulloway (Berkeley) wrote a book that used the biographies of 3,500 scientists to explore the networks of forces for and against scientific revolutions throughout history. The book – Born to Rebel – is a distillation of 26 years of Sulloway’s research, and he uses descriptive statistics brilliantly to condense and illuminate this rich data. Even better, he includes a set of appendices that show other people how to do it. I have long thought that these should be used as teaching aids for graduate researchers (and that the book should be used as an example of how to make gripping reading out of dense, quantitative historical research).