UW Interactive Data Lab

Crowdsourcing Graphical Perception: Using Mechanical Turk to Assess Visualization Design

Jeffrey Heer, Michael Bostock. ACM Human Factors in Computing Systems (CHI), 2010
Figure: Experimental stimuli in which participants were asked to estimate what percentage the smaller value was of the larger.
Materials
PDF | Honorable Mention Award
Abstract
Understanding perception is critical to effective visualization design. With its low cost and scalability, crowdsourcing presents an attractive option for evaluating the large design space of visualizations; however, it first requires validation. In this paper, we assess the viability of Amazon’s Mechanical Turk as a platform for graphical perception experiments. We replicate previous studies of spatial encoding and luminance contrast and compare our results. We also conduct new experiments on rectangular area perception (as in treemaps or cartograms) and on chart size and gridline spacing. Our results demonstrate that crowdsourced perception experiments are viable and contribute new insights for visualization design. Lastly, we report cost and performance data from our experiments and distill recommendations for the design of crowdsourced studies.
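The proportion-judgment task described above (estimating what percentage the smaller value is of the larger) is typically scored with Cleveland and McGill's log absolute error measure, which this line of work builds on. Below is a minimal sketch of that metric, assuming the conventional form log2(|judged − true| + 1/8); the function name and example values are illustrative, not taken from the paper.

```python
import math

def log_abs_error(judged_pct: float, true_pct: float) -> float:
    """Cleveland & McGill's log absolute error for proportion judgments:
    log2(|judged - true| + 1/8). The 1/8 term keeps the log defined
    when a participant's estimate is exactly correct."""
    return math.log2(abs(judged_pct - true_pct) + 0.125)

# Illustrative stimulus: the smaller value is 40% of the larger.
true_pct = 40.0
perfect = log_abs_error(40.0, true_pct)   # exact judgment -> log2(1/8) = -3
off_by_7 = log_abs_error(47.0, true_pct)  # overestimate by 7 points
```

Averaging this error across trials (often via midmeans, to reduce the influence of outliers) gives a per-condition accuracy score that can be compared across encodings.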
BibTeX
@inproceedings{2010-crowdsourcing-graphical-perception,
  title = {Crowdsourcing Graphical Perception: Using Mechanical Turk to Assess Visualization Design},
  author = {Heer, Jeffrey and Bostock, Michael},
  booktitle = {ACM Human Factors in Computing Systems (CHI)},
  year = {2010},
  pages = {203--212},
  url = {https://idl.uw.edu/papers/crowdsourcing-graphical-perception},
  doi = {10.1145/1753326.1753357}
}