Although we have seen a proliferation of algorithms for recommending visualizations, these algorithms are rarely compared with one another, making it difficult to ascertain which algorithm is best for a given visual analysis scenario. Though several formal frameworks have been proposed in response, we believe this issue persists because visualization recommendation algorithms are inadequately specified from an evaluation perspective. In this paper, we propose an evaluation-focused framework to contextualize and compare a broad range of visualization recommendation algorithms. We present the structure of our framework, where algorithms are specified using three components: (1) a graph representing the full space of possible visualization designs, (2) the method used to traverse the graph for potential candidates for recommendation, and (3) an oracle used to rank candidate designs. To demonstrate how our framework guides the formal comparison of algorithmic performance, we not only theoretically compare five existing representative recommendation algorithms, but also empirically compare four new algorithms generated based on our findings from the theoretical comparison. Our results show that these algorithms behave similarly in terms of user performance, highlighting the need for more rigorous formal comparisons of recommendation algorithms to further clarify their benefits in various analysis scenarios.
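The three-component structure described above (a design-space graph, a traversal method, and a ranking oracle) can be illustrated with a minimal sketch. This is not the paper's implementation; all names, design choices, and the toy scoring heuristic are hypothetical, and the graph is flattened here to a simple cross product of design choices for brevity.

```python
# Hypothetical sketch of the three-component framework structure:
# (1) a space of candidate designs, (2) a traversal that enumerates
# candidates, and (3) an oracle that ranks them. All names and the
# scoring heuristic are illustrative only.
from itertools import product

MARKS = ["bar", "line", "point"]
CHANNELS = ["x", "y", "color"]

def design_space(fields):
    """(1) Enumerate designs: a mark plus a field-to-channel assignment."""
    for mark in MARKS:
        for channels in product(CHANNELS, repeat=len(fields)):
            if len(set(channels)) == len(channels):  # one field per channel
                yield {"mark": mark, "enc": dict(zip(fields, channels))}

def traverse(fields, predicate):
    """(2) Walk the space, keeping only candidates the predicate admits."""
    return [d for d in design_space(fields) if predicate(d)]

def oracle(design):
    """(3) Score a candidate; higher is better (toy heuristic)."""
    score = 0
    if "x" in design["enc"].values():
        score += 1          # reward designs that use the x position
    if design["mark"] == "bar":
        score += 1          # toy preference for bar marks
    return score

def recommend(fields, predicate, k=3):
    """Combine the three components into a top-k recommender."""
    candidates = traverse(fields, predicate)
    return sorted(candidates, key=oracle, reverse=True)[:k]
```

Under this framing, two recommendation algorithms can be compared by asking which component they differ in: the design space they cover, the traversal they use, or the oracle they rank with.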
BibTeX
@article{2022-eval-focused-visrec,
title = {An Evaluation-Focused Framework for Visualization Recommendation Algorithms},
author = {Zeng, Zehua and Moh, Phoebe and Du, Fan and Hoffswell, Jane and Lee, Tak Yeon and Malik, Sana and Koh, Eunyee and Battle, Leilani},
journal = {IEEE Trans. Visualization \& Comp. Graphics (Proc. VIS)},
year = {2022},
url = {https://idl.uw.edu/papers/eval-focused-visrec},
doi = {10.1109/TVCG.2021.3114814}
}