We investigate how to automatically recover visual encodings from a chart image, primarily using inferred text elements. We contribute an end-to-end pipeline that takes a bitmap image as input and returns a visual encoding specification as output. We present a text analysis pipeline that detects text elements in a chart, classifies their roles (e.g., chart title, x-axis label, y-axis title), and recovers the text content using optical character recognition. We also train a Convolutional Neural Network for mark type classification. Using the identified text elements and graphical mark type, we then infer the encoding specification of an input chart image. We evaluate our techniques on three chart corpora: a set of automatically labeled charts generated using Vega, charts from the Quartz news website, and charts extracted from academic papers. We demonstrate accurate automatic inference of text elements, mark types, and chart specifications across a variety of input chart types.
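To make the final inference step concrete, the sketch below assembles a Vega-Lite-style encoding specification from classified text elements and a predicted mark type. It is a minimal, hypothetical illustration of the idea described in the abstract, not the authors' released code: the names (`TextElement`, `infer_encoding_spec`, the role labels) and the type-guessing heuristic are assumptions, and the detection, role classification, OCR, and CNN stages are taken as given inputs.

```python
# Hypothetical sketch: build an encoding spec from already-classified text
# elements and a CNN-predicted mark type. Names and roles are illustrative.
from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass
class TextElement:
    content: str           # text recovered via OCR
    role: str               # e.g. "title", "x-axis-title", "x-axis-label"
    bbox: Tuple[int, int, int, int]  # (x, y, width, height) in image pixels


def _is_number(s: str) -> bool:
    try:
        float(s.replace(",", ""))
        return True
    except ValueError:
        return False


def infer_encoding_spec(text_elements: List[TextElement], mark_type: str) -> Dict:
    """Assemble a Vega-Lite-style spec from text elements and mark type."""
    spec: Dict = {"mark": mark_type, "encoding": {}}
    for el in text_elements:
        if el.role == "title":
            spec["title"] = el.content
        elif el.role == "x-axis-title":
            spec["encoding"].setdefault("x", {})["field"] = el.content
        elif el.role == "y-axis-title":
            spec["encoding"].setdefault("y", {})["field"] = el.content
    # Simple heuristic: if all x tick labels parse as numbers, treat the
    # x channel as quantitative; otherwise treat it as nominal.
    x_labels = [e.content for e in text_elements if e.role == "x-axis-label"]
    if x_labels:
        numeric = all(_is_number(s) for s in x_labels)
        spec["encoding"].setdefault("x", {})["type"] = (
            "quantitative" if numeric else "nominal"
        )
    return spec


# Example: a bar chart with a title, two axis titles, and two x tick labels.
elements = [
    TextElement("Average Rainfall", "title", (10, 5, 200, 20)),
    TextElement("Month", "x-axis-title", (100, 280, 60, 15)),
    TextElement("Rainfall (mm)", "y-axis-title", (5, 100, 15, 120)),
    TextElement("Jan", "x-axis-label", (30, 260, 25, 12)),
    TextElement("Feb", "x-axis-label", (70, 260, 25, 12)),
]
print(infer_encoding_spec(elements, mark_type="bar"))
```

In the full pipeline, the role labels would come from the text-role classifier and the mark type from the CNN; the heuristic type inference here only stands in for the richer analysis of axis labels described in the paper.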
BibTeX
@article{2017-reverse-engineering-vis,
title = {Reverse-Engineering Visualizations: Recovering Visual Encodings from Chart Images},
author = {Poco, Jorge and Heer, Jeffrey},
journal = {Computer Graphics Forum (Proc. EuroVis)},
year = {2017},
url = {https://idl.uw.edu/papers/reverse-engineering-vis},
doi = {10.1111/cgf.13193}
}