UW Interactive Data Lab

BLADE: Benchmarking Language Model Agents for Data-Driven Science

Ken Gu, Ruoxi Shang, Ruien Jiang, Keying Kuang, Richard-John Lin, Donghe Lyu, Yue Mao, Youran Pan, Teng Wu, Jiaqian Yu, Yikun Zhang, Tianmai M. Zhang, Lanyi Zhu, Mike Merrill, Jeffrey Heer, Tim Althoff. Empirical Methods in Natural Language Processing, 2024
Figure: Overview of BLADE. We gathered research questions and datasets from existing research papers, crowd-sourced analysis studies, and statistics textbooks, as well as analyses from expert annotators (boxes 1-2-3). Given a research question and dataset, LM agents generate a full analysis containing the relevant conceptual variables, a data transform function, and a statistical modeling function (boxes 1-4-5). BLADE automatically evaluates this analysis against the ground truth (box 6).
Abstract
Data-driven scientific discovery requires the iterative integration of scientific domain knowledge, statistical expertise, and an understanding of data semantics to make nuanced analytical decisions, e.g., about which variables, transformations, and statistical models to consider. LM-based agents equipped with planning, memory, and code execution capabilities have the potential to support data-driven science. However, evaluating agents on such open-ended tasks is challenging due to multiple valid approaches, partially correct steps, and different ways to express the same decisions. To address these challenges, we present BLADE, a benchmark to automatically evaluate agents' multifaceted approaches to open-ended research questions. BLADE consists of 12 datasets and research questions drawn from existing scientific literature, with ground truth collected from independent analyses by expert data scientists and researchers. To automatically evaluate agent responses, we developed corresponding computational methods to match different representations of analyses to this ground truth. Though language models possess considerable world knowledge, our evaluation shows that they are often limited to basic analyses. However, agents capable of interacting with the underlying data demonstrate improved, but still non-optimal, diversity in their analytical decision making. Our work enables the evaluation of agents for data-driven science and provides researchers deeper insights into agents' analysis approaches.
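To make this concrete, the sketch below shows what one agent-generated analysis might look like, following the three components described above: conceptual variables, a data transform function, and a statistical modeling function. It is a minimal illustration in Python; the variable names, schema, and research question are hypothetical and do not reflect BLADE's actual API or datasets.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical example of the three components an agent produces for one
# research question (names and structure are illustrative only).

# 1. Conceptual variables the agent deems relevant to the question.
conceptual_variables = {
    "dependent": "weight_change",
    "independent": ["diet_group"],
    "controls": ["weight_start", "age"],
}

# 2. Data transform: derive analysis variables from the raw dataset.
def transform(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    out["weight_change"] = out["weight_end"] - out["weight_start"]
    # Drop rows with missing values in any modeled column.
    cols = ["weight_change", "diet_group", "weight_start", "age"]
    return out.dropna(subset=cols)

# 3. Statistical model: fit a model that addresses the research question.
def model(df: pd.DataFrame):
    # Ordinary least squares with controls, via statsmodels' formula API.
    return smf.ols(
        "weight_change ~ C(diet_group) + weight_start + age", data=df
    ).fit()

BLADE then scores such a submission automatically by matching its components, e.g., the declared conceptual variables and the behavior of the transform and modeling functions, against the ground-truth analyses collected from expert data scientists and researchers.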
BibTeX
@inproceedings{2024-blade,
  title = {BLADE: Benchmarking Language Model Agents for Data-Driven Science},
  author = {Gu, Ken and Shang, Ruoxi and Jiang, Ruien and Kuang, Keying and Lin, Richard-John and Lyu, Donghe and Mao, Yue and Pan, Youran and Wu, Teng and Yu, Jiaqian and Zhang, Yikun and Zhang, Tianmai M. and Zhu, Lanyi and Merrill, Mike and Heer, Jeffrey and Althoff, Tim},
  booktitle = {Empirical Methods in Natural Language Processing},
  year = {2024},
  url = {https://idl.uw.edu/papers/blade},
  doi = {10.48550/arXiv.2408.09667}
}