Fred Hohman

Gamut: A Design Probe to Understand How Data Scientists Understand Machine Learning Models

ACM Conference on Human Factors in Computing Systems (CHI), 2019
Deployed at Microsoft Research

Abstract

Without good models and the right tools to interpret them, data scientists risk making decisions based on hidden biases, spurious correlations, and false generalizations. This has led to a rallying cry for model interpretability. Yet the concept of interpretability remains nebulous, such that researchers and tool designers lack actionable guidelines for how to incorporate interpretability into models and accompanying tools. Through an iterative design process with expert machine learning researchers and practitioners, we designed a visual analytics system, Gamut, to explore how interactive interfaces could better support model interpretation. Using Gamut as a probe, we investigated why and how professional data scientists interpret models, and how interface affordances can support data scientists in answering questions about model interpretability. Our investigation showed that interpretability is not a monolithic concept: data scientists have different reasons to interpret models and tailor explanations for specific audiences, often balancing competing concerns of simplicity and completeness. Participants also asked to use Gamut in their work, highlighting its potential to help data scientists understand their own data.

BibTeX

@inproceedings{hohman2019gamut,
    title={Gamut: A Design Probe to Understand How Data Scientists Understand Machine Learning Models},
    author={Hohman, Fred and Head, Andrew and Caruana, Rich and DeLine, Rob and Drucker, Steven M.},
    booktitle={Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems},
    year={2019},
    organization={ACM},
    doi={10.1145/3290605.3300809}
}