
Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations

Fred Hohman, Haekyu Park, Caleb Robinson, Duen Horng (Polo) Chau

With Summit, users can scalably summarize and interactively interpret deep neural networks by visualizing what features a network detects and how they are related. In this example, InceptionV1 accurately classifies images of tench (yellow-brown fish). However, Summit reveals surprising associations in the network (e.g., using parts of people) that contribute to its final outcome: the "tench" prediction depends on an intermediate "hands holding fish" feature (right callout), which is influenced by lower-level features like "scales," "person," and "fish." A. The Embedding View summarizes all classes' aggregated activations using dimensionality reduction. B. The Class Sidebar enables users to search, sort, and compare all classes within a model. C. The Attribution Graph View visualizes highly activated neurons as vertices ("scales," "fish") and their most influential connections as edges (dashed purple edges).

Abstract

Deep learning is increasingly used in decision-making tasks. However, understanding how neural networks produce final predictions remains a fundamental challenge. Existing work on interpreting neural network predictions for images often focuses on explaining predictions for single images or neurons. As predictions are often computed from millions of weights that are optimized over millions of images, such explanations can easily miss a bigger picture. We present Summit, the first interactive system that scalably and systematically summarizes and visualizes what features a deep learning model has learned and how those features interact to make predictions. Summit introduces two new scalable summarization techniques: (1) activation aggregation discovers important neurons, and (2) neuron-influence aggregation identifies relationships among such neurons. Summit combines these techniques to create the novel attribution graph that reveals and summarizes crucial neuron associations and substructures that contribute to a model’s outcomes. Summit scales to large data, such as the ImageNet dataset with 1.2M images, and leverages neural network feature visualization and dataset examples to help users distill large, complex neural network models into compact, interactive visualizations. We present neural network exploration scenarios where Summit helps us discover multiple surprising insights into a state-of-the-art image classifier’s learned representations and informs future neural network architecture design. The Summit visualization runs in modern web browsers and is open-sourced.
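
To make the two summarization techniques concrete, below is a minimal NumPy sketch of the idea behind them. All array shapes, variable names, and the random stand-in data are illustrative assumptions, not the paper's implementation: Summit's actual pipeline computes channel activations and inter-channel influences from InceptionV1 over the 1.2M-image ImageNet dataset.

import numpy as np

rng = np.random.default_rng(0)

# Assume per-image channel activations for one class at one layer have
# already been extracted, e.g., spatially max-pooled feature maps:
# shape (num_images, num_channels). These sizes are placeholders.
num_images, num_channels = 200, 128
activations = rng.random((num_images, num_channels))

# (1) Activation aggregation: aggregate each channel's activation over
# all of the class's images, then keep the top-k channels as that
# class's "important neurons" at this layer.
k = 5
channel_scores = activations.sum(axis=0)
top_channels = np.argsort(channel_scores)[::-1][:k]

# (2) Neuron-influence aggregation: given per-image influence scores
# between channels of the previous layer and this layer (a random
# stand-in here for attribution values), aggregate them over the
# class's images and keep the strongest connection into each important
# channel. These aggregated connections become attribution graph edges.
prev_channels = 64
influences = rng.random((num_images, prev_channels, num_channels))
edge_scores = influences.sum(axis=0)  # shape (prev_channels, num_channels)

edges = []
for c in top_channels:
    strongest_prev = int(np.argmax(edge_scores[:, c]))
    edges.append((strongest_prev, int(c), float(edge_scores[strongest_prev, c])))

print("important channels:", top_channels)
print("attribution graph edges (prev -> curr, weight):", edges)

In the real system, aggregates like these are computed once over the whole dataset, and the resulting attribution graph is what the browser-based visualization renders interactively.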

Citation

Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations
Fred Hohman, Haekyu Park, Caleb Robinson, Duen Horng (Polo) Chau
arXiv:1904.02323. 2019.
Project Demo PDF Video Code BibTeX

BibTeX


@article{hohman2019summit,
  title={Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations},
  author={Hohman, Fred and Park, Haekyu and Robinson, Caleb and Chau, Duen Horng},
  journal={arXiv preprint arXiv:1904.02323},
  year={2019},
  url={https://fredhohman.com/summit/}
}