Autoencoder Node Saliency: Selecting Relevant Latent Representations

Publication Type
Journal Article
Publication Year
2019
Authors
Fan, Ya Ju
Abstract

The autoencoder is an artificial neural network that performs nonlinear dimension reduction and learns hidden representations of unlabeled data. With a linear transfer function it is similar to principal component analysis (PCA). However, while both methods use weight vectors for linear transformations, the autoencoder does not come with an indicator analogous to the eigenvalues that PCA pairs with its eigenvectors. The authors proposed a novel autoencoder node saliency method that examines whether the features constructed by an autoencoder exhibit properties related to known class labels. The supervised node saliency ranks nodes by their capability of performing a learning task and is coupled with the normalized entropy difference (NED). The authors established a property of NED values that verifies classifying behavior among the top-ranked nodes. By applying their methods to real datasets, the authors demonstrated that the methods can indicate which nodes perform a task well and explain what the autoencoder has learned.
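The abstract describes ranking latent nodes with a normalized entropy difference. As a rough illustration only, the sketch below assumes NED is computed as the gap between the maximum possible entropy and the entropy of binned node activations, normalized by that maximum; the paper's exact definition (and its supervised, label-aware variant) may differ, and the function name and bin count are hypothetical.

```python
import numpy as np

def normalized_entropy_difference(activations, n_bins=10):
    """Hypothetical sketch: NED of one latent node's activations.

    Assumes NED = (H_max - H) / H_max, where H is the entropy of the
    histogram of activation values over n_bins and H_max = log2(n_bins).
    This is one plausible reading of the abstract, not necessarily the
    paper's exact formulation.
    """
    counts, _ = np.histogram(activations, bins=n_bins)
    probs = counts / counts.sum()
    probs = probs[probs > 0]                      # drop empty bins
    entropy = -np.sum(probs * np.log2(probs))     # observed entropy H
    max_entropy = np.log2(n_bins)                 # H_max for a uniform spread
    return (max_entropy - entropy) / max_entropy

# Toy usage: rank hidden nodes of an encoder by NED.
# `latent` stands in for an (n_samples, n_nodes) array of autoencoder activations.
rng = np.random.default_rng(0)
latent = rng.random((500, 32))
ned_per_node = np.array([normalized_entropy_difference(latent[:, j])
                         for j in range(latent.shape[1])])
ranking = np.argsort(ned_per_node)[::-1]          # nodes with most concentrated activations first
```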

Citation
Date
Volume
88
Publication Title
Pattern Recognition
ISSN
0031-3203
DOI
https://doi.org/10.1016/j.patcog.2018.12.015
Publication Tags
Manual Tags
LLNL
Related Materials