ai4materials.interpretation.deconv_resp_maps module

class ai4materials.interpretation.deconv_resp_maps.DeconvNet(model)[source]

Bases: object

DeconvNet class. Code taken from: https://github.com/tdeboissiere/DeepLearningImplementations/blob/master/DeconvNet/KerasDeconv.py

get_deconv(X, target_layer, feat_map=None)[source]
get_layers()[source]
ai4materials.interpretation.deconv_resp_maps.deconv_visualize(model, target_layer, input_data, nb_top_feat_maps)[source]

Obtain attentive response maps back-projected to image space using transposed convolutions (sometimes referred to as deconvolutions in machine learning).

Parameters:

model: instance of the Keras model
The ConvNet model to be used.
target_layer: str
Name of the layer for which we want to obtain the attentive response maps. The names of the layers are defined in the Keras model instance.
input_data: ndarray
The image data to be passed through the network. Shape: (n_samples, n_channels, img_dim1, img_dim2)
nb_top_feat_maps: int
Number of top filters to visualize, e.g. nb_top_feat_maps=25 will visualize the top 25 filters in the target layer.

Code author: Devinder Kumar <d22kumar@uwaterloo.ca>
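For intuition, the back-projection operation itself can be sketched in plain NumPy. This is a minimal, illustrative stride-1 transposed convolution, not part of the ai4materials API; feat_map and kernel below are hypothetical toy arrays.

```python
import numpy as np

def transposed_conv2d(feat_map, kernel):
    """Back-project a feature map to a larger spatial grid by
    scattering each activation through the kernel.
    Stride-1, no padding: output shape is (h + kh - 1, w + kw - 1)."""
    h, w = feat_map.shape
    kh, kw = kernel.shape
    out = np.zeros((h + kh - 1, w + kw - 1))
    for i in range(h):
        for j in range(w):
            # each activation stamps a scaled copy of the kernel
            out[i:i + kh, j:j + kw] += feat_map[i, j] * kernel
    return out

# a single activated unit projects the kernel pattern back into image space
resp = transposed_conv2d(np.array([[1.0]]), np.array([[0.0, 1.0], [1.0, 0.0]]))
```

This is why the resulting maps live in image space: every unit's response is expanded back through the receptive field that produced it.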

ai4materials.interpretation.deconv_resp_maps.get_deconv_imgs(img_index, data, dec_layer, target_layer, feat_maps)[source]

Return the attentive response maps of the images specified in img_index for the target layer and feature maps specified in the arguments.

Parameters:

img_index: list or ndarray
List or array of indices of the images (contained in data) for which we want to obtain the attentive response maps.
data: ndarray
The image data. Shape : (n_samples, n_channels, img_dim1, img_dim2)
dec_layer: instance of class ai4materials.interpretation.deconv_resp_maps.DeconvNet
The DeconvNet model used to compute the transposed convolutions.
target_layer: str
Name of the layer for which we want to obtain the attentive response maps. The names of the layers are defined in the Keras model instance.
feat_maps: int
Index of the attentive response map to visualise.

Code author: Devinder Kumar <d22kumar@uwaterloo.ca>

ai4materials.interpretation.deconv_resp_maps.get_max_activated_filter_for_layer(target_layer, model, input_data, nb_top_feat_maps, img_index)[source]

Find the indices of the most activated filters for a given image in the specified target layer of a Keras model.

Parameters:

target_layer: str
Name of the layer for which we want to obtain the attentive response maps. The names of the layers are defined in the Keras model instance.
model: instance of the Keras model
The ConvNet model to be used.
input_data: ndarray
The image data to be passed through the network. Shape: (n_samples, n_channels, img_dim1, img_dim2)
nb_top_feat_maps: int
Number of the top attentive response maps to be calculated and plotted. It must be <= the minimum number of filters used in the neural network layers. This is not checked by the code, and respecting this criterion is up to the user.
img_index: list or ndarray
List or array of indices of the images (contained in data) for which we want to obtain the attentive response maps.
Returns: list of int
List containing the indices of the filters with the highest response (activation) for the given image.

Code author: Devinder Kumar <d22kumar@uwaterloo.ca>

Code author: Angelo Ziletti <angelo.ziletti@gmail.com>
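The ranking step this function performs can be sketched in plain NumPy. The helper name and the (n_filters, dim1, dim2) activation layout below are illustrative assumptions, not the actual ai4materials implementation.

```python
import numpy as np

def top_activated_filters(activations, nb_top_feat_maps):
    """Rank the filters of one layer by their total response to a
    single image and return the indices of the strongest ones.
    activations: ndarray of shape (n_filters, dim1, dim2)."""
    totals = activations.reshape(activations.shape[0], -1).sum(axis=1)
    # argsort is ascending; reverse it so the highest responses come first
    return list(np.argsort(totals)[::-1][:nb_top_feat_maps])

acts = np.zeros((4, 2, 2))
acts[2] = 5.0   # filter 2 responds most strongly
acts[0] = 1.0   # filter 0 responds weakly; filters 1 and 3 are silent
idx = top_activated_filters(acts, 2)  # → [2, 0]
```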

ai4materials.interpretation.deconv_resp_maps.load_model(model_arch_file, model_weights_file)[source]

Load a Keras model from a .json architecture file and a .h5 weights file.

ai4materials.interpretation.deconv_resp_maps.plot_att_response_maps(data, model_arch_file, model_weights_file, figure_dir, nb_conv_layers, layer_nb='all', nb_top_feat_maps=4, filename_maps='attentive_response_maps', cmap=<matplotlib.colors.LinearSegmentedColormap object>, plot_all_filters=False, plot_filter_sum=True, plot_summary=True)[source]

Plot attentive response maps given a Keras trained model and input images.

Parameters:

data: ndarray, shape (n_images, dim1, dim2, channels)
Array of input images that will be used to calculate the attentive response maps.
model_arch_file: string
Full path to the model architecture file (.json format) written by Keras after the neural network training. This is used by the load_model function to load the neural network architecture.
model_weights_file: string
Full path to the model weights file (.h5 format) written by Keras after the neural network training. This is used by the load_model function to load the neural network weights.
figure_dir: string
Full path of the directory where the images resulting from the transposed convolution procedure will be saved.
nb_conv_layers: int
Numbers of Convolution2D layers in the neural network architecture.
layer_nb: list of int, or ‘all’
List with the layer numbers to be deconvolved, counting from 0. E.g. layer_nb=[0, 1, 4] will deconvolve the 1st, 2nd, and 5th convolution2d layers. Only up to 6 conv_2d layers are supported. If ‘all’ is selected, all conv_2d layers will be deconvolved, up to nb_conv_layers.
nb_top_feat_maps: int
Number of the top attentive response maps to be calculated and plotted. It must be <= the minimum number of filters used in the neural network layers. This is not checked by the code, and respecting this criterion is up to the user.
filename_maps: str
Base filename (without extension and path) of the files where the attentive response maps will be saved.
cmap: Matplotlib cmap, optional, default=`cm.hot`
Type of coloring for the heatmap, if images are greyscale. Possible cmaps can be found here: https://matplotlib.org/examples/color/colormaps_reference.html If images are RGB, then an RGB color map is used. The RGB colormap can be found at ai4materials.utils.utils_plotting.rgb_colormaps.
plot_all_filters: bool
If True, plot and save the nb_top_feat_maps for each layer. The files will be saved in different folders according to the layer: “convolution2d_1” for the 1st layer, “convolution2d_2” for the 2nd layer, etc.
plot_filter_sum: bool
If True, plot and save the sum of all the filters for a given layer.
plot_summary: bool
If True, plot and save a summary figure containing: (left) the input image, (center) the nb_top_feat_maps filters for each deconvolved layer, (right) the sum of all the filters of the last layer. If set to True, plot_filter_sum must also be set to True.

Code author: Angelo Ziletti <angelo.ziletti@gmail.com>
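The reduction behind the plot_filter_sum option amounts to collapsing the filter axis of a layer's attentive response maps. A minimal sketch with hypothetical arrays, not the plotting code itself:

```python
import numpy as np

# hypothetical attentive response maps for one layer:
# shape (nb_filters, img_dim1, img_dim2)
resp_maps = np.ones((4, 8, 8))

# the "filter sum" image saved when plot_filter_sum=True aggregates
# all filters of the layer by summing over the filter axis
filter_sum = resp_maps.sum(axis=0)
```

The summary figure produced by plot_summary=True places this summed image next to the input image and the per-filter maps, which is why plot_filter_sum must also be enabled.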