eli5.keras

eli5 has Keras support - eli5.explain_prediction() explains predictions of image classifiers by using an implementation of Grad-CAM (Gradient-weighted Class Activation Mapping, https://arxiv.org/pdf/1610.02391.pdf). The function works with both the Sequential model and the functional Model API.
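
For instance, a minimal usage sketch (the MobileNetV2 model and the input file 'cat.jpg' are placeholders used for illustration, not requirements of eli5):

    import numpy as np
    import eli5
    from keras.applications import mobilenet_v2
    from keras.preprocessing import image

    # any Keras image classifier will do; MobileNetV2 is only an example
    model = mobilenet_v2.MobileNetV2(weights='imagenet')

    # load and preprocess a single image into a rank 4 batch of one
    img = image.load_img('cat.jpg', target_size=(224, 224))
    doc = mobilenet_v2.preprocess_input(
        np.expand_dims(image.img_to_array(img), axis=0))

    # Grad-CAM explanation of the model's top prediction
    expl = eli5.explain_prediction(model, doc)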

eli5.keras.explain_prediction

explain_prediction_keras(model, doc, targets=None, layer=None, image=None)[source]

Explain the prediction of a Keras classifier with the Grad-CAM technique.

We explicitly assume that the model’s task is classification, i.e. its final output is a vector of class scores.

Parameters:
  • model (keras.models.Model) – Instance of a Keras neural network model, whose predictions are to be explained.
  • doc (numpy.ndarray) –

    An input to model whose prediction will be explained.

    Currently only numpy arrays are supported.

    The tensor must be of suitable shape for the model.

    Check model.input_shape to confirm the required dimensions of the input tensor.

    raises TypeError:
     if doc is not a numpy array.
    raises ValueError:
     if doc shape does not match the model’s input shape.
  • targets (list[int], optional) –

    Prediction IDs to focus on.

    Currently only the first prediction from the list is explained. The list must be length one.

    If None, the model is fed the input image and its top prediction is taken as the target automatically.

    raises ValueError:
     if targets is a list with more than one item.
    raises TypeError:
     if targets is not list or None.
  • layer (int or str or keras.layers.Layer, optional) –

    The activation layer in the model to perform Grad-CAM on: a valid keras layer name, layer index, or an instance of a Keras layer.

    If None, an attempt is made to automatically find a suitable layer. For best results, pick a layer that:

    • has spatial or temporal information (conv, recurrent, pooling, embedding) (not dense layers).
    • shows high level features.
    • has large enough dimensions for resizing over the input to work.
    raises TypeError:
     if layer is not None, str, int, or keras.layers.Layer instance.
    raises ValueError:
     if a suitable layer cannot be found.
    raises ValueError:
     if differentiation fails with respect to the retrieved layer.

See eli5.explain_prediction() for more information about the model, doc, and targets parameters.

Other arguments are passed to concrete implementations for image and text explanations.

Returns: expl (eli5.base.Explanation) – An eli5.base.Explanation object for the relevant implementation.
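
A sketch of the optional arguments (the layer name 'block5_conv3' is an assumption specific to VGG16-like models, not a general requirement):

    # explain a specific class instead of the top prediction,
    # and compute Grad-CAM on an explicitly chosen layer
    expl = eli5.explain_prediction(
        model, doc,
        targets=[284],           # a single prediction ID to explain
        layer='block5_conv3',    # may also be a layer index or a keras.layers.Layer
    )
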
explain_prediction_keras_image(model, doc, image=None, targets=None, layer=None)[source]

Explain an image-based model, highlighting which regions of the image contributed to the prediction.

Parameters:
  • doc (numpy.ndarray) –

    Input representing an image.

    Must have a suitable format. Some models require tensors to be rank 4 in the format (batch_size, dims, …, channels) (channels last) or (batch_size, channels, dims, …) (channels first), where dims is usually in the order height, width, and batch_size is 1 for a single image.

    If the image argument is not given, an image will be created from doc where possible.

  • image (PIL.Image.Image, optional) – Pillow image over which to overlay the heatmap. Corresponds to the input doc.

See eli5.keras.explain_prediction.explain_prediction_keras() for a description of model, doc, targets, and layer parameters.

Returns: expl (eli5.base.Explanation) –
An eli5.base.Explanation object with the following attributes:
  • image a Pillow image representing the input.
  • targets a list of eli5.base.TargetExplanation objects for each target. Currently only 1 target is supported.
The eli5.base.TargetExplanation objects will have the following attributes:
  • heatmap a rank 2 numpy array with the localization map values as floats.
  • target the ID of the target class.
  • score the score value for the predicted class.
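
The attributes above can be accessed directly; a short sketch (eli5.format_as_image is eli5's image formatter and requires Pillow and matplotlib to be installed):

    target = expl.targets[0]
    print(target.target, target.score)   # predicted class ID and its score
    heatmap = target.heatmap             # rank 2 numpy array of localization values

    # overlay the heatmap on expl.image and save the result
    overlay = eli5.format_as_image(expl)
    overlay.save('explanation.png')
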
explain_prediction_keras_not_supported(model, doc)[source]

Cannot do an explanation based on the passed arguments. Did you pass either “image” or “tokens”?

eli5.keras.gradcam

gradcam(weights, activations)[source]

Generate a localization map (heatmap) using Gradient-weighted Class Activation Mapping (Grad-CAM) (https://arxiv.org/pdf/1610.02391.pdf).

The values for the parameters can be obtained from eli5.keras.gradcam.gradcam_backend().

Parameters:
  • weights (numpy.ndarray) – Activation weights, vector with one weight per map, rank 1.
  • activations (numpy.ndarray) – Forward activation map values, vector of matrices, rank 3.
Returns:

lmap (numpy.ndarray) – A Grad-CAM localization map, rank 2, with values normalized in the interval [0, 1].

Notes

We currently make two assumptions in this implementation:
  • We are dealing with images as our input to the model.
  • We are doing classification: the model’s output is a vector of class scores or probabilities.
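
In numpy terms the computation reduces to a weighted combination of the activation maps, a ReLU, and normalization; a minimal sketch of the idea (not the library's exact implementation):

    import numpy as np

    def gradcam_sketch(weights, activations):
        # weights: rank 1, one weight per activation map (channel)
        # activations: rank 3, e.g. (height, width, channels)
        lmap = np.tensordot(activations, weights, axes=([-1], [0]))  # weighted sum of maps
        lmap = np.maximum(lmap, 0)    # ReLU: keep only positive contributions
        max_val = lmap.max()
        if max_val > 0:
            lmap = lmap / max_val     # normalize into [0, 1]
        return lmap
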
gradcam_backend(model, doc, targets, activation_layer)[source]

Compute the terms and by-products required by the Grad-CAM formula.

Parameters:
  • model (keras.models.Model) – Differentiable network.
  • doc (numpy.ndarray) – Input to the network.
  • targets (list, optional) – Index into the network’s output, indicating the output node that will be used as the “loss” during differentiation.
  • activation_layer (keras.layers.Layer) – Keras layer instance to differentiate with respect to.

See eli5.keras.explain_prediction() for a description of the model, doc, and targets parameters.

Returns: (weights, activations, gradients, predicted_idx, predicted_val) ((numpy.ndarray, …, int, float)) – Values of variables.
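
How the two functions fit together, as a sketch (the layer name 'block5_conv3' is a hypothetical choice; any suitable keras.layers.Layer instance works):

    from eli5.keras.gradcam import gradcam, gradcam_backend

    activation_layer = model.get_layer('block5_conv3')
    weights, activations, gradients, predicted_idx, predicted_val = gradcam_backend(
        model, doc, targets=None, activation_layer=activation_layer)  # None: use top prediction
    heatmap = gradcam(weights, activations)   # rank 2 map with values in [0, 1]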