allennlp.interpret.saliency_interpreters

class allennlp.interpret.saliency_interpreters.saliency_interpreter.SaliencyInterpreter(predictor: allennlp.predictors.predictor.Predictor)[source]

Bases: allennlp.common.registrable.Registrable

A SaliencyInterpreter interprets an AllenNLP Predictor’s outputs by assigning a saliency score to each input token.

saliency_interpret_from_json(self, inputs: Dict[str, Any]) → Dict[str, Any][source]

This function computes a saliency score for each input token, indicating how much each token influenced the model’s prediction.

Parameters
inputs : JsonDict

The input you want to interpret (the same as the argument to a Predictor, e.g., predict_json()).

Returns
interpretation : JsonDict

Contains the normalized saliency values for each input token. The dict has one entry per instance in the inputs JsonDict, e.g., {instance_1: ..., instance_2: ..., ...}. Each of those entries has an entry for each saliency map over the inputs, e.g., {grad_input_1: ..., grad_input_2: ...}.
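The nested return shape can be illustrated with a minimal pure-Python sketch. This is not part of the AllenNLP API — `normalize_saliency` is a hypothetical helper that mimics the normalization step: raw per-token gradient magnitudes are scaled so each instance’s scores sum to 1, and packed into the `{instance_N: {grad_input_1: ...}}` layout described above.

```python
def normalize_saliency(instance_grads):
    """Normalize raw token gradient magnitudes per instance so the
    scores sum to 1, mirroring the interpreter's returned JsonDict."""
    interpretation = {}
    for i, grads in enumerate(instance_grads, start=1):
        total = sum(abs(g) for g in grads)
        interpretation[f"instance_{i}"] = {
            "grad_input_1": [abs(g) / total for g in grads]
        }
    return interpretation

# One instance with three tokens; the scores sum to 1.
result = normalize_saliency([[0.2, -0.6, 0.2]])
```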

class allennlp.interpret.saliency_interpreters.simple_gradient.SimpleGradient(predictor: allennlp.predictors.predictor.Predictor)[source]

Bases: allennlp.interpret.saliency_interpreters.saliency_interpreter.SaliencyInterpreter

saliency_interpret_from_json(self, inputs: Dict[str, Any]) → Dict[str, Any][source]

Interprets the model’s prediction for inputs. Gets the gradients of the loss with respect to the input and returns those gradients normalized and sanitized.
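The core of the simple-gradient score can be sketched in pure Python, with no AllenNLP dependency. This is an illustrative stand-in, not the library’s implementation: each token’s saliency is taken as the absolute dot product of its embedding with the gradient at that embedding, then L1-normalized across tokens. The vector inputs here are hypothetical placeholders for the model’s real tensors.

```python
def simple_gradient_saliency(embeddings, gradients):
    """Per-token saliency as |grad . embedding|, L1-normalized.
    `embeddings` and `gradients` are parallel lists of equal-length
    vectors, one pair per input token."""
    raw = [abs(sum(g * e for g, e in zip(grad, emb)))
           for grad, emb in zip(gradients, embeddings)]
    total = sum(raw)
    return [r / total for r in raw]

# Two tokens with 2-d embeddings; scores sum to 1.
scores = simple_gradient_saliency([[1.0, 0.0], [0.0, 2.0]],
                                  [[0.5, 0.0], [0.0, 0.25]])
```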

class allennlp.interpret.saliency_interpreters.integrated_gradient.IntegratedGradient(predictor: allennlp.predictors.predictor.Predictor)[source]

Bases: allennlp.interpret.saliency_interpreters.saliency_interpreter.SaliencyInterpreter

Interprets the prediction using Integrated Gradients (https://arxiv.org/abs/1703.01365)

saliency_interpret_from_json(self, inputs: Dict[str, Any]) → Dict[str, Any][source]

This function computes a saliency score for each input token, indicating how much each token influenced the model’s prediction.

Parameters
inputs : JsonDict

The input you want to interpret (the same as the argument to a Predictor, e.g., predict_json()).

Returns
interpretation : JsonDict

Contains the normalized saliency values for each input token. The dict has one entry per instance in the inputs JsonDict, e.g., {instance_1: ..., instance_2: ..., ...}. Each of those entries has an entry for each saliency map over the inputs, e.g., {grad_input_1: ..., grad_input_2: ...}.
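The Integrated Gradients computation itself can be sketched for a scalar input as a Riemann sum, independent of AllenNLP: attribute `(x - baseline)` times the average gradient along the straight-line path from the baseline to `x`. The function below is a hypothetical toy, but it exhibits the method’s completeness property — the attribution approximates `f(x) - f(baseline)`.

```python
def integrated_gradients_1d(f_grad, x, baseline=0.0, steps=200):
    """Riemann-sum approximation of Integrated Gradients for a
    scalar input: (x - baseline) * mean gradient along the path
    from `baseline` to `x` (Sundararajan et al., 2017)."""
    total = 0.0
    for k in range(1, steps + 1):
        point = baseline + (k / steps) * (x - baseline)
        total += f_grad(point)
    return (x - baseline) * total / steps

# For f(x) = x**2 the gradient is 2x; completeness says the
# attribution at x = 3 approximates f(3) - f(0) = 9.
attribution = integrated_gradients_1d(lambda x: 2 * x, 3.0)
```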

class allennlp.interpret.saliency_interpreters.smooth_gradient.SmoothGradient(predictor: allennlp.predictors.predictor.Predictor)[source]

Bases: allennlp.interpret.saliency_interpreters.saliency_interpreter.SaliencyInterpreter

Interprets the prediction using SmoothGrad (https://arxiv.org/abs/1706.03825)

saliency_interpret_from_json(self, inputs: Dict[str, Any]) → Dict[str, Any][source]

This function computes a saliency score for each input token, indicating how much each token influenced the model’s prediction.

Parameters
inputs : JsonDict

The input you want to interpret (the same as the argument to a Predictor, e.g., predict_json()).

Returns
interpretation : JsonDict

Contains the normalized saliency values for each input token. The dict has one entry per instance in the inputs JsonDict, e.g., {instance_1: ..., instance_2: ..., ...}. Each of those entries has an entry for each saliency map over the inputs, e.g., {grad_input_1: ..., grad_input_2: ...}.
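SmoothGrad’s averaging step is simple enough to sketch for a scalar input, again as a hypothetical toy rather than the library’s code: sample several noisy copies of the input and average the gradient over them. For a linear function the gradient is constant, so the noisy average recovers it exactly — a convenient sanity check.

```python
import random

def smoothgrad_1d(f_grad, x, noise_sd=0.1, samples=50, seed=0):
    """Average the gradient of f over Gaussian-perturbed copies of
    the scalar input `x` (Smilkov et al., 2017)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        total += f_grad(x + rng.gauss(0.0, noise_sd))
    return total / samples

# f(x) = 3x has gradient 3 everywhere, so the smoothed gradient is 3.
smoothed = smoothgrad_1d(lambda x: 3.0, 1.0)
```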