allennlp.service.predictors

A Predictor is a wrapper for an AllenNLP Model that makes JSON predictions using JSON inputs. If you want to serve up a model through the web service (or using allennlp.commands.predict), you’ll need a Predictor that wraps it.

class allennlp.service.predictors.predictor.DemoModel(archive_file: str, predictor_name: str) → None[source]

Bases: object

A demo model is specified by an archive file (representing the trained model) together with a choice of predictor.

predictor() → allennlp.service.predictors.predictor.Predictor[source]
class allennlp.service.predictors.predictor.Predictor(model: allennlp.models.model.Model, dataset_reader: allennlp.data.dataset_readers.dataset_reader.DatasetReader) → None[source]

Bases: allennlp.common.registrable.Registrable

A Predictor is a thin wrapper around an AllenNLP model that turns JSON inputs into JSON predictions. It can be used for serving models through the web API or for making predictions in bulk.

dump_line(outputs: typing.Dict[str, typing.Any]) → str[source]

If you don’t want your outputs in JSON-lines format you can override this function to output them differently.

classmethod from_archive(archive: allennlp.models.archival.Archive, predictor_name: str = None) → allennlp.service.predictors.predictor.Predictor[source]

Instantiate a Predictor from an Archive; that is, from the result of training a model. Optionally specify which Predictor subclass to instantiate; otherwise, the default predictor for the model will be used.

classmethod from_path(archive_path: str, predictor_name: str = None) → allennlp.service.predictors.predictor.Predictor[source]

Instantiate a Predictor from an archive path.

If you need more detailed configuration options, such as running the predictor on the GPU, please use from_archive.

Parameters:
archive_path : str

The path to the archive.

Returns:
A Predictor instance.
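As a sketch of how this fits together: the snippet below wraps the load behind a function so the import is deferred until a trained archive is actually available. The archive path is hypothetical, and actually running the load requires allennlp and a trained model on disk, so the call itself is shown commented out.

```python
# A minimal sketch of loading a Predictor from an archive path.  The path
# below is hypothetical; the predictor name follows the registered-name
# convention but should be checked against your installed version.
archive_path = "/path/to/model.tar.gz"    # hypothetical
predictor_name = "machine-comprehension"  # assumed registered name

def load_predictor(path: str, name: str):
    # Deferred import so this sketch can be read without allennlp installed.
    from allennlp.service.predictors import Predictor
    return Predictor.from_path(path, name)

# predictor = load_predictor(archive_path, predictor_name)
# result = predictor.predict_json({"question": "...", "passage": "..."})
```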
load_line(line: str) → typing.Dict[str, typing.Any][source]

If your inputs are not in JSON-lines format (e.g. you have a CSV) you can override this function to parse them correctly.
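For example, inputs arriving as CSV rows could be handled by overriding both hooks. The functions below are standalone sketches of what such overrides might do; in a real predictor they would be methods on your Predictor subclass, and the "question,passage" column layout (and the `best_span_str` output key) are assumptions for illustration.

```python
import csv
import io
from typing import Any, Dict

def load_line(line: str) -> Dict[str, Any]:
    # Parse one CSV row into the JSON dict the predictor expects.
    # Assumes two columns: question, passage.
    question, passage = next(csv.reader(io.StringIO(line)))
    return {"question": question, "passage": passage}

def dump_line(outputs: Dict[str, Any]) -> str:
    # Instead of JSON-lines, emit only the answer string, one per line.
    return str(outputs.get("best_span_str", "")) + "\n"
```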

predict_batch_json(inputs: typing.List[typing.Dict[str, typing.Any]]) → typing.List[typing.Dict[str, typing.Any]][source]
predict_json(inputs: typing.Dict[str, typing.Any]) → typing.Dict[str, typing.Any][source]
class allennlp.service.predictors.bidaf.BidafPredictor(model: allennlp.models.model.Model, dataset_reader: allennlp.data.dataset_readers.dataset_reader.DatasetReader) → None[source]

Bases: allennlp.service.predictors.predictor.Predictor

Predictor for the BidirectionalAttentionFlow model.

predict(question: str, passage: str) → typing.Dict[str, typing.Any][source]

Make a machine comprehension prediction on the supplied input. See https://rajpurkar.github.io/SQuAD-explorer/ for more information about the machine comprehension task.

Parameters:
question : str

A question about the content in the supplied paragraph. The question must be answerable by a span in the paragraph.

passage : str

A paragraph of information relevant to the question.

Returns:
A dictionary that represents the prediction made by the system. The answer string will be under the
“best_span_str” key.
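The JSON shapes around this method can be sketched with a hand-written example output; real values come from a trained model, and only the "best_span_str" key is documented above (the "best_span" key here is an illustrative assumption).

```python
# Input shape for BidafPredictor.predict / predict_json.
inputs = {
    "question": "Who stars in The Matrix?",
    "passage": "The Matrix is a 1999 film starring Keanu Reeves.",
}

# Trimmed-down, hand-written example of a prediction dict.
outputs = {
    "best_span_str": "Keanu Reeves",  # the documented answer key
    "best_span": [9, 10],             # assumed: token indices of the span
}

answer = outputs["best_span_str"]
```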
class allennlp.service.predictors.decomposable_attention.DecomposableAttentionPredictor(model: allennlp.models.model.Model, dataset_reader: allennlp.data.dataset_readers.dataset_reader.DatasetReader) → None[source]

Bases: allennlp.service.predictors.predictor.Predictor

Predictor for the DecomposableAttention model.

predict(premise: str, hypothesis: str) → typing.Dict[str, typing.Any][source]

Predicts whether the hypothesis is entailed by the premise text.

Parameters:
premise : str

A passage representing what is assumed to be true.

hypothesis : str

A sentence that may be entailed by the premise.

Returns:
A dictionary in which the “label_probs” key contains the probabilities of each of
[entailment, contradiction, neutral].
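Reading that output amounts to aligning the probabilities with the documented label order. The probability values below are made up for illustration.

```python
# "label_probs" is aligned with [entailment, contradiction, neutral].
labels = ["entailment", "contradiction", "neutral"]
outputs = {"label_probs": [0.90, 0.06, 0.04]}  # example values, not from a model

probs = outputs["label_probs"]
best_label = labels[probs.index(max(probs))]
```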
class allennlp.service.predictors.semantic_role_labeler.SemanticRoleLabelerPredictor(model: allennlp.models.model.Model, dataset_reader: allennlp.data.dataset_readers.dataset_reader.DatasetReader) → None[source]

Bases: allennlp.service.predictors.predictor.Predictor

Predictor for the SemanticRoleLabeler model.

static make_srl_string(words: typing.List[str], tags: typing.List[str]) → str[source]
predict(sentence: str) → typing.Dict[str, typing.Any][source]

Predicts the semantic roles of the supplied sentence and returns a dictionary with the results.

{"words": [...],
 "verbs": [
    {"verb": "...", "description": "...", "tags": [...]},
    ...
    {"verb": "...", "description": "...", "tags": [...]},
]}
Parameters:
sentence : str

The sentence to parse via semantic role labeling.

Returns:
A dictionary representation of the semantic roles in the sentence.
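The "description" strings pair each verb with bracketed argument spans built from its BIO tags. The function below is a standalone sketch in the spirit of make_srl_string, not the library implementation:

```python
from typing import List

def srl_string(words: List[str], tags: List[str]) -> str:
    # Render a BIO tag sequence as a bracketed frame string, e.g.
    # "[ARG0: The dog] [V: ate] [ARG1: the bone] ."
    pieces: List[str] = []
    chunk: List[str] = []
    label = None

    def flush():
        # Close the currently open chunk, if any.
        nonlocal chunk, label
        if label is not None and chunk:
            pieces.append("[{}: {}]".format(label, " ".join(chunk)))
        chunk, label = [], None

    for word, tag in zip(words, tags):
        if tag.startswith("B-"):
            flush()
            label = tag[2:]
            chunk = [word]
        elif tag.startswith("I-") and label == tag[2:]:
            chunk.append(word)
        else:  # "O" tags (and stray "I-" tags) are emitted bare
            flush()
            pieces.append(word)
    flush()
    return " ".join(pieces)

words = ["The", "dog", "ate", "the", "bone", "."]
tags = ["B-ARG0", "I-ARG0", "B-V", "B-ARG1", "I-ARG1", "O"]
description = srl_string(words, tags)
# description == "[ARG0: The dog] [V: ate] [ARG1: the bone] ."
```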
predict_batch_json(inputs: typing.List[typing.Dict[str, typing.Any]]) → typing.List[typing.Dict[str, typing.Any]][source]

Expects JSON that looks like [{"sentence": "..."}, {"sentence": "..."}, ...] and returns JSON that looks like

[
    {"words": [...],
     "verbs": [
        {"verb": "...", "description": "...", "tags": [...]},
        ...
        {"verb": "...", "description": "...", "tags": [...]},
    ]},
    {"words": [...],
     "verbs": [
        {"verb": "...", "description": "...", "tags": [...]},
        ...
        {"verb": "...", "description": "...", "tags": [...]},
    ]}
]
predict_json(inputs: typing.Dict[str, typing.Any]) → typing.Dict[str, typing.Any][source]

Expects JSON that looks like {"sentence": "..."} and returns JSON that looks like

{"words": [...],
 "verbs": [
    {"verb": "...", "description": "...", "tags": [...]},
    ...
    {"verb": "...", "description": "...", "tags": [...]},
]}
class allennlp.service.predictors.sentence_tagger.SentenceTaggerPredictor(model: allennlp.models.model.Model, dataset_reader: allennlp.data.dataset_readers.dataset_reader.DatasetReader) → None[source]

Bases: allennlp.service.predictors.predictor.Predictor

Predictor for any model that takes in a sentence and returns a single set of tags for it. In particular, it can be used with the CrfTagger model and also the SimpleTagger model.

predict(sentence: str) → typing.Dict[str, typing.Any][source]
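A hand-written sketch of the shape a tagger prediction takes: the "words"/"tags" keys follow the SimpleTagger-style output, and the values here are illustrative rather than produced by a model.

```python
# Example of a tagger output dict (keys assumed from SimpleTagger-style models).
output = {
    "words": ["John", "Smith", "sings"],
    "tags": ["B-PER", "I-PER", "O"],
}

# Pair each token with its predicted tag.
tagged = list(zip(output["words"], output["tags"]))
```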
class allennlp.service.predictors.coref.CorefPredictor(model: allennlp.models.model.Model, dataset_reader: allennlp.data.dataset_readers.dataset_reader.DatasetReader) → None[source]

Bases: allennlp.service.predictors.predictor.Predictor

Predictor for the CoreferenceResolver model.

predict(document: str) → typing.Dict[str, typing.Any][source]

Predict the coreference clusters in the given document.

{
 "document": [tokenised document text],
 "clusters":
   [
     [
       [start_index, end_index],
       [start_index, end_index]
     ],
     [
       [start_index, end_index],
       [start_index, end_index],
       [start_index, end_index]
     ],
     ...
   ]
}
Parameters:
document : str

A string representation of a document.

Returns:
A dictionary representation of the predicted coreference clusters.
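Turning the predicted clusters back into text looks roughly like the following. The output dict is hand-written in the documented shape, and the span indices are assumed to be inclusive token offsets into "document".

```python
# Hand-written example in the documented output shape (not from a real model).
output = {
    "document": ["Paul", "Allen", "said", "he", "was", "pleased", "."],
    "clusters": [[[0, 1], [3, 3]]],  # assumed: inclusive token spans
}

def cluster_mentions(output):
    # Map each [start, end] span back to its surface text.
    doc = output["document"]
    return [[" ".join(doc[start:end + 1]) for start, end in cluster]
            for cluster in output["clusters"]]

mentions = cluster_mentions(output)
# mentions == [["Paul Allen", "he"]]
```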
class allennlp.service.predictors.constituency_parser.ConstituencyParserPredictor(model: allennlp.models.model.Model, dataset_reader: allennlp.data.dataset_readers.dataset_reader.DatasetReader) → None[source]

Bases: allennlp.service.predictors.predictor.Predictor

Predictor for the SpanConstituencyParser model.

predict(sentence: str) → typing.Dict[str, typing.Any][source]

Predict a constituency parse for the given sentence.

Parameters:
sentence : str

The sentence to parse.

Returns:
A dictionary representation of the constituency tree.
predict_batch_json(inputs: typing.List[typing.Dict[str, typing.Any]]) → typing.List[typing.Dict[str, typing.Any]][source]
predict_json(inputs: typing.Dict[str, typing.Any]) → typing.Dict[str, typing.Any][source]
class allennlp.service.predictors.simple_seq2seq.SimpleSeq2SeqPredictor(model: allennlp.models.model.Model, dataset_reader: allennlp.data.dataset_readers.dataset_reader.DatasetReader) → None[source]

Bases: allennlp.service.predictors.predictor.Predictor

Predictor for the simple_seq2seq model.

predict(source: str) → typing.Dict[str, typing.Any][source]
class allennlp.service.predictors.wikitables_parser.WikiTablesParserPredictor(model: allennlp.models.model.Model, dataset_reader: allennlp.data.dataset_readers.dataset_reader.DatasetReader) → None[source]

Bases: allennlp.service.predictors.predictor.Predictor

Wrapper for the WikiTablesSemanticParser model.

predict_json(inputs: typing.Dict[str, typing.Any]) → typing.Dict[str, typing.Any][source]
class allennlp.service.predictors.nlvr_parser.NlvrParserPredictor(model: allennlp.models.model.Model, dataset_reader: allennlp.data.dataset_readers.dataset_reader.DatasetReader) → None[source]

Bases: allennlp.service.predictors.predictor.Predictor

dump_line(outputs: typing.Dict[str, typing.Any]) → str[source]

If you don’t want your outputs in JSON-lines format you can override this function to output them differently.