allennlp.service.predictors

A Predictor is a wrapper for an AllenNLP Model that makes JSON predictions using JSON inputs. If you want to serve up a model through the web service (or using allennlp.commands.predict), you’ll need a Predictor that wraps it.

class allennlp.service.predictors.predictor.DemoModel(archive_file: str, predictor_name: str) → None[source]

Bases: object

A demo model is defined by an archive file (containing the trained model) together with a choice of predictor.

predictor() → allennlp.service.predictors.predictor.Predictor[source]
class allennlp.service.predictors.predictor.Predictor(model: allennlp.models.model.Model, dataset_reader: allennlp.data.dataset_readers.dataset_reader.DatasetReader) → None[source]

Bases: allennlp.common.registrable.Registrable

A Predictor is a thin wrapper around an AllenNLP model that turns JSON inputs into JSON predictions. It can be used to serve models through the web API or to make predictions in bulk.

classmethod from_archive(archive: allennlp.models.archival.Archive, predictor_name: str) → allennlp.service.predictors.predictor.Predictor[source]

Instantiate a Predictor from an Archive; that is, from the result of training a model. Optionally specify which Predictor subclass to use; otherwise, the default predictor for the model will be used.

predict_batch_json(inputs: typing.List[typing.Dict[str, typing.Any]], cuda_device: int = -1) → typing.List[typing.Dict[str, typing.Any]][source]
predict_json(inputs: typing.Dict[str, typing.Any], cuda_device: int = -1) → typing.Dict[str, typing.Any][source]
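The JSON-in / JSON-out contract of these two methods can be illustrated with a minimal stand-in (a hypothetical class, not part of AllenNLP): predict_json takes a single input dict, while predict_batch_json takes a list of such dicts and returns one output dict per input, in order.

```python
from typing import Any, Dict, List


class ToyPredictor:
    """A stand-in class illustrating the Predictor JSON contract.

    A real Predictor would run its Model; here we just split the
    sentence to produce an output dict of the expected shape.
    """

    def predict_json(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
        # One input dict in, one output dict out.
        return {"words": inputs["sentence"].split()}

    def predict_batch_json(self, inputs: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
        # A list of the same input dicts in, one output dict per input out.
        return [self.predict_json(i) for i in inputs]


predictor = ToyPredictor()
single = predictor.predict_json({"sentence": "The cat sat"})
batch = predictor.predict_batch_json([{"sentence": "a b"}, {"sentence": "c"}])
```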
class allennlp.service.predictors.bidaf.BidafPredictor(model: allennlp.models.model.Model, dataset_reader: allennlp.data.dataset_readers.dataset_reader.DatasetReader) → None[source]

Bases: allennlp.service.predictors.predictor.Predictor

Wrapper for the BidirectionalAttentionFlow model.

class allennlp.service.predictors.decomposable_attention.DecomposableAttentionPredictor(model: allennlp.models.model.Model, dataset_reader: allennlp.data.dataset_readers.dataset_reader.DatasetReader) → None[source]

Bases: allennlp.service.predictors.predictor.Predictor

Wrapper for the DecomposableAttention model.

class allennlp.service.predictors.semantic_role_labeler.SemanticRoleLabelerPredictor(model: allennlp.models.model.Model, dataset_reader: allennlp.data.dataset_readers.dataset_reader.DatasetReader) → None[source]

Bases: allennlp.service.predictors.predictor.Predictor

Wrapper for the SemanticRoleLabeler model.

static make_srl_string(words: typing.List[str], tags: typing.List[str]) → str[source]
predict_json(inputs: typing.Dict[str, typing.Any], cuda_device: int = -1) → typing.Dict[str, typing.Any][source]

Expects JSON that looks like {"sentence": "..."} and returns JSON that looks like

{"words": [...],
 "verbs": [
    {"verb": "...", "description": "...", "tags": [...]},
    ...
    {"verb": "...", "description": "...", "tags": [...]}
 ]}
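Given output in this shape, a description string in the spirit of make_srl_string can be sketched from the words and their BIO tags. This is a simplified illustration with made-up data, not the exact AllenNLP implementation:

```python
from typing import List


def srl_description(words: List[str], tags: List[str]) -> str:
    """Render BIO-tagged SRL output as a bracketed description string.

    Simplified sketch: "B-X" starts a labeled span, "I-X" continues it,
    and "O" words pass through unlabeled.
    """
    frame: List[str] = []
    chunk: List[str] = []
    label = None

    def flush():
        nonlocal chunk, label
        if chunk:
            frame.append("[%s: %s]" % (label, " ".join(chunk)))
            chunk, label = [], None

    for word, tag in zip(words, tags):
        if tag.startswith("B-"):
            flush()
            label = tag[2:]
            chunk = [word]
        elif tag.startswith("I-"):
            chunk.append(word)
        else:  # "O": an untagged word
            flush()
            frame.append(word)
    flush()
    return " ".join(frame)


# Hypothetical "words" and "tags" in the documented output shape.
words = ["The", "keys", "are", "on", "the", "table", "."]
tags = ["B-ARG1", "I-ARG1", "B-V", "B-ARGM-LOC", "I-ARGM-LOC", "I-ARGM-LOC", "O"]
description = srl_description(words, tags)
```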
class allennlp.service.predictors.sentence_tagger.SentenceTaggerPredictor(model: allennlp.models.model.Model, dataset_reader: allennlp.data.dataset_readers.dataset_reader.DatasetReader) → None[source]

Bases: allennlp.service.predictors.predictor.Predictor

Wrapper for any model that takes in a sentence and returns a single set of tags for it. In particular, it can be used with both the CrfTagger and SimpleTagger models.

predict_json(inputs: typing.Dict[str, typing.Any], cuda_device: int = -1) → typing.Dict[str, typing.Any][source]

Expects JSON that looks like {"sentence": "..."}. Runs the underlying model, and adds the "words" to the output.
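For example, the returned "words" and "tags" lists are aligned and can be zipped into (word, tag) pairs. The output values below are hypothetical:

```python
# Hypothetical output in the documented shape: the model's "tags"
# plus the "words" the predictor adds.
output = {
    "words": ["AllenNLP", "tags", "sentences", "."],
    "tags": ["B-ORG", "O", "O", "O"],
}

# Pair each token with its predicted tag.
tagged = list(zip(output["words"], output["tags"]))
```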

class allennlp.service.predictors.coref.CorefPredictor(model: allennlp.models.model.Model, dataset_reader: allennlp.data.dataset_readers.dataset_reader.DatasetReader) → None[source]

Bases: allennlp.service.predictors.predictor.Predictor

Wrapper for the CoreferenceResolver model.

predict_json(inputs: typing.Dict[str, typing.Any], cuda_device: int = -1) → typing.Dict[str, typing.Any][source]

Expects JSON that looks like {"document": "string of document text"} and returns JSON that looks like:

{
  "document": [tokenised document text],
  "clusters": [
    [
      [start_index, end_index],
      [start_index, end_index]
    ],
    [
      [start_index, end_index],
      [start_index, end_index],
      [start_index, end_index]
    ],
    ...
  ]
}
```
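Given output in this shape, each cluster's spans can be resolved back to mention text. The document and clusters below are hypothetical, and we assume end_index is inclusive (AllenNLP's usual span convention):

```python
from typing import List


def mention_text(document: List[str], span: List[int]) -> str:
    """Recover the text of one mention from its (start, end) token span,
    treating end as inclusive."""
    start, end = span
    return " ".join(document[start : end + 1])


# Hypothetical output in the documented shape.
output = {
    "document": ["Eva", "lost", "her", "keys", ",", "and",
                 "she", "found", "them", "."],
    "clusters": [
        [[0, 0], [2, 2], [6, 6]],  # mentions of Eva
        [[2, 3], [8, 8]],          # mentions of the keys
    ],
}

# Resolve every cluster to its list of mention strings.
resolved = [
    [mention_text(output["document"], span) for span in cluster]
    for cluster in output["clusters"]
]
```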