The server_flask application launches a server that exposes trained models via a REST API and includes a web interface for exploring their predictions.

You can run this on the command line with

$ python -m allennlp.service.server_flask -h
usage: [-h] [--port PORT]

Run the web service, which provides an HTTP API as well as a web demo.

optional arguments:
  -h, --help   show this help message and exit
  --port PORT  the port to run the server on
class allennlp.service.server_flask.DemoModel(archive_file: str, predictor_name: str) → None[source]

Bases: object

A demo model is determined by both an archive file (representing the trained model) and a choice of predictor.

predictor() → allennlp.predictors.predictor.Predictor[source]
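The essential idea is that the pair (archive_file, predictor_name) fully determines which predictor serves requests. The stdlib-only sketch below illustrates that pattern without depending on AllenNLP; `LOADERS`, `register_loader`, and the `"echo"` predictor are hypothetical stand-ins for the real `load_archive` / `Predictor.from_archive` machinery.

```python
# Illustrative sketch only: how an archive file plus a predictor name
# resolve to a callable predictor. All names here are hypothetical.

LOADERS = {}

def register_loader(name):
    """Register a loader function under a predictor name."""
    def wrap(fn):
        LOADERS[name] = fn
        return fn
    return wrap

class DemoModelSketch:
    def __init__(self, archive_file: str, predictor_name: str) -> None:
        self.archive_file = archive_file
        self.predictor_name = predictor_name

    def predictor(self):
        # Look up the loader by predictor name and hand it the archive path,
        # mirroring how the real class builds a Predictor from its archive.
        return LOADERS[self.predictor_name](self.archive_file)

@register_loader("echo")
def make_echo_predictor(archive_file):
    # Stand-in predictor that just echoes its inputs.
    return lambda inputs: {"archive": archive_file, "inputs": inputs}
```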
exception allennlp.service.server_flask.ServerError(message, status_code=None, payload=None)[source]

Bases: Exception

status_code = 400
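The signature above matches Flask's standard API-exception recipe: the exception carries an HTTP status code (defaulting to 400) and an optional payload for the JSON error body. A minimal sketch under that assumption, with a `to_dict()` helper that is not shown in the signature above:

```python
# Hedged sketch of the ServerError pattern, assuming Flask's standard
# "Implementing API Exceptions" recipe; to_dict() is an assumption here.

class ServerError(Exception):
    status_code = 400  # default HTTP status

    def __init__(self, message, status_code=None, payload=None):
        super().__init__(message)
        self.message = message
        if status_code is not None:
            self.status_code = status_code
        self.payload = payload

    def to_dict(self):
        # Merge the optional payload with the error message into a
        # JSON-serializable body.
        error = dict(self.payload or ())
        error["message"] = self.message
        return error
```

A Flask app would typically register an error handler that converts this exception into a JSON response with the stored status code.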
allennlp.service.server_flask.make_app(build_dir: str = None, demo_db: typing.Union[allennlp.service.db.DemoDatabase, NoneType] = None) → flask.app.Flask[source]

allennlp.service.server_flask.run(port: int, trained_models: typing.Dict[str, allennlp.service.server_flask.DemoModel], static_dir: str = None) → None[source]

Run the server programmatically
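At its core the server maps model names to loaded predictors and dispatches each request to the matching one. The stdlib-only sketch below mirrors that lookup; `EchoPredictor` and `dispatch` are hypothetical stand-ins, not part of the AllenNLP API.

```python
# Illustrative sketch of the dispatch that the Flask app performs per
# request: resolve a model name to its predictor, then run prediction.

class EchoPredictor:
    """Hypothetical stand-in for a loaded AllenNLP predictor."""
    def predict_json(self, inputs):
        return {"echo": inputs}

def dispatch(trained_models, model_name, inputs):
    # Unknown model names map to an error (the real app returns an
    # HTTP error response instead of raising).
    predictor = trained_models.get(model_name)
    if predictor is None:
        raise KeyError(f"unknown model: {model_name}")
    return predictor.predict_json(inputs)
```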