These submodules contain the command line tools for things like training and evaluating models. You probably don’t want to call most of them directly. Instead, just create a script that calls allennlp.commands.main() and it will automatically inherit all of the subcommands in this module.

The included allennlp.run module is such a script:

$ python -m allennlp.run --help
usage: run [command]

Run AllenNLP

optional arguments:
-h, --help  show this help message and exit


    predict   Use a trained model to make predictions.
    train     Train a model
    serve     Run the web service and demo.
    evaluate  Evaluate the specified model + dataset

However, it only knows about the models and classes that are included with AllenNLP. Once you start creating custom models, you’ll need to make your own script which imports them and then calls main().
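To see the shape of such a script, here is a minimal self-contained sketch of the subcommand-dispatch pattern using plain argparse (no AllenNLP imports; the `train` function and its behavior here are illustrative stand-ins, not AllenNLP's actual internals):

```python
import argparse

# Illustrative stand-in for a subcommand handler; the real AllenNLP
# "train" subcommand reads an experiment config and trains a model.
def train(args):
    return f"training with config {args.param_path}"

def build_parser(prog="run"):
    """Wire up subcommands the way a custom main() script would."""
    parser = argparse.ArgumentParser(prog=prog, description="Run AllenNLP")
    subparsers = parser.add_subparsers(dest="command")
    train_parser = subparsers.add_parser("train", help="Train a model")
    train_parser.add_argument("param_path")
    train_parser.set_defaults(func=train)
    return parser

# Simulate invoking "run train experiment.json" from the command line:
args = build_parser().parse_args(["train", "experiment.json"])
result = args.func(args)
```

A real script would replace the hand-written handlers with a call to allennlp.commands.main(), after importing the modules that define your custom classes.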

allennlp.commands.main(prog: str = None, model_overrides: typing.Dict[str, allennlp.service.predictors.predictor.DemoModel] = {}, predictor_overrides: typing.Dict[str, str] = {}, subcommand_overrides: typing.Dict[str, allennlp.commands.subcommand.Subcommand] = {}) → None

The run command only knows about the registered classes in the allennlp codebase. In particular, once you start creating your own Model subclasses and so forth, it won't work for them. However, the run command is simply a wrapper around this function. To use the command line interface with your own custom classes, just create your own script that imports all of the classes you want and then calls main().

The default models for serve and the default predictors for predict are defined above. If you'd like to add more or use different ones, pass them via the model_overrides and predictor_overrides arguments; entries in those mappings take precedence over the defaults.
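The precedence rule amounts to a dictionary merge in which the override mapping wins. A minimal sketch of that behavior (DEFAULT_PREDICTORS and the entries in it are hypothetical placeholders, not AllenNLP's actual defaults):

```python
# Hypothetical default table; the real one lives inside allennlp.commands.
DEFAULT_PREDICTORS = {
    "simple_tagger": "sentence-tagger",
    "crf_tagger": "sentence-tagger",
}

def resolve_predictors(predictor_overrides=None):
    """Merge user overrides into the defaults; overrides take precedence."""
    merged = dict(DEFAULT_PREDICTORS)
    merged.update(predictor_overrides or {})
    return merged

# Replace one default predictor and register a brand-new one:
resolved = resolve_predictors({"simple_tagger": "my-predictor",
                               "my_model": "my-predictor"})
```

Keys absent from the overrides keep their default values, so you only need to list the predictors you want to change or add.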