allennlp.commands.find_learning_rate

The find-lr subcommand can be used to find a good learning rate for a model. It requires a configuration file and a directory in which to write the results.
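A minimal sketch of an invocation (the config filename and output directory below are placeholders, not part of this documentation):

```shell
# Search LRs from 1e-5 up to 10 over 100 mini-batches, writing the
# learning-rate-vs-loss results to /tmp/find_lr_out.
$ allennlp find-lr experiment.jsonnet \
    --serialization-dir /tmp/find_lr_out \
    --start-lr 1e-5 \
    --end-lr 10 \
    --num-batches 100
```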

$ allennlp find-lr --help
usage: allennlp find-lr [-h] -s SERIALIZATION_DIR [-f] [-o OVERRIDES]
                        [--start-lr START_LR] [--end-lr END_LR]
                        [--num-batches NUM_BATCHES]
                        [--stopping-factor STOPPING_FACTOR] [--linear]
                        [--include-package INCLUDE_PACKAGE]
                        param_path

Find a learning rate range where the loss decreases quickly for the specified
model and dataset.

positional arguments:
param_path            path to parameter file describing the model to be
                        trained

optional arguments:
-h, --help              show this help message and exit
-s SERIALIZATION_DIR, --serialization-dir SERIALIZATION_DIR
                        directory in which to save the learning rate vs. loss
                        results
-f, --force             overwrite the output directory if it exists
-o OVERRIDES, --overrides OVERRIDES
                        a JSON structure used to override the experiment
                        configuration.
--start-lr START_LR
                        learning rate at which to start the search
--end-lr END_LR
                        learning rate at which to end the search
--num-batches NUM_BATCHES
                        number of mini-batches to run the learning rate finder
--stopping-factor STOPPING_FACTOR
                        stop the search when the current loss exceeds the best
                        recorded loss multiplied by the stopping factor
--linear                increase the learning rate linearly instead of
                        exponentially
--include-package INCLUDE_PACKAGE
                        additional packages to include
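To make the `--linear` flag concrete, the sketch below shows how a learning rate finder typically steps the rate between `--start-lr` and `--end-lr` over `--num-batches` mini-batches: linearly (equal differences between steps) or exponentially (equal ratios between steps). This is an illustration of the schedule shapes only, not AllenNLP's actual implementation.

```python
def lr_schedule(start_lr, end_lr, num_batches, linear=False):
    """Return the learning rate used at each of num_batches steps.

    Illustrative only: AllenNLP's internals may differ, but a finder
    needs some interpolation from start_lr to end_lr like this one.
    """
    lrs = []
    for i in range(num_batches):
        frac = i / (num_batches - 1)  # 0.0 at the first step, 1.0 at the last
        if linear:
            # Equal additive steps from start_lr to end_lr.
            lr = start_lr + frac * (end_lr - start_lr)
        else:
            # Equal multiplicative steps: spends comparable numbers of
            # batches in each order of magnitude, which is why the
            # exponential sweep is the default for LR range tests.
            lr = start_lr * (end_lr / start_lr) ** frac
        lrs.append(lr)
    return lrs

# Exponential sweep covering six orders of magnitude in 100 batches.
exp_lrs = lr_schedule(1e-5, 10.0, 100)
print(exp_lrs[0], exp_lrs[-1])
```

With an exponential sweep, the mini-batch loss is recorded at each step, and the search stops early once the loss exceeds the best loss seen so far by the `--stopping-factor` multiple.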