allennlp.training.trainer

A Trainer is responsible for training a Model.

Typically you would create a configuration file specifying the model and training parameters and then use the train command rather than instantiating a Trainer yourself.
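For illustration, here is a minimal sketch of what the trainer portion of such a configuration might contain, expressed as a Params object. The keys and values mirror the Trainer constructor defaults documented below; the exact set of accepted keys is an assumption based on that signature.

    from allennlp.common import Params

    # Hypothetical "trainer" section of a configuration, expressed as a
    # Params object. The keys mirror the Trainer constructor arguments
    # documented below.
    trainer_params = Params({
        "num_epochs": 20,
        "patience": 2,
        "validation_metric": "-loss",
        "cuda_device": -1,
    })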

class allennlp.training.trainer.TensorboardWriter(train_log: tensorboard.writer.SummaryWriter = None, validation_log: tensorboard.writer.SummaryWriter = None) → None[source]

Bases: object

Wraps a pair of SummaryWriter instances, but each logging call is a no-op if the corresponding writer is None. This allows TensorBoard logging without first checking for None (a minimal sketch of the pattern follows the method listing below).

add_train_scalar(name: str, value: float, global_step: int) → None[source]
add_validation_scalar(name: str, value: float, global_step: int) → None[source]
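
The null-object behavior amounts to a None check before each write. A minimal sketch of the pattern, assuming the wrapped objects expose the standard SummaryWriter.add_scalar(tag, value, global_step) method; the attribute names and internal details are assumptions, not the actual implementation:

    class NoOpTensorboardWriter:
        """Illustrative sketch of the no-op wrapper pattern."""

        def __init__(self, train_log=None, validation_log=None):
            self._train_log = train_log
            self._validation_log = validation_log

        def add_train_scalar(self, name: str, value: float, global_step: int) -> None:
            # Silently does nothing when no SummaryWriter was provided.
            if self._train_log is not None:
                self._train_log.add_scalar(name, value, global_step)

        def add_validation_scalar(self, name: str, value: float, global_step: int) -> None:
            if self._validation_log is not None:
                self._validation_log.add_scalar(name, value, global_step)

With this pattern, NoOpTensorboardWriter(None, None).add_train_scalar("loss", 0.5, 1) is safe and simply does nothing.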
class allennlp.training.trainer.Trainer(model: allennlp.models.model.Model, optimizer: torch.optim.optimizer.Optimizer, iterator: allennlp.data.iterators.data_iterator.DataIterator, train_dataset: allennlp.data.dataset.Dataset, validation_dataset: typing.Union[allennlp.data.dataset.Dataset, NoneType] = None, patience: int = 2, validation_metric: str = '-loss', num_epochs: int = 20, serialization_dir: typing.Union[str, NoneType] = None, cuda_device: int = -1, grad_norm: typing.Union[float, NoneType] = None, grad_clipping: typing.Union[float, NoneType] = None, learning_rate_scheduler: typing.Union[torch.optim.lr_scheduler._LRScheduler, NoneType] = None, no_tqdm: bool = False) → None[source]

Bases: object
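A hedged sketch of constructing a Trainer directly; model, iterator, and the datasets are assumed to have been built elsewhere (for instance from their own configuration sections), and the serialization directory is a placeholder:

    import torch.optim as optim

    from allennlp.training.trainer import Trainer

    # model, iterator, train_dataset, validation_dataset: assumed prebuilt.
    trainer = Trainer(model=model,
                      optimizer=optim.Adam(model.parameters()),
                      iterator=iterator,
                      train_dataset=train_dataset,
                      validation_dataset=validation_dataset,
                      patience=2,
                      validation_metric="-loss",
                      num_epochs=20,
                      serialization_dir="/tmp/example_run",  # placeholder path
                      cuda_device=-1)  # -1 trains on CPU

Passing optim.Adam(model.parameters()) works because Model subclasses torch.nn.Module; any torch.optim.Optimizer is accepted.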

classmethod from_params(model: allennlp.models.model.Model, serialization_dir: str, iterator: allennlp.data.iterators.data_iterator.DataIterator, train_dataset: allennlp.data.dataset.Dataset, validation_dataset: typing.Union[allennlp.data.dataset.Dataset, NoneType], params: allennlp.common.params.Params) → allennlp.training.trainer.Trainer[source]
train() → None[source]

Trains the supplied model with the supplied parameters.
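
Putting the pieces together, a sketch of building a Trainer from the trainer_params shown earlier and then running the loop. It is assumed here that from_params reads the remaining settings (and, in a full configuration, the optimizer) from the Params object; the patience and validation_metric arguments suggest the loop stops early when the validation metric stops improving, which is an inference from the signature rather than a documented guarantee.

    # model, iterator, and train_dataset are assumed prebuilt, as above.
    trainer = Trainer.from_params(model=model,
                                  serialization_dir="/tmp/example_run",  # placeholder
                                  iterator=iterator,
                                  train_dataset=train_dataset,
                                  validation_dataset=None,
                                  params=trainer_params)
    trainer.train()  # runs the full training loop; returns None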