A Trainer is responsible for training a Model.

Typically you would create a configuration file specifying the model and training parameters, and then use the train command rather than instantiating a Trainer yourself.
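A hypothetical sketch of such a configuration file (the key names here are illustrative assumptions, not the library's exact schema; num_epochs, patience, and cuda_device mirror the Trainer arguments documented below):

```
{
  "train_data_path": "/path/to/train.json",
  "validation_data_path": "/path/to/validation.json",
  "model": { ... },
  "trainer": {
    "num_epochs": 20,
    "patience": 2,
    "cuda_device": -1
  }
}
```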

class allennlp.training.trainer.TensorboardWriter(train_log: tensorboard.writer.SummaryWriter = None, validation_log: tensorboard.writer.SummaryWriter = None) → None[source]

Bases: object

Wraps a pair of SummaryWriter instances, but is a no-op if they are None. This allows Tensorboard logging code to avoid checking for None before every call.

add_train_scalar(name: str, value: float, global_step: int) → None[source]
add_validation_scalar(name: str, value: float, global_step: int) → None[source]
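The no-op wrapper pattern described above can be sketched in plain Python. The SummaryWriter stub below is a hypothetical stand-in for tensorboard.writer.SummaryWriter, included only to make the example self-contained:

```python
class SummaryWriter:
    """Hypothetical stand-in for tensorboard.writer.SummaryWriter."""
    def __init__(self):
        self.scalars = []

    def add_scalar(self, name: str, value: float, global_step: int) -> None:
        self.scalars.append((name, value, global_step))


class TensorboardWriter:
    """Logs scalars to a pair of writers; silently does nothing if they are None."""
    def __init__(self, train_log=None, validation_log=None):
        self._train_log = train_log
        self._validation_log = validation_log

    def add_train_scalar(self, name: str, value: float, global_step: int) -> None:
        # No-op when no train SummaryWriter was supplied.
        if self._train_log is not None:
            self._train_log.add_scalar(name, value, global_step)

    def add_validation_scalar(self, name: str, value: float, global_step: int) -> None:
        # No-op when no validation SummaryWriter was supplied.
        if self._validation_log is not None:
            self._validation_log.add_scalar(name, value, global_step)


# With no writers supplied, logging calls are safe no-ops:
TensorboardWriter().add_train_scalar("loss", 0.5, global_step=1)

# With a writer supplied, scalars are forwarded to it:
train_log = SummaryWriter()
writer = TensorboardWriter(train_log=train_log)
writer.add_train_scalar("loss", 0.5, global_step=1)
```

This keeps the calling code identical whether or not logging is enabled, which is the point of the wrapper.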
class allennlp.training.trainer.Trainer(model: allennlp.models.model.Model, optimizer: torch.optim.optimizer.Optimizer, iterator: allennlp.data.iterators.data_iterator.DataIterator, train_dataset: allennlp.data.dataset.Dataset, validation_dataset: typing.Union[allennlp.data.dataset.Dataset, NoneType] = None, patience: int = 2, validation_metric: str = '-loss', num_epochs: int = 20, serialization_dir: typing.Union[str, NoneType] = None, cuda_device: int = -1, grad_norm: typing.Union[float, NoneType] = None, grad_clipping: typing.Union[float, NoneType] = None, learning_rate_scheduler: typing.Union[torch.optim.lr_scheduler._LRScheduler, NoneType] = None, no_tqdm: bool = False) → None[source]

Bases: object

classmethod from_params(model: allennlp.models.model.Model, serialization_dir: str, iterator: allennlp.data.iterators.data_iterator.DataIterator, train_dataset: allennlp.data.dataset.Dataset, validation_dataset: typing.Union[allennlp.data.dataset.Dataset, NoneType], params: allennlp.common.params.Params) → 'Trainer'[source]
train() → None[source]

Trains the supplied model with the supplied parameters.