class allennlp.training.trainer.Trainer(model: allennlp.models.model.Model, optimizer: torch.optim.optimizer.Optimizer, iterator: DataIterator, train_dataset: Iterable[Instance], validation_dataset: Optional[Iterable[Instance]] = None, patience: Optional[int] = None, validation_metric: str = '-loss', validation_iterator: DataIterator = None, shuffle: bool = True, num_epochs: int = 20, serialization_dir: Optional[str] = None, num_serialized_models_to_keep: int = 20, keep_serialized_model_every_num_seconds: int = None, checkpointer: Checkpointer = None, model_save_interval: float = None, cuda_device: Union[int, List] = -1, grad_norm: Optional[float] = None, grad_clipping: Optional[float] = None, learning_rate_scheduler: Optional[LearningRateScheduler] = None, momentum_scheduler: Optional[MomentumScheduler] = None, summary_interval: int = 100, histogram_interval: int = None, should_log_parameter_statistics: bool = True, should_log_learning_rate: bool = False, log_batch_size_period: Optional[int] = None, moving_average: Optional[MovingAverage] = None)[source]
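Many of these constructor arguments map one-to-one onto keys in an experiment configuration. As a hedged sketch (the exact keys accepted depend on the AllenNLP version in use), a typical `"trainer"` section might look like:

```python
# Hypothetical "trainer" section of an experiment config, mirroring the
# constructor arguments above. Key names follow the parameter names;
# check your AllenNLP version for the authoritative set.
trainer_params = {
    "optimizer": {"type": "adam", "lr": 0.001},
    "num_epochs": 20,
    "patience": 5,                    # stop if no validation improvement for 5 epochs
    "validation_metric": "-loss",     # leading "-" means lower is better
    "cuda_device": -1,                # -1 = train on CPU
    "grad_norm": 5.0,                 # rescale gradients to this total norm
    "num_serialized_models_to_keep": 20,
}
```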


batch_loss(self, batch_group:List[Dict[str, Union[torch.Tensor, Dict[str, torch.Tensor]]]], for_training:bool) → torch.Tensor[source]

Does a forward pass on the given batches and returns the loss value in the result. If for_training is True, it also applies the regularization penalty.
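Conceptually, the method combines two quantities; this is a simplified stand-in, not the actual implementation, and `forward_loss` and `l2_penalty` are hypothetical names:

```python
def batch_loss(forward_loss: float, l2_penalty: float, for_training: bool) -> float:
    """Simplified sketch: the forward pass produces a loss, and the
    regularization penalty is added only during training."""
    loss = forward_loss
    if for_training:
        loss += l2_penalty
    return loss
```

At evaluation time the penalty is skipped, so validation loss reflects the model's fit alone.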

classmethod from_params(model: allennlp.models.model.Model, serialization_dir: str, iterator: DataIterator, train_data: Iterable[Instance], validation_data: Union[Iterable[Instance], NoneType], params: allennlp.common.params.Params, validation_iterator: DataIterator = None) → 'Trainer'[source]

This is the automatic implementation of from_params. Any class that subclasses FromParams (or Registrable, which itself subclasses FromParams) gets this implementation for free. If you want your class to be instantiated from params in the “obvious” way – pop off parameters and hand them to your constructor with the same names – this provides that functionality.

If you need more complex logic in your from_params method, you’ll have to implement your own method that overrides this one.
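The “obvious” instantiation described above can be sketched in plain Python (a minimal stand-in, not AllenNLP’s actual implementation): inspect the constructor, pop matching keys off the params, and call the constructor with them.

```python
import inspect

class FromParams:
    @classmethod
    def from_params(cls, params: dict):
        # Pop off parameters whose names match constructor arguments
        # and hand them to __init__ under the same names.
        names = inspect.signature(cls.__init__).parameters
        kwargs = {name: params.pop(name) for name in list(names)
                  if name != "self" and name in params}
        return cls(**kwargs)

class Greeter(FromParams):
    # Hypothetical example class; any constructor works the same way.
    def __init__(self, name: str, punctuation: str = "!"):
        self.name = name
        self.punctuation = punctuation

g = Greeter.from_params({"name": "world"})
```

Defaults in the constructor (like `punctuation` here) are used whenever the corresponding key is absent from the params.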

rescale_gradients(self) → Union[float, NoneType][source]
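This method supports the grad_norm option from the constructor. A minimal sketch of norm-based rescaling, assuming for illustration that gradients are flat Python lists rather than tensors:

```python
import math

def rescale_gradients(gradients, max_norm):
    # Compute the total L2 norm over all parameter gradients; if it
    # exceeds max_norm, scale every gradient down proportionally so
    # the total norm equals max_norm. Returns the pre-rescale norm.
    total_norm = math.sqrt(sum(g * g for vec in gradients for g in vec))
    if total_norm > max_norm:
        scale = max_norm / total_norm
        gradients = [[g * scale for g in vec] for vec in gradients]
    return gradients, total_norm
```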
train(self) → Dict[str, Any][source]

Trains the supplied model with the supplied parameters.
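One part of the training loop worth spelling out is how patience interacts with the validation metric. A simplified sketch of the early-stopping check (assuming a higher-is-better metric; a "-loss"-style metric would flip max to min), not the exact AllenNLP logic:

```python
def should_stop_early(metric_history, patience):
    # Stop when the best validation score was not achieved within the
    # last `patience` epochs (metric assumed higher-is-better here).
    if patience is None or len(metric_history) <= patience:
        return False
    best = max(metric_history)
    return best not in metric_history[-patience:]
```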


class allennlp.training.trainer.TrainerPieces[source]

Bases: tuple

We would like to avoid having complex instantiation logic taking place in Trainer.from_params. This helper class has a from_params that instantiates a model, loads train (and possibly validation and test) datasets, constructs a Vocabulary, creates data iterators, and handles a little bit of bookkeeping. If you’re creating your own alternative training regime you might be able to use this.

static from_params(params:allennlp.common.params.Params, serialization_dir:str, recover:bool=False, cache_directory:str=None, cache_prefix:str=None) → 'TrainerPieces'[source]

iterator

Alias for field number 1


model

Alias for field number 0


params

Alias for field number 6


test_dataset

Alias for field number 4


train_dataset

Alias for field number 2


validation_dataset

Alias for field number 3


validation_iterator

Alias for field number 5
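Because the class is a named tuple, each field is exposed as a property whose auto-generated docstring reads “Alias for field number N”, where N is the field’s position. A tiny illustration with a hypothetical two-field tuple:

```python
from typing import Any, NamedTuple

class Pieces(NamedTuple):
    # Hypothetical two-field stand-in for the full bundle.
    model: Any     # field number 0
    iterator: Any  # field number 1

p = Pieces(model="m", iterator="it")
# Fields are accessible both by name and by their field number (index).
```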