MetricTracker(patience: Optional[int] = None, metric_name: Optional[str] = None, should_decrease: Optional[bool] = None) -> None
This class tracks a metric during training for two purposes: early stopping,
and knowing whether the current value is the best so far. It mimics the PyTorch
load_state_dict / state_dict interface, so that it can be checkpointed along with
your model and optimizer.
Some metrics improve by increasing; others by decreasing. You can either supply
should_decrease explicitly, or provide a
metric_name, in which case "should decrease"
is inferred from its first character, which must be "+" or "-".
- patience : int, optional (default = None)
If provided, then
should_stop_early() returns True if we go this many epochs without seeing a new best value.
- metric_name : str, optional (default = None)
If provided, it's used to infer whether we expect the metric values to
increase (if it starts with "+") or decrease (if it starts with "-").
It's an error if it doesn't start with one of those. If it's not provided,
you should specify should_decrease instead.
- should_decrease : bool, optional (default = None)
If metric_name isn't provided (in which case we can't infer
should_decrease), then you have to specify it here.
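The "+"/"-" inference described above can be sketched as a small helper. This is an illustrative function (infer_should_decrease is a hypothetical name, not part of the class's public API):

```python
def infer_should_decrease(metric_name: str) -> bool:
    """Infer the optimization direction from a signed metric name,
    e.g. "+accuracy" (higher is better) or "-loss" (lower is better)."""
    if metric_name.startswith("+"):
        return False  # the metric should increase
    if metric_name.startswith("-"):
        return True   # the metric should decrease
    raise ValueError("metric_name must start with '+' or '-'")
```

For example, infer_should_decrease("-loss") yields True, while "+accuracy" yields False; a bare "accuracy" raises a ValueError, matching the documented error behavior.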
MetricTracker.clear(self) -> None
Clears out the tracked metrics, but keeps the patience and should_decrease settings.
MetricTracker.state_dict(self) -> Dict[str, Any]
Trainer can use this to serialize the state of the metric tracker.
MetricTracker.load_state_dict(self, state_dict:Dict[str, Any]) -> None
Trainer can use this to hydrate a metric tracker from a serialized state.
MetricTracker.add_metric(self, metric:float) -> None
Record a new value of the metric and update the various things that depend on it.
MetricTracker.add_metrics(self, metrics:Iterable[float]) -> None
Helper to add multiple metrics at once.
MetricTracker.is_best_so_far(self) -> bool
Returns true if the most recent value of the metric is the best so far.
MetricTracker.should_stop_early(self) -> bool
Returns true if improvement has stopped for long enough.
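To make the interplay of these methods concrete, here is a minimal self-contained sketch of a tracker with this interface, plus a short usage example. The class name and internals are illustrative assumptions, not the library's actual implementation:

```python
from typing import Any, Dict, Iterable, Optional


class SimpleMetricTracker:
    """A minimal sketch of the interface documented above (illustrative only)."""

    def __init__(self, patience: Optional[int] = None,
                 should_decrease: bool = False) -> None:
        self._patience = patience
        self._should_decrease = should_decrease
        self.clear()

    def clear(self) -> None:
        # Reset tracked state; patience and should_decrease are kept.
        self._best_so_far: Optional[float] = None
        self._epochs_with_no_improvement = 0
        self._is_best_so_far = True

    def state_dict(self) -> Dict[str, Any]:
        # Serializable state, analogous to a model's state_dict.
        return {
            "best_so_far": self._best_so_far,
            "epochs_with_no_improvement": self._epochs_with_no_improvement,
            "is_best_so_far": self._is_best_so_far,
        }

    def load_state_dict(self, state_dict: Dict[str, Any]) -> None:
        self._best_so_far = state_dict["best_so_far"]
        self._epochs_with_no_improvement = state_dict["epochs_with_no_improvement"]
        self._is_best_so_far = state_dict["is_best_so_far"]

    def add_metric(self, metric: float) -> None:
        if self._best_so_far is None:
            new_best = True
        elif self._should_decrease:
            new_best = metric < self._best_so_far
        else:
            new_best = metric > self._best_so_far
        self._is_best_so_far = new_best
        if new_best:
            self._best_so_far = metric
            self._epochs_with_no_improvement = 0
        else:
            self._epochs_with_no_improvement += 1

    def add_metrics(self, metrics: Iterable[float]) -> None:
        for metric in metrics:
            self.add_metric(metric)

    def is_best_so_far(self) -> bool:
        return self._is_best_so_far

    def should_stop_early(self) -> bool:
        if self._patience is None:
            return False
        return self._epochs_with_no_improvement >= self._patience


# Usage: a decreasing metric (as for "-loss") with patience 2.
tracker = SimpleMetricTracker(patience=2, should_decrease=True)
tracker.add_metrics([0.9, 0.7, 0.8, 0.75])
print(tracker.is_best_so_far())     # False: 0.75 didn't beat the best (0.7)
print(tracker.should_stop_early())  # True: 2 epochs without a new best
```

Because state_dict() returns plain Python values, it can be saved and restored alongside the model and optimizer checkpoints, and a freshly constructed tracker hydrated via load_state_dict resumes exactly where training left off.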