# allennlp.modules.layer_norm

class allennlp.modules.layer_norm.LayerNorm(dimension: int, eps: float = 1e-06)

Bases: torch.nn.modules.module.Module

An implementation of [Layer Normalization](https://arxiv.org/abs/1607.06450) (Ba et al., 2016).

Layer Normalization stabilises the training of deep neural networks by normalising the outputs of neurons from a particular layer. It computes:

`output = (gamma * (tensor - mean) / (std + eps)) + beta`
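For concreteness, here is the same formula written out in plain PyTorch (a minimal sketch; `gamma` and `beta` stand in for the module's learned affine parameters, which typically start as ones and zeros in a freshly constructed module):

```python
import torch

# Toy input: a batch of 2 vectors, each of dimension 4.
tensor = torch.randn(2, 4)

# Learned affine parameters; a fresh LayerNorm typically starts
# with gamma = 1 and beta = 0, so the initial output is just the
# standardized input.
gamma = torch.ones(4)
beta = torch.zeros(4)
eps = 1e-6

# Normalize each vector over its last dimension, per the formula above.
mean = tensor.mean(-1, keepdim=True)
std = tensor.std(-1, keepdim=True)
output = gamma * (tensor - mean) / (std + eps) + beta
```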

Parameters:

- **dimension** : `int`, required. The dimension of the layer output to normalize.
- **eps** : `float`, optional (default = `1e-6`). An epsilon to prevent dividing by zero in case the layer has zero variance.

Returns:

The normalized layer output.
forward(tensor: torch.FloatTensor)

Applies layer normalisation over the last dimension of `tensor` and returns a tensor of the same shape.
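A short usage sketch, assuming the import path shown in the heading above:

```python
import torch
from allennlp.modules.layer_norm import LayerNorm

# Normalize the trailing dimension of a (batch, dimension) tensor.
layer_norm = LayerNorm(dimension=10)
tensor = torch.randn(2, 10)
output = layer_norm(tensor)

print(output.shape)  # torch.Size([2, 10]) -- the shape is preserved
```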