StackedAlternatingLstm(input_size: int, hidden_size: int, num_layers: int, recurrent_dropout_probability: float = 0.0, use_highway: bool = True, use_input_projection_bias: bool = True) → None
A stacked LSTM with LSTM layers which alternate between going forwards over the sequence and going backwards. This implementation is based on the description in Deep Semantic Role Labelling - What works and what's next.
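The alternation can be pictured as a stack of unidirectional LSTM layers whose direction flips with the layer index. The sketch below only illustrates that pattern with plain PyTorch layers on a padded tensor; it is not this module's implementation, which operates on PackedSequences and additionally supports highway connections and recurrent dropout. The even-forward/odd-backward convention and all names here are assumptions for illustration.

    import torch

    # Illustration of the alternating-direction idea only; the even-forward /
    # odd-backward convention is an assumption, not library code.
    def alternating_stack(inputs: torch.Tensor, layers: list) -> torch.Tensor:
        output = inputs                                 # (batch_size, timesteps, dim)
        for layer_index, lstm in enumerate(layers):
            go_forward = layer_index % 2 == 0           # even-indexed layers run forwards
            if not go_forward:
                output = torch.flip(output, dims=[1])   # reverse the time axis
            output, _ = lstm(output)
            if not go_forward:
                output = torch.flip(output, dims=[1])   # restore the original order
        return output

    layers = [torch.nn.LSTM(50 if i == 0 else 100, 100, batch_first=True) for i in range(4)]
    encoded = alternating_stack(torch.randn(3, 7, 50), layers)  # (3, 7, 100)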
Parameters
input_size : int, required
The dimension of the inputs to the LSTM.
hidden_size : int, required
The dimension of the outputs of the LSTM.
num_layers : int, required
The number of stacked LSTMs to use.
recurrent_dropout_probability : float, optional (default = 0.0)
The dropout probability to be used in a dropout scheme as stated in A Theoretically Grounded Application of Dropout in Recurrent Neural Networks .
use_highway : bool, optional (default = True)
Whether or not to use highway connections between layers.
use_input_projection_bias : bool, optional (default = True)
Whether or not to use a bias on the input projection layer. This is mainly here for backwards compatibility reasons and will be removed (and set to False) in future releases.
Returns
output_accumulator : PackedSequence
The outputs of the interleaved LSTMs per timestep. A tensor of shape (batch_size, max_timesteps, hidden_size) where for a given batch element, all outputs past the sequence length for that batch are zero tensors.
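A minimal usage sketch, assuming the class is importable from allennlp.modules.stacked_alternating_lstm (the import path and the example sizes are assumptions) and following the constructor arguments and return shapes documented above:

    import torch
    from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

    # Assumed import path; adjust to wherever StackedAlternatingLstm lives.
    from allennlp.modules.stacked_alternating_lstm import StackedAlternatingLstm

    lstm = StackedAlternatingLstm(input_size=50, hidden_size=100, num_layers=4)

    # A batch of 3 sequences padded to 7 timesteps, feature dimension 50.
    inputs = torch.randn(3, 7, 50)
    lengths = [7, 5, 2]
    packed_inputs = pack_padded_sequence(inputs, lengths, batch_first=True)

    output_sequence, final_states = lstm(packed_inputs)

    # Unpack the PackedSequence back into a padded tensor of shape (3, 7, 100);
    # positions past each sequence's length are zero.
    padded_output, _ = pad_packed_sequence(output_sequence, batch_first=True)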
forward(inputs: torch.nn.utils.rnn.PackedSequence, initial_state: Optional[Tuple[torch.FloatTensor, torch.FloatTensor]] = None)
Parameters
inputs : PackedSequence, required
A batch first PackedSequence to run the stacked LSTM over.
initial_state : Tuple[torch.Tensor, torch.Tensor], optional, (default = None)
A tuple (state, memory) representing the initial hidden state and memory of the LSTM. Each tensor has shape (1, batch_size, output_dimension).
Returns
output_sequence : PackedSequence
The encoded sequence of shape (batch_size, sequence_length, hidden_size).
final_states : Tuple[torch.Tensor, torch.Tensor]
The per-layer final (state, memory) states of the LSTM, each with shape (num_layers, batch_size, hidden_size).
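To pass an explicit initial state, follow the shapes documented above. This hedged sketch reuses the hypothetical lstm and packed_inputs from the earlier example:

    # Each initial tensor has shape (1, batch_size, hidden_size), per the docs above.
    initial_hidden = torch.zeros(1, 3, 100)
    initial_memory = torch.zeros(1, 3, 100)

    output_sequence, (final_state, final_memory) = lstm(
        packed_inputs, (initial_hidden, initial_memory)
    )

    # Per-layer final states, each of shape (num_layers, batch_size, hidden_size).
    print(final_state.shape)   # expected: torch.Size([4, 3, 100])
    print(final_memory.shape)  # expected: torch.Size([4, 3, 100])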