allennlp.modules.seq2seq_encoders

Modules that transform a sequence of input vectors into a sequence of output vectors. Some are just basic wrappers around existing PyTorch modules; others are AllenNLP modules.

The available Seq2Seq encoders are described below.

class allennlp.modules.seq2seq_encoders.pytorch_seq2seq_wrapper.PytorchSeq2SeqWrapper(module: torch.nn.modules.rnn.RNNBase) → None[source]

Bases: allennlp.modules.seq2seq_encoders.seq2seq_encoder.Seq2SeqEncoder

PyTorch’s RNNs have two outputs: the hidden state for every time step, and the hidden state at the last time step for every layer. We just want the first one as a single output. This wrapper pulls out that output, and adds a get_output_dim() method, which is useful if you want to, e.g., define a linear + softmax layer on top of this to get some distribution over a set of labels. The linear layer needs to know its input dimension before it is called, and you can get that from get_output_dim().

In order to be wrapped with this wrapper, a class must have the following members:

  • self.input_size: int
  • self.hidden_size: int
  • def forward(inputs: PackedSequence, hidden_state: torch.autograd.Variable) -> Tuple[PackedSequence, torch.autograd.Variable].
  • self.bidirectional: bool (optional)

This is what PyTorch’s RNNs look like; just make sure your class looks like those, and it should work.

Note that we require you to pass a binary mask of shape (batch_size, sequence_length) when you call this module, to avoid subtle bugs around masking. If you already have a PackedSequence you can pass None as the second parameter.
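For example, here is a minimal sketch of wrapping an LSTM and using get_output_dim() to size a downstream Linear layer (the dimensions and label count are made up for illustration; on PyTorch versions that still distinguish Variables from tensors, wrap the inputs in torch.autograd.Variable first):

    import torch
    from allennlp.modules.seq2seq_encoders import PytorchSeq2SeqWrapper

    # The wrapped RNN must be constructed with batch_first=True, since the
    # wrapper takes inputs of shape (batch_size, sequence_length, input_dim).
    lstm = torch.nn.LSTM(input_size=50, hidden_size=100, batch_first=True)
    encoder = PytorchSeq2SeqWrapper(lstm)

    inputs = torch.rand(4, 10, 50)   # (batch_size, sequence_length, input_dim)
    mask = torch.ones(4, 10)         # binary mask; every position is a real token here
    outputs = encoder(inputs, mask)  # (batch_size, sequence_length, output_dim)

    # get_output_dim() gives the input dimension for a downstream Linear layer.
    projection = torch.nn.Linear(encoder.get_output_dim(), 5)  # 5 made-up labels
    logits = projection(outputs)     # (batch_size, sequence_length, 5)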

forward(inputs: torch.FloatTensor, mask: torch.FloatTensor, hidden_state: torch.FloatTensor = None) → torch.FloatTensor[source]
get_input_dim() → int[source]
get_output_dim() → int[source]
class allennlp.modules.seq2seq_encoders.seq2seq_encoder.Seq2SeqEncoder[source]

Bases: torch.nn.modules.module.Module, allennlp.common.registrable.Registrable

A Seq2SeqEncoder is a Module that takes as input a sequence of vectors and returns a modified sequence of vectors. Input shape: (batch_size, sequence_length, input_dim); output shape: (batch_size, sequence_length, output_dim).

We add two methods to the basic Module API: get_input_dim() and get_output_dim(). You might need these if you want to construct a Linear layer using the output of this encoder, or to raise sensible errors for mismatched input dimensions.
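As an illustration, here is a minimal sketch of a conforming implementation; the class and its registered name are hypothetical, written only for this example:

    import torch
    from allennlp.modules.seq2seq_encoders import Seq2SeqEncoder

    @Seq2SeqEncoder.register("pass-through-example")  # hypothetical name
    class PassThroughEncoder(Seq2SeqEncoder):
        """A trivial encoder that returns its input unchanged."""
        def __init__(self, input_dim: int) -> None:
            super(PassThroughEncoder, self).__init__()
            self._input_dim = input_dim

        def get_input_dim(self) -> int:
            return self._input_dim

        def get_output_dim(self) -> int:
            # Output vectors have the same dimension as the input vectors.
            return self._input_dim

        def forward(self, inputs: torch.Tensor, mask: torch.Tensor = None) -> torch.Tensor:
            # (batch_size, sequence_length, input_dim) -> same shape
            return inputs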

classmethod from_params(params: allennlp.common.params.Params) → allennlp.modules.seq2seq_encoders.seq2seq_encoder.Seq2SeqEncoder[source]
get_input_dim() → int[source]

Returns the dimension of the vector input for each element in the sequence input to a Seq2SeqEncoder. This is not the shape of the input tensor, but the last element of that shape.

get_output_dim() → int[source]

Returns the dimension of each vector in the sequence output by this Seq2SeqEncoder. This is not the shape of the returned tensor, but the last element of that shape.
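For example, under the stock registrations (the PyTorch RNN wrappers are registered under names like "lstm"), an encoder can be built from configuration via from_params(); the sizes here are made up:

    from allennlp.common import Params
    from allennlp.modules.seq2seq_encoders import Seq2SeqEncoder

    # "type" selects the registered encoder; the remaining keys are
    # passed through to its constructor.
    params = Params({"type": "lstm", "input_size": 50, "hidden_size": 100})
    encoder = Seq2SeqEncoder.from_params(params)
    assert encoder.get_input_dim() == 50
    assert encoder.get_output_dim() == 100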

class allennlp.modules.seq2seq_encoders.intra_sentence_attention.IntraSentenceAttentionEncoder(input_dim: int, projection_dim: int = None, similarity_function: allennlp.modules.similarity_functions.similarity_function.SimilarityFunction = DotProductSimilarity(), num_attention_heads: int = 1, combination: str = '1,2') → None[source]

Bases: allennlp.modules.seq2seq_encoders.seq2seq_encoder.Seq2SeqEncoder

An IntraSentenceAttentionEncoder is a Seq2SeqEncoder that merges the original word representations with an attention (for each word) over other words in the sentence. As a Seq2SeqEncoder, the input to this module is of shape (batch_size, num_tokens, input_dim), and the output is of shape (batch_size, num_tokens, output_dim).

We compute the attention using a configurable SimilarityFunction, which could have multiple attention heads. The operation for merging the original representations with the attended representations is also configurable (e.g., you can concatenate them, add them, multiply them, etc.).

Parameters:

input_dim : int

The dimension of the vector for each element in the input sequence; input_tensor.size(-1).

projection_dim : int, optional

If given, we will do a linear projection of the input sequence to this dimension before performing the attention-weighted sum.

similarity_function : SimilarityFunction, optional

The similarity function to use when computing attentions. Default is to use a dot product.

num_attention_heads : int, optional

If this is greater than one (default is 1), we will split the input into several “heads” to compute multi-headed weighted sums. Must be used with a multi-headed similarity function, and you almost certainly want to do a projection in conjunction with the multiple heads.

combination : str, optional

This string defines how we merge the original word representations with the result of the intra-sentence attention. This will be passed to combine_tensors(); see that function for more detail on exactly how this works, but some simple examples are "1,2" for concatenation (the default), "1+2" for adding the two, or "2" for only keeping the attention representation.
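Putting these together, here is a minimal sketch using all the defaults (a dot-product similarity, one attention head, and "1,2" concatenation); the sizes are made up:

    import torch
    from allennlp.modules.seq2seq_encoders import IntraSentenceAttentionEncoder

    encoder = IntraSentenceAttentionEncoder(input_dim=50)

    tokens = torch.rand(4, 10, 50)   # (batch_size, num_tokens, input_dim)
    mask = torch.ones(4, 10)         # binary mask over real tokens
    outputs = encoder(tokens, mask)  # (batch_size, num_tokens, output_dim)

    # With the default "1,2" combination, the attended vectors are
    # concatenated onto the originals, so output_dim == 2 * input_dim.
    assert encoder.get_output_dim() == 2 * encoder.get_input_dim()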

forward(tokens: torch.FloatTensor, mask: torch.FloatTensor)[source]
classmethod from_params(params: allennlp.common.params.Params) → allennlp.modules.seq2seq_encoders.intra_sentence_attention.IntraSentenceAttentionEncoder[source]
get_input_dim() → int[source]

Returns the dimension of the vector input for each element in the sequence input to a Seq2SeqEncoder. This is not the shape of the input tensor, but the last element of that shape.

get_output_dim() → int[source]

Returns the dimension of each vector in the sequence output by this Seq2SeqEncoder. This is not the shape of the returned tensor, but the last element of that shape.