allennlp.modules.token_embedders

A TokenEmbedder is a Module that embeds one-hot-encoded tokens as vectors.

class allennlp.modules.token_embedders.token_embedder.TokenEmbedder[source]

Bases: torch.nn.modules.module.Module, allennlp.common.registrable.Registrable

A TokenEmbedder is a Module that takes as input a tensor with integer ids that have been output from a TokenIndexer and outputs a vector per token in the input. The input typically has shape (batch_size, num_tokens) or (batch_size, num_tokens, num_characters), and the output is of shape (batch_size, num_tokens, output_dim). The simplest TokenEmbedder is just an embedding layer, but for character-level input, it could also be some kind of character encoder.

We add a single method to the basic Module API: get_output_dim(). This lets us more easily compute output dimensions for the TextFieldEmbedder, which we might need when defining model components such as LSTMs or linear layers that need to know their input dimension before they are called.

default_implementation = 'embedding'
classmethod from_params(vocab: allennlp.data.vocabulary.Vocabulary, params: allennlp.common.params.Params) → allennlp.modules.token_embedders.token_embedder.TokenEmbedder[source]
get_output_dim() → int[source]

Returns the final output dimension that this TokenEmbedder uses to represent each token. This is not the shape of the returned tensor, but the last element of that shape.
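
For illustration, here is a minimal sketch of what implementing this API might look like. The class and its "fixed_onehot" registration name are hypothetical, not a built-in embedder; only the forward()/get_output_dim() contract described above and the register decorator provided by Registrable are assumed.

    import torch
    from allennlp.modules.token_embedders import TokenEmbedder

    @TokenEmbedder.register("fixed_onehot")  # hypothetical name, for illustration only
    class FixedOneHotEmbedder(TokenEmbedder):
        """Embeds each token id as a fixed one-hot vector of size vocab_size."""
        def __init__(self, vocab_size: int) -> None:
            super().__init__()
            self._vocab_size = vocab_size

        def get_output_dim(self) -> int:
            # The last element of the shape of the tensor returned by forward().
            return self._vocab_size

        def forward(self, inputs: torch.Tensor) -> torch.Tensor:
            # inputs: (batch_size, num_tokens) integer ids from a TokenIndexer
            # output: (batch_size, num_tokens, vocab_size)
            return torch.nn.functional.one_hot(inputs, self._vocab_size).float()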

class allennlp.modules.token_embedders.embedding.Embedding(num_embeddings: int, embedding_dim: int, projection_dim: int = None, weight: torch.FloatTensor = None, padding_index: int = None, trainable: bool = True, max_norm: float = None, norm_type: float = 2.0, scale_grad_by_freq: bool = False, sparse: bool = False) → None[source]

Bases: allennlp.modules.token_embedders.token_embedder.TokenEmbedder

A more featureful embedding module than the default in Pytorch. Adds the ability to:

  1. embed higher-order inputs
  2. pre-specify the weight matrix
  3. use a non-trainable embedding
  4. project the resultant embeddings to some other dimension (which only makes sense with non-trainable embeddings).
  5. build all of this easily from_params

Note that if you are using our data API and are trying to embed a TextField, you should use a TextFieldEmbedder instead of using this directly.

Parameters:

num_embeddings : int

Size of the dictionary of embeddings (vocabulary size).

embedding_dim : int

The size of each embedding vector.

projection_dim : int, (optional, default=None)

If given, we add a projection layer after the embedding layer. This really only makes sense if trainable is False.

weight : torch.FloatTensor, (optional, default=None)

A pre-initialised weight matrix for the embedding lookup, allowing the use of pretrained vectors.

padding_index : int, (optional, default=None)

If given, pads the output with zeros whenever it encounters the index.

trainable : bool, (optional, default=True)

Whether or not to optimize the embedding parameters.

max_norm : float, (optional, default=None)

If given, will renormalize the embeddings to always have a norm less than this value.

norm_type : float, (optional, default=2)

The p of the p-norm to compute for the max_norm option.

scale_grad_by_freq : bool, (optional, default=False)

If given, this will scale gradients by the frequency of the words in the mini-batch.

sparse : bool, (optional, default=False)

Whether or not the Pytorch backend should use a sparse representation of the embedding weight.

Returns:

An Embedding module.
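
For example, the features above can be combined to wrap a frozen, pre-specified weight matrix and project it down to a smaller dimension. This is only a sketch: the sizes are illustrative and the randomly initialised "pretrained" matrix is a placeholder for real pretrained vectors.

    import torch
    from allennlp.modules.token_embedders import Embedding

    # Placeholder for a real pretrained matrix of shape (num_embeddings, embedding_dim).
    pretrained_weight = torch.FloatTensor(10000, 300).uniform_(-0.05, 0.05)

    embedding = Embedding(num_embeddings=10000,
                          embedding_dim=300,
                          weight=pretrained_weight,   # pre-specified weight matrix
                          trainable=False,            # keep the pretrained vectors fixed
                          projection_dim=128)         # project the frozen 300-d vectors to 128-d

    token_ids = torch.LongTensor([[1, 5, 42, 0]])     # (batch_size=1, num_tokens=4)
    vectors = embedding(token_ids)                    # (1, 4, 128)
    assert embedding.get_output_dim() == 128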

forward(inputs)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

classmethod from_params(vocab: allennlp.data.vocabulary.Vocabulary, params: allennlp.common.params.Params) → allennlp.modules.token_embedders.embedding.Embedding[source]

We need the vocabulary here to know how many items we need to embed, and we look for a vocab_namespace key in the parameter dictionary to know which vocabulary to use. If you know beforehand exactly how many embeddings you need, or aren’t using a vocabulary mapping for the things getting embedded here, then you can pass in the num_embeddings key directly, and the vocabulary will be ignored.
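
A sketch of both paths, assuming a vocabulary with a "tokens" namespace; the namespace name and parameter values are illustrative.

    from allennlp.common import Params
    from allennlp.data import Vocabulary
    from allennlp.modules.token_embedders import Embedding

    vocab = Vocabulary()
    vocab.add_token_to_namespace("cats", namespace="tokens")
    vocab.add_token_to_namespace("dogs", namespace="tokens")

    # num_embeddings is taken from the size of the "tokens" namespace.
    embedding = Embedding.from_params(vocab, Params({"vocab_namespace": "tokens",
                                                     "embedding_dim": 50}))

    # Or pass num_embeddings directly, in which case the vocabulary is ignored.
    embedding = Embedding.from_params(vocab, Params({"num_embeddings": 20,
                                                     "embedding_dim": 50}))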

get_output_dim() → int[source]

Returns the final output dimension that this TokenEmbedder uses to represent each token. This is not the shape of the returned tensor, but the last element of that shape.

class allennlp.modules.token_embedders.token_characters_encoder.TokenCharactersEncoder(embedding: allennlp.modules.token_embedders.embedding.Embedding, encoder: allennlp.modules.seq2vec_encoders.seq2vec_encoder.Seq2VecEncoder, dropout: float = 0.0) → None[source]

Bases: allennlp.modules.token_embedders.token_embedder.TokenEmbedder

A TokenCharactersEncoder takes the output of a TokenCharactersIndexer, which is a tensor of shape (batch_size, num_tokens, num_characters), embeds the characters, runs a token-level encoder, and returns the result, which is a tensor of shape (batch_size, num_tokens, encoding_dim). We also optionally apply dropout after the token-level encoder.

We take the embedding and encoding modules as input, so this class is itself quite simple.

forward(token_characters: torch.FloatTensor) → torch.FloatTensor[source]
classmethod from_params(vocab: allennlp.data.vocabulary.Vocabulary, params: allennlp.common.params.Params) → allennlp.modules.token_embedders.token_characters_encoder.TokenCharactersEncoder[source]
get_output_dim() → int[source]
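
A sketch of wiring the two sub-modules together, assuming CnnEncoder from allennlp.modules.seq2vec_encoders as the character-level Seq2VecEncoder; the sizes are illustrative.

    import torch
    from allennlp.modules.seq2vec_encoders import CnnEncoder
    from allennlp.modules.token_embedders import Embedding, TokenCharactersEncoder

    char_embedding = Embedding(num_embeddings=262, embedding_dim=16)
    char_encoder = CnnEncoder(embedding_dim=16, num_filters=32, output_dim=64)
    encoder = TokenCharactersEncoder(char_embedding, char_encoder, dropout=0.2)

    # (batch_size=2, num_tokens=5, num_characters=8) character ids, e.g. from a
    # TokenCharactersIndexer; index 0 is treated as padding.
    token_characters = torch.randint(1, 262, (2, 5, 8))
    token_vectors = encoder(token_characters)         # (2, 5, 64)
    assert encoder.get_output_dim() == 64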