allennlp.modules.token_embedders.pretrained_transformer_embedder#

PretrainedTransformerEmbedder#

PretrainedTransformerEmbedder(self, model_name: str) -> None

Uses a pretrained model from the transformers library as a TokenEmbedder.
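The contract this class fulfills can be sketched with a minimal stand-in. The toy class below is hypothetical (it uses a plain lookup table in place of a real transformer), but it exposes the same interface: `get_output_dim` and a `forward` that takes wordpiece ids plus a mask and returns one vector per wordpiece.

```python
import torch
from torch import nn


class ToyTransformerEmbedder(nn.Module):
    """Hypothetical stand-in for PretrainedTransformerEmbedder.

    A plain nn.Embedding plays the role of the pretrained transformer;
    only the input/output contract matches the real class.
    """

    def __init__(self, vocab_size: int = 1000, hidden_dim: int = 16) -> None:
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, hidden_dim)
        self.hidden_dim = hidden_dim

    def get_output_dim(self) -> int:
        # The size of each token's representation (last dim of forward's output).
        return self.hidden_dim

    def forward(self, token_ids: torch.LongTensor, mask: torch.LongTensor) -> torch.Tensor:
        # token_ids, mask: [batch_size, num_wordpieces]
        embeddings = self.embedding(token_ids)
        # Zero out positions that the mask marks as padding.
        return embeddings * mask.unsqueeze(-1).float()
```

A real `PretrainedTransformerEmbedder` would instead run the named `transformers` model and return its contextual hidden states.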

get_output_dim#

PretrainedTransformerEmbedder.get_output_dim(self)

Returns the final output dimension that this TokenEmbedder uses to represent each token. This is not the shape of the returned tensor, but the last element of that shape.
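The distinction between the output dimension and the output shape can be illustrated with a small sketch (using a plain `nn.Embedding` as a hypothetical stand-in for the transformer):

```python
import torch
from torch import nn

# Stand-in embedder (hypothetical): its output dimension is 12, while the
# full shape of a forward pass also includes batch and sequence dims.
embedder = nn.Embedding(num_embeddings=50, embedding_dim=12)
out = embedder(torch.zeros(3, 4, dtype=torch.long))  # shape [3, 4, 12]

# get_output_dim() would report 12, i.e. the last element of out.shape.
assert out.shape == (3, 4, 12)
assert out.shape[-1] == embedder.embedding_dim == 12
```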

forward#

PretrainedTransformerEmbedder.forward(self, token_ids: torch.LongTensor, mask: torch.LongTensor) -> torch.Tensor

Parameters

  • token_ids: torch.LongTensor
    Shape: [batch_size, num_wordpieces].
  • mask: torch.LongTensor
    Shape: [batch_size, num_wordpieces].

Returns:

Shape: [batch_size, num_wordpieces, embedding_size].
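The shape contract above can be demonstrated end to end. This sketch substitutes a plain `nn.Embedding` for the pretrained transformer (a hypothetical stand-in, not the real class), but the tensor shapes match the documentation: ids and mask go in as `[batch_size, num_wordpieces]`, embeddings come out as `[batch_size, num_wordpieces, embedding_size]`.

```python
import torch
from torch import nn

# Hypothetical stand-in: nn.Embedding plays the role of the transformer.
embedding_size = 8
embedder = nn.Embedding(num_embeddings=100, embedding_dim=embedding_size)

batch_size, num_wordpieces = 2, 5
token_ids = torch.randint(0, 100, (batch_size, num_wordpieces))  # [batch_size, num_wordpieces]
mask = torch.ones(batch_size, num_wordpieces, dtype=torch.long)  # 1 = real wordpiece, 0 = padding

# Apply the mask so padding positions contribute zero vectors.
output = embedder(token_ids) * mask.unsqueeze(-1).float()

print(output.shape)  # torch.Size([2, 5, 8]) — [batch_size, num_wordpieces, embedding_size]
```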