allennlp.data.token_indexers.dep_label_indexer#

DepLabelIndexer#

DepLabelIndexer(self, namespace: str = 'dep_labels', token_min_padding_length: int = 0) -> None

This TokenIndexer represents tokens by their syntactic dependency label, as determined by the dep_ field on Token.

Parameters

  • namespace : str, optional (default=dep_labels)
    We will use this namespace in the Vocabulary to map strings to indices.
  • token_min_padding_length : int, optional (default=0)
    See TokenIndexer.
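
A minimal construction sketch follows. It assumes a spaCy English model (e.g. en_core_web_sm) is installed so that SpacyTokenizer(parse=True) populates the dep_ field on each Token; the sentence and namespace are illustrative.

```python
from allennlp.data.token_indexers import DepLabelIndexer
from allennlp.data.tokenizers import SpacyTokenizer

# Enable dependency parsing so each Token carries a dep_ label
# (assumes a spaCy English model such as en_core_web_sm is available).
tokenizer = SpacyTokenizer(parse=True)
tokens = tokenizer.tokenize("The dog barks.")

# Index tokens by their dependency label, using the default namespace.
indexer = DepLabelIndexer(namespace="dep_labels")
```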

count_vocab_items#

DepLabelIndexer.count_vocab_items(self, token: allennlp.data.tokenizers.token.Token, counter: Dict[str, Dict[str, int]])

The Vocabulary needs to assign indices to whatever strings we see in the training data (possibly doing some frequency filtering and using an OOV, or out of vocabulary, token). This method takes a token and a dictionary of counts and increments counts for whatever vocabulary items are present in the token. If this is a single token ID representation, the vocabulary item is likely the token itself. If this is a token characters representation, the vocabulary items are all of the characters in the token.
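
As a sketch of that flow for this indexer: the counter is a nested dictionary keyed first by namespace, and each token's dependency label increments a count in the indexer's namespace. The tokens here have dep_ set by hand purely for illustration.

```python
from collections import defaultdict
from typing import Dict

from allennlp.data.token_indexers import DepLabelIndexer
from allennlp.data.tokenizers import Token

# Tokens with dependency labels attached by hand for illustration.
tokens = [Token(text="The", dep_="det"),
          Token(text="dog", dep_="nsubj"),
          Token(text="barks", dep_="ROOT")]

indexer = DepLabelIndexer()
counter: Dict[str, Dict[str, int]] = defaultdict(lambda: defaultdict(int))
for token in tokens:
    indexer.count_vocab_items(token, counter)

# counter["dep_labels"] now holds per-label counts,
# e.g. {"det": 1, "nsubj": 1, "ROOT": 1}.
```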

tokens_to_indices#

DepLabelIndexer.tokens_to_indices(self, tokens: List[allennlp.data.tokenizers.token.Token], vocabulary: allennlp.data.vocabulary.Vocabulary) -> Dict[str, List[int]]

Takes a list of tokens and converts them to an IndexedTokenList. This could be just an ID for each token from the vocabulary. Or it could split each token into characters and return one ID per character. Or (for instance, in the case of byte-pair encoding) there might not be a clean mapping from individual tokens to indices, and the IndexedTokenList could be a complex data structure.
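
For this indexer the mapping is the simple case: one vocabulary ID per token, looked up by dependency label. The sketch below builds a small Vocabulary by hand and assumes the output dictionary is keyed by "dep_labels"; the exact key is determined by the implementation.

```python
from allennlp.data.token_indexers import DepLabelIndexer
from allennlp.data.tokenizers import Token
from allennlp.data.vocabulary import Vocabulary

tokens = [Token(text="The", dep_="det"),
          Token(text="dog", dep_="nsubj"),
          Token(text="barks", dep_="ROOT")]
indexer = DepLabelIndexer()

# Populate the "dep_labels" namespace with the labels we expect to see.
vocab = Vocabulary()
for label in ("det", "nsubj", "ROOT"):
    vocab.add_token_to_namespace(label, namespace="dep_labels")

indexed = indexer.tokens_to_indices(tokens, vocab)
# `indexed` maps the indexer's output key to one integer ID per token,
# e.g. something like {"dep_labels": [0, 1, 2]}.
```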

get_empty_token_list#

DepLabelIndexer.get_empty_token_list(self) -> Dict[str, List[Any]]

Returns an already indexed version of an empty token list. This is typically just an empty list for whatever keys are used in the indexer.
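
For this indexer that is simply an empty list under its output key (assumed here to be "dep_labels"), used for example when padding against an empty text field.

```python
from allennlp.data.token_indexers import DepLabelIndexer

indexer = DepLabelIndexer()
empty = indexer.get_empty_token_list()
# An empty list under the indexer's output key, e.g. {"dep_labels": []}.
```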