allennlp.data.vocabulary

A Vocabulary maps strings to integers, allowing for strings to be mapped to an out-of-vocabulary token.

class allennlp.data.vocabulary.Vocabulary(counter: typing.Dict[str, typing.Dict[str, int]] = None, min_count: int = 1, max_vocab_size: typing.Union[int, typing.Dict[str, int]] = None, non_padded_namespaces: typing.Sequence[str] = ('*tags', '*labels')) → None[source]

Bases: object

A Vocabulary maps strings to integers, allowing for strings to be mapped to an out-of-vocabulary token.

Vocabularies are fit to a particular dataset, which we use to decide which tokens are in-vocabulary.

Vocabularies also allow for several different namespaces, so you can have separate indices for ‘a’ as a word, and ‘a’ as a character, for instance, and so we can use this object to also map tag and label strings to indices, for a unified Field API. Most of the methods on this class allow you to pass in a namespace; by default we use the ‘tokens’ namespace, and you can omit the namespace argument everywhere and just use the default.
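
For example, the same string gets an independent index in each namespace. A minimal sketch using only methods documented on this page:

    from allennlp.data.vocabulary import Vocabulary

    vocab = Vocabulary()
    # 'a'-as-word and 'a'-as-character live in separate namespaces,
    # so their indices never collide.
    word_id = vocab.add_token_to_namespace("a", namespace="tokens")
    char_id = vocab.add_token_to_namespace("a", namespace="characters")
    assert vocab.get_token_index("a", namespace="tokens") == word_id
    assert vocab.get_token_index("a", namespace="characters") == char_id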

Parameters:

counter : Dict[str, Dict[str, int]], optional (default=None)

A collection of counts from which to initialize this vocabulary. We will examine the counts and, together with the other parameters to this class, use them to decide which words are in-vocabulary. If this is None, we just won’t initialize the vocabulary with anything.

min_count : int, optional (default=1)

When initializing the vocab from a counter, you can specify a minimum count, and every token with a count less than this will not be added to the dictionary. The default of 1 means that every word ever seen will be added.

max_vocab_size : Union[int, Dict[str, int]], optional (default=None)

If you want to cap the number of tokens in your vocabulary, you can do so with this parameter. If you specify a single integer, every namespace will have its vocabulary fixed to be no larger than this. If you specify a dictionary, then each namespace in the counter can have a separate maximum vocabulary size. Any missing key will have a value of None, which means no cap on the vocabulary size.

non_padded_namespaces : Sequence[str], optional

By default, we assume you are mapping word / character tokens to integers, and so you want to reserve word indices for padding and out-of-vocabulary tokens. However, if you are mapping NER or SRL tags, or class labels, to integers, you probably do not want to reserve indices for padding and out-of-vocabulary tokens. Use this field to specify which namespaces should not have padding and OOV tokens added.

Each element is either a string, which must match a namespace name exactly, or * followed by a string, which we match as a suffix against namespace names.

We try to make the default here reasonable, so that you don’t have to think about this. The default is ("*tags", "*labels"), so as long as your namespace ends in “tags” or “labels” (which is true by default for all tag and label fields in this code), you don’t have to specify anything here.
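
Putting these parameters together, a minimal sketch of initializing from a counter (counts and namespace contents chosen for illustration):

    from allennlp.data.vocabulary import Vocabulary

    counter = {
        "tokens": {"the": 10, "cat": 3, "sat": 1},
        "labels": {"NOUN": 5, "VERB": 4},
    }
    vocab = Vocabulary(counter=counter,
                       min_count=2,                    # drops "sat" (count 1)
                       max_vocab_size={"tokens": 10},  # "labels" is uncapped
                       non_padded_namespaces=("*labels",))
    # "tokens" is padded: two indices are reserved for padding and OOV.
    # "labels" matched "*labels", so it gets no padding or OOV entries.
    assert vocab.get_vocab_size("labels") == 2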

add_token_to_namespace(token: str, namespace: str = 'tokens') → int[source]

Adds token to the index, if it is not already present. Either way, we return the index of the token.
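
For instance (a minimal sketch):

    from allennlp.data.vocabulary import Vocabulary

    vocab = Vocabulary()
    index = vocab.add_token_to_namespace("cat")  # added to the default "tokens" namespace
    again = vocab.add_token_to_namespace("cat")  # already present; same index returned
    assert index == again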

classmethod from_dataset(dataset, min_count: int = 1, max_vocab_size: typing.Union[int, typing.Dict[str, int]] = None, non_padded_namespaces: typing.Sequence[str] = ('*tags', '*labels')) → allennlp.data.vocabulary.Vocabulary[source]

Constructs a vocabulary given a Dataset and some parameters. We count all of the vocabulary items in the dataset, then pass those counts, and the other parameters, to __init__(). See that method for a description of what the other parameters do.
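
A sketch of the call; here dataset is assumed to be an allennlp.data.Dataset built elsewhere, since dataset construction is outside the scope of this page:

    from allennlp.data.vocabulary import Vocabulary

    # `dataset` is assumed to be an allennlp.data.Dataset built elsewhere.
    vocab = Vocabulary.from_dataset(dataset,
                                    min_count=2,
                                    max_vocab_size={"tokens": 50000})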

classmethod from_files(directory: str) → allennlp.data.vocabulary.Vocabulary[source]

Loads a Vocabulary that was serialized using save_to_files.

Parameters:

directory : str

The directory containing the serialized vocabulary.

classmethod from_params(params: allennlp.common.params.Params, dataset=None)[source]

There are two possible ways to build a vocabulary: from a pre-existing dataset, using Vocabulary.from_dataset(), or from a pre-saved vocabulary, using Vocabulary.from_files(). This method wraps both of these options, allowing them to be specified from a Params object, generated from a JSON configuration file.

Parameters:

params : Params, required

dataset : Dataset, optional

If params doesn’t contain a vocabulary_directory key, the Vocabulary can be built directly from a Dataset.

Returns:

A Vocabulary.
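
A sketch of both paths (the directory path and my_dataset are hypothetical):

    from allennlp.common.params import Params
    from allennlp.data.vocabulary import Vocabulary

    # With a vocabulary_directory key, we load a pre-saved vocabulary.
    vocab = Vocabulary.from_params(
        Params({"vocabulary_directory": "/path/to/saved/vocabulary"}))

    # Without that key, the vocabulary is built from the dataset instead:
    # vocab = Vocabulary.from_params(Params({}), dataset=my_dataset)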

get_index_to_token_vocabulary(namespace: str = 'tokens') → typing.Dict[int, str][source]
get_token_from_index(index: int, namespace: str = 'tokens') → str[source]
get_token_index(token: str, namespace: str = 'tokens') → int[source]
get_vocab_size(namespace: str = 'tokens') → int[source]
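
These accessors are views of the same underlying mapping; a minimal sketch of how they relate:

    from allennlp.data.vocabulary import Vocabulary

    vocab = Vocabulary()
    index = vocab.add_token_to_namespace("cat")
    # get_token_from_index and get_token_index are inverses for
    # in-vocabulary tokens.
    assert vocab.get_token_from_index(index) == "cat"
    # An unseen string maps to the index of the OOV token ("@@UNKNOWN@@"
    # by default, per the set_from_file signature below).
    assert vocab.get_token_index("never-seen") == vocab.get_token_index("@@UNKNOWN@@")
    # get_vocab_size counts the reserved padding and OOV entries too.
    assert vocab.get_vocab_size() == 3
    # get_index_to_token_vocabulary returns the full index -> token map.
    print(vocab.get_index_to_token_vocabulary())
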
save_to_files(directory: str) → None[source]

Persist this Vocabulary to files so it can be reloaded later. Each namespace corresponds to one file.

Parameters:

directory : str

The directory where we save the serialized vocabulary.
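
A round-trip sketch pairing save_to_files with from_files (the temporary directory is just for illustration):

    import tempfile

    from allennlp.data.vocabulary import Vocabulary

    vocab = Vocabulary()
    vocab.add_token_to_namespace("cat")
    vocab.add_token_to_namespace("NOUN", namespace="labels")

    directory = tempfile.mkdtemp()
    vocab.save_to_files(directory)               # writes one file per namespace
    restored = Vocabulary.from_files(directory)
    assert restored.get_token_index("cat") == vocab.get_token_index("cat")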

set_from_file(filename: str, is_padded: bool = True, oov_token: str = '@@UNKNOWN@@', namespace: str = 'tokens')[source]

If you already have a vocabulary file for a trained model somewhere, and you want to use that file instead of building the vocabulary from a dataset, you can do that with this method. You must specify the namespace to use; by default, we assume that you want padding and OOV tokens in it (see is_padded below).

Parameters:

filename : str

The file containing the vocabulary to load. It should be formatted as one token per line, with nothing else in the line. The index we assign to the token is the line number in the file (1-indexed if is_padded, 0-indexed otherwise). Note that this file should contain the OOV token string!

is_padded : bool, optional (default=True)

Is this vocabulary padded? For token / word / character vocabularies, this should be True; while for tag or label vocabularies, this should typically be False. If True, we add a padding token with index 0, and we enforce that the oov_token is present in the file.

oov_token : str, optional (default=DEFAULT_OOV_TOKEN)

What token does this vocabulary use to represent out-of-vocabulary tokens? This must show up as a line in the vocabulary file. When we find it, we replace oov_token with self._oov_token, because we only use one OOV token across namespaces.

namespace : str, optional (default="tokens")

What namespace should we overwrite with this vocab file?
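
A sketch of the expected file format and the resulting indices (the filename is illustrative):

    from allennlp.data.vocabulary import Vocabulary

    # One token per line; the OOV token must appear somewhere in the
    # file because is_padded defaults to True.
    with open("tokens.txt", "w") as vocab_file:
        vocab_file.write("@@UNKNOWN@@\ncat\ndog\n")

    vocab = Vocabulary()
    vocab.set_from_file("tokens.txt", is_padded=True, namespace="tokens")
    # Index 0 is reserved for padding, so file lines are 1-indexed:
    # line 2 ("cat") gets index 2.
    assert vocab.get_token_index("cat") == 2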