Dataset represents a collection of data suitable for feeding into a model.
For example, when you train a model, you will likely have a training dataset and a validation dataset.
Dataset(instances: typing.List[allennlp.data.instance.Instance]) → None¶
A collection of Instances. The Instances have Fields, and the fields could be in an indexed or unindexed state - the Dataset has methods around indexing the data and converting the data into arrays.
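As a quick illustration, a Dataset can be built directly from a list of Instances. The following is only a sketch: the field names ("question", "answer"), the Token/TextField/LabelField/SingleIdTokenIndexer imports, and their exact paths are assumptions about the old allennlp.data API documented on this page and may differ across versions.

    from allennlp.data import Dataset, Instance, Vocabulary
    from allennlp.data.fields import TextField, LabelField
    from allennlp.data.token_indexers import SingleIdTokenIndexer
    from allennlp.data.tokenizers import Token

    # One TokenIndexer per representation we want for the text field.
    indexers = {"tokens": SingleIdTokenIndexer()}

    def make_instance(question: str, answer: str) -> Instance:
        tokens = [Token(word) for word in question.split()]
        return Instance({"question": TextField(tokens, indexers),
                         "answer": LabelField(answer)})

    dataset = Dataset([make_instance("Who wrote Hamlet ?", "Shakespeare"),
                       make_instance("Who wrote Ulysses ?", "Joyce")])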
as_tensor_dict(padding_lengths: typing.Dict[str, typing.Dict[str, int]] = None, cuda_device: int = -1, for_training: bool = True, verbose: bool = False) → typing.Dict[str, typing.Union[torch.FloatTensor, typing.Dict[str, torch.FloatTensor]]]¶
This method converts this Dataset into a set of pytorch Tensors that can be passed through a model. In order for the tensors to be valid tensors, all Instances in this dataset need to be padded to the same lengths wherever padding is necessary, so we do that first, then we combine all of the tensors for each field in each instance into a set of batched tensors for each field.
Parameters:

padding_lengths : Dict[str, Dict[str, int]]
If a key is present in this dictionary with a non-None value, we will pad to that length instead of the length calculated from the data. This lets you, e.g., set a maximum value for sentence length if you want to throw out long sequences.
Entries in this dictionary are keyed first by field name (e.g., “question”), then by padding key (e.g., “num_tokens”).
cuda_device : int
If cuda_device >= 0, GPUs are available, and Pytorch was compiled with CUDA support, the tensors will be copied to the specified cuda_device.
for_training : bool, optional (default=``True``)
If False, we will pass the volatile=True flag when constructing variables, which disables gradient computations in the graph. This makes inference more efficient (particularly in memory usage), but is incompatible with training models.
verbose : bool, optional (default=``False``)
Should we output logging information when we’re doing this padding? If the dataset is large, this is nice to have, because padding a large dataset could take a long time. But if you’re doing this inside of a data generator, having all of this output per batch is a bit obnoxious (and really slow).
Returns:

Dict[str, Union[torch.FloatTensor, Dict[str, torch.FloatTensor]]]
A dictionary of tensors, keyed by field name, suitable for passing as input to a model. This is a batch of instances, so, e.g., if the instances have a “question” field and an “answer” field, the “question” fields for all of the instances will be grouped together into a single tensor, and the “answer” fields for all instances will be similarly grouped in a parallel set of tensors, for batched computation. Additionally, for complex Fields, the value of the dictionary key is not necessarily a single tensor. For example, with the TextField, the output is a dictionary mapping TokenIndexer keys to tensors. The number of elements in this sub-dictionary therefore corresponds to the number of TokenIndexers used to index the TextField. Each Field class is responsible for batching its own output.
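Putting it together, a typical call looks like the following sketch, continuing the hypothetical dataset built above. index_instances and as_tensor_dict are documented on this page; Vocabulary.from_dataset is an assumption about this version’s vocabulary API.

    # Fields must be indexed against a vocabulary before tensorization.
    vocab = Vocabulary.from_dataset(dataset)  # assumed constructor for this version
    dataset.index_instances(vocab)

    tensors = dataset.as_tensor_dict(
        # Cap the "question" field at 20 tokens; other lengths come from the data.
        padding_lengths={"question": {"num_tokens": 20}},
        cuda_device=-1,       # stay on CPU; >= 0 copies tensors to that GPU
        for_training=False,   # inference: variables are built with volatile=True
        verbose=False)

    # tensors["question"] is a sub-dictionary keyed by TokenIndexer name, e.g.
    # tensors["question"]["tokens"] with shape (batch_size, num_tokens), while
    # tensors["answer"] is a single tensor of label ids.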
get_padding_lengths() → typing.Dict[str, typing.Dict[str, int]]¶
Gets the maximum padding lengths from all Instances in this dataset. Each Instance has multiple Fields, and each Field could have multiple things that need padding. We look at all fields in all instances, and find the max values for each (field_name, padding_key) pair, returning them in a dictionary.
This can then be used to convert this dataset into arrays of consistent length, or to set model parameters, etc.
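For example, with the same hypothetical dataset as above (after index_instances has been called), the returned dictionary might look like this; the exact padding keys depend on the Fields used:

    lengths = dataset.get_padding_lengths()
    # Roughly {"question": {"num_tokens": 4}, "answer": {}} for the two
    # four-token questions above: one entry per (field_name, padding_key) pair.
    print(lengths)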
index_instances(vocab: allennlp.data.vocabulary.Vocabulary)¶
Converts all unindexed Fields in all Instances in this Dataset into IndexedFields, using the given Vocabulary. This modifies the current object, it does not return a new object.
truncate(max_instances: int)¶
If there are more instances than max_instances in this dataset, we truncate the instances to the first max_instances. This modifies the current object, and returns nothing.
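For instance, to cap the hypothetical dataset from above at its first 1000 instances:

    dataset.truncate(1000)  # in place; a no-op if the dataset has <= 1000 instances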