class MultiprocessDatasetReader(base_reader: DatasetReader, num_workers: int, epochs_per_read: int = 1, output_queue_size: int = 1000) → None


Wraps another dataset reader and uses it to read from multiple input files using multiple processes. Note that in this case the file_path passed to read() should be a glob, and that the dataset reader will return instances from all files matching the glob.
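The mechanics can be sketched with the standard library alone: expand the glob, shard the matched files across worker processes, and funnel everything the workers read through a single bounded output queue. This is an illustrative sketch of the idea, not the actual implementation; read_glob and worker are hypothetical names, and "instances" here are just stripped lines of text.

```python
import glob
import multiprocessing as mp

def worker(file_paths, queue):
    # Each worker reads its shard of files and puts "instances"
    # (here, plain text lines) onto the shared output queue.
    for path in file_paths:
        with open(path) as f:
            for line in f:
                queue.put(line.strip())
    queue.put(None)  # sentinel: this worker is done

def read_glob(pattern, num_workers, output_queue_size=1000):
    paths = glob.glob(pattern)
    # Bounded queue: workers block when it is full, which provides
    # backpressure if the consumer falls behind.
    queue = mp.Queue(maxsize=output_queue_size)
    # Round-robin the matched files across workers.
    shards = [paths[i::num_workers] for i in range(num_workers)]
    procs = [mp.Process(target=worker, args=(shard, queue)) for shard in shards]
    for p in procs:
        p.start()
    finished = 0
    while finished < num_workers:
        item = queue.get()
        if item is None:
            finished += 1
        else:
            yield item
    for p in procs:
        p.join()
```

Because the workers interleave nondeterministically, instances arrive in no particular order across files.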

base_reader : DatasetReader

Each process will use this dataset reader to read zero or more files.

num_workers : int

How many data-reading processes to run simultaneously.

epochs_per_read : int, (optional, default=1)

Normally a call to read() returns a single epoch's worth of instances, and your DataIterator handles iteration over multiple epochs. However, in the multiple-process case, it's possible that you'd want finished workers to continue on to the next epoch even while others are still finishing the previous epoch. Passing in a value larger than 1 allows that to happen.

output_queue_size : int, (optional, default=1000)

The size of the queue on which read instances are placed to be yielded. You might need to increase this if you’re generating instances too quickly.
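The epochs_per_read behavior can be sketched as a loop inside each worker: a worker that finishes its files simply starts over rather than waiting for the others. This is a hypothetical sketch of the idea (worker_loop is not a real function in the library), with lines of text standing in for instances.

```python
import multiprocessing as mp

def worker_loop(file_paths, queue, epochs_per_read):
    # A finished worker immediately begins its next epoch instead of
    # waiting for slower workers to finish the current one.
    for _ in range(epochs_per_read):
        for path in file_paths:
            with open(path) as f:
                for line in f:
                    queue.put(line.strip())
    queue.put(None)  # sentinel: all epochs for this worker are done
```

With epochs_per_read=2 a worker emits every instance in its files twice before signalling completion, so the consumer sees up to epochs_per_read epochs interleaved on the queue.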

read(file_path: str) → typing.Iterable[Instance]
text_to_instance(*args, **kwargs) → Instance

Just delegate to the base reader's text_to_instance.


Bases: object

multiprocessing.log_to_stderr causes some output in the logs even when we don't use this dataset reader. This is a small hack to instantiate the stderr logger lazily, only when it's needed (which is only when using the MultiprocessDatasetReader).

classmethod info(message: str) → None
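The lazy-instantiation pattern described above can be sketched as follows. LazyStderrLogger is a hypothetical name; the point is that multiprocessing.log_to_stderr() is not called until the first message is actually logged, so merely importing the module produces no log output.

```python
import logging
import multiprocessing

class LazyStderrLogger:
    """Defers multiprocessing.log_to_stderr() until first use."""

    _logger = None  # created lazily on the first call

    @classmethod
    def _instance(cls):
        if cls._logger is None:
            # Only now is the stderr handler attached.
            cls._logger = multiprocessing.log_to_stderr()
            cls._logger.setLevel(logging.INFO)
        return cls._logger

    @classmethod
    def info(cls, message: str) -> None:
        cls._instance().info(message)
```

Code that never logs through this class never triggers multiprocessing.log_to_stderr, which is exactly the property the hack is after.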