allennlp.modules.time_distributed

A wrapper that unrolls the second (time) dimension of a tensor into the first (batch) dimension, applies some other Module, and then rolls the time dimension back up.

class allennlp.modules.time_distributed.TimeDistributed(module)

Bases: torch.nn.modules.module.Module

Given an input shaped like (batch_size, time_steps, [rest]) and a Module that takes inputs like (batch_size, [rest]), TimeDistributed reshapes the input to be (batch_size * time_steps, [rest]), applies the contained Module, then reshapes it back.

Note that while the above gives shapes with batch_size first, this Module also works if batch_size is second: the wrapper simply combines the first two dimensions, applies the contained Module, and splits them apart again.
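
For example, wrapping a Linear layer lets it run over every time step of a sequence at once. A minimal usage sketch (the layer sizes and tensor shapes below are illustrative, not part of the API):

    import torch
    from allennlp.modules.time_distributed import TimeDistributed

    # Linear expects (batch_size, input_dim); the wrapper applies it to
    # every time step of a (batch_size, time_steps, input_dim) tensor.
    linear = torch.nn.Linear(5, 3)
    time_distributed_linear = TimeDistributed(linear)

    inputs = torch.randn(4, 10, 5)   # (batch_size, time_steps, input_dim)
    outputs = time_distributed_linear(inputs)
    print(outputs.size())            # torch.Size([4, 10, 3])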

forward(*inputs)
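
The forward pass applies the squash-apply-unsquash pattern described above to each positional input. A simplified sketch of the core logic for a single tensor (an illustration of the technique, not the library's exact implementation; the function name is hypothetical):

    import torch

    def time_distributed_forward(module: torch.nn.Module,
                                 tensor: torch.Tensor) -> torch.Tensor:
        batch_size, time_steps = tensor.size(0), tensor.size(1)
        # Combine the first two dimensions: (batch_size * time_steps, [rest]).
        squashed = tensor.contiguous().view(batch_size * time_steps,
                                            *tensor.size()[2:])
        output = module(squashed)
        # Split them apart again: (batch_size, time_steps, [rest of output]).
        return output.view(batch_size, time_steps, *output.size()[1:])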