A feed-forward neural network.

class allennlp.modules.feedforward.FeedForward(input_dim: int, num_layers: int, hidden_dims: typing.Union[int, typing.Sequence[int]], activations: typing.Union[allennlp.nn.activations.Activation, typing.Sequence[allennlp.nn.activations.Activation]], dropout: typing.Union[float, typing.Sequence[float]] = 0.0) → None

Bases: torch.nn.modules.module.Module

This Module is a feed-forward neural network, just a sequence of Linear layers with activation functions in between.

input_dim : int

The dimensionality of the input. We assume the input has shape (batch_size, input_dim).

num_layers : int

The number of Linear layers to apply to the input.

hidden_dims : Union[int, Sequence[int]]

The output dimension of each of the Linear layers. If this is a single int, we use it for all Linear layers. If it is a Sequence[int], len(hidden_dims) must be num_layers.

activations : Union[Activation, Sequence[Activation]]

The activation function to use after each Linear layer. If this is a single function, we use it after all Linear layers. If it is a Sequence[Activation], len(activations) must be num_layers.

dropout : Union[float, Sequence[float]], optional

If given, we will apply this amount of dropout after each layer. The semantics of float versus Sequence[float] are the same as for the other parameters.
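The construction semantics above (a single int, activation, or float is broadcast to all layers; a sequence must have one entry per layer) can be sketched in plain PyTorch. This is an illustrative re-implementation under those stated semantics, not AllenNLP's actual code, and the helper name make_feedforward is hypothetical:

```python
import torch
import torch.nn as nn


def make_feedforward(input_dim, num_layers, hidden_dims, activations, dropout=0.0):
    """Illustrative sketch of FeedForward's layer stacking (not AllenNLP's code)."""
    # Broadcast single values to one entry per layer, as described above.
    if not isinstance(hidden_dims, (list, tuple)):
        hidden_dims = [hidden_dims] * num_layers
    if not isinstance(activations, (list, tuple)):
        activations = [activations] * num_layers
    if not isinstance(dropout, (list, tuple)):
        dropout = [dropout] * num_layers
    assert len(hidden_dims) == len(activations) == len(dropout) == num_layers

    layers = []
    in_dim = input_dim
    for out_dim, act, p in zip(hidden_dims, activations, dropout):
        # Each layer: Linear, then its activation, then dropout.
        layers += [nn.Linear(in_dim, out_dim), act, nn.Dropout(p)]
        in_dim = out_dim
    return nn.Sequential(*layers)


# Input of shape (batch_size, input_dim) maps to (batch_size, final hidden dim).
ff = make_feedforward(input_dim=10, num_layers=2, hidden_dims=[20, 5],
                      activations=[nn.ReLU(), nn.Identity()], dropout=0.2)
out = ff(torch.randn(4, 10))
print(out.shape)  # torch.Size([4, 5])
```

Note that the output dimension is the last entry of hidden_dims, so downstream modules should size their inputs accordingly.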

forward(inputs: torch.Tensor) → torch.Tensor

Applies the sequence of Linear layers, activations, and dropout to inputs, returning a tensor whose last dimension is the final hidden dimension.
classmethod from_params(params: allennlp.common.params.Params)

Constructs a FeedForward from a Params configuration object.
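A hypothetical configuration fragment that from_params could consume; the keys follow the constructor parameters above, and the activation names are assumed to refer to registered activations:

```json
{
  "input_dim": 10,
  "num_layers": 2,
  "hidden_dims": [20, 5],
  "activations": ["relu", "linear"],
  "dropout": [0.2, 0.0]
}
```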