allennlp.modules.feedforward
A feed-forward neural network.
class allennlp.modules.feedforward.FeedForward(input_dim: int, num_layers: int, hidden_dims: typing.Union[int, typing.Sequence[int]], activations: typing.Union[allennlp.nn.activations.Activation, typing.Sequence[allennlp.nn.activations.Activation]], dropout: typing.Union[float, typing.Sequence[float]] = 0.0) → None

Bases: torch.nn.modules.module.Module

This Module is a feed-forward neural network, just a sequence of Linear layers with activation functions in between.

Parameters:

input_dim : int
    The dimensionality of the input. We assume the input has shape (batch_size, input_dim).

num_layers : int
    The number of Linear layers to apply to the input.

hidden_dims : Union[int, Sequence[int]]
    The output dimension of each of the Linear layers. If this is a single int, we use it for all Linear layers. If it is a Sequence[int], len(hidden_dims) must be num_layers.

activations : Union[Callable, Sequence[Callable]]
    The activation function to use after each Linear layer. If this is a single function, we use it after all Linear layers. If it is a Sequence[Callable], len(activations) must be num_layers.

dropout : Union[float, Sequence[float]], optional
    If given, we will apply this amount of dropout after each layer. Semantics of float versus Sequence[float] is the same as with other parameters.
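
For example, a minimal usage sketch (not part of the original documentation): it assumes FeedForward is re-exported from allennlp.modules, that activations are obtained through the Activation.by_name registry in allennlp.nn, and the dimensions and dropout value are purely illustrative.

    import torch
    from allennlp.modules import FeedForward
    from allennlp.nn import Activation

    # Two Linear layers mapping 10 -> 20 -> 3, with ReLU after the first
    # layer, identity after the second, and 0.2 dropout after each layer.
    feedforward = FeedForward(
        input_dim=10,
        num_layers=2,
        hidden_dims=[20, 3],
        activations=[Activation.by_name("relu")(), Activation.by_name("linear")()],
        dropout=0.2,
    )

    inputs = torch.randn(4, 10)    # shape (batch_size, input_dim)
    outputs = feedforward(inputs)  # shape (batch_size, hidden_dims[-1]) == (4, 3)

Passing a single int, activation, or float instead of a sequence would apply the same value to every layer, as described in the parameter documentation above.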