DPLSTM

class opacus.layers.dp_lstm.DPLSTM(input_size, hidden_size, num_layers=1, bias=True, batch_first=False, dropout=0.0, bidirectional=False)[source]

DP-friendly drop-in replacement for the torch.nn.LSTM module, with a similar interface.

The dimensionality of each timestep input tensor for a sequence of length T is [B, D], where B is the batch size and D is the number of input features. The DPLSTM output at timestep t, h_t, is of shape [B, H], where H is the hidden size; the cell state c_t is also of shape [B, H].
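A minimal usage sketch (the tensor shapes follow the documentation above; the concrete values of T, B, D, and H are arbitrary):

    import torch
    from opacus.layers.dp_lstm import DPLSTM

    T, B, D, H = 5, 4, 10, 20  # sequence length, batch size, input features, hidden size
    lstm = DPLSTM(input_size=D, hidden_size=H)

    x = torch.randn(T, B, D)  # batch_first=False (the default), so the input is [T, B, D]
    output, (h_n, c_n) = lstm(x)

    print(output.shape)  # torch.Size([5, 4, 20]) -> [T, B, H]
    print(h_n.shape)     # torch.Size([4, 20])    -> [B, H]
    print(c_n.shape)     # torch.Size([4, 20])    -> [B, H]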

input_size

The number of expected features in the input x.

Type

int

hidden_size

The number of features in the hidden state h.

Type

int

batch_first

If True, then the input and output tensors are provided as (batch, seq, feature). The default is False.

Type

bool
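For illustration, a short sketch of the batch_first behavior described above; the module is the same, only the layout of the input and output tensors changes:

    import torch
    from opacus.layers.dp_lstm import DPLSTM

    lstm_bf = DPLSTM(input_size=10, hidden_size=20, batch_first=True)
    x = torch.randn(4, 5, 10)  # [B, T, D] because batch_first=True
    output, _ = lstm_bf(x)
    print(output.shape)  # torch.Size([4, 5, 20]) -> [B, T, H]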

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(x, state_init=None)[source]

Implements the forward pass of the DPLSTM when a sequence is input.

Parameters
  • x (Tensor) – Input sequence to the DPLSTM of shape [T, B, D] ([B, T, D] if batch_first=True)

  • state_init (Optional[Tuple[Tensor, Tensor]]) – Initial state of the LSTM as a tuple (h_init, c_init), where h_init is the initial hidden state and c_init is the initial cell state of the DPLSTM. The default is None, in which case both h_init and c_init default to zero tensors (see the sketch below).

Return type

Tuple[Tensor, Tuple[Tensor, Tensor]]

Returns

output, (h_n, c_n) where output is a tensor of shape [T, B, H] ([B, T, H] if batch_first=True) containing the output features (h_t) from the last layer of the DPLSTM for each timestep t, h_n is a tensor of shape [B, H] containing the hidden state for t = T, and c_n is a tensor of shape [B, H] containing the cell state for t = T.
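A short sketch of passing an explicit initial state, assuming h_init and c_init follow the [B, H] shape convention documented above:

    import torch
    from opacus.layers.dp_lstm import DPLSTM

    T, B, D, H = 5, 4, 10, 20
    lstm = DPLSTM(input_size=D, hidden_size=H)

    h_init = torch.zeros(B, H)  # initial hidden state
    c_init = torch.zeros(B, H)  # initial cell state
    output, (h_n, c_n) = lstm(torch.randn(T, B, D), state_init=(h_init, c_init))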

reset_parameters()[source]

Resets parameters of the DPLSTM by initializing them from a uniform distribution.
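For example:

    from opacus.layers.dp_lstm import DPLSTM

    lstm = DPLSTM(input_size=10, hidden_size=20)
    lstm.reset_parameters()  # re-draws all weights and biases from a uniform distribution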

validate_parameters()[source]

Validates the DPLSTM configuration and raises a NotImplementedError if the number of layers is more than 1, the DPLSTM is bidirectional, uses dropout at the output, or does not have a bias term.

Raises

NotImplementedError – If the number of layers is more than 1, the DPLSTM is bidirectional, uses dropout at the output, or does not have a bias term.
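A sketch of how an unsupported configuration surfaces. The try/except covers both the case where validation runs at construction time and the case where it must be invoked explicitly (which of the two applies is an assumption here):

    from opacus.layers.dp_lstm import DPLSTM

    try:
        lstm = DPLSTM(input_size=10, hidden_size=20, num_layers=2)  # num_layers > 1 is unsupported
        lstm.validate_parameters()
    except NotImplementedError as err:
        print(err)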

class opacus.layers.dp_lstm.LSTMLinear(in_features, out_features, bias=True)[source]

This module is the same as an nn.Linear layer, except that in the backward pass the grad_samples get accumulated (instead of being concatenated as in the standard nn.Linear).

Initializes internal Module state, shared by both nn.Module and ScriptModule.
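In the forward direction LSTMLinear behaves exactly like nn.Linear; the accumulation of grad_samples only matters once Opacus attaches its per-sample gradient hooks. A minimal sketch of the forward behavior:

    import torch
    from opacus.layers.dp_lstm import LSTMLinear

    lin = LSTMLinear(in_features=10, out_features=20)
    y = lin(torch.randn(4, 10))  # same forward behavior as nn.Linear
    print(y.shape)  # torch.Size([4, 20])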