DPLSTM¶

class opacus.layers.dp_lstm.DPLSTM(input_size, hidden_size, num_layers=1, bias=True, batch_first=False, dropout=0.0, bidirectional=False)[source]¶

    DP-friendly abstraction in place of the torch.nn.LSTM module with a similar interface.

    The dimensionality of each timestep input tensor for a sequence of length T is [B, D], where B is the batch size. The DPLSTM output at timestep t, h_t, is of shape [B, H], with the cell state c_t also of shape [B, H].

    hidden_size¶
        The number of features in the hidden state h.

        Type
            int

    batch_first¶
        If True, then the input and output tensors are provided as (batch, seq, feature). The default is False.

        Type
            bool

    Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(x, state_init=None)[source]¶

    Implements the forward pass of the DPLSTM when a sequence is input.

    Parameters
        x (Tensor) – Input sequence to the DPLSTM of shape [T, B, D].

        state_init (Optional[Tuple[Tensor, Tensor]]) – Initial state of the LSTM as a tuple (h_init, c_init), where h_init is the initial hidden state and c_init is the initial cell state of the DPLSTM. The default is None, in which case both h_init and c_init default to zero tensors.

    Return type
        Tuple[Tensor, Tuple[Tensor, Tensor]]

    Returns
        output, (h_n, c_n), where output is a tensor of shape [T, B, H] containing the output features (h_t) from the last layer of the DPLSTM for each timestep t; h_n is a tensor of shape [B, H] containing the hidden state for t = T; and c_n is a tensor of shape [B, H] containing the cell state for t = T.

reset_parameters()[source]¶

    Resets the parameters of the DPLSTM by initializing them from a uniform distribution.

validate_parameters()[source]¶

    Validates the DPLSTM configuration and raises a NotImplementedError if it is unsupported.

    Raises
        NotImplementedError – If the number of layers is more than 1, the DPLSTM is bidirectional, uses dropout at the output, or does not have a bias term.

class opacus.layers.dp_lstm.LSTMLinear(in_features, out_features, bias=True)[source]¶

    This layer is the same as an nn.Linear layer, except that in the backward pass the grad_samples get accumulated (instead of being concatenated as in the standard nn.Linear).

    Initializes internal Module state, shared by both nn.Module and ScriptModule.