DPLSTM
- class opacus.layers.dp_lstm.BidirectionalDPLSTMLayer(input_size, hidden_size, bias, dropout)[source]
Implements one layer of Bidirectional LSTM in a way amenable to differential privacy. We don’t expect you to use this directly: use DPLSTM instead :)
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(x, state_init, batch_sizes=None)[source]
Implements the forward pass of the DPLSTM when a sequence is input.
- Dimensions as follows:
B: Batch size
T: Sequence length
D: LSTM input size (e.g. from a word embedding)
H: LSTM output hidden size
P: number of directions (2 if bidirectional, else 1)
- Parameters
x (Tensor) – Input sequence to the DPLSTM of shape [T, B, D].
state_init (Tuple[Tensor, Tensor]) – Initial state of the LSTM as a tuple (h_0, c_0), where h_0 of shape [P, B, H] contains the initial hidden state and c_0 of shape [P, B, H] contains the initial cell state. This argument can be (and defaults to) None, in which case zero tensors will be used.
- Returns
output, (h_n, c_n), where output is of shape [T, B, H * P] and is a tensor containing the output features (h_t) from the last layer of the DPLSTM for each timestep t. h_n is of shape [P, B, H] and contains the hidden state for t = T. c_n is of shape [P, B, H] and contains the cell state for t = T.
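For intuition, the bidirectional pattern can be sketched in plain Python: one pass runs forward in time, one runs over the reversed sequence and is re-aligned, and the two per-timestep features are concatenated. The per-direction pass below is a toy stand-in (a running sum), not a real LSTM.

```python
# Toy sketch: a bidirectional layer concatenates a forward-in-time pass
# with a reversed-in-time pass, giving H * 2 features per timestep.
def run_direction(xs):
    # Hypothetical stand-in for a one-direction recurrent pass: each
    # "hidden state" is just the running sum of inputs seen so far.
    h, out = 0.0, []
    for x in xs:
        h += x
        out.append(h)
    return out

def bidirectional(xs):
    fwd = run_direction(xs)
    rev = run_direction(xs[::-1])[::-1]  # reverse pass, re-aligned in time
    return list(zip(fwd, rev))           # "concat" -> 2 features per step

print(bidirectional([1.0, 2.0, 3.0]))  # [(1.0, 6.0), (3.0, 5.0), (6.0, 3.0)]
```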
- class opacus.layers.dp_lstm.DPLSTM(input_size, hidden_size, num_layers=1, bias=True, batch_first=False, dropout=0, bidirectional=False)[source]
DP-friendly drop-in replacement of the torch.nn.LSTM module. Its state_dict matches that of nn.LSTM exactly, so that after training it can be exported and loaded by an nn.LSTM for inference.
Refer to nn.LSTM's documentation for all parameters and inputs.
Initializes internal state. Subclass this instead of torch.nn.Module whenever you need to rename your model's state.
- Parameters
rename_map – Mapping from old name to new name for each parameter you want renamed. Note that this must be a 1:1 mapping!
- forward(x, state_init=None)[source]
Implements the forward pass of the DPLSTM when a sequence is input.
- Dimensions as follows:
B: Batch size
T: Sequence length
D: LSTM input size (e.g. from a word embedding)
H: LSTM output hidden size
L: number of layers in the LSTM
P: number of directions (2 if bidirectional, else 1)
- Parameters
x (Union[Tensor, PackedSequence]) – Input sequence to the DPLSTM of shape [T, B, D], or a PackedSequence.
state_init (Optional[Tuple[Tensor, Tensor]]) – Initial state of the LSTM as a tuple (h_0, c_0), where h_0 of shape [L * P, B, H] contains the initial hidden state and c_0 of shape [L * P, B, H] contains the initial cell state. This argument can be (and defaults to) None, in which case zero tensors will be used.
- Returns
output, (h_n, c_n), where output is of shape [T, B, H * P] and is a tensor containing the output features (h_t) from the last layer of the DPLSTM for each timestep t. h_n is of shape [L * P, B, H] and contains the hidden state for t = T. c_n is of shape [L * P, B, H] and contains the cell state for t = T.
- class opacus.layers.dp_lstm.DPLSTMCell(input_size, hidden_size, bias)[source]
Internal-only class. Implements one step of LSTM so that an LSTM layer can be seen as repeated applications of this class.
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(x, h_prev, c_prev, batch_size_t=None)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
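For intuition, one step of a standard LSTM cell follows the usual gate equations (input, forget, candidate, output). A scalar toy sketch in plain Python, with hypothetical weight names and values that are not Opacus API:

```python
# Toy sketch of one standard LSTM cell step on scalars.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_cell_step(x, h_prev, c_prev, w):
    # `w` is a dict of scalar weights, a toy stand-in for the cell's
    # weight matrices (W_ih, W_hh) and biases.
    i = sigmoid(w["wi"] * x + w["ui"] * h_prev + w["bi"])    # input gate
    f = sigmoid(w["wf"] * x + w["uf"] * h_prev + w["bf"])    # forget gate
    g = math.tanh(w["wg"] * x + w["ug"] * h_prev + w["bg"])  # candidate
    o = sigmoid(w["wo"] * x + w["uo"] * h_prev + w["bo"])    # output gate
    c = f * c_prev + i * g        # new cell state
    h = o * math.tanh(c)          # new hidden state
    return h, c

w = {k: 0.5 for k in ["wi", "ui", "bi", "wf", "uf", "bf",
                      "wg", "ug", "bg", "wo", "uo", "bo"]}
h, c = lstm_cell_step(x=1.0, h_prev=0.0, c_prev=0.0, w=w)
```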
- class opacus.layers.dp_lstm.DPLSTMLayer(input_size, hidden_size, bias, dropout, reverse=False)[source]
Implements one layer of LSTM in a way amenable to differential privacy. We don’t expect you to use this directly: use DPLSTM instead :)
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(x, state_init, batch_sizes=None)[source]
Implements the forward pass of the DPLSTMLayer when a sequence is given as input.
- Parameters
x (Union[Tensor, Tuple]) – Input sequence to the DPLSTMCell of shape [T, B, D].
state_init (Tuple[Tensor, Tensor]) – Initial state of the LSTMCell as a tuple (h_0, c_0), where h_0 is the initial hidden state and c_0 is the initial cell state of the DPLSTMCell.
batch_sizes (Optional[Tensor]) – Contains the batch sizes as stored in PackedSequence.
- Returns
output, (h_n, c_n), where output is of shape [T, B, H] and is a tensor containing the output features (h_t) from the last layer of the DPLSTMCell for each timestep t. h_n is of shape [B, H] and is a tensor containing the hidden state for t = T. c_n is of shape [B, H] and is a tensor containing the cell state for t = T.
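The "layer = repeated cell applications" idea can be sketched in plain Python: the layer unrolls one cell over T timesteps, threading (h, c) through, and returns all per-step outputs plus the final state. The cell below is a hypothetical stand-in, not the real DPLSTMCell:

```python
# Toy sketch: a recurrent layer as repeated applications of one cell.
def cell(x, h, c):
    # Stand-in for DPLSTMCell: any function (x, h, c) -> (h', c').
    return h + x, c + 1.0

def layer(xs, h0=0.0, c0=0.0):
    h, c = h0, c0
    outputs = []
    for x in xs:            # one cell application per timestep
        h, c = cell(x, h, c)
        outputs.append(h)
    return outputs, (h, c)  # output for every t, plus the state at t = T

out, (h_n, c_n) = layer([1.0, 2.0, 3.0])
print(out, h_n, c_n)  # [1.0, 3.0, 6.0] 6.0 3.0
```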
- class opacus.layers.dp_lstm.LSTMLinear(in_features, out_features, bias=True)[source]
This layer is the same as nn.Linear, except that in the backward pass the grad_samples get accumulated (instead of being concatenated as in the standard nn.Linear).
Initializes internal Module state, shared by both nn.Module and ScriptModule.
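The reason for accumulation: the same linear weights are applied at every timestep, so each sample's weight gradient is the sum of its per-timestep contributions, accumulated over time but kept separate per sample, which is what per-sample (grad_sample) clipping needs. A plain-Python toy (the helper is hypothetical, not the Opacus implementation):

```python
# Toy sketch: per-sample gradients accumulate over timesteps.
def per_sample_grads(w, batch):
    # batch: list of samples, each a list of timestep inputs x_t.
    # Per-sample loss: sum_t (w * x_t); since d(w * x_t)/dw = x_t,
    # d(loss)/dw = sum_t x_t (the value of w drops out of the gradient).
    return [sum(sample) for sample in batch]

batch = [[1.0, 2.0, 3.0],   # sample 0
         [4.0, 5.0, 6.0]]   # sample 1
print(per_sample_grads(w=0.5, batch=batch))  # [6.0, 15.0]
```

Each entry is one sample's gradient, summed over time; a standard nn.Linear would instead see the timesteps as independent rows and concatenate them.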