DPRNN
- class opacus.layers.dp_rnn.DPGRU(input_size, hidden_size, num_layers=1, bias=True, batch_first=False, dropout=0, bidirectional=False, proj_size=0)[source]
Applies a multi-layer gated recurrent unit (GRU) RNN to an input sequence.
DP-friendly drop-in replacement of the torch.nn.GRU module. Refer to the torch.nn.GRU documentation for the model description, parameters, and inputs/outputs. After training, this module can be exported and loaded by the original torch.nn implementation for inference.
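A minimal usage sketch (the sizes below are illustrative assumptions, not part of the API): DPGRU takes the same constructor arguments and forward inputs/outputs as torch.nn.GRU.

    import torch
    from opacus.layers.dp_rnn import DPGRU

    # Drop-in for torch.nn.GRU: same constructor arguments and call signature.
    gru = DPGRU(input_size=16, hidden_size=32, num_layers=2, batch_first=True)

    x = torch.randn(4, 10, 16)   # [B, T, D] because batch_first=True
    out, h_n = gru(x)
    print(out.shape)             # torch.Size([4, 10, 32]) -> [B, T, H]
    print(h_n.shape)             # torch.Size([2, 4, 32])  -> [num_layers, B, H]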
- class opacus.layers.dp_rnn.DPGRUCell(input_size, hidden_size, bias)[source]
A gated recurrent unit (GRU) cell.
DP-friendly drop-in replacement of the torch.nn.GRUCell module to use in DPGRU. Refer to the torch.nn.GRUCell documentation for the model description, parameters, and inputs/outputs.
- forward(input, hx=None, batch_size_t=None)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- Return type:
  Tensor
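A minimal stepping sketch (sizes assumed for illustration): since DPGRUCell follows the nn.GRUCell calling convention, it can be applied one timestep at a time.

    import torch
    from opacus.layers.dp_rnn import DPGRUCell

    cell = DPGRUCell(input_size=16, hidden_size=32, bias=True)

    x = torch.randn(10, 4, 16)   # input sequence, [T, B, D]
    h = torch.zeros(4, 32)       # initial hidden state, [B, H]
    for t in range(x.size(0)):
        h = cell(x[t], h)        # returns the next hidden state, [B, H]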
- class opacus.layers.dp_rnn.DPLSTM(input_size, hidden_size, num_layers=1, bias=True, batch_first=False, dropout=0, bidirectional=False, proj_size=0)[source]
Applies a multi-layer long short-term memory (LSTM) RNN to an input sequence.
DP-friendly drop-in replacement of the torch.nn.LSTM module. Refer to the torch.nn.LSTM documentation for the model description, parameters, and inputs/outputs. After training, this module can be exported and loaded by the original torch.nn implementation for inference.
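A minimal usage sketch (sizes are illustrative assumptions): DPLSTM mirrors the torch.nn.LSTM interface, including the (h_n, c_n) state tuple.

    import torch
    from opacus.layers.dp_rnn import DPLSTM

    lstm = DPLSTM(input_size=16, hidden_size=32, num_layers=1, batch_first=True)

    x = torch.randn(4, 10, 16)      # [B, T, D] because batch_first=True
    out, (h_n, c_n) = lstm(x)       # same outputs as torch.nn.LSTM
    print(out.shape)                # torch.Size([4, 10, 32])
    print(h_n.shape, c_n.shape)     # torch.Size([1, 4, 32]) each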
- class opacus.layers.dp_rnn.DPLSTMCell(input_size, hidden_size, bias)[source]
A long short-term memory (LSTM) cell.
DP-friendly drop-in replacement of the torch.nn.LSTMCell module to use in DPLSTM. Refer to the torch.nn.LSTMCell documentation for the model description, parameters, and inputs/outputs.
- forward(input, hx=None, batch_size_t=None)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
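A minimal stepping sketch (sizes assumed): DPLSTMCell follows the nn.LSTMCell convention of carrying a (hidden, cell) state tuple.

    import torch
    from opacus.layers.dp_rnn import DPLSTMCell

    cell = DPLSTMCell(input_size=16, hidden_size=32, bias=True)

    x = torch.randn(10, 4, 16)     # input sequence, [T, B, D]
    h = torch.zeros(4, 32)         # hidden state, [B, H]
    c = torch.zeros(4, 32)         # cell state, [B, H]
    for t in range(x.size(0)):
        h, c = cell(x[t], (h, c))  # returns the next (hidden, cell) pair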
- class opacus.layers.dp_rnn.DPRNN(input_size, hidden_size, num_layers=1, bias=True, batch_first=False, dropout=0, bidirectional=False, proj_size=0, nonlinearity='tanh')[source]
Applies a multi-layer Elman RNN with \(\tanh\) or \(\text{ReLU}\) non-linearity to an input sequence.
DP-friendly drop-in replacement of the torch.nn.RNN module. Refer to the torch.nn.RNN documentation for the model description, parameters, and inputs/outputs. After training, this module can be exported and loaded by the original torch.nn implementation for inference.
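A minimal usage sketch (sizes assumed): DPRNN mirrors torch.nn.RNN, including the nonlinearity argument.

    import torch
    from opacus.layers.dp_rnn import DPRNN

    rnn = DPRNN(input_size=16, hidden_size=32, num_layers=1, nonlinearity="relu")

    x = torch.randn(10, 4, 16)   # [T, B, D] (batch_first=False is the default)
    out, h_n = rnn(x)
    print(out.shape)             # torch.Size([10, 4, 32])
    print(h_n.shape)             # torch.Size([1, 4, 32])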
- class opacus.layers.dp_rnn.DPRNNBase(mode, input_size, hidden_size, num_layers=1, bias=True, batch_first=False, dropout=0.0, bidirectional=False, proj_size=0, cell_params=None)[source]
Base class for all RNN-like sequence models.
DP-friendly drop-in replacement of the torch.nn.RNNBase module. After training, this module can be exported and loaded by the original torch.nn implementation for inference.
This module implements a multi-layer, bi-directional sequential model based on an abstract cell (Type-2 multi-layer; see https://github.com/pytorch/pytorch/issues/4930#issuecomment-361851298). The cell should be a subclass of DPRNNCellBase.
Limitations:
- proj_size > 0 is not implemented
- this implementation doesn't use cuDNN
- forward(input, state_init=None)[source]
Forward pass of a full RNN, containing one or many single- or bi-directional layers. Implemented for an abstract cell type.
Note: proj_size > 0 is not supported here. Cell state size is always equal to hidden state size.
- Inputs: input, h_0/(h_0, c_0)
  - input: Input sequence. Tensor of shape [T, B, D] ([B, T, D] if batch_first=True), or PackedSequence.
  - h_0: Initial hidden state for each element in the batch. Tensor of shape [L*P, B, H]. Defaults to zeros.
  - c_0: Initial cell state for each element in the batch. Only for cell types with an additional state. Tensor of shape [L*P, B, H]. Defaults to zeros.
- Outputs: output, h_n/(h_n, c_n)
  - output: Output features (h_t) from the last layer of the model for each t. Tensor of shape [T, B, P*H] ([B, T, P*H] if batch_first=True), or PackedSequence.
  - h_n: Final hidden state for each element in the batch. Tensor of shape [L*P, B, H].
  - c_n: Final cell state for each element in the batch. Tensor of shape [L*P, B, H].
- where:
  - T = sequence length
  - B = batch size
  - D = input_size
  - H = hidden_size
  - L = num_layers
  - P = num_directions (2 if bidirectional=True else 1)
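A shape-checking sketch of the contract above, using DPLSTM as a concrete subclass (all sizes are illustrative assumptions):

    import torch
    from opacus.layers.dp_rnn import DPLSTM

    T, B, D, H, L = 10, 4, 16, 32, 2
    P = 2                                  # bidirectional=True -> num_directions = 2

    model = DPLSTM(D, H, num_layers=L, bidirectional=True)
    x = torch.randn(T, B, D)               # input, [T, B, D]
    h_0 = torch.zeros(L * P, B, H)         # initial hidden state, [L*P, B, H]
    c_0 = torch.zeros(L * P, B, H)         # initial cell state, [L*P, B, H]

    out, (h_n, c_n) = model(x, (h_0, c_0))
    print(out.shape)                       # torch.Size([10, 4, 64]) -> [T, B, P*H]
    print(h_n.shape)                       # torch.Size([4, 4, 32])  -> [L*P, B, H]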
- forward_layer(x, h_0, c_0, batch_sizes, cell, max_batch_size, seq_length, is_packed, reverse_layer)[source]
Forward pass of a single RNN layer (one direction). Implemented for an abstract cell type.
- Inputs: x, h_0, c_0
  - x: Input sequence. Tensor of shape [T, B, D], or PackedSequence if is_packed=True.
  - h_0: Initial hidden state. Tensor of shape [B, H].
  - c_0: Initial cell state. Tensor of shape [B, H]. Only for cells with an additional state c_t, e.g. DPLSTMCell.
- Outputs: h_t, h_last, c_last
  - h_t: Output features (h_t) for each timestep t. Tensor of shape [T, B, H], or a list of length T of tensors of shape [B, H] if PackedSequence is used.
  - h_last: The last hidden state. Tensor of shape [B, H].
  - c_last: The last cell state. Tensor of shape [B, H]. None if the cell has no additional state.
- where:
  - T = sequence length
  - B = batch size
  - D = input_size (for this specific layer)
  - H = hidden_size (output size, for this specific layer)
- Parameters:
  - batch_sizes (Tensor) – Contains the batch sizes as stored in PackedSequence
  - cell (DPRNNCellBase) – Module implementing a single cell of the network; must be an instance of DPRNNCellBase
  - max_batch_size (int) – batch size
  - seq_length (int) – sequence length
  - is_packed (bool) – whether PackedSequence is used as input
  - reverse_layer (bool) – if True, run the forward pass for a reversed layer
- Return type:
- iterate_layers(*args)[source]
Iterate through all the layers and through all directions within each layer.
Arguments should be list-like of length num_layers * num_directions, where each element corresponds to a (layer, direction) pair. The corresponding elements of each of these lists will be iterated over.
Example:
    num_layers = 3
    bidirectional = True

    for layer, directions in self.iterate_layers(self.cell, h):
        for dir, (cell, hi) in directions:
            print(layer, dir, hi)

    # 0 0 h[0]
    # 0 1 h[1]
    # 1 0 h[2]
    # 1 1 h[3]
    # 2 0 h[4]
    # 2 1 h[5]
- class opacus.layers.dp_rnn.DPRNNCell(input_size, hidden_size, bias, nonlinearity='tanh')[source]
An Elman RNN cell with tanh or ReLU non-linearity.
DP-friendly drop-in replacement of the torch.nn.RNNCell module to use in DPRNN. Refer to the torch.nn.RNNCell documentation for the model description, parameters, and inputs/outputs.
- forward(input, hx=None, batch_size_t=None)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- Return type:
  Tensor
- class opacus.layers.dp_rnn.DPRNNCellBase(input_size, hidden_size, bias, num_chunks)[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- class opacus.layers.dp_rnn.RNNLinear(in_features, out_features, bias=True)[source]
Applies a linear transformation to the incoming data: \(y = xA^T + b\)
This module is the same as a torch.nn.Linear layer, except that in the backward pass the grad_samples get accumulated (instead of being concatenated as in the standard nn.Linear). When used with PackedSequence, an additional attribute max_batch_len is defined to determine the size of the per-sample grad tensor.
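A small sketch (sizes assumed) showing that the forward pass matches torch.nn.Linear; only the backward-pass grad-sample handling differs:

    import torch
    from opacus.layers.dp_rnn import RNNLinear

    lin = RNNLinear(in_features=16, out_features=8, bias=True)

    x = torch.randn(4, 16)       # a batch of feature vectors
    y = lin(x)                   # forward is identical to nn.Linear: y = x A^T + b
    print(y.shape)               # torch.Size([4, 8])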