class opacus.optimizers.optimizer.DPOptimizer(optimizer, *, noise_multiplier, max_grad_norm, expected_batch_size, loss_reduction='mean', generator=None, secure_mode=False)[source]

torch.optim.Optimizer wrapper that adds functionality to clip per-sample gradients and add Gaussian noise.

Can be used with any torch.optim.Optimizer subclass as an underlying optimizer. DPOptimizer assumes that the parameters over which it performs optimization belong to a GradSampleModule and therefore have the grad_sample attribute.

At a high level, DPOptimizer's step looks like this:

  1. Aggregate p.grad_sample over all parameters to calculate per-sample norms.
  2. Clip p.grad_sample so that the per-sample norm is not above the threshold.
  3. Aggregate clipped per-sample gradients into p.grad.
  4. Add Gaussian noise to p.grad, calibrated to the given noise multiplier and max grad norm limit (std = noise_multiplier * max_grad_norm).
  5. Call the underlying optimizer to perform the optimization step.
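The clipping and noising arithmetic (steps 2–4) can be sketched in plain Python. This is a toy illustration using flat gradient lists and a pre-sampled noise vector, not the actual implementation:

```python
import math

def clip_and_noise(per_sample_grads, max_grad_norm, noise_multiplier, noise):
    """Toy sketch: clip each per-sample gradient to max_grad_norm,
    sum the clipped gradients, then add noise with
    std = noise_multiplier * max_grad_norm.
    `noise` is a pre-sampled standard-normal vector."""
    dim = len(per_sample_grads[0])
    summed = [0.0] * dim
    for g in per_sample_grads:
        norm = math.sqrt(sum(x * x for x in g))
        # scale down only if the per-sample norm exceeds the threshold
        scale = min(1.0, max_grad_norm / (norm + 1e-6))
        for i, x in enumerate(g):
            summed[i] += x * scale
    std = noise_multiplier * max_grad_norm
    return [s + std * z for s, z in zip(summed, noise)]
```

With zero noise, a sample of norm 5 is scaled down to norm 1 before summation, while a sample already below the threshold passes through unchanged.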


>>> module = MyCustomModel()
>>> optimizer = torch.optim.SGD(module.parameters(), lr=0.1)
>>> dp_optimizer = DPOptimizer(
...     optimizer=optimizer,
...     noise_multiplier=1.0,
...     max_grad_norm=1.0,
...     expected_batch_size=4,
... )
Parameters:
  • optimizer (Optimizer) – wrapped optimizer.

  • noise_multiplier (float) – noise multiplier

  • max_grad_norm (float) – max grad norm used for gradient clipping

  • expected_batch_size (Optional[int]) – batch_size used for averaging gradients. When using Poisson sampling, the averaging denominator can’t be inferred from the actual batch size. Required if loss_reduction="mean", ignored if loss_reduction="sum"

  • loss_reduction (str) – Indicates if the loss reduction (for aggregating the gradients) is a sum or a mean operation. Can take values “sum” or “mean”

  • generator – torch.Generator() object used as a source of randomness for the noise

  • secure_mode (bool) – if True, uses a noise generation approach that is robust to floating point arithmetic attacks. See _generate_noise() for details

property accumulated_iterations: int

Returns number of batches currently accumulated and not yet processed.

In other words, accumulated_iterations tracks the number of forward/backward passes done in between two optimizer steps. The value will typically be 1, but there are possible exceptions.

Used by privacy accountants to calculate real sampling rate.


add_noise()

Adds noise to clipped gradients. Stores the clipped and noised result in p.grad


attach_step_hook(fn)

Attaches a hook to be executed after gradient clipping/noising, but before the actual optimization step.

Most commonly used for privacy accounting.


Parameters:
  fn (Callable[[DPOptimizer], None]) – hook function. Expected signature: foo(optim: DPOptimizer)
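The attach/fire contract can be sketched with a toy stand-in class — this mirrors only the hook mechanism, not the real DPOptimizer:

```python
class HookedOptim:
    """Toy stand-in: mirrors attach_step_hook's contract only."""
    def __init__(self):
        self.step_hooks = []

    def attach_step_hook(self, fn):
        self.step_hooks.append(fn)

    def step(self):
        # ...clipping and noising would happen here...
        for hook in self.step_hooks:
            hook(self)  # fires before the underlying optimizer step

calls = []
opt = HookedOptim()
# a privacy accountant would typically record a step inside the hook
opt.attach_step_hook(lambda optim: calls.append("accounted"))
opt.step()
```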


clip_and_accumulate()

Performs gradient clipping. Stores clipped and aggregated gradients into p.summed_grad

property grad_samples: List[Tensor]

Returns a flat list of per sample gradient tensors (one per parameter)


load_state_dict(state_dict)

Loads the optimizer state.


Parameters:
  state_dict (dict) – optimizer state. Should be an object returned from a call to state_dict().

Return type:
  None

property params: List[Parameter]

Returns a flat list of nn.Parameter managed by the optimizer


pre_step(closure=None)

Perform actions specific to DPOptimizer before calling the underlying optimizer.step()


Parameters:
  closure (Optional[Callable[[], float]]) – A closure that reevaluates the model and returns the loss. Optional for most optimizers.

Return type:
  Optional[float]


scale_grad()

Applies the given loss_reduction to p.grad.

Does nothing if loss_reduction="sum". Divides gradients by self.expected_batch_size if loss_reduction="mean"
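The reduction logic can be illustrated with a small stand-alone function — an assumption-level sketch, not the library code:

```python
def scale_grad(summed_grad, loss_reduction, expected_batch_size):
    # "mean" divides by the *expected* batch size, which matters under
    # Poisson sampling where the actual batch size varies; "sum" is a no-op
    if loss_reduction == "mean":
        return [g / expected_batch_size for g in summed_grad]
    return summed_grad
```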


signal_skip_step(do_skip=True)

Signals the optimizer to skip an optimization step and only perform clipping and per-sample gradient accumulation.

On every call of .step(), the optimizer checks the queue of skipped-step signals. If the queue is non-empty and the latest flag is True, the optimizer calls self.clip_and_accumulate, but doesn’t proceed to adding noise and performing the actual optimization step. This also affects the behaviour of zero_grad(): if the last step was skipped, the optimizer clears the per-sample gradients accumulated by self.clip_and_accumulate (p.grad_sample), but doesn’t touch the aggregated clipped gradients (p.summed_grad)

Used by BatchMemoryManager to simulate large virtual batches with limited memory footprint.


Parameters:
  do_skip – flag if next step should be skipped
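The skip-queue control flow described above can be mimicked with a minimal toy class — an illustration of the behaviour only, not the real optimizer:

```python
class SkipAwareOptim:
    """Toy mimic of DPOptimizer's skipped-step queue."""
    def __init__(self):
        self._skip_queue = []
        self.clip_calls = 0   # clip_and_accumulate invocations
        self.real_steps = 0   # noise + underlying optimizer steps

    def signal_skip_step(self, do_skip=True):
        self._skip_queue.append(do_skip)

    def step(self):
        self.clip_calls += 1  # clipping/accumulation always happens
        skip = self._skip_queue.pop(0) if self._skip_queue else False
        if skip:
            return            # no noise, no parameter update
        self.real_steps += 1
```

With two skipped micro-batches followed by one real step, clipping runs three times but only one optimization step is taken — the pattern BatchMemoryManager relies on to simulate large virtual batches.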


state_dict()

Returns the state of the optimizer as a dict.

It contains two entries:

  • state: a Dict holding current optimization state. Its content differs between optimizer classes, but some common characteristics hold. For example, state is saved per parameter, and the parameter itself is NOT saved. state is a Dictionary mapping parameter ids to a Dict with state corresponding to each parameter.

  • param_groups: a List containing all parameter groups where each parameter group is a Dict. Each parameter group contains metadata specific to the optimizer, such as learning rate and weight decay, as well as a List of parameter IDs of the parameters in the group.

NOTE: The parameter IDs may look like indices but they are just IDs associating state with param_group. When loading from a state_dict, the optimizer will zip the param_group params (int IDs) and the optimizer param_groups (actual nn.Parameter s) in order to match state WITHOUT additional verification.

A returned state dict might look something like:

    {
        'state': {
            0: {'momentum_buffer': tensor(...), ...},
            1: {'momentum_buffer': tensor(...), ...},
            2: {'momentum_buffer': tensor(...), ...},
            3: {'momentum_buffer': tensor(...), ...}
        },
        'param_groups': [
            {
                'lr': 0.01,
                'weight_decay': 0,
                'params': [0]
            },
            {
                'lr': 0.001,
                'weight_decay': 0.5,
                'params': [1, 2, 3]
            }
        ]
    }
step(closure=None)

Performs a single optimization step (parameter update).


Parameters:
  closure (Callable) – A closure that reevaluates the model and returns the loss. Optional for most optimizers.

Return type:
  Optional[float]


Unless otherwise specified, this function should not modify the .grad field of the parameters.


zero_grad(set_to_none=False)

Clear gradients.

Clears p.grad, p.grad_sample and p.summed_grad for all of its parameters


The set_to_none argument only affects p.grad. p.grad_sample and p.summed_grad are never zeroed out and are always set to None. Normal grads can be zeroed in place because their shape is always the same; grad samples cannot, as we accumulate gradients from different batches in a list

Parameters:
  set_to_none (bool) – instead of setting to zero, set the grads to None. Only affects regular gradients; per-sample gradients are always set to None.
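The clearing rules can be sketched with a hypothetical parameter holder (plain Python, not torch.nn.Parameter):

```python
class ToyParam:
    """Hypothetical parameter holder with the three gradient attributes."""
    def __init__(self):
        self.grad = [0.5]
        self.grad_sample = [[0.1], [0.2]]  # per-sample grads, one per batch entry
        self.summed_grad = [0.3]

def zero_grad(params, set_to_none=False):
    for p in params:
        # regular grad: zeroed in place or dropped, depending on the flag
        p.grad = None if set_to_none else [0.0] * len(p.grad)
        # per-sample and aggregated clipped grads are always dropped
        p.grad_sample = None
        p.summed_grad = None
```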