# DPOptimizer¶

class opacus.optimizers.optimizer.DPOptimizer(optimizer, *, noise_multiplier, max_grad_norm, expected_batch_size, loss_reduction='mean', generator=None, secure_mode=False)[source]

torch.optim.Optimizer wrapper that adds additional functionality to clip per sample gradients and add Gaussian noise.

Can be used with any torch.optim.Optimizer subclass as an underlying optimizer. DPOptimizer assumes that the parameters over which it performs optimization belong to a GradSampleModule and therefore have the grad_sample attribute.

At a high level, DPOptimizer's step looks like this:

1. Aggregate p.grad_sample over all parameters to calculate per sample norms
2. Clip p.grad_sample so that the per sample norm is not above the threshold
3. Aggregate clipped per sample gradients into p.grad
4. Add Gaussian noise to p.grad, calibrated to the given noise multiplier and max grad norm limit (std = noise_multiplier * max_grad_norm)
5. Call the underlying optimizer to perform the optimization step
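The clip/aggregate/noise arithmetic above can be sketched in plain Python. This is an illustrative, hypothetical helper (not the Opacus implementation, which operates on torch tensors), assuming each per-sample gradient is a flat list of floats:

```python
import math
import random

def clip_and_noise(per_sample_grads, max_grad_norm, noise_multiplier, rng):
    """Hypothetical sketch of DPOptimizer's gradient processing:
    clip each per-sample gradient to max_grad_norm, sum the clipped
    gradients, then add Gaussian noise with
    std = noise_multiplier * max_grad_norm."""
    summed = [0.0] * len(per_sample_grads[0])
    for g in per_sample_grads:
        norm = math.sqrt(sum(x * x for x in g))          # per-sample L2 norm
        scale = min(1.0, max_grad_norm / (norm + 1e-6))  # clipping factor
        for i, x in enumerate(g):
            summed[i] += x * scale                       # aggregate clipped grads
    std = noise_multiplier * max_grad_norm
    return [x + rng.gauss(0.0, std) for x in summed]     # add calibrated noise

rng = random.Random(0)
noisy = clip_and_noise([[3.0, 4.0], [0.1, 0.2]],
                       max_grad_norm=1.0, noise_multiplier=1.0, rng=rng)
```

Note that a gradient whose norm is already below max_grad_norm is left unscaled, while a larger one is shrunk so its norm equals max_grad_norm; the noise standard deviation depends only on the clipping threshold and the noise multiplier, not on the data.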

Examples

>>> module = MyCustomModel()
>>> optimizer = torch.optim.SGD(module.parameters(), lr=0.1)
>>> dp_optimizer = DPOptimizer(
...     optimizer=optimizer,
...     noise_multiplier=1.0,
...     max_grad_norm=1.0,
...     expected_batch_size=4,
... )

Parameters:
• optimizer (Optimizer) – wrapped optimizer.

• noise_multiplier (float) – noise multiplier

• max_grad_norm (float) – max grad norm used for gradient clipping

• expected_batch_size (Optional[int]) – batch_size used for averaging gradients. When using Poisson sampling the averaging denominator can’t be inferred from the actual batch size. Required if loss_reduction="mean", ignored if loss_reduction="sum"

• loss_reduction (str) – Indicates if the loss reduction (for aggregating the gradients) is a sum or a mean operation. Can take values “sum” or “mean”

• generator – torch.Generator() object used as a source of randomness for the noise

• secure_mode (bool) – if True uses noise generation approach robust to floating point arithmetic attacks. See _generate_noise() for details

property accumulated_iterations: int

Returns number of batches currently accumulated and not yet processed.

In other words, accumulated_iterations tracks the number of forward/backward passes done between two optimizer steps. The value is typically 1, but there are possible exceptions.

Used by privacy accountants to calculate real sampling rate.

Return type:

int

add_noise()[source]

Adds noise to clipped gradients. Stores the clipped and noised result in p.grad

attach_step_hook(fn)[source]

Attaches a hook to be executed after gradient clipping/noising, but before the actual optimization step.

Most commonly used for privacy accounting.

Parameters:

fn (Callable[[DPOptimizer], None]) – hook function. Expected signature: foo(optim: DPOptimizer)

clip_and_accumulate()[source]

Performs gradient clipping. Stores clipped and aggregated gradients into p.summed_grad

property grad_samples: List[Tensor]

Returns a flat list of per sample gradient tensors (one per parameter)

Return type:

List[Tensor]

load_state_dict(state_dict)[source]

Loads the optimizer state.

Parameters:

state_dict (dict) – optimizer state. Should be an object returned from a call to state_dict().

Return type:

None

property params: List[Parameter]

Returns a flat list of nn.Parameter managed by the optimizer

Return type:

List[Parameter]

pre_step(closure=None)[source]

Perform actions specific to DPOptimizer before calling underlying optimizer.step()

Parameters:

closure (Optional[Callable[[], float]]) – A closure that reevaluates the model and returns the loss. Optional for most optimizers.

Return type:

Optional[float]

scale_grad()[source]

Applies the given loss_reduction to p.grad.

Does nothing if loss_reduction="sum". Divides gradients by self.expected_batch_size if loss_reduction="mean"
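The scaling rule is simple enough to state in plain Python. This is a hypothetical sketch (the real method operates on p.grad tensors in place), assuming the summed clipped gradient is a flat list of floats:

```python
def scale(summed_grad, loss_reduction, expected_batch_size):
    """Sketch of the loss_reduction scaling rule (not the Opacus
    implementation): with "mean" reduction the summed clipped gradients
    are divided by expected_batch_size; with "sum" they are unchanged."""
    if loss_reduction == "mean":
        return [g / expected_batch_size for g in summed_grad]
    return summed_grad

print(scale([8.0, 4.0], "mean", 4))  # [2.0, 1.0]
print(scale([8.0, 4.0], "sum", 4))   # [8.0, 4.0]
```

This is why expected_batch_size must be supplied when loss_reduction="mean": under Poisson sampling, the actual batch size varies and cannot serve as the denominator.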

signal_skip_step(do_skip=True)[source]

Signals the optimizer to skip an optimization step and only perform clipping and per sample gradient accumulation.

On every call to .step(), the optimizer checks the queue of skip-step signals. If the queue is non-empty and the latest flag is True, the optimizer calls self.clip_and_accumulate, but won’t proceed to adding noise and performing the actual optimization step. This also affects the behaviour of zero_grad(): if the last step was skipped, the optimizer will clear the per sample gradients accumulated by self.clip_and_accumulate (p.grad_sample), but won’t touch the aggregated clipped gradients (p.summed_grad)

Used by BatchMemoryManager to simulate large virtual batches with limited memory footprint.

Parameters:

do_skip – flag if next step should be skipped

state_dict()[source]

Returns the state of the optimizer as a dict.

It contains two entries:

• state - a dict holding current optimization state. Its content differs between optimizer classes.

• param_groups - a list containing all parameter groups where each parameter group is a dict

step(closure=None)[source]

Performs a single optimization step (parameter update).

Parameters:

closure (Callable) – A closure that reevaluates the model and returns the loss. Optional for most optimizers.

Note

Unless otherwise specified, this function should not modify the .grad field of the parameters.

Return type:

Optional[float]

zero_grad(set_to_none=False)[source]

Clears p.grad, p.grad_sample and p.summed_grad for all of its parameters.

The set_to_none argument only affects p.grad. p.grad_sample and p.summed_grad are never zeroed out; they are always set to None. Normal grads can do this because their shape is always the same, but grad samples do not behave like this, as we accumulate gradients from different batches in a list.

Parameters:

• set_to_none (bool) – instead of setting to zero, set the grads to None (only affects regular gradients; per sample gradients are always set to None)