Privacy Engine

class opacus.privacy_engine.PrivacyEngine(module, *, sample_rate=None, batch_size=None, sample_size=None, max_grad_norm, noise_multiplier=None, alphas=[1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9, 3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9, 4.0, 4.1, 4.2, 4.3, 4.4, 4.5, 4.6, 4.7, 4.8, 4.9, 5.0, 5.1, 5.2, 5.3, 5.4, 5.5, 5.6, 5.7, 5.8, 5.9, 6.0, 6.1, 6.2, 6.3, 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, 7.7, 7.8, 7.9, 8.0, 8.1, 8.2, 8.3, 8.4, 8.5, 8.6, 8.7, 8.8, 8.9, 9.0, 9.1, 9.2, 9.3, 9.4, 9.5, 9.6, 9.7, 9.8, 9.9, 10.0, 10.1, 10.2, 10.3, 10.4, 10.5, 10.6, 10.7, 10.8, 10.9, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63], secure_rng=False, batch_first=True, target_delta=1e-06, target_epsilon=None, epochs=None, loss_reduction='mean', poisson=False, **misc_settings)[source]

The main component of Opacus is the PrivacyEngine.

To train a model with differential privacy, all you need to do is define a PrivacyEngine and attach it to your optimizer before training.


This example shows how to define a PrivacyEngine and attach it to your optimizer.

>>> import torch
>>> from opacus import PrivacyEngine
>>> model = torch.nn.Linear(16, 32)  # An example model
>>> optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
>>> privacy_engine = PrivacyEngine(model, sample_rate=0.01, noise_multiplier=1.3, max_grad_norm=1.0)
>>> privacy_engine.attach(optimizer)  # That's it! Now it's business as usual.
Parameters

  • module (Module) – The PyTorch module to which we are attaching the privacy engine

  • alphas (List[float]) – A list of RDP orders

  • noise_multiplier (Optional[float]) – The ratio of the standard deviation of the Gaussian noise to the L2-sensitivity of the function to which the noise is added

  • max_grad_norm (Union[float, List[float]]) – The maximum norm of the per-sample gradients. Any gradient with norm higher than this will be clipped to this value.

  • batch_size (Optional[int]) – Training batch size. Used in the privacy accountant.

  • sample_size (Optional[int]) – The size of the sample (dataset). Used in the privacy accountant.

  • sample_rate (Optional[float]) – Sample rate used to build batches. Used in the privacy accountant.

  • secure_rng (bool) – If on, it will use torchcsprng for secure random number generation. Comes with a significant performance cost, therefore it’s recommended that you turn it off when just experimenting.

  • batch_first (bool) – Flag to indicate if the input tensor to the corresponding module has the first dimension representing the batch. If set to True, dimensions on input tensor will be [batch_size, ..., ...].

  • target_delta (float) – The target delta. Defaults to 1e-06 if not set.

  • loss_reduction (str) – Indicates if the loss reduction (for aggregating the gradients) is a sum or a mean operation. Can take values “sum” or “mean”

  • **misc_settings – Other arguments to the init
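The three sampling arguments are related: sample_rate is the fraction of the dataset drawn in each batch, so when it is not given it can be derived from batch_size and sample_size. A minimal sketch of that relationship (a hypothetical helper for illustration, not part of the Opacus API):

```python
def derive_sample_rate(sample_rate=None, batch_size=None, sample_size=None):
    """Illustrative helper: derive the accountant's sample rate
    when only batch_size and sample_size are given."""
    if sample_rate is not None:
        return sample_rate
    if batch_size is None or sample_size is None:
        raise ValueError(
            "Provide either sample_rate, or both batch_size and sample_size"
        )
    return batch_size / sample_size  # fraction of the dataset per batch

print(derive_sample_rate(batch_size=256, sample_size=50_000))  # 0.00512
```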


attach(optimizer)[source]

Attaches the privacy engine to the optimizer.

Attaches an optimizer object to the PrivacyEngine and injects itself into the optimizer’s step(). To do that, it:

  1. Validates that the model does not have unsupported layers.

  2. Adds a pointer to this object (the PrivacyEngine) inside the optimizer.

  3. Moves optimizer’s original step() function to original_step().

  4. Monkeypatches the optimizer’s step() function to call step() on the privacy engine automatically whenever the optimizer’s step() is called.


Parameters

  optimizer (Optimizer) – The optimizer to which the privacy engine will attach
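The four steps above can be sketched in plain Python. The ToyEngine and ToyOptimizer classes below are stand-ins for illustration, not the real Opacus objects:

```python
import types

class ToyEngine:
    """Stand-in for the PrivacyEngine; records that its step() ran."""
    def __init__(self):
        self.steps_taken = 0
    def step(self):
        self.steps_taken += 1  # clipping, accumulation and noising would go here

class ToyOptimizer:
    """Stand-in for a torch optimizer."""
    def step(self):
        print("optimizer step")

def attach(engine, optimizer):
    # 1. Validation of unsupported layers is skipped in this sketch.
    # 2. Add a pointer to the engine inside the optimizer.
    optimizer.privacy_engine = engine
    # 3. Move the optimizer's original step() to original_step().
    optimizer.original_step = optimizer.step
    # 4. Monkeypatch step() so the engine's step() runs automatically.
    def dp_step(self):
        self.privacy_engine.step()
        self.original_step()
    optimizer.step = types.MethodType(dp_step, optimizer)

engine, opt = ToyEngine(), ToyOptimizer()
attach(engine, opt)
opt.step()                 # runs the engine's step(), then the original step()
print(engine.steps_taken)  # 1
```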


detach()[source]

Detaches the privacy engine from the optimizer.

This method restores the model and the optimizer to their original states, i.e. all attributes and methods added by attach() are removed.
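Conceptually, detaching reverses the monkeypatching performed by attach(): the patched step() and the saved original step are removed so the optimizer’s own method is visible again. A toy sketch, not the actual implementation:

```python
class ToyOptimizer:
    def step(self):
        return "original"

opt = ToyOptimizer()

# What attach() did: save the original step and patch in a DP-aware one.
opt.original_step = opt.step
opt.step = lambda: "patched"
assert opt.step() == "patched"

# What detach() does: delete the added instance attributes so the class
# method becomes visible again, restoring the optimizer's original state.
del opt.step
del opt.original_step
print(opt.step())  # original
```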


get_privacy_spent(target_delta=None)[source]

Computes the (epsilon, delta) privacy budget spent so far.

This method converts the RDP guarantees accumulated for every alpha that the PrivacyEngine was initialized with into an (epsilon, delta) guarantee, and returns the best epsilon together with the optimal alpha.


Parameters

  target_delta (Optional[float]) – The target delta. If None, it defaults to the privacy engine’s target delta.

Returns

  Pair of epsilon and optimal order alpha.

Return type

  Tuple[float, float]
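The conversion uses the standard RDP-to-(epsilon, delta) bound: for each order alpha with accumulated RDP budget rdp(alpha), epsilon(alpha) = rdp(alpha) + log(1/delta)/(alpha - 1), and the smallest epsilon (with its alpha) is returned. A simplified sketch of that selection; the rdp values below are made up, since in Opacus they come from the accountant's training history:

```python
import math

def best_epsilon(rdp, alphas, delta):
    """Return the tightest (epsilon, alpha) pair over all RDP orders."""
    candidates = [
        (eps_a + math.log(1 / delta) / (alpha - 1), alpha)
        for eps_a, alpha in zip(rdp, alphas)
    ]
    return min(candidates)  # smallest epsilon wins

alphas = [1.5, 2.0, 4.0, 8.0, 16.0]
rdp = [0.02, 0.04, 0.10, 0.25, 0.60]  # made-up accountant output
eps, alpha = best_epsilon(rdp, alphas, delta=1e-6)
print(alpha)  # 16.0
```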


step(is_empty=False)[source]

Takes a step for the privacy engine.


Parameters

  is_empty (bool) – Whether the step is taken on an empty batch. In this case, we do not call clip_and_accumulate since there are no per-sample gradients.


You should not call this method directly. Rather, by attaching your PrivacyEngine to the optimizer, the optimizer will call this method for you.


Raises

  ValueError – If the last batch of the training epoch is larger than the others. This check ensures the clipper consumed the right number of gradients: in the last batch of a training epoch we might get a batch that is smaller than the others, but we should never get one that is larger.


to(device)[source]

Moves the privacy engine to the target device.


Parameters

  device (Union[str, device]) – The device on which PyTorch tensors are allocated (e.g. "cpu" or "cuda:0"). See the torch.device documentation for details.


This example shows how to use this method to move the model after instantiating the PrivacyEngine.

>>> model = torch.nn.Linear(16, 32)  # An example model. Default device is CPU
>>> privacy_engine = PrivacyEngine(model, sample_rate=0.01, noise_multiplier=0.8, max_grad_norm=0.5)
>>> device = "cuda:3"  # GPU
>>> model.to(device)  # If we move the model to GPU, we should call the to() method of the privacy engine as well (next line)
>>> privacy_engine.to(device)

Returns

  The current PrivacyEngine


virtual_step()[source]

Takes a virtual step.

Virtual batches enable training with arbitrarily large batch sizes while keeping memory consumption constant. This is beneficial when the batch size you want to train with does not fit in GPU memory.


Imagine you want to train a model with a batch size of 2048, but you can only fit a batch size of 128 on your GPU. Then you can do the following:

>>> for i, (X, y) in enumerate(dataloader):
>>>     logits = model(X)
>>>     loss = criterion(logits, y)
>>>     loss.backward()
>>>     if i % 16 == 15:
>>>         optimizer.step()    # this will call privacy engine's step()
>>>         optimizer.zero_grad()
>>>     else:
>>>         optimizer.virtual_step()   # this will call privacy engine's virtual_step()

The rough idea of virtual step is as follows:

1. Calling loss.backward() repeatedly stores the per-sample gradients for all mini-batches. If we call loss.backward() N times on mini-batches of size B, then each weight’s .grad_sample field will contain NxB gradients. Then, when calling step(), the privacy engine clips all NxB gradients and computes the average gradient for an effective batch of size NxB. A call to optimizer.zero_grad() erases the per-sample gradients.

2. By calling virtual_step() after loss.backward(), the B per-sample gradients for this mini-batch are clipped and summed into a gradient accumulator. The per-sample gradients can then be discarded. After N iterations (alternating calls to loss.backward() and virtual_step()), a call to step() computes the average gradient for an effective batch of size NxB.

The advantage here is that this is memory-efficient: it discards the per-sample gradients after every mini-batch. We can thus handle batches of arbitrary size.
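The equivalence of the two schemes can be checked with scalar stand-ins for per-sample gradients (real gradients are tensors, and clipping bounds their L2 norm; here plain value clamping plays that role):

```python
MAX_GRAD_NORM = 1.0

def clip(g):
    """Clamp a scalar 'gradient' to [-MAX_GRAD_NORM, MAX_GRAD_NORM]."""
    return max(min(g, MAX_GRAD_NORM), -MAX_GRAD_NORM)

# 8 per-sample gradients = N=4 mini-batches of size B=2
per_sample = [0.3, 2.0, -1.5, 0.1, 0.7, -0.2, 1.2, 0.4]

# Scheme 1: keep all N x B gradients, clip and average once in step().
avg_all_at_once = sum(clip(g) for g in per_sample) / len(per_sample)

# Scheme 2: virtual_step() clips and sums each mini-batch into an
# accumulator, then discards the per-sample gradients.
accumulator = 0.0
for i in range(0, len(per_sample), 2):
    batch = per_sample[i:i + 2]
    accumulator += sum(clip(g) for g in batch)  # virtual_step()
avg_virtual = accumulator / len(per_sample)     # final step()

print(abs(avg_all_at_once - avg_virtual) < 1e-12)  # True
```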


zero_grad()[source]

Resets the clipper’s status.

The clipper keeps the per-sample gradients for the batch from each forward call of the module; these need to be cleared before the next round.

If these variables are not cleared, the per-sample gradients keep being concatenated across batches. If accumulating gradients is the intended behaviour, e.g. to simulate a large batch, prefer the virtual_step() function.
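What “concatenated across batches” means can be illustrated with a plain list standing in for a parameter’s per-sample gradient buffer (a toy sketch, not the real clipper):

```python
grad_sample = []  # stand-in for a parameter's per-sample gradient buffer

def forward_backward(batch):
    # Each backward pass appends B per-sample gradients to the buffer.
    grad_sample.extend(batch)

forward_backward([0.1, 0.2])  # batch 1
forward_backward([0.3, 0.4])  # batch 2, buffer NOT cleared in between
print(len(grad_sample))       # 4 -- two batches' gradients concatenated

grad_sample.clear()           # what the clipper reset does between rounds
forward_backward([0.5, 0.6])
print(len(grad_sample))       # 2 -- a clean slate for the next round
```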