class opacus.utils.stats.Stat(stat_type, name, frequency=1.0, reduction='avg')[source]

Wrapper around tensorboard’s SummaryWriter.add_scalar, allowing for sampling and a simpler interface.

Use this to gather and visualize statistics to get insight about differential privacy parameters, and to observe how clipping and noising affect the training process (loss, accuracy, etc.).

We have already implemented some common ones inside opacus.utils.stats.StatType.

Internal privacy metrics (such as StatType.PRIVACY and StatType.GRAD) are already instrumented in the code and only need to be activated by adding the stat, as shown in the example. Other stat types must be added explicitly and updated in the right places using the update function.


To get stats about clipping, you can add the following lines to your main file. By default the samples are averaged and the average is reported once every 1 / frequency calls.

>>> stat = Stat(StatType.GRAD, 'sample_stats', frequency=0.1)
>>> for i in range(20):
...     stat.log({"val": i})

If an instance of tensorboard.SummaryWriter exists, it can be used for stat gathering by passing it like this:

>>> stats.set_global_summary_writer(tensorboard.SummaryWriter())

To add stats about test accuracy you can do:

>>> stats.add(Stat(stats.StatType.TEST, 'accuracy', frequency=0.1))

and then update the stat meter in the proper location using:

>>> acc1_value = compute_accuracy(x, y)  # you can supply your metrics functions, and Stats later displays them
>>> stats.update(stats.StatType.TEST, acc1=acc1_value)  # pass to Stats the result so that the result gets logged

Parameters
  • stat_type (StatType) – Type of the statistic from StatType.

  • name (str) – Name of the stat that is used to identify this Stat for update or to view in tensorboard.

  • frequency (float) – The frequency of stat gathering. Its value is in [0, 1], where e.g. 1 means report to tensorboard any time log is called and 0.1 means report only 1 out of 10 times.

  • reduction (str) – The reduction strategy used for reporting, e.g. if frequency = 0.1 and reduction='avg' then log averages 10 samples and reports to tensorboard this average once every 10 samples. Current valid values are ‘avg’ and ‘sample’.
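The interplay of frequency and reduction can be sketched with a minimal, self-contained stand-in. This is illustrative logic only, not Opacus internals, and AvgSampler is a hypothetical name:

```python
class AvgSampler:
    """Hypothetical stand-in illustrating the frequency/reduction
    semantics described above (not part of Opacus)."""

    def __init__(self, frequency=1.0, reduction='avg'):
        self.period = round(1 / frequency)  # report once every `period` logs
        self.reduction = reduction
        self.samples = []
        self.reports = []  # stands in for tensorboard add_scalar calls

    def log(self, value):
        self.samples.append(value)
        if len(self.samples) >= self.period:
            if self.reduction == 'avg':
                # 'avg': report the mean of the accumulated window
                self.reports.append(sum(self.samples) / len(self.samples))
            else:
                # 'sample': report only the most recent raw value
                self.reports.append(self.samples[-1])
            self.samples = []

sampler = AvgSampler(frequency=0.1, reduction='avg')
for i in range(20):
    sampler.log(i)
print(sampler.reports)  # [4.5, 14.5]: averages of 0..9 and 10..19
```

With reduction='sample', the same run would instead report only the 10th and 20th raw values (9 and 19), discarding the rest of each window.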

log(named_value, hist=False)[source]

Logs metrics to tensorboard.

Generally not used directly (use update instead).


named_value (Dict[str, Any]) – A dictionary of metrics to log


reset()[source]

Resets the accumulated metrics.

class opacus.utils.stats.StatType(value)[source]

This enum covers all the stat types we currently support.

  1. LOSS: Monitors the training loss.

  2. GRAD: Monitors stats about the gradients across iterations.

  3. PRIVACY: Logs epsilon so you can see how it evolves during training.

  4. TRAIN: A TB namespace where you can attach training metrics.

  5. TEST: Similar to TRAIN, just another TB namespace to log things under.
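For reference, the enum above can be sketched as a plain Python Enum. Only the member names mirror the list above; the member values here are illustrative assumptions, so check your installed Opacus version for the authoritative definition:

```python
from enum import Enum

class StatType(Enum):
    # Member values are illustrative assumptions; only the
    # names mirror the list in the documentation above.
    LOSS = 1
    GRAD = 2
    PRIVACY = 3
    TRAIN = 4
    TEST = 5

# Members act as namespaces when updating stats, e.g. StatType.TEST
print([m.name for m in StatType])
```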


opacus.utils.stats.add(*args)[source]

Adds statistics gathering to the process.


*args – An iterable of statistics to add


opacus.utils.stats.clear()[source]

Clears all stats and stops collecting statistics.


opacus.utils.stats.remove(name)[source]

Removes the Stat with the given name from the global statistics gathering.


name (str) – The name of the stat to remove

opacus.utils.stats.reset(stat_type=None, name=None)[source]

Resets the stat with the given name and stat_type.


opacus.utils.stats.set_global_summary_writer(summary_writer)[source]

Sets this module’s TensorBoard SummaryWriter to an externally provided one.

Useful if you already have one instantiated and you don’t want this to create another unnecessarily.


summary_writer (SummaryWriter) – The externally provided SummaryWriter

opacus.utils.stats.update(stat_type=None, name=None, hist=False, **named_values)[source]

Updates the stat(s) with the given name and stat_type

Parameters

  • stat_type (Optional[StatType]) – The type of the stat from StatType. Can be None if name is unique.

  • name (Optional[str]) – The name of the stat. Can be None if there is only one stat for the stat_type.

  • **named_values – A set of values with their names.
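The resolution rule implied by these optional parameters (match by stat_type, by name, or by both) can be sketched with a stand-in lookup function. This is hypothetical code, not Opacus internals:

```python
def find_stats(registry, stat_type=None, name=None):
    """Return the registered stats matching the given filters.
    A None filter matches everything (hypothetical stand-in logic)."""
    return [s for s in registry
            if (stat_type is None or s["type"] == stat_type)
            and (name is None or s["name"] == name)]

registry = [
    {"type": "TEST", "name": "accuracy"},
    {"type": "TRAIN", "name": "loss"},
]

# name alone is enough when it is unique across stat types
print(find_stats(registry, name="accuracy"))
# stat_type alone is enough when it owns a single stat
print(find_stats(registry, stat_type="TRAIN"))
```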