Opacus

Train PyTorch models with Differential Privacy


Key Features


Scalable

Vectorized per-sample gradient computation that is 10x faster than microbatching; see the illustrative sketch below the feature list.


Built on PyTorch

Supports most types of PyTorch models and can be used with minimal modification to the original neural network.


Extensible

Open source, modular API for differential privacy research. Everyone is welcome to contribute.
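
A note on what "vectorized" means here: microbatching obtains per-sample gradients by running a separate backward pass for every example, while a vectorized computation produces them for the whole batch in a single pass. Opacus does this internally with per-layer hooks; the sketch below is only an illustration of the difference on a toy linear model, written against PyTorch's torch.func API (the names sample_loss, slow_grads and fast_grads are ours, not part of Opacus).

import torch
import torch.nn.functional as F
from torch.func import functional_call, grad, vmap

model = torch.nn.Linear(16, 2)
params = {name: p.detach() for name, p in model.named_parameters()}

def sample_loss(params, x, y):
    # loss for a single example (unsqueeze adds a batch dimension of 1)
    logits = functional_call(model, params, (x.unsqueeze(0),))
    return F.cross_entropy(logits, y.unsqueeze(0))

x = torch.randn(32, 16)
y = torch.randint(0, 2, (32,))

# microbatching: one backward pass per example
slow_grads = [grad(sample_loss)(params, xi, yi) for xi, yi in zip(x, y)]

# vectorized: per-sample gradients for the whole batch in one pass
fast_grads = vmap(grad(sample_loss), in_dims=(None, 0, 0))(params, x, y)

fast_grads maps each parameter name to a tensor with an extra leading batch dimension: one gradient per example, which is exactly the quantity DP-SGD clips before aggregation.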

Get Started

  1. Installing Opacus:

    Via pip:
    pip install opacus
    
    Via conda:
    conda install -c conda-forge opacus
    
    From source:
    
    git clone https://github.com/pytorch/opacus.git
    cd opacus
    pip install -e .
                    
    
  2. Getting started

    Training with differential privacy is as simple as instantiating a PrivacyEngine:
    import torch
    from torch.optim import SGD
    from opacus import PrivacyEngine
    
    # define your components as usual (Net and dataset are your own model and data)
    model = Net()
    optimizer = SGD(model.parameters(), lr=0.05)
    data_loader = torch.utils.data.DataLoader(dataset, batch_size=1024)
    
    # enter PrivacyEngine
    privacy_engine = PrivacyEngine()
    model, optimizer, data_loader = privacy_engine.make_private(
        module=model,
        optimizer=optimizer,
        data_loader=data_loader,
        noise_multiplier=1.1,  # std of the added Gaussian noise, relative to max_grad_norm
        max_grad_norm=1.0,     # per-sample gradient clipping threshold
    )
    # Now it's business as usual
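    
    Once make_private has wrapped the three objects, the training loop itself is unchanged: per-sample gradients are computed during backward, and clipping plus noise addition happen inside optimizer.step(). A minimal sketch of one training run, assuming Net is a classifier trained with cross-entropy and a target delta of 1e-5:
    
    import torch.nn.functional as F
    
    model.train()
    for epoch in range(10):
        for images, targets in data_loader:
            optimizer.zero_grad()
            loss = F.cross_entropy(model(images), targets)
            loss.backward()   # per-sample gradients are captured here
            optimizer.step()  # gradients are clipped, noised and aggregated here
    
        # privacy budget spent so far, at the delta you choose
        epsilon = privacy_engine.get_epsilon(delta=1e-5)
        print(f"Epoch {epoch}: ε = {epsilon:.2f} at δ = 1e-5")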
    