PyTorch-DP is a library that enables training PyTorch models with differential privacy. It requires minimal client-side code changes, has little impact on training performance, and lets the client track, online, the privacy budget expended at any given moment.
PyTorch-DP is currently in preview beta and under active development!
This code release is aimed at two target audiences: ML practitioners looking for a gentle introduction to training models with differential privacy, and differential privacy scientists looking for an easy-to-use platform for experimentation.
pip:
pip install pytorch-dp
From source:
git clone https://github.com/facebookresearch/pytorch-dp.git
cd pytorch-dp
pip install -e .
To train your model with differential privacy, all you need to do is declare a PrivacyEngine and attach it to your optimizer before training, e.g.:
from torch.optim import SGD
from torchdp import PrivacyEngine

model = Net()
optimizer = SGD(model.parameters(), lr=0.05)
privacy_engine = PrivacyEngine(
    model,
    batch_size,   # number of samples in each batch
    sample_size,  # size of the whole training dataset
    alphas=[1, 10, 100],
    noise_multiplier=1.3,
    max_grad_norm=1.0,
)
privacy_engine.attach(optimizer)
# Now it's business as usual
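Under the hood, attaching the engine makes the optimizer step privatize gradients in the style of DP-SGD (Abadi et al.): each per-sample gradient is clipped to max_grad_norm, the clipped gradients are summed, and Gaussian noise scaled by noise_multiplier * max_grad_norm is added before averaging. The following is a conceptual NumPy sketch of that mechanism, not the library's actual implementation:

```python
import numpy as np

def privatize_gradients(per_sample_grads, max_grad_norm, noise_multiplier, rng):
    """Sketch of DP-SGD gradient privatization: clip each sample's
    gradient to max_grad_norm, sum, add Gaussian noise with std
    noise_multiplier * max_grad_norm, then average over the batch."""
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds the clipping bound
        clipped.append(g * min(1.0, max_grad_norm / (norm + 1e-12)))
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * max_grad_norm, size=total.shape)
    return (total + noise) / len(per_sample_grads)

rng = np.random.default_rng(0)
grads = [np.array([3.0, 4.0]), np.array([0.3, 0.4])]  # norms 5.0 and 0.5
g = privatize_gradients(grads, max_grad_norm=1.0, noise_multiplier=1.3, rng=rng)
```

Clipping bounds each individual example's influence on the update, which is what makes the added noise sufficient for a differential privacy guarantee.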
The MNIST example contains an end-to-end run.
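The privacy budget mentioned above is tracked with Rényi Differential Privacy (RDP) accounting over the orders supplied in alphas. As a rough, self-contained illustration (assuming the plain Gaussian mechanism and ignoring the subsampling amplification a real accountant handles), RDP composes additively over optimizer steps and converts to an (ε, δ) guarantee by minimizing over the candidate orders:

```python
import math

def gaussian_rdp(alpha, sigma):
    # RDP of the Gaussian mechanism at order alpha (no subsampling)
    return alpha / (2 * sigma ** 2)

def rdp_to_eps(rdp_per_order, alphas, delta):
    # Convert accumulated RDP to an (epsilon, delta) guarantee by
    # taking the tightest bound over the candidate orders
    return min(
        rdp + math.log(1 / delta) / (alpha - 1)
        for alpha, rdp in zip(alphas, rdp_per_order)
    )

alphas = [2, 4, 8, 16, 32]  # candidate RDP orders (alpha > 1)
sigma = 1.3                 # noise multiplier
steps = 1000                # optimizer steps taken so far

rdp = [steps * gaussian_rdp(a, sigma) for a in alphas]
eps = rdp_to_eps(rdp, alphas, delta=1e-5)
```

Trying several orders and keeping the minimum is why the engine takes a list of alphas rather than a single value.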
See the CONTRIBUTING file for how to help out.
This code is released under Apache 2.0, as found in the LICENSE file.