
Syumei

Privacy-Preserving Machine Learning with AIJack - 3: Federated Learning with Paillier Encryption on PyTorch

This post is part of our Privacy-Preserving Machine Learning with AIJack series.

Part 1: Federated Learning
Part 2: Model Inversion Attack against Federated Learning
Part 3: Federated Learning with Paillier Encryption
Part 4: Federated Learning with Differential Privacy
Part 5: Federated Learning with Sparse Gradient
Part 6: Poisoning Attack against Federated Learning
Part 7: Federated Learning with FoolsGold
Part 8: Split Learning
Part 9: Label Leakage against Split Learning

Overview

As shown in the previous tutorials, Federated Learning, one of the popular distributed deep learning paradigms, might leak private information via the local gradient that each client sends to the server. To overcome this problem, each client can encrypt its local gradient before uploading it. Recall that the server in Federated Learning has to aggregate the gradients received from the clients. If the server uses a simple average as the aggregation function, it can perform this aggregation not on the plain gradients but on gradients encrypted with Homomorphic Encryption.

Homomorphic Encryption is a type of encryption scheme that allows certain arithmetic operations to be executed directly on ciphertexts. For example, the Paillier Encryption Scheme has the following properties:

\begin{align}
&\mathcal{D}(\mathcal{E}(x) + \mathcal{E}(y)) = x + y \newline
&\mathcal{D}(\mathcal{E}(x) + y) = x + y \newline
&\mathcal{D}(\mathcal{E}(x) * y) = x * y
\end{align}

where \mathcal{E} and \mathcal{D} represent encryption and decryption, respectively.
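
To see these properties in action, here is a minimal sketch using the python-paillier (phe) package, which is separate from AIJack but implements the same scheme; the key length and the values of x and y below are arbitrary.

# A quick check of the three properties above using python-paillier ("phe"),
# which is separate from AIJack but implements the same encryption scheme.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

x, y = 3, 5
enc_x = public_key.encrypt(x)
enc_y = public_key.encrypt(y)

assert private_key.decrypt(enc_x + enc_y) == x + y   # D(E(x) + E(y)) = x + y
assert private_key.decrypt(enc_x + y) == x + y       # D(E(x) + y)   = x + y
assert private_key.decrypt(enc_x * y) == x * y       # D(E(x) * y)   = x * y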

Recall that the server in FedAVG averages the received gradients to update the global model (see Part 1 for the details).

w_{t} \leftarrow w_{t - 1} - \eta \sum_{c=1}^{C} \frac{n_{c}}{N} \nabla \mathcal{l}(w_{t - 1}, X_{c}, Y_{c})
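
As a small illustration of this update rule (not AIJack's implementation; all tensors and values below are made up), the weighted average of the client gradients can be computed as follows:

# A toy example of the FedAVG update above (all values are made up).
import torch

eta = 0.1                                    # learning rate
n_c = [60.0, 40.0]                           # number of samples held by each client
N = sum(n_c)

w = torch.tensor([0.5, -1.0, 2.0])           # global parameters w_{t-1}
client_grads = [                             # local gradients reported by the clients
    torch.tensor([0.1, -0.2, 0.3]),
    torch.tensor([0.4, 0.0, -0.1]),
]

# w_t <- w_{t-1} - eta * sum_c (n_c / N) * grad_c
aggregated = sum((n / N) * g for n, g in zip(n_c, client_grads))
w = w - eta * aggregated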

To mitigate the private information leakage from the gradient, one option for the client is to encrypt the gradient with the Paillier Encryption Scheme.

w_{t} \leftarrow w_{t - 1} - \eta \sum_{c=1}^{C} \frac{n_{c}}{N} \mathcal{E} (\nabla \mathcal{l}(w_{t - 1}, X_{c}, Y_{c}))
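
The key point is that the server can compute exactly the same weighted sum on the ciphertexts, while only a holder of the secret key can read the result. Below is a minimal sketch of this idea, again with the python-paillier (phe) package rather than AIJack, and with made-up gradient values:

# The same toy aggregation, but on Paillier-encrypted gradients
# (python-paillier is used for illustration; all values are made up).
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

n_c = [60.0, 40.0]
N = sum(n_c)
client_grads = [[0.1, -0.2, 0.3], [0.4, 0.0, -0.1]]

# Each client encrypts its gradient element-wise before uploading.
encrypted_grads = [[public_key.encrypt(g) for g in grad] for grad in client_grads]

# The server computes the weighted sum directly on the ciphertexts.
aggregated = [
    sum(enc_grad[i] * (n / N) for n, enc_grad in zip(n_c, encrypted_grads))
    for i in range(len(client_grads[0]))
]

# Only the clients, who hold the secret key, can recover the plain aggregate.
plain_aggregate = [private_key.decrypt(c) for c in aggregated]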

The detailed procedure of Federated Learning with Paillier Encryption is as follows:

1. The central server initializes the global model.
2. The clients generate a Paillier key pair and share the public and secret keys among themselves.
3. The server distributes the global model to each client.
4. Except in the first round, each client decrypts the received global model.
5. Each client locally calculates the gradient of the loss function on their dataset.
6. Each client encrypts the gradient and sends it to the server.
7. The server aggregates the received gradients with some method (e.g., average) and updates the global model with the aggregated gradient.
8. Repeat steps 3 to 7 until convergence.

Code

We will again use AIJack to implement Federated Learning with Paillier Encryption. All you need to do is attach gradient encryption and decryption to each client via PaillierGradientClientManager.


from aijack.defense import PaillierGradientClientManager, PaillierKeyGenerator

# Net, FedAVGClient, FedAVGServer, device, and client_size are defined as in Part 1.

# Generate a Paillier key pair (a short key is used here for speed;
# use a longer key in practice).
keygenerator = PaillierKeyGenerator(64)
pk, sk = keygenerator.generate_keypair()

# The manager attaches gradient encryption and decryption to the client class.
manager = PaillierGradientClientManager(pk, sk)
PaillierGradFedAVGClient = manager.attach(FedAVGClient)

# Each client now encrypts its gradient before sending it to the server.
clients = [
    PaillierGradFedAVGClient(Net().to(device), user_id=c, server_side_update=False)
    for c in range(client_size)
]

# server_side_update=False: the server cannot decrypt the aggregated gradient,
# so the model update is applied on the client side after decryption.
server = FedAVGServer(clients, Net().to(device), server_side_update=False)

# The training loop is the same as in Part 1.

Summary

In this post, we have learned that clients in Federated Learning can encrypt their gradients with Homomorphic Encryption to prevent privacy violations via the shared gradients. However, this approach makes it impossible for the server to inspect and validate the received gradients, which opens the door to other threats such as Poisoning Attacks. We will study these risks in a later tutorial.
