Federated Adversarial Attack — FGSM/PGD Demo

Adversarial examples are generated locally using a client-side model's gradients (white-box attack), then evaluated against the server-side global model produced by FedAvg aggregation. If the perturbation transfers, it can degrade or alter the global model's predictions on the same input image. Object detection in this demo is limited to the 'car', 'van', and 'truck' classes.
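The transfer setup above can be sketched as follows. This is a minimal illustration, not the demo's implementation: the tiny linear 3-class models, the epsilon value, and the use of the client's own prediction as the label are all assumptions made for the sketch; the real demo uses trained detection networks.

```python
import numpy as np

# Hypothetical stand-in models: tiny linear classifiers over 3 classes
# ('car', 'van', 'truck'). The "global" model is a nearby model, mimicking
# a FedAvg aggregate that stays close to each client's weights.
rng = np.random.default_rng(0)
D, C = 8, 3
W_client = rng.normal(size=(C, D))                    # white-box client model
W_global = W_client + 0.1 * rng.normal(size=(C, D))   # FedAvg-like global model

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm(x, y, W, eps):
    """One FGSM step: perturb x along the sign of the input gradient
    of the cross-entropy loss for label y."""
    p = softmax(W @ x)
    onehot = np.eye(C)[y]
    grad_x = W.T @ (p - onehot)          # d(loss)/dx for a linear model
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

x = rng.uniform(size=D)
y = int(np.argmax(W_client @ x))         # use the client prediction as label
x_adv = fgsm(x, y, W_client, eps=0.05)   # crafted on the client model only

# Transfer check: does a perturbation crafted on the client model also
# shift the global model's prediction on the same input?
print("global pred (clean):", int(np.argmax(W_global @ x)))
print("global pred (adv):  ", int(np.argmax(W_global @ x_adv)))
```

The key point the sketch captures is that `fgsm` never sees `W_global`; any change in the global model's prediction comes purely from gradient transferability between the client and aggregated models.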

Demo controls:
- Select from sample images
- Slider (range 0–1)
- Slider (range 0.001–0.05)
- Slider (range 1–100)
- Slider (range 0–1)
- Class-targeted attack (toggle)
- Evaluation model (selector)
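A class-targeted attack like the one the toggle enables is usually run as targeted PGD: instead of maximizing the loss on the true label, the attacker minimizes the loss on a chosen target class, iterating small FGSM-style steps and projecting back into an epsilon-ball around the original input. The sketch below assumes a toy linear 3-class model plus illustrative epsilon, step size, and iteration count; none of these values come from the demo.

```python
import numpy as np

# Toy linear 3-class model as a stand-in for the demo's detector.
rng = np.random.default_rng(1)
D, C = 8, 3
W = rng.normal(size=(C, D))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def targeted_pgd(x, target, W, eps=0.05, alpha=0.005, steps=100):
    """Iterated FGSM toward class `target`: step down the gradient of the
    target-class loss, projecting into the eps-ball after every step."""
    x_adv = x.copy()
    onehot = np.eye(C)[target]
    for _ in range(steps):
        p = softmax(W @ x_adv)
        grad_x = W.T @ (p - onehot)
        x_adv = x_adv - alpha * np.sign(grad_x)   # move *toward* the target
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # keep a valid pixel range
    return x_adv

x = rng.uniform(size=D)
x_adv = targeted_pgd(x, target=2, W=W)
print("pred (clean):", int(np.argmax(W @ x)))
print("pred (adv):  ", int(np.argmax(W @ x_adv)))
```

Untargeted FGSM/PGD only needs the prediction to change; the targeted variant must steer it to one specific class, which is why it is typically run with many small projected steps rather than a single FGSM step.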