We study backdoor attacks in peer-to-peer federated learning systems on
different graph topologies and datasets. We show that attacker nodes making up
only 5% of the network are sufficient to mount a backdoor attack with a 42%
attack success rate, while reducing accuracy on clean data by no more than 2%. We also demonstrate that
the attack can be amplified by the attacker crashing a small number of nodes.
We evaluate defenses proposed in the context of centralized federated learning
and show they are ineffective in peer-to-peer settings. Finally, we propose a
defense that mitigates these attacks by applying different clipping norms to the
model updates received from peers and to the model trained locally by each node.
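The proposed defense can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the threshold names `tau_peer` and `tau_local` are hypothetical, updates are flattened NumPy vectors, and the aggregation shown is a plain average. The key idea is that peer updates are clipped to a tighter L2 norm than the node's own local update.

```python
import numpy as np

def clip_update(update, max_norm):
    """Scale an update down so its L2 norm does not exceed max_norm."""
    norm = np.linalg.norm(update)
    if norm > max_norm:
        update = update * (max_norm / norm)
    return update

def aggregate(local_update, peer_updates, tau_local=1.0, tau_peer=0.5):
    """Average a node's own update with its peers' updates, applying a
    tighter clipping norm to peer updates than to the local one.
    (Hypothetical thresholds; the actual values would be tuned.)"""
    clipped_peers = [clip_update(u, tau_peer) for u in peer_updates]
    clipped_local = clip_update(local_update, tau_local)
    return np.mean([clipped_local] + clipped_peers, axis=0)
```

Clipping peer updates more aggressively bounds how much any single (possibly malicious) neighbor can shift the aggregated model, while the looser local norm preserves the node's own learning signal.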