A simplified convergence theory for Byzantine resilient stochastic gradient descent
In distributed learning, a central server trains a model according to updates provided by nodes holding local data samples. In the presence of one or more malicious nodes sending incorrect information (a Byzantine adversary), standard algorithms for model training such as stochastic gradient descent (SGD) fail to converge.
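The failure mode is easy to see concretely. Below is a minimal sketch (not from the paper, and assuming NumPy) showing how a single Byzantine worker can hijack naive mean aggregation of gradients, while a robust aggregator such as the coordinate-wise median, one common choice in this literature, stays close to the honest gradient.

```python
# Hypothetical illustration: why averaging worker gradients fails under a
# Byzantine node, and how coordinate-wise median aggregation copes.
import numpy as np

rng = np.random.default_rng(0)
d = 5                                   # model dimension
true_grad = np.ones(d)                  # gradient the honest nodes estimate

# 9 honest workers report noisy gradients; 1 Byzantine worker sends garbage.
honest = true_grad + 0.1 * rng.standard_normal((9, d))
byzantine = np.full((1, d), 1e6)        # arbitrarily corrupted update
updates = np.vstack([honest, byzantine])

mean_agg = updates.mean(axis=0)         # naive SGD aggregation
median_agg = np.median(updates, axis=0) # robust aggregation

print("mean  :", mean_agg)    # dominated by the Byzantine update
print("median:", median_agg)  # close to true_grad
```

With the mean, the single corrupted update shifts every coordinate by roughly 1e5, so SGD steps in an arbitrary direction; the median discards the outlier coordinate-wise and recovers an estimate near the true gradient.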