Recently, researchers have shown increased interest in designing distributed generative adversarial networks (GANs) that enhance the generation capability of local agents without violating privacy. Most available studies on distributed GAN architectures have focused only on implementations with a fusion center. In this work, we propose a fully decentralized scheme that employs a diffusion strategy to train a network of GANs. We introduce a team competition problem, which serves as a useful formulation for a network of GANs, and present the competing adaptive networks framework, interpreting the network of GANs as a competition between two teams. We present the convergence analysis of the proposed training approach, proving that the local discriminators cluster around a centroid and that this discriminator centroid converges to a first-order stationary point of an approximation to the distribution similarity measure (the Jensen–Shannon divergence or the Wasserstein distance) between the real and generated data distributions. The generators likewise approach a centroid. We further show that, when the generators and discriminators have enough capacity, the distribution produced by each local generator can converge to the real data distribution.
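The diffusion strategy underlying the proposed scheme follows the standard adapt-then-combine pattern: each agent first takes a local gradient step, then averages the intermediate iterates of its neighbors. A minimal sketch on a toy quadratic objective is given below; the fully connected combination matrix, step size, and per-agent losses are illustrative assumptions, not the paper's GAN objectives.

```python
import numpy as np

# Sketch of a diffusion (adapt-then-combine) update over a network of agents.
# The quadratic losses and fully connected topology below are stand-ins for
# the GAN objectives and graph used in the paper.

rng = np.random.default_rng(0)

N = 4                      # number of agents
A = np.full((N, N), 1 / N) # doubly stochastic combination matrix (fully connected)
mu = 0.1                   # step size

targets = rng.normal(size=(N, 2))  # each agent's local optimum (toy data)
w = rng.normal(size=(N, 2))        # local parameter estimates

def local_grad(w_k, target_k):
    # Gradient of the local quadratic loss 0.5 * ||w - target||^2
    return w_k - target_k

for _ in range(200):
    # Adapt: each agent takes a local gradient step on its own loss.
    psi = np.array([w[k] - mu * local_grad(w[k], targets[k]) for k in range(N)])
    # Combine: each agent averages the intermediate iterates of its neighbors.
    w = A @ psi

# All agents cluster around the centroid of the local optima.
print(np.allclose(w, targets.mean(axis=0), atol=1e-3))
```

With a doubly stochastic combination matrix, the agents' iterates contract toward a common centroid, mirroring the clustering behavior established for the discriminators and generators in the analysis.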
We present simulation results illustrating the performance of the training algorithm for the network of GANs with homogeneous and non-homogeneous datasets on a digit generation task. In the full-information case, we show that the proposed algorithm allows local agents to match the performance of a centralized GAN that has access to all training data. In the partial-information case, we show that the proposed diffusion training algorithm enables agents holding only a limited subset of the data classes to generate fake samples of every class. Finally, we present the conclusions of this work and make recommendations for further research.