Zhouyue Wang 2022

Abstract of PhD Dissertation
Institute of Electrical and Micro Engineering (IEM), EPFL, June 2022
Advisor: Prof. Ali H. Sayed

Decentralized GAN Training through Diffusion

Zhouyue Wang, EPFL


Recently, researchers have shown an increased interest in designing distributed generative adversarial networks (GANs) in order to enhance the generation capability of local agents without violating privacy. Most available studies on distributed GAN architectures have focused only on implementations with a fusion center. In this work, we propose a fully decentralized scheme that employs a diffusion strategy to train a network of GANs. We introduce a team competition problem, which serves as a useful formulation for a network of GANs, and present the competing adaptive networks framework, in which the network of GANs is interpreted as a competition between two teams of agents. We present a convergence analysis of the proposed training approach, proving that the local discriminators cluster around a centroid and that this centroid converges to a first-order stationary point, which serves as an approximation of the distribution similarity measure (Jensen–Shannon divergence or Wasserstein distance) between the real and generated data distributions. The local generators likewise approach a centroid, in a manner analogous to the discriminators. We explain that, when the generators and discriminators have enough capacity, the distribution generated by each local generator can converge to the real data distribution.
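
To make the diffusion-based training procedure concrete, the following is a minimal sketch of one adapt-then-combine (ATC) diffusion step for a network of GANs. It assumes numpy parameter vectors, hypothetical gradient oracles `disc_grad` and `gen_grad` for the local GAN objectives, and a combination matrix `A`; it is an illustration under these assumptions, not the dissertation's actual implementation.

```python
import numpy as np

def diffusion_gan_step(D, G, A, disc_grad, gen_grad, mu_d, mu_g):
    """One adapt-then-combine (ATC) diffusion step for a network of K GANs.

    D, G : (K, dim_d) and (K, dim_g) arrays of local discriminator and
           generator parameters, one row per agent.
    A    : (K, K) combination matrix; A[l, k] is the weight agent k
           assigns to neighbor l (columns sum to one).
    disc_grad(k, D_k, G_k) -> ascent direction of agent k's local GAN
           objective with respect to its discriminator parameters.
    gen_grad(k, D_k, G_k)  -> descent direction with respect to its generator.
    mu_d, mu_g : step sizes for the discriminator and generator updates.
    """
    K = D.shape[0]
    psi_D = np.empty_like(D)
    psi_G = np.empty_like(G)
    # Adapt: each agent takes a local stochastic-gradient step
    # (ascent for its discriminator, descent for its generator).
    for k in range(K):
        psi_D[k] = D[k] + mu_d * disc_grad(k, D[k], G[k])
        psi_G[k] = G[k] - mu_g * gen_grad(k, D[k], G[k])
    # Combine: each agent averages the intermediate iterates of its
    # neighbors using the combination weights A[l, k].
    D_next = A.T @ psi_D
    G_next = A.T @ psi_G
    return D_next, G_next
```

The combine step is what drives the local discriminators and generators toward their respective network centroids, which is the clustering behavior established in the convergence analysis above.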

We present simulation results to illustrate the performance of the training algorithm for the network of GANs with homogeneous and non-homogeneous datasets on a digit-generation task. In the full-information case, we show that the proposed algorithm allows local agents to match the performance of a centralized GAN that has access to all training data. In the partial-information case, we show that the proposed diffusion training algorithm enables agents with access to only a subset of the training data classes to generate samples of all classes. Finally, we present the conclusions of this work and make recommendations for future research.
