Distributed learning deals with the problem of optimizing an aggregate cost function by a network of agents from streaming data. This scenario arises in many contexts, including distributed estimation, machine learning, resource allocation, and the modeling of flocking and swarming behavior in biological networks. Among the available solutions, such as consensus and incremental strategies, the class of diffusion strategies has proven particularly attractive because these techniques are scalable, robust, and fully distributed, and they endow networks with real-time adaptation and learning abilities.
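To make the diffusion mechanism concrete, here is a minimal sketch (not taken from the dissertation) of an adapt-then-combine (ATC) diffusion LMS network in Python/NumPy. The ring topology, uniform combination weights, and all numerical values are illustrative assumptions; each agent first runs a local LMS update on its own streaming data, then averages the intermediate estimates of its neighbors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: N agents estimate a common parameter vector w_o of size M
# from streaming data d_k(i) = u_k(i)^T w_o + noise.
N, M, mu = 10, 4, 0.05
w_o = rng.standard_normal(M)

# Left-stochastic combination matrix A: uniform weights over a ring topology.
A = np.zeros((N, N))
for k in range(N):
    for l in (k - 1, k, (k + 1) % N):
        A[l % N, k] = 1.0
A /= A.sum(axis=0)  # each column sums to one

w = np.zeros((N, M))  # row k holds agent k's estimate
for i in range(2000):
    # Adapt step: each agent performs one LMS update on its own data.
    psi = np.empty_like(w)
    for k in range(N):
        u = rng.standard_normal(M)
        d = u @ w_o + 0.01 * rng.standard_normal()
        psi[k] = w[k] + mu * u * (d - u @ w[k])
    # Combine step: each agent averages its neighbors' intermediate estimates.
    w = A.T @ psi

# Worst-case estimation error across all agents.
err = np.linalg.norm(w - w_o, axis=1).max()
```

With a small step-size such as this, all agents converge to a small neighborhood of `w_o`, illustrating the real-time learning ability that the abstract describes.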
One key challenge in real applications is that networked agents generally face many types of asynchronous imperfections, such as random link failures, random data arrival times, noisy links, random topology changes, agents turning on and off at random, and even drifting objectives. This dissertation provides a detailed analysis of the stability and performance of asynchronous diffusion strategies for solving distributed optimization and adaptation problems over networks in the presence of such imperfections. Conditions are developed to ensure the stability of the mean-square and mean-fourth-order moments of the network error vectors; closed-form expressions are derived to reveal how the network parameters influence the learning behavior; and the performance of the asynchronous networks is then compared against centralized solutions and synchronous networks. One notable conclusion is that the mean-square performance of asynchronous networks degrades only on the order of $\nu$, a small step-size parameter, while the convergence rate remains largely unaltered. A second notable conclusion is that, even under the influence of asynchronous events, all agents in the adaptive network can still reach an $O(\mu^{1+\gamma})$ near-agreement for some constant $\gamma > 0$, while approaching the desired solution within $O(\mu)$ accuracy. These theoretical results provide a solid justification for the remarkable resilience of cooperative networks in the face of random imperfections at multiple levels: agents, links, data arrivals, and topology.
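The asynchronous events above can be simulated with a simple randomized variant of diffusion LMS. The sketch below (an illustration under assumed parameters, not the dissertation's exact model) uses a Bernoulli step-size, so each agent updates only with some probability, which models random data arrivals or agents turning off, and drops each incoming link at random, renormalizing the surviving combination weights:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, mu = 10, 4, 0.05
w_o = rng.standard_normal(M)

# Ring topology neighbor lists (hypothetical example network).
neighbors = [[(k - 1) % N, k, (k + 1) % N] for k in range(N)]

w = np.zeros((N, M))
for i in range(4000):
    psi = np.empty_like(w)
    for k in range(N):
        # Asynchronous adaptation: agent k updates only with probability 0.5
        # (Bernoulli step-size), modeling random data arrivals / idle agents.
        if rng.random() < 0.5:
            u = rng.standard_normal(M)
            d = u @ w_o + 0.01 * rng.standard_normal()
            psi[k] = w[k] + mu * u * (d - u @ w[k])
        else:
            psi[k] = w[k]
    # Asynchronous combination: each incoming link fails with probability 0.3;
    # the surviving weights are renormalized (uniform average over survivors).
    new_w = np.empty_like(w)
    for k in range(N):
        alive = [l for l in neighbors[k] if l == k or rng.random() > 0.3]
        new_w[k] = psi[alive].mean(axis=0)
    w = new_w

err = np.linalg.norm(w - w_o, axis=1).max()
```

Despite agents idling half the time and links failing nearly a third of the time, the network still converges to a small neighborhood of `w_o`, consistent with the resilience conclusions stated above.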
The dissertation also examines a second challenging form of uncertainty, which arises when agents in a network pursue different objectives or observe data that originate from different unknown models. In these cases, indiscriminate cooperation leads to undesired results. A useful adaptive clustering and learning strategy is developed that allows agents to learn which neighbors to trust and which to ignore. The resulting procedure enables agents to identify their grouping and to attain improved learning performance.
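The clustering idea can be sketched as follows. In this illustrative example (the trust test, thresholds, and network are assumptions, standing in for the trust scores developed in the dissertation), two groups of agents observe data from two different models; each agent periodically averages only with neighbors whose intermediate estimates are close to its own, so cooperation stays within the correct cluster:

```python
import numpy as np

rng = np.random.default_rng(2)
N, M, mu = 10, 2, 0.05
# Hypothetical scenario: two clusters of agents observe data from two
# different models; indiscriminate averaging would pull every agent to zero.
models = [np.array([1.0, 1.0]), np.array([-1.0, -1.0])]
group = [0] * 5 + [1] * 5

w = np.zeros((N, M))
for i in range(3000):
    # Adapt step: each agent runs LMS against its own model's data.
    psi = np.empty_like(w)
    for k in range(N):
        u = rng.standard_normal(M)
        d = u @ models[group[k]] + 0.01 * rng.standard_normal()
        psi[k] = w[k] + mu * u * (d - u @ w[k])
    if i % 50 == 49:
        # Clustering step: a simple distance test decides which neighbors to
        # trust; the network is assumed fully connected for simplicity.
        new_w = np.empty_like(w)
        for k in range(N):
            trusted = [l for l in range(N)
                       if np.linalg.norm(psi[l] - psi[k]) < 0.5]
            new_w[k] = psi[trusted].mean(axis=0)
        w = new_w
    else:
        w = psi

# Each group should track its own model rather than a useless average.
err0 = np.linalg.norm(w[:5] - models[0], axis=1).max()
err1 = np.linalg.norm(w[5:] - models[1], axis=1).max()
```

The periodic combination gives the local adaptation time to separate the clusters before any trust decision is made; combining on every iteration from a common initial condition would keep all estimates indistinguishable and defeat the distance test.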
Acknowledgment
This work was supported in part by NSF grants CCF-1011918 and CCF-0942936. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the sponsor.