Inference and Optimization over Multi-Agent Networks

Adaptive networks consist of a collection of agents with adaptation and learning abilities. The agents interact with each other on a local level and diffuse information across the network to solve estimation, inference, and optimization tasks in a distributed manner. Some surprising phenomena arise when information is processed in a decentralized fashion over networked systems. For example, adding more informed agents is not necessarily beneficial to the inference task, and even minor variations in how the agents process information can lead to catastrophic error propagation across the network. In this talk, we elaborate on such phenomena. In particular, we examine the performance of stochastic-gradient learners for global optimization problems. We consider two classes of distributed schemes: consensus strategies and diffusion strategies. We quantify how the mean-square error and the convergence rate of the network vary with the combination policy and with the fraction of informed agents. It will be seen that the performance of the network does not always improve with a larger proportion of informed agents, and a strategy to counter this degradation in performance is presented. We also examine how the order in which information is processed by the agents is critical; minor variations can lead to catastrophic failure even when each agent is able to solve the inference task on its own. To illustrate this effect, we establish that diffusion protocols are mean-square stable regardless of the network topology, whereas consensus networks can become unstable even when all individual agents are stable. These results indicate that information processing over networked systems gives rise to some revealing learning phenomena.
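
As a rough illustration of the distinction discussed above (and not the exact algorithms analyzed in the talk), the following NumPy sketch contrasts a consensus-type update with an adapt-then-combine diffusion update for distributed least-mean-squares estimation of a common parameter. The ring topology, uniform combination weights, step-size, and data model are illustrative assumptions; the key point is the order of operations: diffusion combines the freshly adapted iterates, while consensus adds a gradient step, evaluated at the old iterate, to a combination of the neighbors' old iterates.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Network and data model (illustrative assumptions) --------------------
N, M = 10, 2                           # number of agents, parameter dimension
w_o = rng.standard_normal(M)           # unknown global parameter
mu = 0.05                              # step-size shared by all agents
sigma_v = 0.1                          # observation-noise standard deviation

# Ring topology with self-loops; uniform combination weights over neighbors
A = np.zeros((N, N))
for k in range(N):
    for l in (k - 1, k, k + 1):
        A[k, l % N] = 1.0 / 3.0

def regression_data():
    """One streaming sample per agent: d_k = u_k @ w_o + noise."""
    U = rng.standard_normal((N, M))
    d = U @ w_o + sigma_v * rng.standard_normal(N)
    return U, d

def run(strategy, iters=2000):
    W = np.zeros((N, M))                             # row k: agent k's estimate
    msd = np.empty(iters)
    for i in range(iters):
        U, d = regression_data()
        err = d - np.einsum('km,km->k', U, W)        # per-agent innovation
        grad_step = mu * err[:, None] * U            # stochastic-gradient (LMS) step
        if strategy == 'diffusion':                  # adapt-then-combine (ATC)
            psi = W + grad_step                      # 1) local adaptation
            W = A @ psi                              # 2) combine adapted iterates
        elif strategy == 'consensus':                # combine old iterates, add step
            W = A @ W + grad_step                    # gradient evaluated at old W
        msd[i] = np.mean(np.sum((W - w_o) ** 2, axis=1))
    return msd

for s in ('diffusion', 'consensus'):
    print(s, 'steady-state MSD ~', run(s)[-200:].mean())
```

For small step-sizes both schemes converge in this toy setup; the asymmetry in the consensus update (gradient at the old iterate added to the combined iterates) is what can compromise stability under conditions examined in the talk.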