Inference and Optimization over Networks: It Matters How Information Flows

Adaptive networks consist of a collection of agents with adaptation and learning abilities. The agents interact with each other on a local level and diffuse information across the network to solve estimation, inference, and optimization problems in a distributed manner. Some surprising phenomena arise when information is processed in a decentralized fashion over networks. For example, collecting more information across the agents is not always beneficial to the inference task, and even minor variations in how information is processed by the agents can lead to catastrophic error propagation across the graph. In this talk, we elaborate on such phenomena. In particular, we examine the performance of stochastic-gradient learners for global optimization problems. We consider two well-studied classes of distributed schemes: consensus strategies and diffusion strategies. We quantify how the mean-square error and the convergence rate of the network vary with the combination policy and with the fraction of informed agents. It will be seen that the performance of the network does not necessarily improve with a larger proportion of informed agents. A strategy to counter the degradation in performance is presented. We also examine how the order in which information is processed by the agents is critical; minor variations can lead to catastrophic failure even when the agents are able to solve the inference task individually. To illustrate this effect, we will establish that diffusion protocols are mean-square stable regardless of the network topology. In contrast, consensus networks can become unstable even if all individual agents are stable. These results indicate that information processing over networks leads to richer dynamics than originally thought, along with some revealing learning phenomena.
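The ordering difference between the two classes of schemes can be made concrete with a small sketch. The following is a minimal illustrative example, not taken from the talk: it assumes a distributed least-mean-squares (LMS) setting in which all agents estimate a common model over a ring topology with uniform (doubly stochastic) combination weights. In the diffusion (adapt-then-combine) update, each agent first takes its local stochastic-gradient step and then combines the *updated* intermediate iterates of its neighbors; in the consensus update, the combination and the gradient step are both evaluated at the *previous* iterates. All parameter values here (step-size, topology, noise level) are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, mu, iters = 10, 2, 0.02, 2000   # agents, model dimension, step-size, iterations
w_true = rng.standard_normal(M)        # common model to be estimated by all agents

# Ring topology; uniform weights over each agent's 3-node neighborhood
# yield a symmetric, doubly stochastic combination matrix A.
A = np.zeros((N, N))
for k in range(N):
    for l in ((k - 1) % N, k, (k + 1) % N):
        A[l, k] = 1.0 / 3.0

def run(strategy):
    W = np.zeros((N, M))  # row k holds agent k's current estimate
    for _ in range(iters):
        U = rng.standard_normal((N, M))                 # regressors, one row per agent
        d = U @ w_true + 0.1 * rng.standard_normal(N)   # noisy scalar measurements
        e = d - np.sum(U * W, axis=1)                   # per-agent prediction errors
        if strategy == "diffusion":
            # Adapt-then-combine: local gradient step first, ...
            Psi = W + mu * e[:, None] * U
            # ... then each agent averages its neighbors' updated iterates.
            W = A.T @ Psi
        else:
            # Consensus: neighborhood averaging and the gradient step
            # both use the previous iterates simultaneously.
            W = A.T @ W + mu * e[:, None] * U
    return W

for strategy in ("diffusion", "consensus"):
    W = run(strategy)
    err = np.linalg.norm(W - w_true, axis=1).mean()
    print(strategy, "mean deviation:", round(err, 3))
```

With a small common step-size both strategies converge here; the stability gap discussed in the talk shows up under less benign conditions (e.g., heterogeneous step-sizes or adverse topologies), where the consensus recursion can diverge while the diffusion recursion remains mean-square stable.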