(August 31, 2022)


Xian, China (lecture given remotely by Zoom)

Social Machine Learning

A wide range of learning algorithms are available in the literature, including sophisticated structures based on feedforward, recurrent, or convolutional neural networks. The performance of these architectures matches or exceeds human performance in many important applications. However, they are susceptible to adversarial attacks that can drive them to erroneous decisions under minimal perturbations. They are also often trained with data arising from homogeneous statistical distributions. And, once trained, the internal structure of these systems remains fixed and is expected to deliver reliable decisions thereafter. For all practical purposes, learning is turned off following training. Contrast these situations with learning by humans: they learn from different types of data, and even minimal clues are sufficient in many instances. Humans are also more difficult to fool with small perturbations, and they continue to learn and accumulate experiences over time.

Motivated by these considerations, we will discuss one architecture for learning that exploits important characteristics of social interactions. We refer to the new framework as Social Machine Learning, and it consists of two main connected blocks. One block represents the memory component of the learning machine: it learns the underlying clues, stores them, and regularly updates them. A second block represents the processing component and consists of a graph structure linking the various clue models. This block performs classification by exploiting repeated social interactions, and it takes advantage of the “wisdom of the crowd” for added robustness. Analyses based on statistical learning theory indicate that, under reasonable conditions, the social machine learning structure can learn with high confidence and handle heterogeneity in data more gracefully.
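
To make the processing block more concrete, the short Python sketch below (not part of the talk itself) illustrates one canonical social learning rule of the kind such a graph-based component can build on: each agent fuses its local clue model with its current belief, then geometrically averages the log-beliefs of its neighbors over the graph. The Gaussian clue models, the ring topology, and the combination weights are illustrative assumptions only, not the specific construction used in the Social Machine Learning framework.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative setup (assumed): 5 agents, 3 classes. Each agent's "clue model"
    # is a per-class Gaussian likelihood for its own scalar observations.
    num_agents, num_classes = 5, 3
    class_means = np.array([-1.0, 0.0, 1.0])   # assumed class-conditional means
    true_class = 2                             # class generating the data

    # Doubly stochastic combination matrix over a ring graph (assumption).
    A = np.zeros((num_agents, num_agents))
    for k in range(num_agents):
        A[k, k] = 0.5
        A[k, (k - 1) % num_agents] = 0.25
        A[k, (k + 1) % num_agents] = 0.25

    # Start from uniform beliefs over the classes at every agent.
    beliefs = np.full((num_agents, num_classes), 1.0 / num_classes)

    def local_likelihoods(x):
        """Per-class Gaussian likelihoods of a scalar observation x."""
        return np.exp(-0.5 * (x - class_means) ** 2)

    for _ in range(100):
        # Each agent receives a private observation generated by the true class.
        obs = class_means[true_class] + rng.standard_normal(num_agents)
        # Adapt: fuse the private likelihood with the current belief (Bayes-like step).
        psi = beliefs * np.array([local_likelihoods(x) for x in obs])
        psi /= psi.sum(axis=1, keepdims=True)
        # Combine: geometric (log-linear) pooling of neighbors' intermediate beliefs.
        beliefs = np.exp(A @ np.log(np.clip(psi, 1e-300, None)))
        beliefs /= beliefs.sum(axis=1, keepdims=True)

    # After repeated interactions, all agents are expected to agree on the true class.
    print("Per-agent decisions:", beliefs.argmax(axis=1))

The log-linear pooling keeps the update multiplicative in the local likelihoods, which is one simple way for a network to aggregate the agents' clue models and realize the "wisdom of the crowd" effect described above.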