Enabling Asymptotic Truth Learning in a Network

Jie Gao, Speaker
Rutgers University
 
Tuesday, Aug 6: 8:35 AM - 9:00 AM
Invited Paper Session 
Oregon Convention Center 

Description

Consider a network of distributed agents that all want to guess the correct value of some ground truth state. In a sequential order, each agent makes its decision using a single private signal, which is erroneous with a constant probability, together with the observed actions of its network neighbors that decided earlier in the order. We are interested in enabling network-wide asymptotic truth learning: in a network of n agents, almost all agents make a correct prediction with probability approaching one as n goes to infinity. In this paper we study decision orderings carefully crafted with respect to the graph topology, as well as sufficient or necessary conditions for a graph to support such a good ordering.
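
To make the sequential decision process concrete, below is a minimal simulation sketch. It assumes a simple majority-style rule in which each agent weighs its private signal and the observed actions of earlier-deciding neighbors equally, with ties broken by the private signal; the function simulate_truth_learning, the signal accuracy p, and the tie-breaking rule are illustrative assumptions, not the decision rule analyzed in the paper.

```python
import random
import networkx as nx

def simulate_truth_learning(G, order, p=0.7, truth=1, seed=None):
    """Sequentially simulate decisions on graph G under the given ordering.

    Each agent receives a private signal equal to the ground truth with
    probability p, observes the earlier decisions of its neighbors, and
    decides by a simple majority over its private signal plus the observed
    actions (ties broken by the private signal). This rule is an assumed
    baseline for illustration, not necessarily the rule in the paper.
    """
    rng = random.Random(seed)
    decisions = {}
    for agent in order:
        signal = truth if rng.random() < p else 1 - truth
        observed = [decisions[v] for v in G.neighbors(agent) if v in decisions]
        votes = observed + [signal]
        ones = sum(votes)
        if ones > len(votes) / 2:
            decisions[agent] = 1
        elif ones < len(votes) / 2:
            decisions[agent] = 0
        else:
            decisions[agent] = signal  # tie: fall back on the private signal
    correct = sum(1 for d in decisions.values() if d == truth)
    return correct / G.number_of_nodes()

# Example: fraction of correct agents on an Erdős-Rényi graph, random order.
G = nx.erdos_renyi_graph(n=2000, p=0.01, seed=0)
order = list(G.nodes())
random.Random(0).shuffle(order)
print(simulate_truth_learning(G, order, p=0.7, seed=1))
```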

We first show that on a sparse graph, asymptotic truth learning does not happen under a random ordering. We then present a rather modest sufficient condition that enables asymptotic truth learning. With the help of this condition we characterize graphs generated from the Erdős-Rényi model and the preferential attachment model. In an Erdős-Rényi graph, unless the graph is super sparse (with O(n) edges) or super dense (with Ω(n^2) edges), there exists a decision ordering that supports asymptotic truth learning. Similarly, any preferential attachment network with a constant number of edges per node can achieve asymptotic truth learning under a carefully designed ordering. We also evaluate a variant of the decision ordering on different network topologies and demonstrate a clear improvement in truth learning over random orderings.
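
As an illustration of how an ordering can be tailored to topology, the sketch below reuses simulate_truth_learning from the sketch above and compares a random ordering with a hypothetical degree-aware ordering in which low-degree agents decide first, so that hubs deciding later can aggregate many earlier actions. This heuristic is an assumption made for illustration, not the ordering constructed in the paper.

```python
def degree_aware_order(G):
    """Illustrative heuristic: let low-degree agents decide first so that
    high-degree agents, deciding later, can aggregate many earlier actions.
    A stand-in for a topology-aware ordering, not the paper's construction."""
    return sorted(G.nodes(), key=G.degree)

# Compare a random ordering with the degree-aware one on the same graph,
# using simulate_truth_learning and the graph G from the sketch above.
random_order = list(G.nodes())
random.Random(0).shuffle(random_order)
print("random:      ", simulate_truth_learning(G, random_order, p=0.7, seed=2))
print("degree-aware:", simulate_truth_learning(G, degree_aware_order(G), p=0.7, seed=2))
```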