SCS Faculty Candidate

  • Ph.D. Student
  • Laboratory for Information and Decision Systems
  • Massachusetts Institute of Technology

Fast and Slow Learning from Reviews

The amount of goods and services transacted on online platforms is set to grow severalfold over the next decade. These platforms face several critical challenges in creating seamless interactions between diverse sellers and service providers, often of unknown provenance, on the one hand, and millions of users on the other. Chief among these challenges is how to provide reliable information about the reputation of sellers and the quality of the goods and services they provide. Although peer reviews and recommendations have emerged as the dominant approach to this problem, the properties of different rating systems and the incentives facing users are poorly understood. The reliability of the information these rating systems provide is complicated both by potential bias in reviews and by the fact that users who choose to leave reviews are not representative of the average user.

In this talk, we investigate these issues theoretically, showing that, at least under some ideal conditions, the information provided by many realistic rating systems can enable users to obtain accurate information about product quality. We do this by developing a model of dynamic Bayesian learning and then studying how well (and how rapidly) consumer information is aggregated by various rating systems. The model considers a sequence of potential customers who decide whether or not to join the platform. Upon joining, each new customer is presented with the rating system's summary of previous reviews. After observing the ratings of a product, and conditional on her ex ante valuation, a customer decides whether or not to make a purchase. If she purchases, the true quality of the product, her ex ante valuation, an ex post idiosyncratic preference, and the price of the product determine her overall satisfaction. Given the platform's rating system, she decides whether to leave a review as a function of her overall satisfaction (leaving no review if she does not have a strong preference). Learning dynamics are complicated by what we refer to as the selection effect: the reviews of customers who purchase a given good depend on the information available to them at the time of purchase, and are thus biased.
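The arrival-purchase-review dynamics can be sketched in a short simulation. The talk does not specify functional forms, so every parameter and distribution below (uniform ex ante valuations, a Gaussian taste shock, a thumbs-up/thumbs-down rating system with a review threshold, and a naive point estimate in place of the full Bayesian posterior) is an illustrative assumption, not part of the model as stated:

```python
import random

# Illustrative parameters -- all values here are assumptions for the sketch.
TRUE_QUALITY = 0.7       # unknown to customers
PRICE = 0.5
NEUTRAL = 0.5            # satisfaction level at which a customer is indifferent
REVIEW_THRESHOLD = 0.2   # only strong (dis)satisfaction triggers a review

def simulate(n_customers, seed=0):
    """Run the arrival sequence: each customer sees the summary of past
    reviews, decides whether to purchase, and reviews only if her
    satisfaction is far enough from neutral."""
    rng = random.Random(seed)
    positives, negatives = 0, 0
    for _ in range(n_customers):
        # Summary shown by the rating system: the fraction of positive
        # reviews so far (a Bayesian customer would form a full posterior;
        # this point estimate keeps the sketch short).
        total = positives + negatives
        estimate = positives / total if total else 0.5
        valuation = rng.uniform(0.0, 1.0)      # ex ante valuation
        if estimate + valuation < PRICE:       # purchase decision
            continue                           # no purchase, hence no review
        taste = rng.gauss(0.0, 0.1)            # ex post idiosyncratic preference
        satisfaction = TRUE_QUALITY + valuation + taste - PRICE
        # Selection effect: only purchasers can review, and their purchase
        # decision already depended on the information shown to them.
        if satisfaction > NEUTRAL + REVIEW_THRESHOLD:
            positives += 1
        elif satisfaction < NEUTRAL - REVIEW_THRESHOLD:
            negatives += 1
    return positives, negatives
```

Even in this toy version, the fraction of positive reviews need not equal the true quality: who purchases, and who feels strongly enough to review, both filter the underlying satisfaction distribution, which is exactly the bias the selection effect captures.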

Despite this challenge, we show that Bayesian learning ensures that, as the number of potential users grows, the assessment of the underlying quality converges almost surely to the true quality of the good. More importantly, we provide a tight characterization of the speed of learning under several different types of rating systems. We then show that the platform's revenues are higher under a rating system with a higher learning speed, confirming that the platform's incentives are in fact aligned with accelerating learning. Using these results, we study the design of rating systems in terms of their information collection and dissemination structure. In particular, we show that providing more information does not always lead to faster learning, but strictly finer rating systems always do. We also illustrate how different rating systems, facing the same distribution of preferences, can lead to very fast or very slow learning.
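The intuition that a strictly finer rating system speeds up learning can be illustrated with a stylized Bayesian calculation. The sketch below is not the talk's model: it drops the endogenous purchase decision and treats each review as an i.i.d. coarsened observation of satisfaction, and the Gaussian noise model, uniform grid prior, and cutpoint values are all illustrative assumptions. A rating system is represented by its cutpoints; a finer system refines the coarser one's partition:

```python
import math
import random

def posterior_sd(n_signals, cutpoints, true_q=0.7, sigma=0.3, seed=1):
    """Grid posterior over quality after n_signals coarsened observations.
    Each customer's satisfaction is true_q plus Gaussian noise; the rating
    system reports only which interval between the cutpoints it falls in.
    Returns the posterior standard deviation, a proxy for the remaining
    uncertainty about quality (smaller = faster learning)."""
    rng = random.Random(seed)
    grid = [i / 100 for i in range(101)]      # candidate quality levels
    logp = [0.0] * len(grid)                  # uniform prior over the grid
    bins = [-math.inf] + sorted(cutpoints) + [math.inf]

    def cdf(x):  # standard normal CDF
        return 0.5 * (1 + math.erf(x / math.sqrt(2)))

    for _ in range(n_signals):
        s = true_q + rng.gauss(0.0, sigma)          # latent satisfaction
        k = sum(1 for c in cutpoints if s > c)      # reported rating bin
        for i, q in enumerate(grid):
            # Likelihood of observing this bin if the true quality were q.
            p = cdf((bins[k + 1] - q) / sigma) - cdf((bins[k] - q) / sigma)
            logp[i] += math.log(max(p, 1e-300))

    m = max(logp)
    weights = [math.exp(l - m) for l in logp]
    z = sum(weights)
    mean = sum(q * w for q, w in zip(grid, weights)) / z
    var = sum((q - mean) ** 2 * w for q, w in zip(grid, weights)) / z
    return math.sqrt(var)
```

Comparing, say, `posterior_sd(400, [0.5])` (a binary thumbs system) with `posterior_sd(400, [0.25, 0.5, 0.75])` (a strictly finer four-level system) on the same satisfaction draws shows the finer partition leaving less posterior uncertainty after the same number of reviews, consistent with the result that strictly finer rating systems learn faster.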

Ali Makhdoumi is a Ph.D. student in the MIT Laboratory for Information and Decision Systems (LIDS), advised by Prof. Asuman Ozdaglar and Prof. Daron Acemoglu. He is broadly interested in learning theory, optimization, game theory, and network science with applications to social and technological systems. More specifically, he has been working on the role of information provision in several contexts such as online rating systems and traffic information systems.

He received his undergraduate degree from Sharif University of Technology, majoring in Pure Mathematics and Electrical Engineering.

Faculty Host:  Nina Balcan (ML)
