LSE Theses Online, London School of Economics and Political Science

Statistical inference for some choice models

Zhou, Kaifang (2023) Statistical inference for some choice models. PhD thesis, London School of Economics and Political Science.

Text - Submitted Version (1MB)
Identification Number: 10.21953/lse.00004515

Abstract

This thesis comprises two chapters that study statistical inference problems for two types of choice models: the discrete voter model and the Bradley-Terry family of models.

In Chapter 1, we consider a discrete-time voter model process on a set of nodes, each in one of two states, 0 or 1. In each time step, every node adopts the state of a randomly sampled neighbour according to sampling probabilities referred to as node interaction parameters. We study maximum likelihood estimation of the node interaction parameters from node states observed over a given number of realizations of the voter model process. We derive parameter estimation error bounds by interpreting the observation data as generated by an extended voter process consisting of cycles, each corresponding to a realization of the voter model process run until absorption in a consensus state. We present new bounds for all moments of the consensus time, together with a probability tail bound, as well as a sample complexity lower bound for parameter estimation within a prescribed error tolerance for the class of locally stable estimators.

In Chapter 2, we study popular methods for inference of Bradley-Terry model parameters, namely gradient descent and the MM algorithm, for maximum likelihood and maximum a posteriori probability estimation. This class of models includes the Bradley-Terry model of paired comparisons, the Rao-Kupper model of paired comparisons allowing for tie outcomes, the Luce choice model, and the Plackett-Luce ranking model. We propose a simple modification of classical gradient descent and the MM algorithm, with a parameter rescaling performed at each iteration, that avoids the slow convergence observed in our previous work (Vojnovic et al. [2020]). We also study the convergence rates of accelerated gradient descent and MM algorithms for Bradley-Terry models, and present experimental results on synthetic and real-world data showing that significant efficiency gains can be obtained by our proposed method.
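The voter model dynamics described for Chapter 1 can be sketched as a short simulation. This is a hypothetical illustration only, not the thesis's code: the function name, the representation of the node interaction parameters as a row-stochastic matrix `P`, and the synchronous update are assumptions made for the sketch.

```python
import numpy as np

def simulate_voter_model(P, x0, rng, max_steps=10_000):
    """Run a discrete-time voter model until consensus (or max_steps).

    P  : (n, n) row-stochastic matrix of node interaction parameters;
         P[i, j] is the probability that node i samples node j.
    x0 : initial 0/1 state vector of length n.
    Returns the trajectory of state vectors; the last one is the
    consensus state if absorption occurred within max_steps.
    """
    n = len(x0)
    x = np.array(x0, dtype=int)
    trajectory = [x.copy()]
    for _ in range(max_steps):
        if x.min() == x.max():          # consensus (all 0 or all 1)
            break
        # Each node simultaneously adopts the state of a neighbour
        # sampled according to its row of interaction parameters.
        sampled = np.array([rng.choice(n, p=P[i]) for i in range(n)])
        x = x[sampled]
        trajectory.append(x.copy())
    return trajectory

rng = np.random.default_rng(0)
n = 5
P = np.full((n, n), 1.0 / n)            # uniform interaction parameters
traj = simulate_voter_model(P, [1, 0, 1, 0, 0], rng)
final = traj[-1]
```

Each run of this loop until absorption corresponds to one "cycle" of the extended voter process from which the abstract's estimation error bounds are derived.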
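For Chapter 2, the classical MM update for the Bradley-Terry model with a per-iteration rescaling can be sketched as follows. This is a minimal illustration under stated assumptions: the rescaling used here (fixing the geometric mean of the iterates to 1) is one natural choice, and may differ from the specific rescaling proposed in the thesis; the function name and win-matrix encoding are likewise assumptions.

```python
import numpy as np

def mm_bradley_terry(wins, n_iters=200, tol=1e-10):
    """Classical MM iterations for Bradley-Terry strength parameters,
    with a rescaling applied at each step.

    wins : (n, n) matrix with wins[i, j] = number of times item i
           beat item j (zero diagonal). Every item must have >= 1 win.
    Returns the estimated strength vector w.
    """
    n = wins.shape[0]
    total = wins + wins.T                # comparisons per pair
    w = np.ones(n)
    for _ in range(n_iters):
        w_new = np.empty(n)
        for i in range(n):
            mask = total[i] > 0
            # Standard MM denominator: sum_j n_ij / (w_i + w_j).
            denom = np.sum(total[i, mask] / (w[i] + w[mask]))
            w_new[i] = wins[i].sum() / denom
        # Rescaling step: pin the iterates to a fixed scale
        # (geometric mean 1) to remove the scale-invariance drift.
        w_new /= np.exp(np.mean(np.log(w_new)))
        if np.max(np.abs(w_new - w)) < tol:
            w = w_new
            break
        w = w_new
    return w

# Toy data: item 0 is strongest, item 2 weakest.
wins = np.array([[0, 3, 2],
                 [1, 0, 2],
                 [1, 1, 0]])
w = mm_bradley_terry(wins)
```

Because Bradley-Terry likelihoods are invariant to a common scaling of all strengths, the iterates are only identified up to scale; normalizing at every step keeps them in a fixed region, which is the kind of rescaling the abstract credits with avoiding slow convergence.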

Item Type: Thesis (PhD)
Additional Information: © 2023 Kaifang Zhou
Library of Congress subject classification: Q Science > QA Mathematics
Sets: Departments > Statistics
Supervisor: Vojnovic, Milan and Chen, Yining
URI: http://etheses.lse.ac.uk/id/eprint/4515
