Seminar by Cong Ma, University of Chicago

Top-K Ranking with a Monotone Adversary
Event Date & Time
Event Location
150 Ford Hall

224 Church St SE
Minneapolis, MN 55455

Abstract 

In this talk, we address the top-$K$ ranking problem with a monotone adversary. We consider the scenario where a comparison graph is randomly generated and the adversary is allowed to add arbitrary edges. The statistician's goal is then to accurately identify the top-$K$ preferred items based on pairwise comparisons derived from this semi-random comparison graph. The main contribution of this work is a weighted maximum likelihood estimator (MLE) that achieves near-optimal sample complexity, up to a $\log^2(n)$ factor, where $n$ denotes the number of items under comparison. This is made possible through a combination of analytical and algorithmic innovations. On the analytical front, we provide a refined $\ell_\infty$ error analysis of the weighted MLE that is more explicit and tighter than existing analyses, relating the $\ell_\infty$ error to the spectral properties of the weighted comparison graph. Motivated by this analysis, our algorithmic innovation is a semidefinite-program-based (SDP-based) approach that reweights the semi-random graph so that it meets the specified spectral properties. Additionally, we propose a first-order method based on the Matrix Multiplicative Weight Update (MMWU) framework, which solves the resulting SDP in nearly linear time in the size of the semi-random comparison graph.
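To make the setup concrete, the sketch below illustrates what a weighted MLE for pairwise-comparison data looks like under the Bradley-Terry-Luce model that is standard in this literature. The model choice, the plain gradient-descent solver, and the uniform edge weights (standing in for the SDP-derived reweighting discussed in the talk) are illustrative assumptions, not the speaker's implementation.

```python
# A minimal sketch of a *weighted* MLE for top-K ranking under the
# Bradley-Terry-Luce model; the model, solver, and uniform weights are
# illustrative assumptions, not the method presented in the talk.
import numpy as np

def weighted_bt_mle(n, edges, y, w, step=0.1, iters=2000):
    """Estimate Bradley-Terry scores theta from weighted pairwise comparisons.

    n     : number of items
    edges : list of (i, j) pairs that were compared
    y     : y[e] = empirical fraction of wins of i over j on edge e
    w     : w[e] = nonnegative weight on edge e (in the talk these come
            from an SDP; here they are simply inputs)
    """
    theta = np.zeros(n)
    edges = np.asarray(edges)
    y = np.asarray(y, dtype=float)
    w = np.asarray(w, dtype=float)
    i_idx, j_idx = edges[:, 0], edges[:, 1]
    for _ in range(iters):
        # P(i beats j) under the current scores
        p = 1.0 / (1.0 + np.exp(-(theta[i_idx] - theta[j_idx])))
        # gradient of the weighted negative log-likelihood, one term per edge
        g_edge = w * (p - y)
        grad = np.zeros(n)
        np.add.at(grad, i_idx, g_edge)
        np.add.at(grad, j_idx, -g_edge)
        theta -= step * grad
        theta -= theta.mean()  # remove the global shift ambiguity
    return theta

def top_k(theta, K):
    """Indices of the K items with the largest estimated scores."""
    return np.argsort(-theta)[:K]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, K = 6, 2
    true_theta = np.linspace(1.0, 0.0, n)
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)]
    # 50 comparisons per edge; uniform weights stand in for the SDP reweighting
    y = [rng.binomial(50, 1 / (1 + np.exp(-(true_theta[i] - true_theta[j])))) / 50
         for i, j in edges]
    w = np.ones(len(edges))
    theta_hat = weighted_bt_mle(n, edges, y, w)
    print("estimated top-%d items:" % K, top_k(theta_hat, K))
```

The point of the reweighting in the talk is that the adversarially added edges can distort the spectrum of the comparison graph; choosing the weights `w` via an SDP restores the spectral properties that the $\ell_\infty$ analysis requires, whereas the uniform weights above are only a placeholder.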

Bio 

Cong Ma is an assistant professor in the Department of Statistics at the University of Chicago. He obtained his PhD from Princeton University in 2020, advised by Professor Yuxin Chen and Professor Jianqing Fan, and then spent a year as a postdoc at UC Berkeley advised by Professor Martin Wainwright. Cong is broadly interested in the mathematics of data science, with a focus on the interplay between statistics and optimization. More recently, he has been excited about problems arising from transfer learning, reinforcement learning, and low-rank matrix recovery.
