Distributed Learning for Sketched Kernel Regression - Associate Professor Heng Lian (练恒), Department of Mathematics, City University of Hong Kong

Posted: 2021-09-23


Title: Distributed Learning for Sketched Kernel Regression

Speaker: Associate Professor Heng Lian (练恒), Department of Mathematics, City University of Hong Kong

Time: September 23, 2021, 14:00-16:00

Venue: Online via Tencent Meeting (Meeting ID: 411 239 246)

Abstract: We study distributed learning for regularized least squares regression in a reproducing kernel Hilbert space (RKHS). The divide-and-conquer strategy is a frequently used approach for dealing with very large datasets: it computes an estimator on each subset and then averages the estimators. The existing theoretical constraint on the number of subsets implies that the size of each subset can still be large. Random sketching can therefore be used to produce the local estimator on each subset, further reducing the computation compared to vanilla divide-and-conquer. In this setting, sketching and divide-and-conquer are complementary in dealing with the large sample size. We show that optimal learning rates can be retained. Simulations are performed to compare the sketched and non-sketched divide-and-conquer methods.
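As a rough illustration of the strategy described in the abstract (not the speaker's implementation), the following NumPy sketch combines divide-and-conquer with a subsampling (Nyström-type) sketch: each subset fits a kernel ridge regressor whose coefficients are restricted to the span of a few randomly chosen kernel columns, and the local estimators are averaged. All function names and parameter choices here are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    """Gaussian (RBF) kernel matrix between rows of X and rows of Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def sketched_krr_fit(X, y, lam=1e-3, s=20, gamma=1.0, rng=None):
    """Fit KRR on one subset with a subsampling sketch: the estimator lives in
    the span of s randomly selected kernel columns (landmark points)."""
    rng = np.random.default_rng() if rng is None else rng
    n = X.shape[0]
    idx = rng.choice(n, size=min(s, n), replace=False)
    Z = X[idx]                      # landmark points defining the sketch
    Kns = rbf_kernel(X, Z, gamma)   # n x s kernel block
    Kss = rbf_kernel(Z, Z, gamma)   # s x s kernel block
    # Normal equations of the sketched regularized least-squares problem:
    # min_beta ||Kns beta - y||^2 + n*lam * beta^T Kss beta
    A = Kns.T @ Kns + n * lam * Kss + 1e-10 * np.eye(len(idx))
    beta = np.linalg.solve(A, Kns.T @ y)
    return Z, beta

def dc_sketched_krr(X, y, m=4, **kw):
    """Divide-and-conquer: fit a sketched KRR on each of m subsets,
    then average the local predictions."""
    parts = np.array_split(np.arange(len(X)), m)
    models = [sketched_krr_fit(X[p], y[p], **kw) for p in parts]
    gamma = kw.get("gamma", 1.0)
    def predict(Xnew):
        preds = [rbf_kernel(Xnew, Z, gamma) @ b for Z, b in models]
        return np.mean(preds, axis=0)
    return predict

# Toy usage: noisy sine regression on 800 points, 4 subsets, 40 landmarks each
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(800, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(800)
predict = dc_sketched_krr(X, y, m=4, lam=1e-4, s=40, gamma=1.0, rng=rng)
Xtest = np.linspace(-3, 3, 200)[:, None]
mse = np.mean((predict(Xtest) - np.sin(Xtest[:, 0])) ** 2)
```

Each subset solves only an s x s linear system instead of a full n x n one, so the sketch reduces the per-subset cost on top of the reduction already obtained by splitting the data.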

About the speaker: Heng Lian is an Associate Professor in the Department of Mathematics at City University of Hong Kong. He received B.S. degrees in Mathematics and Computer Science from the University of Science and Technology of China in 2000, and an M.S. in Computer Science, an M.A. in Economics, and a Ph.D. in Applied Mathematics from Brown University in 2007. He has held positions at Nanyang Technological University (Singapore), the University of New South Wales (Australia), and City University of Hong Kong. His research interests include high-dimensional data analysis, functional data analysis, and machine learning. He has published more than 30 papers in international journals, including the Journal of the Royal Statistical Society, Series B and the Journal of the American Statistical Association.


Organizers: ebet官网, Institute of Interdisciplinary Research; National Center for Applied Mathematics, Beijing; Beijing Society of Applied Statistics