April 9, 2024, 4:43 a.m. | Yue Kang, Cho-Jui Hsieh, Thomas C. M. Lee

cs.LG updates on arXiv.org

arXiv:2302.09440v3 Announce Type: replace
Abstract: In stochastic contextual bandits, an agent sequentially takes actions from a time-dependent action set based on past experience in order to minimize the cumulative regret. As with many other machine learning algorithms, the performance of bandit algorithms depends heavily on the values of their hyperparameters, and theoretically derived parameter values can lead to unsatisfactory results in practice. Moreover, it is infeasible to use offline tuning methods such as cross-validation to choose hyperparameters in the bandit environment, as the decisions should be …
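To make the hyperparameter issue concrete, below is a minimal sketch (not the paper's method) of a LinUCB-style contextual bandit whose exploration weight `alpha` must be fixed in advance. The function names, the `alpha` parameter, and the data layout are illustrative assumptions; the point is simply that `alpha` governs the exploration-exploitation trade-off, yet there is no offline validation set on which to tune it, since rewards are only observed for the actions actually taken.

```python
# Illustrative sketch only (assumed names, not from the paper): a LinUCB-style
# contextual bandit where the exploration weight `alpha` is a hyperparameter
# that must be chosen before any data are seen.
import numpy as np

def linucb_select(A, b, contexts, alpha):
    """Pick one arm from the current (time-dependent) action set.

    A, b     -- ridge-regression statistics (d x d matrix, length-d vector)
    contexts -- array of shape (n_arms, d): feature vector of each arm
    alpha    -- exploration hyperparameter; theoretically derived values are
                often too conservative in practice, and cross-validation is
                unavailable because the data arrive online.
    """
    A_inv = np.linalg.inv(A)
    theta = A_inv @ b  # current estimate of the reward parameter
    # Upper confidence bound per arm: predicted reward + alpha * uncertainty
    ucb = contexts @ theta + alpha * np.sqrt(
        np.einsum("ij,jk,ik->i", contexts, A_inv, contexts)
    )
    return int(np.argmax(ucb))

def linucb_update(A, b, x, reward):
    """Update the statistics with the chosen arm's context and observed reward."""
    A += np.outer(x, x)
    b += reward * x
    return A, b
```

In a typical run, `A` is initialized to the identity matrix and `b` to zeros, then `linucb_select` and `linucb_update` are called once per round; changing `alpha` changes which arms get explored, which is exactly the quantity the paper proposes to tune online rather than fix a priori.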

