Oct. 17, 2022, 1:14 a.m. | Cian Eastwood, Alexander Robey, Shashank Singh, Julius von Kügelgen, Hamed Hassani, George J. Pappas, Bernhard Schölkopf

stat.ML updates on arXiv.org

Domain generalization (DG) seeks predictors that perform well on unseen test
distributions by leveraging data drawn from multiple related training
distributions or domains. To achieve this, DG is commonly formulated as an
average- or worst-case problem over the set of possible domains. However,
predictors that perform well on average lack robustness, while predictors that
perform well in the worst case tend to be overly conservative. To address this,
we propose a new probabilistic framework for DG where the goal is to …
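
To make the contrast between the three objectives concrete, here is a minimal, illustrative sketch (not the paper's method): assuming a fixed predictor with hypothetical per-domain empirical risks, it compares the average-case objective, the worst-case objective, and an alpha-quantile of the risk distribution over domains, which interpolates between the two.

```python
import numpy as np

# Hypothetical per-domain empirical risks for a fixed predictor,
# one value per training domain (illustrative numbers only).
domain_risks = np.array([0.12, 0.08, 0.35, 0.10, 0.22])

# Average-case objective: the mean risk across domains.
avg_risk = domain_risks.mean()

# Worst-case objective: the maximum risk across domains.
worst_risk = domain_risks.max()

# Quantile objective: the alpha-quantile of the per-domain risks.
# Lower alpha behaves like the average case; alpha -> 1 recovers the worst case.
alpha = 0.9
quantile_risk = np.quantile(domain_risks, alpha)

print(f"average risk:     {avg_risk:.3f}")
print(f"worst-case risk:  {worst_risk:.3f}")
print(f"{alpha:.0%}-quantile risk: {quantile_risk:.3f}")
```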

arxiv quantile risk
