March 15, 2024, 4:41 a.m. | Prithviraj Manivannan, Ivo Pascal de Jong, Matias Valdenegro-Toro, Andreea Ioana Sburlea

cs.LG updates on arXiv.org arxiv.org

arXiv:2403.09228v1 Announce Type: new
Abstract: Uncertainty Quantification aims to determine when the prediction from a Machine Learning model is likely to be wrong. Computer Vision research has explored methods for determining epistemic uncertainty (also known as model uncertainty), which should correspond to generalisation error. In theory, these methods make it possible to predict misclassifications due to inter-subject variability. We applied a variety of Uncertainty Quantification methods to predict misclassifications for a Motor Imagery Brain-Computer Interface. Deep Ensembles performed best, both in terms …
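
For readers unfamiliar with the Deep Ensembles approach the abstract highlights, the sketch below shows the usual recipe: average the softmax outputs of several independently trained networks and use the predictive entropy of the averaged distribution as an uncertainty score for flagging likely misclassifications. The tiny network, EEG dimensions, and ensemble size here are illustrative assumptions, not the authors' actual setup.

```python
# Minimal sketch of epistemic uncertainty from a Deep Ensemble for a
# hypothetical motor-imagery classifier (not the paper's implementation).
import torch
import torch.nn as nn

N_CHANNELS, N_SAMPLES, N_CLASSES, N_MEMBERS = 22, 256, 4, 5  # assumed dimensions

def make_member() -> nn.Module:
    # Deliberately small stand-in network; real MI-BCI models are usually CNNs.
    return nn.Sequential(
        nn.Flatten(),
        nn.Linear(N_CHANNELS * N_SAMPLES, 64),
        nn.ReLU(),
        nn.Linear(64, N_CLASSES),
    )

# Each member would normally be trained independently from a different random init.
ensemble = [make_member() for _ in range(N_MEMBERS)]

def predict_with_uncertainty(x: torch.Tensor):
    """Average member softmax outputs; return class labels and predictive entropy."""
    with torch.no_grad():
        probs = torch.stack([torch.softmax(m(x), dim=-1) for m in ensemble])  # (M, B, C)
    mean_probs = probs.mean(dim=0)                                            # (B, C)
    entropy = -(mean_probs * torch.log(mean_probs + 1e-12)).sum(dim=-1)       # (B,)
    return mean_probs.argmax(dim=-1), entropy

# Usage: a batch of fake EEG epochs; high entropy flags predictions to distrust.
x = torch.randn(8, N_CHANNELS, N_SAMPLES)
labels, uncertainty = predict_with_uncertainty(x)
print(labels, uncertainty)
```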

