SleepTransformer: Automatic Sleep Staging with Interpretability and Uncertainty Quantification. (arXiv:2105.11043v3 [cs.LG] UPDATED)
Jan. 27, 2022, 2:11 a.m. | Huy Phan, Kaare Mikkelsen, Oliver Y. Chén, Philipp Koch, Alfred Mertins, Maarten De Vos
cs.LG updates on arXiv.org arxiv.org
Background: Skepticism about black-box models is one of the main obstacles to deep-learning-based automatic sleep scoring being adopted in clinical environments. Methods: Towards interpretability, this work proposes a sequence-to-sequence sleep-staging model, namely SleepTransformer. It is based on the transformer backbone and offers interpretability of the model's decisions at both the epoch and sequence level. We further propose a simple yet efficient method to quantify uncertainty in the model's decisions. The method, which is based on entropy, can serve as a metric …
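The entropy-based uncertainty idea described above can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes only that the model emits a softmax probability vector per 30-second epoch over the five AASM sleep stages (Wake, N1, N2, N3, REM), and computes the Shannon entropy of that distribution as an uncertainty score.

```python
import math

def prediction_entropy(probs):
    """Shannon entropy (in nats) of a categorical distribution.

    Higher entropy means the distribution is closer to uniform,
    i.e. the model is less certain about its staging decision.
    """
    return -sum(p * math.log(p) for p in probs if p > 0)

# Hypothetical softmax outputs over five sleep stages
# (Wake, N1, N2, N3, REM); illustrative values only.
confident = [0.94, 0.02, 0.02, 0.01, 0.01]   # low entropy
uncertain = [0.22, 0.20, 0.21, 0.18, 0.19]   # near log(5) ~ 1.609

print(prediction_entropy(confident))
print(prediction_entropy(uncertain))
```

Epochs whose entropy exceeds a chosen threshold could then be flagged for manual review by a sleep technologist, which is the kind of human-in-the-loop use such a metric enables.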