Feb. 28, 2024, 5:42 a.m. | Vadim Liventsev, Tobias Fritz

cs.LG updates on arXiv.org

arXiv:2402.17501v1 Announce Type: new
Abstract: Reinforcement Learning in Healthcare is typically concerned with narrow, self-contained tasks such as sepsis prediction or anesthesia control. However, previous research has demonstrated the potential of generalist models (the prime example being Large Language Models) to outperform task-specific approaches, owing to their capability for implicit transfer learning. To enable the training of foundation models for Healthcare, and to leverage the capabilities of state-of-the-art Transformer architectures, we propose the paradigm of Healthcare as …
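The truncated abstract points toward framing clinical decision-making as a sequence modeling problem over patient trajectories. Below is a minimal, hypothetical sketch of that idea in the spirit of Decision Transformer-style models: (observation, action, reward) events are discretized into a single token stream, and a causal Transformer is trained with next-token prediction. The tokenization scheme, vocabulary size, model dimensions, and all identifiers (TrajectoryTransformer, VOCAB, etc.) are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch: clinical trajectories flattened into one token stream,
# modeled autoregressively by a causal Transformer. Dimensions and vocabulary
# are illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn

VOCAB = 512      # assumed size of a discretized clinical-event vocabulary
SEQ_LEN = 96     # assumed context window: 32 (obs, action, reward) triples
D_MODEL = 128

class TrajectoryTransformer(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, D_MODEL)
        self.pos = nn.Embedding(SEQ_LEN, D_MODEL)
        layer = nn.TransformerEncoderLayer(
            d_model=D_MODEL, nhead=4, dim_feedforward=256, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(D_MODEL, VOCAB)

    def forward(self, tokens):  # tokens: (batch, seq) of event token ids
        seq = tokens.size(1)
        pos = torch.arange(seq, device=tokens.device)
        x = self.embed(tokens) + self.pos(pos)
        # Causal mask so each position attends only to earlier events.
        mask = nn.Transformer.generate_square_subsequent_mask(seq).to(tokens.device)
        x = self.encoder(x, mask=mask, is_causal=True)
        return self.head(x)  # logits over the next event token

# One training step: standard next-token prediction over trajectories.
model = TrajectoryTransformer()
opt = torch.optim.AdamW(model.parameters(), lr=3e-4)
batch = torch.randint(0, VOCAB, (8, SEQ_LEN))  # stand-in for real ICU data
logits = model(batch[:, :-1])
loss = nn.functional.cross_entropy(
    logits.reshape(-1, VOCAB), batch[:, 1:].reshape(-1))
loss.backward()
opt.step()
```

Under this framing, task-specific problems like sepsis prediction or anesthesia control become conditional generation over the same trajectory vocabulary, which is what would let a single pretrained model transfer implicitly across tasks.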
