Controllable Prompt Tuning For Balancing Group Distributional Robustness
March 6, 2024, 5:41 a.m. | Hoang Phan, Andrew Gordon Wilson, Qi Lei
cs.LG updates on arXiv.org
Abstract: Models trained on data composed of different groups or domains can suffer from severe performance degradation under distribution shifts. While recent methods have largely focused on optimizing the worst-group objective, this often comes at the expense of good performance on other groups. To address this problem, we introduce an optimization scheme to achieve good performance across groups and find a good solution for all without severely sacrificing performance on any of them. However, directly applying …
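To make the worst-group objective mentioned in the abstract concrete, here is a minimal sketch of how a generic worst-group loss is computed (per-group average loss, then the maximum across groups). This illustrates the baseline objective the abstract contrasts against, not the controllable prompt tuning method the paper proposes.

```python
import numpy as np

def worst_group_loss(losses, groups):
    """Return the worst (largest) group-average loss.

    losses: per-example losses, shape (n,)
    groups: integer group label per example, shape (n,)
    """
    group_means = [losses[groups == g].mean() for g in np.unique(groups)]
    return max(group_means)

# Toy example: group 0 averages 0.3, group 1 averages 1.1,
# so the worst-group loss is 1.1.
losses = np.array([0.2, 0.4, 1.0, 1.2])
groups = np.array([0, 0, 1, 1])
print(worst_group_loss(losses, groups))
```

Optimizing only this maximum can depress average accuracy on the better-off groups, which is the trade-off the paper's scheme aims to balance.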