Less is Better: Recovering Intended-Feature Subspace to Robustify NLU Models. (arXiv:2209.07879v1 [cs.CL])
Sept. 19, 2022, 1:15 a.m. | Ting Wu, Tao Gui
cs.CL updates on arXiv.org arxiv.org
Datasets containing significant proportions of bias pose a threat to training
trustworthy models on NLU tasks. Despite yielding great progress, current
debiasing methods rely excessively on prior knowledge of bias attributes. The
definition of these attributes, however, is elusive and varies across
datasets. Furthermore, leveraging such attributes at the input level for bias
mitigation may leave a gap between the intrinsic properties of the data and
the underlying decision rule. To narrow this gap and remove the need for bias
supervision, we suggest extending …
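The title's notion of an "intended-feature subspace" can be illustrated with a minimal sketch: restrict feature vectors to a low-rank subspace and discard the residual directions. This is only an assumed toy implementation using a top-k principal subspace via SVD; the paper's actual procedure for recovering the intended subspace is not specified in this excerpt.

```python
import numpy as np

def project_to_subspace(features, k):
    """Project feature vectors onto their top-k principal subspace.

    Illustrative sketch only: a stand-in for restricting representations
    to an "intended-feature" subspace, not the paper's actual method.
    """
    centered = features - features.mean(axis=0)
    # SVD of the centered feature matrix; rows of vt are principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:k]                      # (k, d) orthonormal basis
    return centered @ basis.T @ basis   # project onto span(basis)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))           # hypothetical sentence features
Xp = project_to_subspace(X, k=2)
print(Xp.shape)  # (100, 8)
```

The projected matrix keeps its original shape but has rank at most k, so any decision rule fit on it can only use the retained directions.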