March 21, 2024, 4:41 a.m. | Khaoula Chehbouni (McGill University, Mila - Quebec AI Institute), Megha Roshan (University of Montreal, Mila - Quebec AI Institute), Emmanuel Ma (McG

cs.LG updates on arXiv.org

arXiv:2403.13213v1 Announce Type: new
Abstract: Recent progress in large language models (LLMs) has led to their widespread adoption in various domains. However, these advancements have also introduced additional safety risks and raised concerns regarding their detrimental impact on already marginalized populations. Despite growing mitigation efforts to develop safety safeguards, such as supervised safety-oriented fine-tuning and leveraging safe reinforcement learning from human feedback, multiple concerns regarding the safety and ingrained biases in these models remain. Furthermore, previous work has demonstrated that …

