March 13, 2024, 4:43 a.m. | Qibing Ren, Chang Gao, Jing Shao, Junchi Yan, Xin Tan, Wai Lam, Lizhuang Ma

cs.LG updates on arXiv.org

arXiv:2403.07865v1 Announce Type: cross
Abstract: The rapid advancement of Large Language Models (LLMs) has brought about remarkable capabilities in natural language processing but also raised concerns about their potential misuse. While strategies like supervised fine-tuning and reinforcement learning from human feedback have enhanced their safety, these methods primarily focus on natural languages, which may not generalize to other domains. This paper introduces CodeAttack, a framework that transforms natural language inputs into code inputs, presenting a novel environment for testing the …
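The core idea, transforming a natural-language input into a code input, can be illustrated with a minimal sketch. This is a hypothetical reconstruction based only on the abstract, not the paper's actual implementation: the helper `to_code_input` and its template are invented here for illustration. The natural-language query is embedded as data inside a code-completion task rather than asked in plain prose.

```python
def to_code_input(query: str) -> str:
    """Wrap a natural-language query in a code-style template.

    Hypothetical illustration of the transformation the abstract
    describes: the query is split into tokens and placed inside a
    Python function stub, so the model receives a code-completion
    task instead of a plain natural-language request.
    """
    return (
        "def task():\n"
        f"    # Query encoded as a list of words\n"
        f"    words = {query.split()!r}\n"
        "    # Complete the function based on the words above\n"
        "    output = []\n"
        "    return output\n"
    )

# Example: a benign query rendered as a code input
print(to_code_input("How do I pick a strong password?"))
```

Under this framing, a safety evaluation would compare the model's response to the plain query against its response to the code-wrapped variant.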

