April 3, 2024, 4:47 a.m. | Lingbo Mo, Boshi Wang, Muhao Chen, Huan Sun

cs.CL updates on arXiv.org

arXiv:2311.09447v2 Announce Type: replace
Abstract: The rapid progress of open-source Large Language Models (LLMs) is significantly advancing AI development. However, their trustworthiness remains insufficiently understood. Deploying these models at scale without adequate trustworthiness can pose significant risks, highlighting the need to uncover such issues promptly. In this work, we conduct an adversarial assessment of open-source LLMs on trustworthiness, scrutinizing them across eight aspects: toxicity, stereotypes, ethics, hallucination, fairness, sycophancy, privacy, and robustness against …

