April 3, 2024, 4:47 a.m. | Lingbo Mo, Boshi Wang, Muhao Chen, Huan Sun

cs.CL updates on arXiv.org

arXiv:2311.09447v2 Announce Type: replace
Abstract: The rapid progress of open-source Large Language Models (LLMs) is significantly advancing AI development. However, their trustworthiness remains poorly understood. Deploying these models at scale without sufficient trustworthiness can pose significant risks, highlighting the need to uncover such issues promptly. In this work, we conduct an adversarial assessment of open-source LLMs on trustworthiness, scrutinizing them across eight aspects: toxicity, stereotypes, ethics, hallucination, fairness, sycophancy, privacy, and robustness against …
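To make the evaluation setup concrete, below is a minimal Python sketch of an aspect-by-aspect adversarial evaluation loop over the eight trustworthiness aspects named in the abstract. The query_model callable, ADVERSARIAL_PROMPTS sets, and is_safe judge are hypothetical stand-ins for illustration; this is not the authors' released benchmark or code.

```python
# Minimal sketch of an aspect-by-aspect adversarial evaluation loop.
# query_model, ADVERSARIAL_PROMPTS, and is_safe are hypothetical
# placeholders, not the paper's actual benchmark or scoring method.
from typing import Callable, Dict, List

# The eight trustworthiness aspects named in the abstract.
ASPECTS = ["toxicity", "stereotypes", "ethics", "hallucination",
           "fairness", "sycophancy", "privacy", "robustness"]

# Toy adversarial probe sets keyed by aspect (placeholders only).
ADVERSARIAL_PROMPTS: Dict[str, List[str]] = {
    a: [f"adversarial probe for {a}"] for a in ASPECTS
}

def is_safe(response: str) -> bool:
    # Placeholder judge; a real harness would use a trained
    # classifier or a human-written rubric per aspect.
    return "unsafe" not in response.lower()

def evaluate(query_model: Callable[[str], str]) -> Dict[str, float]:
    """Return the fraction of safe responses per trustworthiness aspect."""
    scores: Dict[str, float] = {}
    for aspect, prompts in ADVERSARIAL_PROMPTS.items():
        safe = sum(is_safe(query_model(p)) for p in prompts)
        scores[aspect] = safe / len(prompts)
    return scores

if __name__ == "__main__":
    # Stub model that simply echoes the prompt, for a dry run.
    print(evaluate(lambda p: f"echo: {p}"))
```

Per-aspect scores like these make it easy to compare models along each trustworthiness dimension rather than collapsing everything into a single number.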
