DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Microsoft Research (www.microsoft.com)
How trustworthy are generative pre-trained transformer (GPT) models? To answer this question, the University of Illinois Urbana-Champaign, together with Stanford University, the University of California, Berkeley, the Center for AI Safety, and Microsoft Research, released a comprehensive trustworthiness evaluation platform for large language models (LLMs), presented in the recent paper: DecodingTrust: A Comprehensive Assessment of Trustworthiness […]
The post DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models appeared first on Microsoft Research.