April 16, 2024, 4:51 a.m. | David Nadeau, Mike Kroutikov, Karen McNeil, Simon Baribeau

cs.CL updates on arXiv.org

arXiv:2404.09785v1 Announce Type: new
Abstract: This paper introduces fourteen novel datasets for the evaluation of Large Language Models' safety in the context of enterprise tasks. A method was devised to evaluate a model's safety, as determined by its ability to follow instructions and output factual, unbiased, grounded, and appropriate content. In this research, we used OpenAI GPT as point of comparison since it excels at all levels of safety. On the open-source side, for smaller models, Meta Llama2 performs well …
