May 7, 2024, 4:44 a.m. | Yunqi Li, Lanjing Zhang, Yongfeng Zhang

cs.LG updates on arXiv.org arxiv.org

arXiv:2305.18569v2 Announce Type: replace
Abstract: Understanding and addressing unfairness in LLMs is crucial for responsible AI deployment. However, there are few quantitative analyses and in-depth studies of fairness evaluation in LLMs, especially when applying LLMs to high-stakes fields. This work aims to fill this gap by providing a systematic evaluation of the effectiveness and fairness of LLMs, using ChatGPT as a study case. We focus on assessing ChatGPT's performance in high-stakes fields including education, criminology, finance and …
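
To make the notion of "fairness evaluation" concrete, here is a minimal sketch (not code from the paper) of one common group-fairness check: the demographic parity difference, i.e., the gap in positive-prediction rates between sensitive groups. The function name, the toy predictions, and the group labels are all hypothetical placeholders.

```python
# Minimal sketch of a group-fairness metric over model outputs.
# Assumes binary predictions and a sensitive-attribute label per example.

def demographic_parity_difference(predictions, groups, positive_label=1):
    """Return the gap between the highest and lowest positive-prediction
    rates across groups (0 means perfectly equal rates)."""
    rates = {}
    for group in set(groups):
        group_preds = [p for p, g in zip(predictions, groups) if g == group]
        rates[group] = sum(p == positive_label for p in group_preds) / len(group_preds)
    return max(rates.values()) - min(rates.values())

# Hypothetical binary decisions from an LLM-based classifier,
# split by a sensitive attribute with two groups "A" and "B".
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.75 - 0.25 = 0.5 on this toy data
```

A larger gap indicates that the model's positive decisions are distributed unevenly across groups; studies like this one typically report such metrics per task and per demographic attribute alongside standard accuracy.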
