April 4, 2024, 4:43 a.m. | Yanchen Liu, Srishti Gautam, Jiaqi Ma, Himabindu Lakkaraju

cs.LG updates on arXiv.org arxiv.org

arXiv:2310.14607v2 Announce Type: replace-cross
Abstract: Recent literature has suggested the potential of using large language models (LLMs) to make classifications for tabular tasks. However, LLMs have been shown to exhibit harmful social biases that reflect the stereotypes and inequalities present in society. Given these biases, as well as the widespread use of tabular data in many high-stakes applications, it is important to explore the following questions: what sources of information do LLMs draw upon when making classifications for tabular tasks; …
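To make the setup concrete, below is a minimal sketch (not from the paper) of a common way tabular rows are serialized into natural-language prompts for LLM classification; the `query_llm` call and the Adult-style feature names are illustrative assumptions, not the authors' method.

```python
# Minimal sketch: serializing a tabular record into a text prompt for
# classification by an LLM. `query_llm` is a hypothetical stand-in for
# any chat-completion API and is not called here.

from typing import Mapping

def serialize_row(row: Mapping[str, object]) -> str:
    """Turn one tabular record into a 'feature is value' description."""
    return ". ".join(f"The {k} is {v}" for k, v in row.items()) + "."

def build_prompt(row: Mapping[str, object], task: str, labels: list[str]) -> str:
    """Combine the task instruction, serialized row, and answer choices."""
    return (
        f"{task}\n"
        f"{serialize_row(row)}\n"
        f"Answer with one of: {', '.join(labels)}."
    )

# Example row in the style of the UCI Adult income-prediction dataset
# (hypothetical feature names chosen for illustration).
row = {"age": 37, "education": "Bachelors", "occupation": "Sales",
       "hours-per-week": 45, "sex": "Female"}
prompt = build_prompt(
    row,
    "Predict whether this person earns over $50K per year.",
    ["yes", "no"],
)
print(prompt)
# prediction = query_llm(prompt)  # hypothetical call to an LLM backend
```

Note that because the row is rendered as plain text, sensitive attributes such as `sex` are visible to the model verbatim, which is precisely why the abstract's questions about bias sources and fairness implications arise.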
