April 16, 2024, 4:42 a.m. | Ahmed Agiza, Mohamed Mostagir, Sherief Reda

cs.LG updates on arXiv.org

arXiv:2404.08699v1 Announce Type: cross
Abstract: In an era where language models are increasingly integrated into decision-making and communication, understanding the biases within Large Language Models (LLMs) becomes imperative, especially when these models are applied in the economic and political domains. This work investigates the impact of fine-tuning and data selection on economic and political biases in LLM. We explore the methodological aspects of biasing LLMs towards specific ideologies, mindful of the biases that arise from their extensive training on diverse …

