Feb. 15, 2024, 5:46 a.m. | Stanisław Woźniak, Bartłomiej Koptyra, Arkadiusz Janz, Przemysław Kazienko, Jan Kocoń

cs.CL updates on arXiv.org

arXiv:2402.09269v1 Announce Type: new
Abstract: Large language models (LLMs) have significantly advanced Natural Language Processing (NLP) tasks in recent years. However, their universal nature poses limitations in scenarios requiring personalized responses, such as recommendation systems and chatbots. This paper investigates methods to personalize LLMs, comparing fine-tuning and zero-shot reasoning approaches on subjective tasks. Results demonstrate that personalized fine-tuning improves model reasoning compared to non-personalized models. Experiments on datasets for emotion recognition and hate speech detection show consistent performance gains with …
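The abstract contrasts non-personalized prompting with personalized approaches on subjective tasks. As a rough illustration only (the paper's exact prompt formats and personalization scheme are not given in the abstract), a personalized zero-shot query might prepend an annotator's prior labels to the task instruction; `query_llm`, the prompt wording, and the use of per-annotator history below are all hypothetical stand-ins:

```python
# Minimal sketch of personalized vs. non-personalized zero-shot prompts.
# `query_llm` is a hypothetical placeholder for any LLM completion call;
# the prompt text and personalization format are illustrative assumptions,
# not the paper's actual method.

def query_llm(prompt: str) -> str:
    """Placeholder: route `prompt` to an LLM of your choice."""
    raise NotImplementedError

def zero_shot_prompt(text: str) -> str:
    # Non-personalized baseline: the model sees only the task and the text.
    return f"Classify the emotion expressed in the following text: {text}"

def personalized_prompt(text: str, user_history: list[tuple[str, str]]) -> str:
    # Personalized variant: prior (text, label) annotations from the same
    # user are prepended so the model can adapt to their subjective view.
    examples = "\n".join(
        f'Text: "{t}" -> This user labeled it: {label}'
        for t, label in user_history
    )
    return (
        "The same user previously annotated these texts:\n"
        f"{examples}\n"
        f'Now classify, for this user, the emotion in: "{text}"'
    )
```

In a fine-tuning counterpart, the same user context would presumably be folded into the training examples rather than the prompt, which is the comparison the abstract describes.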
