Fine-Tuning Pre-trained Language Models to Detect In-Game Trash Talks
March 26, 2024, 4:42 a.m. | Daniel Fesalbon, Arvin De La Cruz, Marvin Mallari, Nelson Rodelas
cs.LG updates on arXiv.org arxiv.org
Abstract: Common problems in online mobile and computer gaming are related to toxic behavior and abusive communication among players. Drawing on various reports and studies, the study also discusses the impact of online hate speech and toxicity on players' in-game performance and overall well-being. This study investigates the capability of pre-trained language models to classify or detect trash talk or toxic in-game messages. The study employs and evaluates the performance of pre-trained BERT and GPT …
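The evaluation the abstract alludes to ("employs and evaluates the performance of pre-trained BERT and GPT") typically reduces to scoring a classifier's toxic/non-toxic predictions against a labeled test set with precision, recall, and F1. A minimal pure-Python sketch of that scoring step follows; the labels and predictions are invented illustration data, not results from the paper:

```python
# Sketch of how a trash-talk classifier is commonly scored:
# precision, recall, and F1 over a held-out labeled test set.
# Example data below is invented, not taken from the paper.

def precision_recall_f1(y_true, y_pred, positive=1):
    """Compute precision, recall, and F1 for the positive (toxic) class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# 1 = toxic, 0 = non-toxic (hypothetical gold labels and model predictions)
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
p, r, f = precision_recall_f1(y_true, y_pred)
# For this toy data: precision = recall = F1 = 0.75
```

In practice the predictions would come from a fine-tuned sequence-classification head on top of BERT or GPT, but the scoring logic is model-agnostic.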