Oct. 19, 2022, 1:17 a.m. | Atijit Anuchitanukul, Julia Ive, Lucia Specia

cs.CL updates on arXiv.org

Understanding toxicity in user conversations is undoubtedly an important
problem. Addressing "covert" or implicit cases of toxicity is particularly hard
and requires context. Very few previous studies have analysed the influence of
conversational context on human perception or on automated detection models. We
dive deeper into both of these directions. We start by analysing existing
contextual datasets and conclude that toxicity labelling by humans is generally
influenced by the conversational structure, polarity and topic of the context. …
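As a rough illustration of the "context-aware detection" direction the abstract describes, the sketch below encodes the parent comment and the target comment as a sentence pair so a classifier can condition on conversational context. This is not the authors' model: the base checkpoint (`bert-base-uncased`), the two-label setup, and the `score_toxicity` helper are placeholder assumptions; a real system would use a checkpoint fine-tuned on a contextual toxicity dataset.

```python
# Minimal sketch of context-aware toxicity scoring (assumptions noted above).
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

MODEL_NAME = "bert-base-uncased"  # placeholder; not fine-tuned for toxicity

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()

def score_toxicity(context: str, target: str) -> float:
    """Return an (untrained, illustrative) P(toxic) for `target` given `context`.

    Context and target are passed as a sentence pair, so the encoder sees the
    preceding turn when judging the reply.
    """
    inputs = tokenizer(context, target, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()

# Example call: the same reply can read as benign or hostile depending on context.
print(score_toxicity("I disagree with your point about taxes.",
                     "Of course you do, you always miss the obvious."))
```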

arxiv conversations detection toxicity toxicity detection
