April 25, 2022, 1:11 a.m. | Jayant Chhillar

cs.LG updates on arXiv.org arxiv.org

This work describes the development of models to detect patronising and
condescending language in extracts of news articles, as part of the
SemEval 2022 competition (Task 4). It explores different models based on
the pre-trained RoBERTa language model coupled with LSTM and CNN layers. The
best models achieved 15th rank with an F1-score of 0.5924 in subtask-A
and 12th rank in subtask-B with a macro-F1 score of 0.3763.
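A minimal sketch of the kind of architecture described, coupling a sequence encoder with an LSTM head: the class and parameter names below are hypothetical, and a plain `nn.Embedding` stands in for the pre-trained RoBERTa encoder the paper actually uses, so this is an illustration of the layer stacking rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class PCLClassifier(nn.Module):
    """Hypothetical sketch: token embeddings -> BiLSTM -> linear classifier.

    In the paper, token representations come from pre-trained RoBERTa;
    here a randomly initialised Embedding is a stand-in for those outputs.
    """
    def __init__(self, vocab_size=50265, hidden=768, lstm_hidden=256, num_labels=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)  # placeholder for RoBERTa
        self.lstm = nn.LSTM(hidden, lstm_hidden,
                            batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * lstm_hidden, num_labels)

    def forward(self, input_ids):
        x = self.embed(input_ids)        # (batch, seq, hidden)
        out, _ = self.lstm(x)            # (batch, seq, 2 * lstm_hidden)
        pooled = out[:, -1, :]           # final time step as sentence summary
        return self.classifier(pooled)   # (batch, num_labels) logits

model = PCLClassifier()
logits = model(torch.randint(0, 50265, (4, 32)))  # batch of 4, length 32
```

For subtask-A (binary PCL detection) `num_labels=2` suffices; subtask-B is multi-label, so the head would instead emit one logit per PCL category.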
