May 23, 2022, 1:12 a.m. | Hai-Ming Xu, Lingqiao Liu, Ehsan Abbasnejad

cs.CL updates on arXiv.org

Semi-supervised learning is a promising way to reduce the annotation cost for
text classification. Combined with pre-trained language models (PLMs), e.g.,
BERT, recent semi-supervised learning methods have achieved impressive performance.
In this work, we further investigate the marriage between semi-supervised
learning and a pre-trained language model. Unlike existing approaches that
utilize PLMs only for model parameter initialization, we explore the inherent
topic matching capability inside PLMs for building a more powerful
semi-supervised learning approach. Specifically, we propose a joint
semi-supervised learning process …
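The core idea of topic matching for semi-supervised learning can be illustrated with a minimal sketch: score each unlabeled text against each class by its similarity to the label's name or description, and keep only confident matches as pseudo-labels to join the labeled set. The sketch below is an assumption-laden stand-in, not the paper's method: it uses bag-of-words vectors with cosine similarity in place of real PLM representations, and the function name, label descriptions, and threshold are all hypothetical.

```python
from collections import Counter
import math


def embed(text):
    # Stand-in for a PLM sentence embedding: a simple bag-of-words vector.
    return Counter(text.lower().split())


def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def topic_match_pseudo_labels(unlabeled, label_descriptions, threshold=0.1):
    # Assign each unlabeled text the label whose description it matches best,
    # keeping only confident matches; in a joint SSL process these
    # pseudo-labeled examples would be trained on alongside the labeled set.
    label_vecs = {name: embed(desc) for name, desc in label_descriptions.items()}
    pseudo = []
    for text in unlabeled:
        v = embed(text)
        scores = {name: cosine(v, lv) for name, lv in label_vecs.items()}
        best = max(scores, key=scores.get)
        if scores[best] >= threshold:
            pseudo.append((text, best))
    return pseudo
```

In the actual approach, the bag-of-words `embed` would be replaced by the PLM's own representations, so that topic matching exploits knowledge already present in the pre-trained model rather than surface word overlap.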

