SkIn: Skimming-Intensive Long-Text Classification Using BERT for Medical Corpus. (arXiv:2209.05741v2 [cs.CL] UPDATED)
Sept. 27, 2022, 1:14 a.m. | Yufeng Zhao, Haiying Che
cs.CL updates on arXiv.org arxiv.org
BERT is a widely used pre-trained model in natural language processing.
However, since BERT's cost is quadratic in the text length, the model is
difficult to apply directly to long-text corpora. In some fields, such as
health care, the collected text data may be quite long. Therefore, to apply
the pre-trained language knowledge of BERT to long text, this paper imitates
the skimming-intensive reading method used by humans when reading a long
paragraph; the …
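The abstract is truncated, so the details of SkIn itself are not available here. As background for why such methods exist: because of the quadratic cost, a long document must first be split into windows that fit BERT's input limit before any selection or classification step. The sketch below shows only that generic chunking step, under assumed parameters (a 512-token window and a 256-token stride); it is not the SkIn method.

```python
# Generic sketch of windowing a long token sequence for a BERT-style model.
# Assumptions (not from the paper): window = 512 tokens, stride = 256 tokens.

def chunk_tokens(tokens, window=512, stride=256):
    """Split a token-id list into overlapping windows of at most `window` tokens."""
    if len(tokens) <= window:
        return [tokens]
    chunks = []
    start = 0
    while start < len(tokens):
        chunks.append(tokens[start:start + window])
        if start + window >= len(tokens):
            break  # the last window reaches the end of the document
        start += stride
    return chunks

# Example: a stand-in document of 1200 token ids.
doc = list(range(1200))
chunks = chunk_tokens(doc)
print(len(chunks))            # 4 overlapping windows
print(len(chunks[-1]))        # 432 (the final, shorter window)
```

Each window can then be encoded by BERT independently; a skimming-style method would score the windows and spend intensive computation only on the most relevant ones.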