CMU’s Novel ‘ReStructured Pre-training’ NLP Approach Scores 40 Points Above Student Average on a Standard English Exam

June 28, 2022, 2:30 p.m. | Synced


In the new paper reStructured Pre-training, a Carnegie Mellon University research team proposes "reStructured Pre-training" (RST), a novel NLP paradigm that pretrains models on valuable restructured data. The team's resulting QIN system scores 40 points higher than the student average on the Gaokao English exam and 15 points higher than GPT-3 while using only 1/16 as many parameters.
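At a high level, "restructuring" here means converting heterogeneous raw data into a unified set of prompt-style (input, output) training pairs, each encoding a supervision signal, before pretraining on them. Below is a minimal illustrative sketch of that idea in Python; the record fields, signal names, and prompt templates are hypothetical stand-ins, not the paper's actual schema.

```python
# Minimal sketch of the "restructured data" idea: heterogeneous raw
# records are converted into unified prompt-style training pairs.
# Field names, signal names, and templates are hypothetical.

from dataclasses import dataclass


@dataclass
class Example:
    signal: str   # which kind of supervision this pair encodes
    source: str   # text the model conditions on
    target: str   # text the model learns to produce


def restructure(record: dict) -> Example:
    """Turn one raw record into a unified training example."""
    if record["kind"] == "title_body":
        # An article's title is a free summarization signal for its body.
        return Example(
            signal="summarization",
            source=f"Summarize: {record['body']}",
            target=record["title"],
        )
    if record["kind"] == "qa":
        # A QA pair is already near the target format; add a prompt.
        return Example(
            signal="question_answering",
            source=f"Question: {record['question']}",
            target=record["answer"],
        )
    raise ValueError(f"unknown record kind: {record['kind']}")


raw = [
    {"kind": "title_body",
     "title": "RST beats GPT-3 on Gaokao English",
     "body": "A CMU team proposes reStructured Pre-training..."},
    {"kind": "qa",
     "question": "What does RST stand for?",
     "answer": "reStructured Pre-training"},
]

for ex in map(restructure, raw):
    print(ex.signal, "|", ex.source[:40], "->", ex.target)
```

The point of the sketch is only that many naturally occurring signals (titles, QA pairs, and the like) can be cast into one training format; RST's contribution is mining and unifying such signals systematically at scale.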


