CMU’s Novel ‘ReStructured Pre-training’ NLP Approach Scores 40 Points Above Student Average on a Standard English Exam
Synced (syncedreview.com)
In the new paper reStructured Pre-training, a Carnegie Mellon University research team proposes reStructured Pre-training (RST), a novel NLP paradigm that pretrains models on data restructured to surface the valuable signals it already contains. The team's resulting QIN system scores 40 points higher than the student average on the Gaokao English exam and 15 points higher than GPT-3 while using only 1/16 of its parameters.
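To make the "restructuring" idea concrete, here is a minimal, hypothetical sketch of how a supervision signal mined from an existing resource (say, a dictionary entry) might be rewritten into a unified prompt/answer pair suitable for pretraining. All names, templates, and fields below are illustrative assumptions, not code from the paper:

```python
# Illustrative sketch of the restructuring idea behind RST: mine existing
# resources for supervision signals, then rewrite each one into a unified
# prompt/answer text pair that a language model can be pretrained on.
# All identifiers and templates here are hypothetical, not the authors' code.

from dataclasses import dataclass

@dataclass
class Signal:
    """One supervision signal mined from an existing resource."""
    kind: str       # e.g. "definition", "summary"
    context: str    # the raw material the signal was mined from
    answer: str     # the supervision target

def restructure(signal: Signal) -> dict:
    """Rewrite a mined signal as a (prompt, target) pretraining example."""
    templates = {
        "definition": "What does the word '{context}' mean?",
        "summary": "Summarize: {context}",
    }
    prompt = templates[signal.kind].format(context=signal.context)
    return {"prompt": prompt, "target": signal.answer}

# Example: a dictionary entry becomes a QA-style training pair.
sig = Signal(kind="definition", context="restructure",
             answer="to organize something in a new way")
example = restructure(sig)
print(example["prompt"])   # What does the word 'restructure' mean?
print(example["target"])   # to organize something in a new way
```

Many heterogeneous sources, each restructured through such templates into one shared format, would then feed a single pretraining corpus, which is the data-centric shift the paradigm emphasizes.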