Web: http://arxiv.org/abs/2209.06664

Sept. 15, 2022, 1:14 a.m. | Wanwei He, Yinpei Dai, Min Yang, Jian Sun, Fei Huang, Luo Si, Yongbin Li

cs.CL updates on arXiv.org

Recently, pre-training methods have shown remarkable success in task-oriented
dialog (TOD) systems. However, most existing pre-trained models for TOD focus
on either dialog understanding or dialog generation, but not both. In this
paper, we propose SPACE-3, a novel unified semi-supervised pre-trained
conversation model learning from large-scale dialog corpora with limited
annotations, which can be effectively fine-tuned on a wide range of downstream
dialog tasks. Specifically, SPACE-3 consists of four successive components in a
single transformer to maintain a task-flow in …
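
As a rough illustration of the fine-tuning workflow the abstract describes (not the SPACE-3 implementation itself), the sketch below fine-tunes a generic pre-trained encoder-decoder on a single dialog response-generation example using Hugging Face transformers. The model name `t5-small` is only a stand-in; the paper's own checkpoints, four-component structure, and semi-supervised objectives are not reproduced here.

```python
# Illustrative sketch only: fine-tuning a generic pre-trained seq2seq model on one
# task-oriented dialog turn. "t5-small" is a stand-in checkpoint, not SPACE-3.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# Toy downstream example: dialog history -> system response.
history = "user: I need a cheap italian restaurant in the centre."
response = "There are two cheap italian places in the centre. Which do you prefer?"

inputs = tokenizer(history, return_tensors="pt", truncation=True)
labels = tokenizer(response, return_tensors="pt", truncation=True).input_ids

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
loss = model(**inputs, labels=labels).loss  # standard cross-entropy generation loss
loss.backward()
optimizer.step()
```

In practice, fine-tuning would iterate this step over a full downstream dialog dataset (e.g., for understanding or generation tasks), but the single update above captures the basic loop.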

Tags: arxiv, pre-training, space, training, understanding
