Web: http://arxiv.org/abs/2209.11068

Sept. 23, 2022, 1:16 a.m. | Josef Valvoda, Yimai Fang, David Vandyke

cs.CL updates on arXiv.org

Dialog modelling faces a difficult trade-off. Models are trained on large
amounts of text, yet their responses must be confined to the desired scope and
style of a dialog agent. Because the datasets used for the former contain
language incompatible with the latter, pre-trained dialog models are fine-tuned
on smaller curated datasets. However, the fine-tuning process robs them of the
ability to produce diverse responses, eventually reducing them to dull
conversation partners. In this …
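The loss of response diversity described above is commonly quantified in dialog research with the distinct-n metric: the fraction of unique n-grams among all n-grams in a set of generated responses. As a hedged illustration (the paper itself is not quoted here, and the toy responses below are invented for the example), a minimal sketch might look like:

```python
def distinct_n(responses, n=2):
    """Distinct-n: fraction of unique n-grams across a set of responses.

    Lower values indicate duller, more repetitive generations; a model
    collapsed onto a few stock replies scores well below 1.0.
    """
    ngrams = []
    for response in responses:
        tokens = response.split()
        ngrams.extend(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    if not ngrams:
        return 0.0
    return len(set(ngrams)) / len(ngrams)

# Toy data (hypothetical): varied replies vs. a repeated stock reply.
diverse = ["the weather is lovely today", "shall we grab coffee later"]
dull = ["i do not know", "i do not know"]
print(distinct_n(diverse, 2))  # all bigrams unique -> 1.0
print(distinct_n(dull, 2))     # repeated bigrams -> 0.5
```

A fine-tuned model that has become a "dull conversation partner" would show a markedly lower distinct-n on its sampled responses than the pre-trained model it started from.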

