April 26, 2024, 4:47 a.m. | Cheng Kang, Daniel Novak, Katerina Urbanova, Yuqing Cheng, Yong Hu

cs.CL updates on arXiv.org

arXiv:2404.16160v1 Announce Type: new
Abstract: Large language models (LLMs) have demonstrated impressive generalization capabilities on specific tasks with human-written instruction data. However, the limited quantity, diversity, and professional expertise of such instruction data raise concerns about the performance of LLMs in psychotherapy tasks when provided with domain-specific instructions. To address this, we first propose Domain-Specific Assistant Instructions based on AlexanderStreet therapy, and second, we use an adaptation fine-tuning method and a retrieval-augmented generation method to improve pre-trained LLMs. Through quantitative …
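The abstract only names the retrieval-augmented generation technique without detailing it; as a rough illustration, here is a minimal RAG sketch in Python. The corpus, query, and `generate` stub are hypothetical, and plain TF-IDF retrieval stands in for whatever retriever the paper actually uses.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# Hypothetical corpus and query; TF-IDF stands in for the paper's
# (unspecified) retriever, and `generate` stands in for the LLM call.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy domain corpus (placeholder for therapy-domain instruction data).
corpus = [
    "Cognitive restructuring helps clients reframe negative thoughts.",
    "Active listening builds rapport early in a therapy session.",
    "Homework assignments reinforce skills between sessions.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query (TF-IDF cosine)."""
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform(docs + [query])
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    ranked = scores.argsort()[::-1][:k]
    return [docs[i] for i in ranked]

def generate(prompt: str) -> str:
    """Stub for the fine-tuned LLM; a real system would call the model here."""
    return f"[model output for prompt of {len(prompt)} chars]"

query = "How can a therapist address a client's negative self-talk?"
context = "\n".join(retrieve(query, corpus))
prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
print(generate(prompt))
```

The sketch shows the core loop the abstract implies: retrieve domain passages relevant to the query, prepend them as context, and let the (fine-tuned) model answer from that grounded prompt.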

