Feb. 5, 2024, 6:47 a.m. | Andrew Brown, Jiading Zhu, Mohamed Abdelwahab, Alec Dong, Cindy Wang, Jonathan Rose

cs.CL updates on arXiv.org

Large Foundational Language Models are capable of performing many tasks at a high level but are difficult to deploy in many applications because of their size and proprietary ownership. Many will be motivated to distill specific capabilities of foundational models into smaller models that can be owned and controlled. In the development of a therapeutic chatbot, we wish to distill a capability known as reflective listening, in which a therapist produces reflections of client speech. These reflections either restate what …
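To make the distillation idea concrete, here is a minimal sketch of one common approach: fine-tune a small student model on (client utterance, reflection) pairs where the reflections were produced by a larger teacher model. This is an illustrative assumption, not the paper's actual pipeline; the model name (gpt2), the example pairs, and the hyperparameters are all placeholders.

```python
"""Illustrative sketch (not the paper's code): distilling a reflection
capability by fine-tuning a small causal LM on teacher-generated reflections."""
import torch
from torch.optim import AdamW
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical distillation data: client statements paired with reflections
# generated by a larger teacher model.
pairs = [
    ("I keep telling myself I'll cut back on drinking, but weekends get away from me.",
     "It sounds like you want to cut back, and the weekends make that hard."),
    ("My family keeps pushing me to quit smoking and it's starting to wear on me.",
     "You're feeling pressured by your family, and that pressure is exhausting."),
]

model_name = "gpt2"  # stand-in for a small, ownable student model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)
optimizer = AdamW(model.parameters(), lr=5e-5)

model.train()
for epoch in range(3):
    for client, reflection in pairs:
        # The student learns to continue the prompt with the teacher's reflection.
        text = f"Client: {client}\nReflection: {reflection}{tokenizer.eos_token}"
        batch = tokenizer(text, return_tensors="pt")
        outputs = model(**batch, labels=batch["input_ids"])
        outputs.loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# Generate a reflection for a new client utterance with the distilled student.
model.eval()
prompt = "Client: I want to exercise more but I never seem to find the time.\nReflection:"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    generated = model.generate(**inputs, max_new_tokens=40,
                               pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(generated[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```

In practice such a sketch would use far more teacher-generated pairs and a held-out evaluation of reflection quality, which is the part the abstract emphasizes.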
