March 8, 2024, 5:47 a.m. | Hui Huang, Yingqi Qu, Jing Liu, Muyun Yang, Tiejun Zhao

cs.CL updates on arXiv.org

arXiv:2403.04222v1 Announce Type: new
Abstract: The proliferation of open-source Large Language Models (LLMs) underscores the pressing need for evaluation methods. Existing works primarily rely on external evaluators, focusing on training and prompting strategies. However, a crucial aspect - model-aware glass-box features - is overlooked. In this study, we explore the utility of glass-box features under the scenario of self-evaluation, namely applying an LLM to evaluate its own output. We investigate various glass-box feature groups and discovered that the softmax distribution …

