March 4, 2024, 5:47 a.m. | Mengsay Loem, Masahiro Kaneko, Naoaki Okazaki

cs.CL updates on arXiv.org

arXiv:2311.08107v2 Announce Type: replace
Abstract: Large Language Models (LLMs) can justify or critique their predictions through discussions with other models or humans, thereby enriching their intrinsic understanding of instances. While proactive discussions in the inference phase have been shown to boost performance, such interactions have not been extensively explored during the training phase. We hypothesize that incorporating interactive discussions into the training process can enhance the models' understanding and improve their reasoning and verbal expression abilities during inference. This work …
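The excerpt does not specify the paper's actual training procedure, but the core idea it states — folding a prediction–critique–revision exchange into training data rather than using it only at inference — can be illustrated with a toy sketch. Everything below (the `predict`, `critique`, and `build_discussion_example` helpers, and the dictionary "model") is a hypothetical placeholder standing in for real LLMs, not the authors' method.

```python
# Illustrative sketch only: placeholder functions stand in for LLMs.
# It shows one instance being turned into a discussion-augmented
# training example (draft answer + critique + reference target),
# so correction signals appear in the training data itself.

def predict(model, question):
    # Placeholder "learner": looks up its current (possibly wrong) answer.
    return model.get(question, "unknown")

def critique(reference, answer):
    # Placeholder "critic": compares the draft against a reference.
    return "correct" if answer == reference else f"expected {reference}"

def build_discussion_example(model, question, reference):
    """Turn one instance into a discussion-augmented training example."""
    answer = predict(model, question)
    feedback = critique(reference, answer)
    # The example now carries the critique alongside the label, exposing
    # the model to justification/correction signals during training.
    return {
        "input": question,
        "draft": answer,
        "critique": feedback,
        "target": reference,
    }

model = {"2+2": "5"}  # deliberately wrong initial belief
example = build_discussion_example(model, "2+2", "4")
print(example["critique"])  # prints "expected 4"
```

In a real setting the lookup tables would be replaced by model calls, and the augmented examples would feed a fine-tuning loop; this sketch only fixes the data-shape intuition.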

