April 2, 2024, 7:52 p.m. | Yiwei Wang, Muhao Chen, Nanyun Peng, Kai-Wei Chang

cs.CL updates on arXiv.org

arXiv:2401.10471v2 Announce Type: replace
Abstract: We propose a new perspective on knowledge editing (KE) for large language models (LLMs) that treats it as a constrained decoding problem. We design decoding constraints to regulate LLMs, ensuring coherence between reasoning steps when incorporating new knowledge. To enforce these constraints, we use depth-first search to adaptively substitute new knowledge for the LLMs' original reasoning steps, greedily seeking the optimal path of multi-hop reasoning with new knowledge. From this vantage point, we propose DEEPEDIT: …
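The abstract frames editing as a depth-first traversal over keep-or-substitute decisions at each reasoning step, pruned by decoding constraints and ordered greedily by score. Below is a minimal, runnable Python sketch of that idea only; the toy reasoning chain, the `coherent` constraint, and the `score` function are invented for illustration and are assumptions, not DEEPEDIT's actual implementation.

```python
# Toy sketch: knowledge editing as constrained decoding via depth-first
# search. At each step, either keep the model's original reasoning step
# or substitute an edited fact, subject to a coherence constraint, and
# explore higher-scoring substitutions first (greedy ordering).
# All data and scoring below are illustrative, not from the paper.

from typing import List, Tuple

# A toy multi-hop chain a model might produce before editing.
ORIGINAL_STEPS = [
    "The Eiffel Tower is in Paris.",
    "Paris is the capital of France.",
    "So the Eiffel Tower is in a national capital.",
]

# New knowledge the edited reasoning should respect (hypothetical edits).
EDITED_FACTS = {
    "The Eiffel Tower is in Paris.": "The Eiffel Tower is in Rome.",
    "Paris is the capital of France.": "Rome is the capital of Italy.",
}


def coherent(step: str, path: List[str]) -> bool:
    """Toy decoding constraint: once a step mentions Rome, a following
    step may not reintroduce Paris, keeping the chain self-consistent."""
    if path and "Rome" in path[-1] and "Paris" in step:
        return False
    return True


def score(path: List[str]) -> float:
    """Toy objective: prefer paths that incorporate more edited facts."""
    return sum(1.0 for s in path if s in EDITED_FACTS.values())


def dfs(i: int, path: List[str]) -> Tuple[float, List[str]]:
    """Depth-first search over keep-vs-substitute choices per step,
    returning the best-scoring path that satisfies the constraint."""
    if i == len(ORIGINAL_STEPS):
        return score(path), path
    best: Tuple[float, List[str]] = (float("-inf"), path)
    original = ORIGINAL_STEPS[i]
    candidates = {original, EDITED_FACTS.get(original, original)}
    # Greedy ordering: expand higher-scoring candidates first.
    for cand in sorted(candidates, key=lambda s: score(path + [s]), reverse=True):
        if coherent(cand, path):
            best = max(best, dfs(i + 1, path + [cand]), key=lambda t: t[0])
    return best


if __name__ == "__main__":
    _, best_path = dfs(0, [])
    for step in best_path:
        print(step)
```

Ordering candidates by score before recursing is what makes the search greedy under this toy metric: the first fully expanded path is the highest-scoring one, mirroring the abstract's "greedily seeking the optimal path" of multi-hop reasoning.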

