April 2, 2024, 7:43 p.m. | Keyuan Cheng, Gang Lin, Haoyang Fei, Yuxuan Zhai, Lu Yu, Muhammad Asif Ali, Lijie Hu, Di Wang

cs.LG updates on arXiv.org

arXiv:2404.00492v1 Announce Type: cross
Abstract: Multi-hop question answering (MQA) under knowledge editing (KE) has garnered significant attention in the era of large language models. However, existing models for MQA under KE exhibit poor performance when dealing with questions containing explicit temporal contexts. To address this limitation, we propose a novel framework, namely TEMPoral knowLEdge augmented Multi-hop Question Answering (TEMPLE-MQA). Unlike previous methods, TEMPLE-MQA first constructs a time-aware graph (TAG) to store edit knowledge in a structured manner. Then, through our …
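The abstract describes storing edit knowledge in a time-aware graph (TAG) so that edited facts can be retrieved under an explicit temporal context. The paper's own construction is not shown here, but the core idea can be sketched as a graph whose edges are facts scoped to validity intervals; all class and method names below (`TemporalEdit`, `TimeAwareGraph`, `lookup`) are illustrative assumptions, not the authors' API.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class TemporalEdit:
    """One edited fact, valid over an inclusive time interval (assumed representation)."""
    subject: str
    relation: str
    obj: str
    start: int  # first year the fact holds
    end: int    # last year the fact holds


class TimeAwareGraph:
    """Minimal sketch of a time-aware edit store: edges keyed by (subject, relation)."""

    def __init__(self):
        self.edges = {}  # (subject, relation) -> list[TemporalEdit]

    def add_edit(self, edit: TemporalEdit) -> None:
        self.edges.setdefault((edit.subject, edit.relation), []).append(edit)

    def lookup(self, subject: str, relation: str, year: int):
        # Return the object valid at `year`, or None if no edit covers that time.
        for e in self.edges.get((subject, relation), []):
            if e.start <= year <= e.end:
                return e.obj
        return None


# Usage: two edits for the same relation, disambiguated by the question's time context.
g = TimeAwareGraph()
g.add_edit(TemporalEdit("UK", "head_of_government", "Boris Johnson", 2019, 2021))
g.add_edit(TemporalEdit("UK", "head_of_government", "Rishi Sunak", 2022, 2024))
print(g.lookup("UK", "head_of_government", 2023))  # -> Rishi Sunak
```

A multi-hop query would chain such lookups, carrying the question's temporal context through each hop; the paper's actual retrieval over the TAG is more involved than this sketch.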

