March 5, 2024, 2:41 p.m. | Luyao Wang, Pengnian Qi, Xigang Bao, Chunlai Zhou, Biao Qin

cs.LG updates on arXiv.org

arXiv:2403.01203v1 Announce Type: new
Abstract: Multi-modal entity alignment (MMEA) aims to identify equivalent entities between two multi-modal knowledge graphs for integration. Unfortunately, prior work has focused on improving the interaction and fusion of multi-modal information while overlooking both the influence of modal-specific noise and the use of labeled and unlabeled data in semi-supervised settings. In this work, we introduce Pseudo-label Calibration Multi-modal Entity Alignment (PCMEA), a semi-supervised approach. Specifically, in order to generate holistic entity representations, we first …
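The abstract's core idea, selecting reliable pseudo-aligned entity pairs from unlabeled data, can be illustrated with a minimal sketch. The code below is an assumption-laden toy, not the paper's method (the abstract does not detail the calibration procedure): it takes a cross-graph similarity matrix and keeps only mutual nearest neighbors above a confidence threshold as pseudo-labels, a common heuristic in semi-supervised entity alignment.

```python
import numpy as np

def select_pseudo_labels(sim_matrix, threshold=0.8):
    """Illustrative sketch only: pick high-confidence pseudo-aligned
    pairs from a KG1-by-KG2 similarity matrix (values assumed in [0, 1]).

    Returns (i, j, score) triples where entity i in KG1 and entity j
    in KG2 are mutual nearest neighbors with score >= threshold.
    """
    best_for_rows = sim_matrix.argmax(axis=1)  # best KG2 match per KG1 entity
    best_for_cols = sim_matrix.argmax(axis=0)  # best KG1 match per KG2 entity
    pairs = []
    for i, j in enumerate(best_for_rows):
        score = float(sim_matrix[i, j])
        # keep only mutual nearest neighbors above the confidence threshold
        if best_for_cols[j] == i and score >= threshold:
            pairs.append((i, int(j), score))
    return pairs

# toy example: 3 entities per graph; the third pair is mutual but
# below the threshold, so it stays unlabeled
sim = np.array([
    [0.95, 0.10, 0.05],
    [0.20, 0.85, 0.30],
    [0.05, 0.40, 0.60],
])
print(select_pseudo_labels(sim))  # → [(0, 0, 0.95), (1, 1, 0.85)]
```

In a full semi-supervised loop, such pseudo-labels would augment the labeled seed alignments for the next training round; the threshold trades off pseudo-label coverage against noise.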

