Sparse multimodal fusion with modal channel attention
April 1, 2024, 4:42 a.m. | Josiah Bjorgaard
cs.LG updates on arXiv.org
Abstract: The ability of masked multimodal transformer architectures to learn a robust embedding space when modality samples are sparsely aligned is studied by measuring the quality of the generated embedding spaces as a function of modal sparsity. An extension to the masked multimodal transformer model, called modal channel attention (MCA), is proposed, which incorporates modal-incomplete channels into the multihead attention mechanism. Two datasets with four modalities are used: CMU-MOSEI for multimodal sentiment recognition and TCGA for multiomics. …
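The core idea of attending over modal-incomplete inputs can be illustrated with a masked attention mechanism in which tokens from absent modalities are excluded as keys and values, so each embedding is built only from observed modalities. The sketch below is a minimal single-head illustration of this general masking scheme, not the paper's exact MCA formulation; the function name, random projections, and four-modality layout are assumptions for demonstration.

```python
import numpy as np

def masked_modal_attention(x, modality_ids, present, d_k=None):
    """Single-head attention over concatenated modality tokens.

    x            : (T, D) token embeddings, all modalities concatenated
    modality_ids : (T,) modality index of each token
    present      : (M,) boolean mask, False for a missing modality
    Tokens belonging to absent modalities are masked out as keys and
    values, so outputs depend only on observed modalities.
    """
    T, D = x.shape
    d_k = d_k or D
    rng = np.random.default_rng(0)
    # Illustrative fixed random projections; a real model learns these.
    Wq, Wk, Wv = (rng.standard_normal((D, d_k)) / np.sqrt(D) for _ in range(3))
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(d_k)        # (T, T) attention logits
    key_mask = present[modality_ids]       # (T,) True if token is observed
    scores[:, ~key_mask] = -np.inf         # hide missing-modality keys
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)     # softmax over observed keys only
    return w @ v
```

Because the masked keys receive zero attention weight, the outputs for observed-modality tokens are invariant to whatever placeholder values occupy the missing-modality slots, which is the property a modal-incomplete fusion mechanism needs.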