Revisiting Multi-modal Emotion Learning with Broad State Space Models and Probability-guidance Fusion
April 30, 2024, 4:50 a.m. | Yuntao Shou, Tao Meng, Fuchen Zhang, Nan Yin, Keqin Li
cs.CL updates on arXiv.org arxiv.org
Abstract: Multi-modal Emotion Recognition in Conversation (MERC) has received considerable attention in various fields, e.g., human-computer interaction and recommendation systems. Most existing works perform feature disentanglement and fusion to extract emotional contextual information from multi-modal features for emotion classification. After revisiting the characteristics of MERC, we argue that long-range contextual semantic information should be extracted in the feature disentanglement stage and that inter-modal semantic information consistency should be maximized in the feature fusion stage. Inspired by …
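The abstract's emphasis on extracting long-range context points to the state space model family named in the title. As a rough illustration only (not the paper's architecture, which is not detailed in the truncated abstract), the core building block of such models is a linear recurrence whose hidden state carries information across arbitrarily long sequences; all shapes and values below are hypothetical:

```python
import numpy as np

def ssm_scan(x, A, B, C):
    """Minimal linear state space recurrence:
        h_t = A @ h_{t-1} + B @ x_t
        y_t = C @ h_t
    x: (T, d_in) sequence of per-utterance feature vectors
    A: (d_state, d_state) state transition matrix
    B: (d_state, d_in)    input projection
    C: (d_out, d_state)   output projection
    Returns y: (T, d_out).
    """
    h = np.zeros(A.shape[0])
    ys = []
    for t in range(x.shape[0]):
        h = A @ h + B @ x[t]  # the state accumulates long-range context
        ys.append(C @ h)
    return np.stack(ys)

# Toy dimensions, purely illustrative
rng = np.random.default_rng(0)
T, d_in, d_state, d_out = 8, 4, 6, 3
x = rng.normal(size=(T, d_in))
A = 0.9 * np.eye(d_state)              # stable transition retains past context
B = 0.1 * rng.normal(size=(d_state, d_in))
C = rng.normal(size=(d_out, d_state))
y = ssm_scan(x, A, B, C)
print(y.shape)  # (8, 3)
```

Because the recurrence is linear, each output depends on every earlier input through powers of `A`, which is what lets state space models capture long-range dependencies at linear cost in sequence length.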