M&M: Multimodal-Multitask Model Integrating Audiovisual Cues in Cognitive Load Assessment
March 15, 2024, 4:45 a.m. | Long Nguyen-Phuoc, Renald Gaboriau, Dimitri Delacroix, Laurent Navarro
cs.CV updates on arXiv.org
Abstract: This paper introduces the M&M model, a novel multimodal-multitask learning framework, applied to the AVCAffe dataset for cognitive load assessment (CLA). M&M uniquely integrates audiovisual cues through a dual-pathway architecture, featuring specialized streams for audio and video inputs. A key innovation lies in its cross-modality multihead attention mechanism, fusing the different modalities for synchronized multitasking. Another notable feature is the model's three specialized branches, each tailored to a specific cognitive load label, enabling nuanced, task-specific …
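The abstract describes a dual-pathway architecture with cross-modality multihead attention and three task-specific branches. Below is a minimal PyTorch sketch of that structure; the class name MMSketch, feature dimensions, pooling choice, and head sizes are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch: dual-pathway streams, cross-modality multihead attention,
# and three task-specific branches. All sizes are assumed for illustration.
import torch
import torch.nn as nn

class MMSketch(nn.Module):
    def __init__(self, audio_dim=128, video_dim=512, d_model=256, n_heads=4, n_tasks=3):
        super().__init__()
        # Specialized streams: project each modality into a shared embedding space.
        self.audio_proj = nn.Linear(audio_dim, d_model)
        self.video_proj = nn.Linear(video_dim, d_model)
        # Cross-modality multihead attention: one modality attends to the other.
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Three branches, one per cognitive load label.
        self.heads = nn.ModuleList(
            [nn.Sequential(nn.Linear(d_model, 64), nn.ReLU(), nn.Linear(64, 1))
             for _ in range(n_tasks)]
        )

    def forward(self, audio_feats, video_feats):
        # audio_feats: (batch, T_a, audio_dim); video_feats: (batch, T_v, video_dim)
        a = self.audio_proj(audio_feats)
        v = self.video_proj(video_feats)
        # Fuse modalities: video queries attend over audio keys/values.
        fused, _ = self.cross_attn(query=v, key=a, value=a)
        pooled = fused.mean(dim=1)  # temporal average pooling
        # One prediction per cognitive load task.
        return [head(pooled) for head in self.heads]

model = MMSketch()
audio = torch.randn(2, 50, 128)   # dummy audio-feature sequence
video = torch.randn(2, 30, 512)   # dummy video-feature sequence
outputs = model(audio, video)     # list of three task-specific predictions
```

The direction of attention (video attending to audio) and the mean-pooling step are arbitrary choices for the sketch; the paper may fuse the streams differently.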