April 15, 2024, 4:45 a.m. | Linhuang Wang, Xin Kang, Fei Ding, Satoshi Nakagawa, Fuji Ren

cs.CV updates on arXiv.org arxiv.org

arXiv:2404.08433v1 Announce Type: new
Abstract: Unlike typical video action recognition, Dynamic Facial Expression Recognition (DFER) does not involve distinct moving targets but relies on localized changes in facial muscles. Addressing this distinctive attribute, we propose a Multi-Scale Spatio-temporal CNN-Transformer network (MSSTNet). Our approach takes spatial features of different scales extracted by CNN and feeds them into a Multi-scale Embedding Layer (MELayer). The MELayer extracts multi-scale spatial information and encodes these features before sending them into a Temporal Transformer (T-Former). The …
