May 8, 2023, 12:46 a.m. | Ajian Liu, Zichang Tan, Zitong Yu, Chenxu Zhao, Jun Wan, Yanyan Liang, Zhen Lei, Du Zhang, Stan Z. Li, Guodong Guo

cs.CV updates on arXiv.org

The availability of handy multi-modal (i.e., RGB-D) sensors has brought about
a surge of face anti-spoofing research. However, current multi-modal face
presentation attack detection (PAD) methods have two defects: (1) frameworks
based on multi-modal fusion require the test-time modalities to be consistent
with the training input, which seriously limits the deployment scenarios;
(2) the performance of ConvNet-based models on high-fidelity datasets is
increasingly limited. In this work, we present a pure transformer-based
framework, dubbed the Flexible Modal Vision Transformer (FM-ViT), for …
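
Since the abstract is truncated and only names the framework, the following is a rough, hypothetical PyTorch sketch of the general "flexible modal" idea it motivates: one patch embedder per modality feeding a shared transformer encoder, so that any single available modality can be used at test time instead of the full fused input seen during training. This is not the authors' FM-ViT design; all module names, dimensions, and the token-concatenation fusion are assumptions for illustration only.

# Hypothetical sketch of a flexible-modal ViT for face PAD (not the paper's FM-ViT).
import torch
import torch.nn as nn


class PatchEmbed(nn.Module):
    """Split an image into non-overlapping patches and project them to tokens."""

    def __init__(self, img_size=224, patch_size=16, in_chans=3, dim=384):
        super().__init__()
        self.proj = nn.Conv2d(in_chans, dim, kernel_size=patch_size, stride=patch_size)
        self.num_patches = (img_size // patch_size) ** 2

    def forward(self, x):
        # (B, C, H, W) -> (B, N, dim)
        return self.proj(x).flatten(2).transpose(1, 2)


class FlexibleModalViT(nn.Module):
    """One patch embedder per modality, one shared transformer encoder.

    At inference, any single available modality can be passed on its own,
    so deployment does not need the modality set used during training.
    """

    def __init__(self, modalities=("rgb", "depth", "ir"), dim=384, depth=6, heads=6):
        super().__init__()
        self.embeds = nn.ModuleDict({m: PatchEmbed(dim=dim) for m in modalities})
        num_patches = next(iter(self.embeds.values())).num_patches
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, dim))
        layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, dim_feedforward=4 * dim, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, 2)  # live vs. spoof

    def forward(self, inputs: dict):
        # `inputs` maps modality name -> tensor; only the provided modalities
        # are tokenized, so the model runs with any subset of them.
        tokens = [self.embeds[m](x) for m, x in inputs.items()]
        x = torch.cat(tokens, dim=1) if len(tokens) > 1 else tokens[0]
        cls = self.cls_token.expand(x.size(0), -1, -1)
        x = torch.cat([cls, x], dim=1)
        # The positional embedding covers one modality's patches; for
        # multi-modal input it is simply tiled per modality in this sketch.
        pos = self.pos_embed[:, 1:].repeat(1, len(tokens), 1)
        x = x + torch.cat([self.pos_embed[:, :1], pos], dim=1)
        x = self.encoder(x)
        return self.head(x[:, 0])


if __name__ == "__main__":
    model = FlexibleModalViT()
    rgb = torch.randn(2, 3, 224, 224)
    depth = torch.randn(2, 3, 224, 224)
    print(model({"rgb": rgb, "depth": depth}).shape)  # fused multi-modal input
    print(model({"depth": depth}).shape)              # single modality at test time

The key design point illustrated here is that the classifier head only ever sees the [CLS] token produced by the shared encoder, so the number and type of modality tokens fed in can vary between training and deployment.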
