March 21, 2024, 4:46 a.m. | Diwei Wang, Kun Yuan, Candice Muller, Frédéric Blanc, Nicolas Padoy, Hyewon Seo

cs.CV updates on arXiv.org

arXiv:2403.13756v1 Announce Type: new
Abstract: We present a knowledge augmentation strategy for assessing the diagnostic groups and gait impairment from monocular gait videos. Based on a large-scale pre-trained Vision Language Model (VLM), our model learns and improves visual, textual, and numerical representations of patient gait videos, through a collective learning across three distinct modalities: gait videos, class-specific descriptions, and numerical gait parameters. Our specific contributions are two-fold: First, we adopt a knowledge-aware prompt tuning strategy to utilize the class-specific medical …
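To make the three-modality setup more concrete, here is a minimal sketch (not the authors' code, which is not shown in this announcement) of how learnable prompt tokens could be fused with video features from a frozen VLM backbone, embeddings of class-specific descriptions, and numerical gait parameters. All module names, dimensions, and the fusion scheme are illustrative assumptions.

```python
# Hypothetical sketch of knowledge-aware prompt tuning over three modalities.
# Dimensions, layer choices, and fusion design are assumptions, not the paper's method.
import torch
import torch.nn as nn


class KnowledgePromptClassifier(nn.Module):
    """Fuses video features, class-description embeddings, and numerical
    gait parameters through learnable prompt tokens (illustrative design)."""

    def __init__(self, video_dim=512, text_dim=512, num_params=8,
                 num_classes=4, prompt_len=4, embed_dim=256):
        super().__init__()
        # Learnable prompt tokens prepended to the fused token sequence.
        self.prompts = nn.Parameter(torch.randn(prompt_len, embed_dim) * 0.02)
        # Project each modality into a shared embedding space.
        self.video_proj = nn.Linear(video_dim, embed_dim)
        self.text_proj = nn.Linear(text_dim, embed_dim)
        self.param_proj = nn.Linear(num_params, embed_dim)
        # A small transformer encoder mixes prompts and modality tokens.
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, video_feat, class_desc_emb, gait_params):
        # video_feat: (B, video_dim) pooled features from a frozen VLM backbone
        # class_desc_emb: (B, text_dim) embedding of a class-specific description
        # gait_params: (B, num_params) numerical gait parameters
        tokens = torch.stack([self.video_proj(video_feat),
                              self.text_proj(class_desc_emb),
                              self.param_proj(gait_params)], dim=1)
        prompts = self.prompts.unsqueeze(0).expand(tokens.size(0), -1, -1)
        fused = self.encoder(torch.cat([prompts, tokens], dim=1))
        # Classify diagnostic group from the first prompt token's representation.
        return self.head(fused[:, 0])


if __name__ == "__main__":
    model = KnowledgePromptClassifier()
    logits = model(torch.randn(2, 512), torch.randn(2, 512), torch.randn(2, 8))
    print(logits.shape)  # torch.Size([2, 4])
```

In this sketch only the prompt tokens, projections, encoder, and head would be trained, mirroring the prompt-tuning idea of adapting a frozen pre-trained model with a small number of learnable parameters.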

