MoE-TinyMed: Mixture of Experts for Tiny Medical Large Vision-Language Models
April 17, 2024, 4:46 a.m. | Songtao Jiang, Tuo Zheng, Yan Zhang, Yeying Jin, Zuozhu Liu
cs.CL updates on arXiv.org (arxiv.org)
Abstract: Mixture of Expert Tuning (MoE-Tuning) has effectively enhanced the performance of general MLLMs with fewer parameters, yet its application in resource-limited medical settings has not been fully explored. To address this gap, we developed MoE-TinyMed, a model tailored for medical applications that significantly lowers parameter demands. In evaluations on the VQA-RAD, SLAKE, and Path-VQA datasets, MoE-TinyMed outperformed LLaVA-Med in all Med-VQA closed settings with just 3.6B parameters. Additionally, a streamlined version with 2B parameters surpassed …
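The abstract refers to MoE-Tuning, in which dense feed-forward blocks are replaced by sparsely routed experts so that only a small fraction of parameters is active for each token. Below is a minimal sketch of such a sparse MoE layer in PyTorch; the module name, dimensions, expert count, and top-2 routing are illustrative assumptions, not the MoE-TinyMed implementation described in the paper.

```python
# Minimal sketch of a sparse Mixture-of-Experts feed-forward layer.
# NOT the authors' MoE-TinyMed code; sizes, expert count, and the
# top-2 routing below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseMoEFFN(nn.Module):
    def __init__(self, d_model=512, d_hidden=2048, n_experts=4, top_k=2):
        super().__init__()
        self.top_k = top_k
        # Router scores each token against every expert.
        self.router = nn.Linear(d_model, n_experts)
        # Each expert is an ordinary two-layer feed-forward block.
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, d_hidden),
                nn.GELU(),
                nn.Linear(d_hidden, d_model),
            )
            for _ in range(n_experts)
        )

    def forward(self, x):  # x: (batch, seq, d_model)
        logits = self.router(x)                         # (B, S, n_experts)
        weights, idx = logits.topk(self.top_k, dim=-1)  # keep top-k experts per token
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[..., k] == e                 # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[..., k][mask].unsqueeze(-1) * expert(x[mask])
        return out


if __name__ == "__main__":
    layer = SparseMoEFFN()
    tokens = torch.randn(2, 16, 512)
    print(layer(tokens).shape)  # torch.Size([2, 16, 512])
```

Because each token activates only top_k of the experts, the active parameter count per forward pass stays small even as total capacity grows, which is the property the paper exploits to keep the medical model at a few billion parameters.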