Web: http://arxiv.org/abs/2209.07118

Sept. 16, 2022, 1:16 a.m. | Zhihong Chen, Guanbin Li, Xiang Wan

cs.CL updates on arXiv.org

Medical vision-and-language pre-training (Med-VLP) has received considerable
attention owing to its ability to extract generic vision-and-language
representations from medical images and texts. Most existing methods comprise
three elements: uni-modal encoders (i.e., a vision encoder and a language
encoder), a multi-modal fusion module, and pretext tasks, with few studies
considering the importance of medical domain expert knowledge or explicitly
exploiting such knowledge to facilitate Med-VLP. Although knowledge-enhanced
vision-and-language pre-training (VLP) methods exist in the general domain,
most require off-the-shelf …
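The three-element structure the abstract describes can be sketched in plain Python. This is a minimal illustrative skeleton, not the authors' implementation: the class names, the toy feature extractors, and concatenation-based fusion are all assumptions standing in for real neural encoders and a learned fusion module.

```python
# Minimal sketch of the three-element Med-VLP structure: two uni-modal
# encoders, a multi-modal fusion module, and (implicitly) a head that a
# pretext task would be attached to. All components here are toy stand-ins.

class VisionEncoder:
    def encode(self, image):
        # Toy uni-modal vision encoder: summarize pixel values into a
        # fixed-size feature vector (a real model would be a CNN/ViT).
        return [sum(image) / len(image), max(image), min(image)]


class LanguageEncoder:
    def encode(self, tokens):
        # Toy uni-modal language encoder: summarize token ids into a
        # fixed-size feature vector (a real model would be a Transformer).
        return [sum(tokens) / len(tokens), float(len(tokens)), float(tokens[0])]


class FusionModule:
    def fuse(self, v_feat, l_feat):
        # Toy multi-modal fusion: concatenate the two uni-modal
        # representations (a real module might use cross-attention).
        return v_feat + l_feat


class MedVLP:
    """Wires the three elements together into one forward pass."""

    def __init__(self):
        self.vision = VisionEncoder()
        self.language = LanguageEncoder()
        self.fusion = FusionModule()

    def forward(self, image, tokens):
        v = self.vision.encode(image)
        l = self.language.encode(tokens)
        return self.fusion.fuse(v, l)


model = MedVLP()
joint = model.forward([0.1, 0.9, 0.5], [3, 14, 15])
print(len(joint))  # joint representation = 3 vision dims + 3 language dims
```

A pretext task (e.g., masked language modeling or image-text matching) would then consume `joint` during pre-training; the knowledge-enhanced methods the abstract contrasts with would additionally inject external domain knowledge into one of these three stages.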

