March 14, 2024, 4:45 a.m. | Zijian Wu, Adam Schmidt, Peter Kazanzides, Septimiu E. Salcudean

cs.CV updates on arXiv.org

arXiv:2403.08003v1 Announce Type: new
Abstract: The Segment Anything Model (SAM) is a powerful vision foundation model that is revolutionizing the traditional segmentation paradigm. Despite this, its reliance on a prompt for each frame and its large computational cost limit its usage in robotically assisted surgery. Applications such as augmented reality guidance require minimal user intervention along with efficient inference to be clinically usable. In this study, we address these limitations by adopting lightweight SAM variants to meet the speed requirement and employing …
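The abstract's key limitation, needing a prompt on every frame, is commonly sidestepped in video by propagating the prompt automatically. The sketch below is purely illustrative (the paper's actual model and tracker are not shown in the truncated abstract): `predict_mask` is a hypothetical stand-in for a lightweight SAM variant's point-prompted inference, and the loop reuses the previous mask's centroid as the next frame's prompt so the user only clicks once.

```python
import numpy as np

def predict_mask(frame, point):
    """Hypothetical stand-in for point-prompted segmentation by a
    lightweight SAM variant. Here it just returns a disc of fixed
    radius around the prompt point, so the example is self-contained."""
    h, w = frame.shape[:2]
    yy, xx = np.ogrid[:h, :w]
    return (xx - point[0]) ** 2 + (yy - point[1]) ** 2 <= 10 ** 2

def track(frames, init_point):
    """Prompt only the first frame; for later frames, reuse the
    previous mask's centroid as the new point prompt."""
    point, masks = init_point, []
    for frame in frames:
        mask = predict_mask(frame, point)
        ys, xs = np.nonzero(mask)
        point = (int(xs.mean()), int(ys.mean()))  # propagate the prompt
        masks.append(mask)
    return masks

# Five dummy grayscale frames; one user click at (32, 32) on frame 0.
frames = [np.zeros((64, 64), dtype=np.uint8) for _ in range(5)]
masks = track(frames, init_point=(32, 32))
```

This is the general prompt-propagation idea only; the study's own tracking mechanism may differ entirely from a centroid heuristic.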

