March 14, 2024, 4:45 a.m. | Zijian Wu, Adam Schmidt, Peter Kazanzides, Septimiu E. Salcudean

cs.CV updates on arXiv.org

arXiv:2403.08003v1 Announce Type: new
Abstract: The Segment Anything Model (SAM) is a powerful vision foundation model that is revolutionizing the traditional segmentation paradigm. Despite this, its reliance on prompting each frame and its large computational cost limit its usage in robotically assisted surgery. Applications such as augmented reality guidance require minimal user intervention along with efficient inference to be clinically usable. In this study, we address these limitations by adopting lightweight SAM variants to meet the speed requirement and employing …
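For context, below is a minimal sketch of the per-frame prompting workflow that the abstract identifies as the bottleneck, written against the reference segment_anything predictor API. The checkpoint filename, model key ("vit_b"), and the single foreground point prompt are illustrative assumptions; lightweight variants such as MobileSAM generally expose a compatible predictor interface, so only the registry key and checkpoint would change.

```python
# Hypothetical sketch of per-frame prompting with a SAM-style predictor.
# Checkpoint path, "vit_b" key, and the prompt point are placeholders.
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")  # assumed local checkpoint
predictor = SamPredictor(sam)

def segment_frame(frame: np.ndarray, point_xy: tuple[int, int]) -> np.ndarray:
    """Segment one RGB frame (H x W x 3, uint8) given a single foreground point prompt."""
    predictor.set_image(frame)                 # heavy step: the image encoder runs once per frame
    masks, scores, _ = predictor.predict(
        point_coords=np.array([point_xy]),     # (1, 2) pixel coordinates of the prompt
        point_labels=np.array([1]),            # 1 = foreground point
        multimask_output=False,
    )
    return masks[0]                            # boolean mask with the same H x W as the frame
```

Because the image encoder runs for every frame and a fresh prompt is needed each time, this loop is what dominates latency and user effort in video; swapping in a lighter encoder is the speed-oriented part of the approach the abstract describes.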
