Zero-Shot Segmentation of Eye Features Using the Segment Anything Model (SAM)
April 9, 2024, 4:48 a.m. | Virmarie Maquiling, Sean Anthony Byrne, Diederick C. Niehorster, Marcus Nyström, Enkelejda Kasneci
cs.CV updates on arXiv.org (arxiv.org)
Abstract: The advent of foundation models signals a new era in artificial intelligence. The Segment Anything Model (SAM) is the first foundation model for image segmentation. In this study, we evaluate SAM's ability to segment features from eye images recorded in virtual reality setups. The increasing requirement for annotated eye-image datasets presents a significant opportunity for SAM to redefine the landscape of data annotation in gaze estimation. Our investigation centers on SAM's zero-shot learning abilities and …
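The abstract describes evaluating SAM's zero-shot masks against annotated eye images; such evaluations typically reduce to per-feature Intersection-over-Union (IoU) between a predicted mask and a ground-truth annotation. A minimal NumPy sketch of that metric (the tiny masks below are hypothetical stand-ins, not the paper's data):

```python
import numpy as np

def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection-over-Union between two boolean segmentation masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(np.logical_and(pred, gt).sum() / union)

# Hypothetical 4x4 masks standing in for a SAM output and an annotation
pred = np.zeros((4, 4), dtype=bool)
pred[1:3, 1:3] = True   # 4 predicted pixels
gt = np.zeros((4, 4), dtype=bool)
gt[1:3, 1:4] = True     # 6 annotated pixels

print(mask_iou(pred, gt))  # intersection 4 px, union 6 px -> ~0.667
```

Scores like this, averaged per eye feature (pupil, iris, sclera), are the usual way a zero-shot segmenter's output is compared against human annotations.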