April 9, 2024, 4:48 a.m. | Virmarie Maquiling, Sean Anthony Byrne, Diederick C. Niehorster, Marcus Nyström, Enkelejda Kasneci

cs.CV updates on arXiv.org

arXiv:2311.08077v2 Announce Type: replace
Abstract: The advent of foundation models signals a new era in artificial intelligence. The Segment Anything Model (SAM) is the first foundation model for image segmentation. In this study, we evaluate SAM's ability to segment features from eye images recorded in virtual reality setups. The increasing requirement for annotated eye-image datasets presents a significant opportunity for SAM to redefine the landscape of data annotation in gaze estimation. Our investigation centers on SAM's zero-shot learning abilities and …
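For readers unfamiliar with how SAM is queried in a zero-shot setting, the sketch below shows the standard point-prompt workflow from Meta's segment-anything package applied to an eye image. It is a generic illustration under assumed inputs (the checkpoint path, the eye image file, and the pupil-center prompt coordinates are placeholders), not the evaluation pipeline used in the paper.

```python
# Minimal sketch: zero-shot segmentation of an eye image with SAM point prompts.
# Assumes the segment-anything package and a downloaded ViT-H checkpoint;
# the image path and prompt coordinates below are hypothetical examples.
import numpy as np
import cv2
from segment_anything import sam_model_registry, SamPredictor

# Load the pretrained SAM model (no fine-tuning, i.e. zero-shot use).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

# Load an eye image recorded in a VR setup (placeholder filename).
image = cv2.cvtColor(cv2.imread("eye_frame.png"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# A single foreground point roughly on the pupil serves as the prompt.
point_coords = np.array([[320, 240]])  # (x, y), hypothetical pupil center
point_labels = np.array([1])           # 1 = foreground

masks, scores, _ = predictor.predict(
    point_coords=point_coords,
    point_labels=point_labels,
    multimask_output=True,
)

# Keep the highest-scoring candidate mask as the pupil segmentation.
best_mask = masks[np.argmax(scores)]
```

In practice, richer prompts (multiple points, bounding boxes) can be supplied the same way, which is the kind of prompt variation a zero-shot evaluation would compare.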

