Promoting Segment Anything Model towards Highly Accurate Dichotomous Image Segmentation
March 25, 2024, 4:45 a.m. | Xianjie Liu, Keren Fu, Qijun Zhao
cs.CV updates on arXiv.org arxiv.org
Abstract: The Segment Anything Model (SAM) represents a significant breakthrough in foundation models for computer vision, providing a large-scale image segmentation model. However, despite SAM's strong zero-shot performance, its segmentation masks lack fine-grained detail, particularly in accurately delineating object boundaries. This raises the question of whether SAM, as a foundation model, can be advanced towards highly accurate object segmentation, a task known as dichotomous image segmentation (DIS). To address this issue, we propose DIS-SAM, which advances SAM …
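DIS benchmarks stress boundary fidelity because region-overlap metrics such as IoU largely hide thin boundary errors. As a minimal illustration (plain NumPy, not code from the paper; the function name `mask_iou` is my own), two masks that differ only by a boundary ring can still score a high IoU:

```python
import numpy as np

def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection-over-union of two binary segmentation masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    # Both masks empty: define IoU as perfect agreement.
    return float(inter) / float(union) if union else 1.0

# Ground truth: a 10x10 square object in a 12x12 image.
gt = np.zeros((12, 12), dtype=bool)
gt[1:11, 1:11] = True

# Prediction: the same object eroded by one pixel on every side,
# i.e. the entire one-pixel boundary ring is missed.
pred = np.zeros((12, 12), dtype=bool)
pred[2:10, 2:10] = True

print(round(mask_iou(pred, gt), 2))  # 0.64 despite the whole boundary being wrong
```

A coarse mask that misses every boundary pixel here still reaches 0.64 IoU, which is why DIS evaluation leans on boundary-sensitive measures rather than region overlap alone.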