Feb. 23, 2024, 5:45 a.m. | Shuang Chen, Amir Atapour-Abarghouei, Hubert P. H. Shum

cs.CV updates on arXiv.org

arXiv:2402.14185v1 Announce Type: new
Abstract: Existing image inpainting methods leverage convolution-based downsampling to reduce spatial dimensions. This can cause information loss in corrupted images, where the available information is inherently sparse, particularly when the missing regions are large. Recent advances in self-attention mechanisms within transformers have led to significant improvements in many computer vision tasks, including inpainting. However, limited by computational cost, existing methods cannot fully exploit the long-range modelling capabilities of such models. …
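The computational-cost argument in the abstract can be made concrete with a back-of-the-envelope sketch (an illustration only, not the paper's method): full self-attention over an H x W feature map compares every spatial location with every other, so its cost grows quadratically with the token count N = H * W, which is why prior methods downsample despite the resulting information loss.

```python
def attention_cost(h: int, w: int) -> int:
    """Number of pairwise token interactions for full self-attention
    over an h x w feature map (N = h * w tokens, cost ~ N^2)."""
    n = h * w
    return n * n

# Full 256x256 resolution vs. a 4x-downsampled 64x64 feature map:
full = attention_cost(256, 256)   # 65536^2 interactions
down = attention_cost(64, 64)     # 4096^2 interactions
print(full // down)  # → 256, i.e. a 256x cost reduction from downsampling
```

Downsampling by a factor of s shrinks the token count by s^2 and the attention cost by s^4, which is the tension the abstract describes between tractability and preserving sparse information.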

