Web: http://arxiv.org/abs/2205.05076

May 11, 2022, 1:10 a.m. | Qiankun Liu, Zhentao Tan, Dongdong Chen, Qi Chu, Xiyang Dai, Yinpeng Chen, Mengchen Liu, Lu Yuan, Nenghai Yu

cs.CV updates on arXiv.org

Transformers have recently achieved great success in pluralistic image inpainting.
However, we find that existing transformer-based solutions regard each pixel as a
token and thus suffer from information loss in two respects: 1) they downsample the
input image to a much lower resolution for efficiency, incurring information loss and
extra misalignment at the boundaries of masked regions; 2) they quantize the $256^3$
possible RGB values to a small number (such as 512) of quantized pixels. The indices
of quantized pixels are used …
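As a rough illustration of the second point, the sketch below (not the authors' code; the random codebook, array sizes, and helper names are placeholders chosen for illustration) maps each RGB pixel to the index of its nearest entry in a 512-entry codebook and reconstructs the image from those indices, which is inherently lossy since $256^3$ colors collapse onto 512 representatives.

```python
import numpy as np

# Illustrative only: a fixed random codebook of 512 "quantized pixels".
rng = np.random.default_rng(0)
codebook = rng.integers(0, 256, size=(512, 3))            # (512, 3) RGB entries

def quantize(image):
    """Map each RGB pixel to the index of its nearest codebook entry."""
    flat = image.reshape(-1, 3).astype(np.int64)           # (H*W, 3)
    # Squared Euclidean distance from every pixel to every codebook entry.
    dists = ((flat[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)   # (H*W, 512)
    return dists.argmin(axis=1).reshape(image.shape[:2])   # one token per pixel

def dequantize(indices):
    """Reconstruct an RGB image from codebook indices (lossy)."""
    return codebook[indices].astype(np.uint8)

image = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
tokens = quantize(image)        # 256^3 possible colors -> 512 indices
recon = dequantize(tokens)      # information lost relative to `image`
```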

