April 18, 2024, 4:44 a.m. | Cansu Korkmaz, A. Murat Tekalp

cs.CV updates on arXiv.org

arXiv:2404.11273v1 Announce Type: cross
Abstract: Transformer-based models have achieved remarkable results in low-level vision tasks, including image super-resolution (SR). However, early Transformer-based approaches that rely on self-attention within non-overlapping windows struggle to acquire global information. To activate more input pixels globally, hybrid attention models have been proposed. Moreover, training by solely minimizing pixel-wise RGB losses, such as L1, has been found inadequate for capturing essential high-frequency details. This paper presents two contributions: i) We introduce convolutional non-local sparse attention …
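The abstract's point about pixel-wise L1 losses missing high-frequency detail can be illustrated with a toy example. The sketch below (a hypothetical illustration, not the paper's actual loss) applies a one-level Haar transform to a 1-D signal and compares the detail-band coefficients of a sharp target against an oversmoothed reconstruction: the blurred output has a modest pixel-wise L1 error, yet its high-frequency (detail) coefficients visibly shrink, which a wavelet-domain loss penalizes directly.

```python
# Hypothetical sketch: pixel-wise L1 vs an L1 penalty on Haar detail
# coefficients. Names (haar_1d, l1, target, blurred) are illustrative.

def haar_1d(x):
    """One-level Haar transform: (approximation, detail) coefficients."""
    approx = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return approx, detail

def l1(a, b):
    """Mean absolute error between two equal-length sequences."""
    return sum(abs(u - v) for u, v in zip(a, b)) / len(a)

# A sharp edge and an oversmoothed reconstruction of it.
target = [0, 0, 0, 1, 1, 1, 1, 1]
blurred = [0, 0.1, 0.3, 0.6, 0.9, 1, 1, 1]  # edge smeared by the SR model

pixel_loss = l1(target, blurred)            # small pixel-wise RGB-style loss

ta, td = haar_1d(target)
ba, bd = haar_1d(blurred)
detail_loss = l1(td, bd)                    # loss on the high-frequency band

# The blurred signal has lost detail-band energy relative to the target,
# even though its pixel-wise error is modest.
print(pixel_loss, detail_loss)
print(sum(abs(d) for d in td), sum(abs(d) for d in bd))
```

In practice such losses are computed per subband on 2-D wavelet decompositions of the network output and the ground truth, but the mechanism is the same: the detail coefficients isolate exactly the high-frequency content that pixel-wise averaging tends to wash out.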

