Web: http://arxiv.org/abs/2206.11476

June 24, 2022, 1:12 a.m. | Xia Hua, Junxiong Fei, Mingxin Li, ZeZheng Li, Yu Shi, JiangGuo Liu, Hanyu Hong

cs.CV updates on arXiv.org arxiv.org

Deep convolutional neural networks (CNNs) using attention mechanisms have
achieved great success in dynamic scene deblurring. In most of these networks,
only the features refined by the attention maps are passed to the next layer,
and the attention maps of different layers are separated from each other, so
the attention information from different layers in the CNN is not fully
exploited. To address this problem, we introduce a new continuous cross-layer
attention transmission (CCLAT) mechanism that …
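The abstract is truncated before the details of CCLAT, so the following is only an illustrative NumPy sketch of the general idea it describes: instead of each layer computing an attention map and discarding it, the attention map is transmitted forward and fused with the next layer's map. The `spatial_attention`, `layer`, and fusion weighting here are hypothetical and not taken from the paper.

```python
import numpy as np

def spatial_attention(feat):
    # Toy spatial attention map: sigmoid over the channel-wise mean.
    # (Illustrative only; not the paper's attention module.)
    m = feat.mean(axis=0, keepdims=True)      # (1, H, W)
    return 1.0 / (1.0 + np.exp(-m))          # values in (0, 1)

def layer(feat, prev_attn=None):
    attn = spatial_attention(feat)
    if prev_attn is not None:
        # Cross-layer transmission: fuse the previous layer's attention
        # map with the current one instead of computing each in isolation.
        # The equal 0.5/0.5 weighting is an arbitrary assumption.
        attn = 0.5 * (attn + prev_attn)
    return feat * attn, attn

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16, 16))          # (C, H, W) feature map
out1, a1 = layer(x)                           # first layer: no incoming attention
out2, a2 = layer(out1, prev_attn=a1)          # second layer reuses a1
```

The contrast with the setup the abstract criticizes is the `prev_attn` argument: dropping it recovers the conventional design in which each layer's attention map is used once and then thrown away.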

