Nov. 8, 2022, 2:15 a.m. | Chiyu Zhang, Jun Yang, Lei Wang, Zaiyan Dai

cs.CV updates on arXiv.org

This paper presents a new hierarchical vision Transformer for image style transfer, called Strips Window Attention Transformer (S2WAT), which serves as the encoder of an encoder-transfer-decoder architecture. With hierarchical features, S2WAT can leverage techniques proven in other areas of computer vision, such as feature pyramid networks (FPN) or U-Net, for image style transfer in future work. However, existing window-based Transformers, when applied directly to image style transfer, produce stylized images with grid-like artifacts. To solve …

