April 25, 2024, 7:45 p.m. | Hongyi Cai, Mohammad Mahdinur Rahman, Jingyu Wu, Yulun Deng

cs.CV updates on arXiv.org

arXiv:2404.15451v1 Announce Type: new
Abstract: Feature pyramids have been widely adopted in convolutional neural networks (CNNs) and transformers for tasks such as medical image segmentation and object detection. However, existing models generally concentrate on the encoder-side Transformer for feature extraction, while a well-designed decoder can unlock further gains. We propose CFPFormer, a novel decoder block that integrates feature pyramids and transformers. Specifically, by leveraging patch embedding, cross-layer feature concatenation, and Gaussian attention mechanisms, CFPFormer enhances feature …
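The abstract names three ingredients for the decoder block: patch embedding, cross-layer feature concatenation, and Gaussian attention. Below is a minimal sketch of how such a block might be wired in PyTorch; the class names `GaussianAttention` and `CFPDecoderBlock`, the `sigma` and `patch` parameters, and the residual wiring are illustrative assumptions based only on the components named above, not the authors' released implementation.

```python
# Illustrative sketch only: class names, parameters, and wiring are assumptions
# inferred from the abstract, not the CFPFormer reference code.
import math
import torch
import torch.nn as nn


class GaussianAttention(nn.Module):
    """Self-attention with an additive Gaussian distance bias over token positions."""

    def __init__(self, dim, num_heads=4, sigma=1.0):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.sigma = sigma
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                      # x: (B, N, C)
        B, N, C = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)   # each: (B, heads, N, head_dim)
        attn = (q @ k.transpose(-2, -1)) / math.sqrt(self.head_dim)
        # Gaussian bias: tokens attend more strongly to nearby positions.
        pos = torch.arange(N, device=x.device, dtype=x.dtype)
        dist2 = (pos[None, :] - pos[:, None]) ** 2
        attn = attn - dist2 / (2 * self.sigma ** 2)
        attn = attn.softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)


class CFPDecoderBlock(nn.Module):
    """Decoder block sketch: concatenate the cross-layer (skip) feature from the
    encoder pyramid with the decoder feature, patch-embed the result, apply
    Gaussian attention over the patch tokens, and project back to a spatial map."""

    def __init__(self, dec_ch, skip_ch, dim=128, patch=2):
        super().__init__()
        self.embed = nn.Conv2d(dec_ch + skip_ch, dim, kernel_size=patch, stride=patch)
        self.attn = GaussianAttention(dim)
        self.norm = nn.LayerNorm(dim)
        self.out = nn.ConvTranspose2d(dim, dec_ch, kernel_size=patch, stride=patch)

    def forward(self, dec_feat, skip_feat):    # (B, dec_ch, H, W), (B, skip_ch, H, W)
        x = torch.cat([dec_feat, skip_feat], dim=1)   # cross-layer concatenation
        x = self.embed(x)                              # patch embedding: (B, dim, H/p, W/p)
        B, C, Hp, Wp = x.shape
        tokens = x.flatten(2).transpose(1, 2)          # (B, N, dim)
        tokens = tokens + self.attn(self.norm(tokens)) # residual Gaussian attention
        x = tokens.transpose(1, 2).reshape(B, C, Hp, Wp)
        return self.out(x)                             # back to (B, dec_ch, H, W)


if __name__ == "__main__":
    block = CFPDecoderBlock(dec_ch=64, skip_ch=32)
    dec = torch.randn(1, 64, 32, 32)
    skip = torch.randn(1, 32, 32, 32)
    print(block(dec, skip).shape)                      # torch.Size([1, 64, 32, 32])
```

The Gaussian bias is one plausible reading of "Gaussian attention": attention logits are penalized by squared positional distance, so each token favors spatially nearby tokens; the paper may define it differently.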
