Improving Shift Invariance in Convolutional Neural Networks with Translation Invariant Polyphase Sampling
April 12, 2024, 4:42 a.m. | Sourajit Saha, Tejas Gokhale
cs.LG updates on arXiv.org arxiv.org
Abstract: Downsampling operators break the shift invariance of convolutional neural networks (CNNs), and this hurts the robustness of features learned by CNNs even under small pixel-level shifts. Through a large-scale correlation analysis framework, we study the shift invariance of CNNs by inspecting existing downsampling operators in terms of their maximum-sampling bias (MSB), and find that MSB is negatively correlated with shift invariance. Based on this crucial insight, we propose a learnable pooling operator called Translation …
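As background for the abstract's claim, here is a minimal sketch (not the paper's method) showing why strided downsampling breaks shift invariance: shifting the input by one pixel changes which samples survive, so the downsampled output changes even though the signal is merely translated.

```python
import numpy as np

def downsample(x, stride=2):
    """Keep every `stride`-th element (a 1-D analogue of strided pooling)."""
    return x[::stride]

x = np.array([0, 1, 0, 1, 0, 1, 0, 1])
x_shifted = np.roll(x, 1)  # translate the input by one pixel

print(downsample(x))          # [0 0 0 0]
print(downsample(x_shifted))  # [1 1 1 1] -- a one-pixel shift flips the output
```

The same effect occurs in 2-D strided pooling layers, which motivates the paper's analysis of how the sampling rule interacts with input shifts.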