Oct. 23, 2023, 7:22 p.m. | /u/Dependent_Bluejay_45

Machine Learning | www.reddit.com

There is an architecture for images/videos called `MViT`, which adds 2D `MaxPooling` layers to `ViT` to reduce computation. But `MaxPooling` has a drawback: it discards information independently of context, throwing away the same fraction from important and uninformative parts of the image alike. For traditional `Conv2D` networks there is little we can do about this, but for transformers we can reduce dimensionality in a more meaningful way, discarding only those tokens that don't carry unique information. Are there any articles/developments in this direction?
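This kind of content-aware token reduction has been explored in published work, e.g. DynamicViT and EViT (attention-based token pruning) and ToMe (which merges similar tokens rather than dropping them). Below is a minimal PyTorch sketch of one such variant: score each patch token by how much attention the `[CLS]` token pays it, then keep only the top-scoring tokens. The function name `prune_tokens` and the `keep_ratio` parameter are illustrative, not taken from any specific paper.

```python
import torch

def prune_tokens(x, attn, keep_ratio=0.5):
    """Keep the patch tokens that receive the most [CLS] attention.

    x:    (B, N, D) token embeddings, where x[:, 0] is the [CLS] token
    attn: (B, H, N, N) attention weights from the preceding block
    """
    B, N, D = x.shape
    # Average over heads: how strongly [CLS] attends to each patch token.
    cls_attn = attn[:, :, 0, 1:].mean(dim=1)               # (B, N-1)
    n_keep = max(1, int((N - 1) * keep_ratio))
    idx = cls_attn.topk(n_keep, dim=1).indices             # (B, n_keep)
    # Gather the selected patch tokens and re-attach [CLS] in front.
    patches = x[:, 1:].gather(1, idx.unsqueeze(-1).expand(-1, -1, D))
    return torch.cat([x[:, :1], patches], dim=1)           # (B, 1 + n_keep, D)

# Example: a ViT-B/16 layout (196 patches + [CLS]) pruned to a quarter of the patches.
B, N, D, H = 2, 197, 768, 12
x = torch.randn(B, N, D)
attn = torch.softmax(torch.randn(B, H, N, N), dim=-1)
print(prune_tokens(x, attn, keep_ratio=0.25).shape)  # torch.Size([2, 50, 768])
```

Unlike `MaxPooling`, the set of kept tokens here depends on the image content, so uninformative background patches are dropped first while informative regions are retained.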

