June 18, 2022, 10:11 p.m. | /u/No_Coffee_4638

machinelearningnews www.reddit.com

🚦 HIPT is pretrained across 33 cancer types using 10,678 gigapixel whole-slide images (WSIs), 408,218 4096×4096 image regions, and 104M 256×256 image patches

🚦 HIPT pushes the boundaries of both Vision Transformers and self-supervised learning in two important ways.

🚦 The code is publicly available on GitHub
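The hierarchical idea behind those numbers can be sketched in a few lines. This is a minimal illustration of how a gigapixel WSI decomposes into 4096×4096 regions, each region into 256×256 patches, and each patch into 16×16 cells (the token granularity of a standard ViT) — it is not the authors' implementation, and the function names and the 65,536-pixel slide edge are hypothetical:

```python
# Sketch of HIPT's three-level image pyramid (illustration only,
# not the authors' code): tile a gigapixel WSI into 4096x4096 regions,
# each region into 256x256 patches, each patch into 16x16 cells.

def num_tiles(size: int, tile: int) -> int:
    """Tiles per side when splitting a size-pixel edge into tile-pixel tiles."""
    return size // tile

def hierarchy_tokens(wsi_edge: int) -> dict:
    """Token counts a ViT would see at each level of the pyramid."""
    return {
        "regions_per_wsi": num_tiles(wsi_edge, 4096) ** 2,  # slide-level tokens
        "patches_per_region": num_tiles(4096, 256) ** 2,    # region-level tokens
        "cells_per_patch": num_tiles(256, 16) ** 2,         # patch-level tokens
    }

# Hypothetical ~65k-pixel slide edge for illustration.
stats = hierarchy_tokens(wsi_edge=65536)
print(stats)
# Each level presents a 16x16 grid, i.e. 256 tokens, to its ViT.
```

The point of the pyramid is that every level hands its Transformer the same modest 16×16 grid of sub-images (256 tokens), so self-attention stays tractable even though the full slide is billions of pixels.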

[Continue reading](https://www.marktechpost.com/2022/06/18/harvard-researchers-introduce-a-novel-vit-architecture-called-hierarchical-image-pyramid-transformer-hipt-that-can-scale-vision-transformers-to-gigapixel-images-via-hierarchical-self-supervised-lear/) | *Check out the* [*paper*](https://arxiv.org/pdf/2206.02647.pdf) *and* [*GitHub repo*](https://github.com/mahmoodlab/HIPT)


https://i.redd.it/wrm3nvnceg691.gif

