June 18, 2022, 10:11 p.m. | /u/No_Coffee_4638

Computer Vision www.reddit.com

🚦 HIPT is pretrained across 33 cancer types using 10,678 gigapixel whole-slide images (WSIs), 408,218 4096×4096 image regions, and 104M 256×256 image patches

🚦 HIPT pushes the boundaries of both Vision Transformers and self-supervised learning in two ways: it aggregates visual tokens hierarchically across image scales, and it pretrains each stage of the hierarchy with self-supervised learning

🚦 The code is publicly available on GitHub
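The hierarchy described above (256×256 patches inside 4096×4096 regions inside a gigapixel WSI) can be sketched as a two-level non-overlapping tiling. This is a minimal illustration, not HIPT's actual pipeline: the helper names `split_grid` and `hierarchical_tokens` are hypothetical, and the demo uses toy sizes in place of 4096/256 so it runs on a small array.

```python
import numpy as np

def split_grid(x, size):
    """Split an (H, W, C) image into non-overlapping (size, size, C) tiles,
    dropping any remainder at the right/bottom edges."""
    h, w = x.shape[:2]
    tiles = []
    for i in range(0, h - h % size, size):
        for j in range(0, w - w % size, size):
            tiles.append(x[i:i + size, j:j + size])
    return tiles

def hierarchical_tokens(wsi, region_size=4096, patch_size=256):
    """Two-level tokenization in the spirit of HIPT's hierarchy (sketch):
    gigapixel WSI -> region tiles -> patch tiles within each region.
    Returns a list of regions, each a list of its patches."""
    return [split_grid(region, patch_size) for region in split_grid(wsi, region_size)]

# Toy demo: an 8x8 "WSI" with region_size=4 and patch_size=2
# stands in for the real 4096/256 sizes.
wsi = np.zeros((8, 8, 3))
regions = hierarchical_tokens(wsi, region_size=4, patch_size=2)
# yields 4 regions, each containing 4 patches of shape (2, 2, 3)
```

In the real model, each 256×256 patch is embedded by a ViT, those patch embeddings become the tokens of a region-level ViT over each 4096×4096 region, and the region embeddings in turn feed a slide-level aggregator.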

[Continue reading](https://www.marktechpost.com/2022/06/18/harvard-researchers-introduce-a-novel-vit-architecture-called-hierarchical-image-pyramid-transformer-hipt-that-can-scale-vision-transformers-to-gigapixel-images-via-hierarchical-self-supervised-lear/) | *Check out the* [*paper*](https://arxiv.org/pdf/2206.02647.pdf)*,* [*github*](https://github.com/mahmoodlab/HIPT)


https://i.redd.it/c0ivcnxbeg691.gif

