June 18, 2022, 10:11 p.m. | /u/No_Coffee_4638

Artificial Intelligence | www.reddit.com

🚦 HIPT is pretrained across 33 cancer types using 10,678 gigapixel whole-slide images (WSIs), 408,218 4096×4096 image regions, and 104M 256×256 image patches

🚦 HIPT pushes the boundaries of both Vision Transformers and self-supervised learning in two important ways: it scales ViTs to gigapixel whole-slide images through a hierarchical image pyramid, and it applies self-supervised pretraining at multiple levels of that pyramid (a rough sketch of the hierarchy follows the list below)

🚦 The code is publicly available on GitHub
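As a rough illustration of the hierarchy described above (not the authors' implementation), the PyTorch sketch below aggregates precomputed 256×256 patch embeddings into 4096×4096 region embeddings and then into a single slide-level embedding. All class names, dimensions, the mean pooling, and the classification head are assumptions for illustration; the real model and its self-supervised pretraining are in the linked GitHub repo.

```python
import torch
import torch.nn as nn

# Minimal sketch of HIPT-style hierarchical aggregation (illustrative only;
# names are hypothetical, not the official mahmoodlab/HIPT API).
# Idea: a gigapixel WSI is tiled into 4096x4096 regions, each region into
# 256x256 patches; Transformer stages aggregate features bottom-up:
# patch embeddings -> region embeddings -> slide embedding.

def _encoder(dim: int, depth: int) -> nn.TransformerEncoder:
    layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
    return nn.TransformerEncoder(layer, num_layers=depth)

class HIPTSketch(nn.Module):
    def __init__(self, dim: int = 192, num_classes: int = 2):
        super().__init__()
        # attends over the 256 patch embeddings inside one 4096x4096 region
        self.region_vit = _encoder(dim, depth=2)
        # attends over all region embeddings of the slide
        self.slide_vit = _encoder(dim, depth=2)
        # e.g. slide-level subtype logits (assumed head for the sketch)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, patch_feats: torch.Tensor) -> torch.Tensor:
        # patch_feats: (num_regions, 256, dim) -- precomputed embeddings of the
        # 256x256 patches, grouped into their 4096x4096 parent regions.
        region_emb = self.region_vit(patch_feats).mean(dim=1)            # (num_regions, dim)
        slide_emb = self.slide_vit(region_emb.unsqueeze(0)).mean(dim=1)  # (1, dim)
        return self.head(slide_emb)                                      # (1, num_classes)

# Example: one slide with 12 regions of 16x16 = 256 patch embeddings each.
logits = HIPTSketch()(torch.randn(12, 256, 192))
```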

[Continue reading](https://www.marktechpost.com/2022/06/18/harvard-researchers-introduce-a-novel-vit-architecture-called-hierarchical-image-pyramid-transformer-hipt-that-can-scale-vision-transformers-to-gigapixel-images-via-hierarchical-self-supervised-lear/) | *Check out the* [*paper*](https://arxiv.org/pdf/2206.02647.pdf) *and* [*GitHub repo*](https://github.com/mahmoodlab/HIPT)


https://i.redd.it/5jt6a83deg691.gif

