Cascaded Cross-Attention Networks for Data-Efficient Whole-Slide Image Classification Using Transformers. (arXiv:2305.06963v1 [cs.CV])
cs.CV updates on arXiv.org
Whole-Slide Imaging allows for the capture and digitization of
high-resolution images of histological specimens. Automated analysis of such
images using deep learning models is therefore in high demand. The transformer
architecture has been proposed as a candidate for effectively
leveraging this high-resolution information. Here, the whole-slide image is
partitioned into smaller image patches, and feature tokens are extracted from
these patches. However, while the conventional transformer allows for
simultaneous processing of a large set of input …
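The patch-tokenization pipeline the abstract describes can be sketched roughly as follows. This is a minimal NumPy illustration, not the paper's method: the toy image size, the random linear projection standing in for a real feature extractor, and the single learned query vector are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "whole-slide image": 256x256 RGB (real WSIs are gigapixel-scale).
wsi = rng.random((256, 256, 3))

patch = 64      # patch side length (assumption)
d_model = 32    # feature-token dimension (assumption)

# Partition the image into non-overlapping patches and flatten each one.
ph = wsi.shape[0] // patch
pw = wsi.shape[1] // patch
patches = (wsi[:ph * patch, :pw * patch]
           .reshape(ph, patch, pw, patch, 3)
           .swapaxes(1, 2)
           .reshape(ph * pw, patch * patch * 3))

# Stand-in feature extractor: a fixed linear projection to d_model
# (in practice this would be a pretrained CNN or vision transformer).
W_feat = rng.standard_normal((patches.shape[1], d_model)) / np.sqrt(patches.shape[1])
tokens = patches @ W_feat        # (num_patches, d_model)

# One cross-attention step: a learned query attends over all patch tokens
# to pool them into a single slide-level representation.
def cross_attention(query, keys, values):
    scores = query @ keys.T / np.sqrt(keys.shape[1])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ values

query = rng.standard_normal(d_model)
slide_repr = cross_attention(query, tokens, tokens)  # (d_model,)
print(tokens.shape, slide_repr.shape)
```

Pooling patch tokens through a query-based cross-attention step is one common way to keep the token count manageable, since a gigapixel slide yields far more patches than a conventional transformer can attend over at once.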