
Handcrafted Histological Transformer (H2T): Unsupervised Representation of Whole Slide Images

Abstract

Diagnostic, prognostic and therapeutic decision-making for cancer in pathology clinics can now be carried out based on the analysis of multi-gigapixel tissue images, also known as whole-slide images (WSIs). Recently, deep convolutional neural networks (CNNs) have been proposed to derive unsupervised WSI representations; these are attractive because they rely less on expert annotation, which is cumbersome to obtain. However, a major trade-off is that higher predictive power generally comes at the cost of interpretability, posing a challenge to clinical use, where transparency in decision-making is generally expected. To address this challenge, we present a handcrafted framework based on deep CNNs for constructing holistic WSI-level representations. Building on recent findings about the internal workings of the Transformer in the domain of natural language processing, we break down its processes and handcraft them into a more transparent framework that we term the Handcrafted Histological Transformer, or H2T. Based on our experiments on various datasets comprising a total of 10,042 WSIs, the results demonstrate that H2T-based holistic WSI-level representations offer competitive performance compared with recent state-of-the-art methods and can be readily utilized for various downstream analysis tasks. Finally, our results demonstrate that the H2T framework can be up to 14 times faster than Transformer models.
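To make the idea above concrete, below is a minimal sketch (in Python, using NumPy and scikit-learn) of the two-stage recipe the abstract alludes to: prototypical patterns are first mined by clustering patch-level deep features pooled across many WSIs, and each WSI is then summarized by aggregating its patch features per prototype. The function names and the specific choices of k-means and mean-pooling are illustrative assumptions, not the paper's exact algorithm; see the repository linked below for the authors' implementation.

    import numpy as np
    from sklearn.cluster import KMeans

    def mine_prototypes(patch_features, num_prototypes=16):
        """Cluster patch-level deep features (N, D) pooled across many WSIs;
        the cluster centroids act as prototypical histological patterns."""
        kmeans = KMeans(n_clusters=num_prototypes, n_init=10, random_state=0)
        kmeans.fit(patch_features)
        return kmeans.cluster_centers_  # (K, D)

    def wsi_representation(patch_features, prototypes):
        """Project one WSI's patch features (N, D) onto the K prototypes and
        average the features assigned to each prototype, yielding a fixed-size
        (K * D) holistic WSI-level vector."""
        # Assign each patch to its nearest prototype.
        dists = np.linalg.norm(
            patch_features[:, None, :] - prototypes[None, :, :], axis=-1)
        assignment = dists.argmin(axis=1)  # (N,)
        K, D = prototypes.shape
        rep = np.zeros((K, D), dtype=patch_features.dtype)
        for k in range(K):
            members = patch_features[assignment == k]
            if len(members) > 0:
                rep[k] = members.mean(axis=0)
        return rep.reshape(-1)  # holistic WSI-level representation

Because the resulting vector has a fixed size regardless of how many patches a slide contains, a simple downstream classifier (e.g., logistic regression) can be trained directly on it.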

Highlights

  • A novel paradigm for deriving holistic WSI-level representations in an unsupervised manner.
  • A novel approach for mining prototypical patterns from WSIs and putting them to use.
  • A handcrafted framework that is as predictive as the Transformer model on WSI-based cancer subtype classification.
  • An extensive set of results verified on more than 5,000 WSIs.

Publication

Vu, Quoc Dang, et al. "Handcrafted Histological Transformer (H2T): Unsupervised representation of whole slide images." Medical Image Analysis (2023): 102743.

Access links: [arXiv] [MedIA]

Code: https://github.com/vqdang/H2T

Data Sharing

Due to the size of the data (3 TB in total), we use a local SFTP server to transfer it. Please request a user account by emailing h2t_support@warwick.ac.uk to receive further instructions. When requesting data, please prefix your email subject with "Request: ".

We share the following data:

  • Deep features for TCGA-Lung, TCGA-Breast, TCGA-Kidney, CPTAC-Lung.
  • Tissue masks for TCGA-Lung, TCGA-Breast, TCGA-Kidney, CPTAC-Lung.
  • Pretrained models for feature extraction: Supervised-ResNet50 and SWAV-ResNet50 (see the feature-extraction sketch after this list).
  • Prototype patterns of tumorous and normal tissue mined from breast, lung and kidney WSIs in the TCGA and CPTAC datasets.
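For orientation, the sketch below shows one common way such patch-level deep features are extracted with an ImageNet-pretrained ResNet50, roughly corresponding to the Supervised-ResNet50 variant (the SWAV-ResNet50 features instead come from SwAV self-supervised weights, which are not bundled with torchvision). The exact preprocessing behind the shared features may differ, so treat this as an illustration under stated assumptions rather than the authors' pipeline.

    import torch
    import torchvision.models as models
    import torchvision.transforms as T

    # ImageNet-pretrained ResNet50 with the classifier head removed,
    # leaving a 2048-d feature vector per patch.
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    model.fc = torch.nn.Identity()
    model.eval()

    # Patches are assumed to be fixed-size RGB tiles (e.g., 224 x 224)
    # already cropped from the tissue regions of a WSI.
    transform = T.Compose([
        T.ToTensor(),  # HWC uint8 -> CHW float in [0, 1]
        T.Normalize(mean=[0.485, 0.456, 0.406],
                    std=[0.229, 0.224, 0.225]),
    ])

    @torch.no_grad()
    def extract_features(patches):
        """Return an (N, 2048) array of deep features for N patch images."""
        batch = torch.stack([transform(p) for p in patches])
        return model(batch).numpy()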

All data shared by us are licensed under CC BY-NC-SA 4.0.