S. Javed, A. Mahmood, M. M. Fraz, N. A. Koohbanani, K. Benes, Y.-W. Tsang, K. Hewitt, D. Epstein, D. Snead, N. M. Rajpoot, “Cellular community detection for tissue phenotyping in colorectal cancer histology images,” Medical Image Analysis, Jul. 2020, vol. 63, p. 101696. [Abstract] [doi] [Data]
Classification of various types of tissue in cancer histology images based on their cellular composition is an important step towards the development of computational pathology tools for systematic digital profiling of the spatial tumor microenvironment. Most existing methods for tissue phenotyping are limited to the classification of tumor and stroma and require large amounts of annotated histology images, which are often not available. In the current work, we pose the problem of identifying distinct tissue phenotypes as one of finding communities in cellular graphs or networks. First, we train a deep neural network for cell detection and classification into five distinct cellular components. Considering the detected nuclei as nodes, potential cell-cell connections are assigned using Delaunay triangulation, resulting in a cell-level graph. Based on this cell graph, a feature vector capturing potential cell-cell connections between different types of cells is computed. These feature vectors are used to construct a patch-level graph based on the chi-square distance. We map patch-level nodes to the geometric space by representing each node as a vector of geodesic distances from other nodes in the network and iteratively drifting the patch nodes in the direction of positive density gradients towards maximum-density regions. The proposed algorithm is evaluated on a publicly available dataset and another new large-scale dataset consisting of 280K patches of seven tissue phenotypes. The estimated communities carry significant biological meaning, as verified by expert pathologists. A comparison with current state-of-the-art methods reveals significant performance improvement in tissue phenotyping.
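The graph-construction steps described in the abstract can be sketched in a few lines; the function names and toy centroids below are illustrative, not the paper's implementation. Detected nuclei become nodes, Delaunay triangulation supplies candidate cell-cell edges, and the chi-square distance compares patch-level feature vectors:

```python
import numpy as np
from scipy.spatial import Delaunay

def cell_graph_edges(centroids):
    """Connect detected nuclei (nodes) via Delaunay triangulation."""
    tri = Delaunay(centroids)
    edges = set()
    for simplex in tri.simplices:
        for i in range(3):
            a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
            edges.add((a, b))
    return edges

def chi_square_distance(h1, h2, eps=1e-10):
    """Chi-square distance between two patch-level feature vectors."""
    h1, h2 = np.asarray(h1, float), np.asarray(h2, float)
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

# toy example: six nuclei centroids (hypothetical coordinates)
pts = np.array([[0, 0], [1, 0.1], [0.1, 1], [1.1, 1.2], [0.5, 0.6], [2, 2.3]])
edges = cell_graph_edges(pts)
d = chi_square_distance([0.2, 0.8], [0.5, 0.5])
```

In the paper, the patch-level graph is then built from such chi-square distances between patch feature vectors.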
M. Shaban, R. Awan, M. M. Fraz, A. Azam, Y. Tsang, D. Snead, N. M. Rajpoot, “Context-Aware Convolutional Neural Network for Grading of Colorectal Cancer Histology Images,” IEEE Transactions on Medical Imaging (TMI), Feb. 2020, vol. x, p. x-x. [Abstract] [doi] [Data]
Digital histology images are amenable to the application of convolutional neural networks (CNNs) for analysis due to the sheer size of pixel data present in them. CNNs are generally used for representation learning from small image patches (e.g. 224 × 224) extracted from digital histology images due to computational and memory constraints. However, this approach does not incorporate high-resolution contextual information in histology images. We propose a novel way to incorporate a larger context through a context-aware neural network based on images with a dimension of 1792 × 1792 pixels. The proposed framework first encodes the local representation of a histology image into high-dimensional features and then aggregates the features by considering their spatial organization to make a final prediction. We evaluated the proposed method on two colorectal cancer datasets for the task of cancer grading. Our method outperformed traditional patch-based approaches, problem-specific methods, and existing context-based methods. We also present a comprehensive analysis of different variants of the proposed method.
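As a rough illustration of the two-stage design (not the authors' network), one can picture the 1792 × 1792 region being tiled into an 8 × 8 grid of 224 × 224 patches, each encoded to a feature vector, with a second stage aggregating the spatially arranged features into one prediction. The encoder and aggregator below are random/mean placeholders standing in for the learned components:

```python
import numpy as np

def encode_patches(region, patch=224, dim=64, rng=None):
    """Stage 1 (stand-in): encode each non-overlapping patch of a large
    region into a feature vector; a CNN would do this in practice."""
    rng = rng or np.random.default_rng(0)
    g = region.shape[0] // patch          # grid size: 1792 // 224 = 8
    feats = np.empty((g, g, dim))
    for i in range(g):
        for j in range(g):
            tile = region[i*patch:(i+1)*patch, j*patch:(j+1)*patch]
            feats[i, j] = rng.standard_normal(dim) * tile.mean()  # placeholder
    return feats

def aggregate(feats):
    """Stage 2 (stand-in): spatially aggregate the feature grid into a
    single representation; the paper uses a learned aggregation network."""
    return feats.mean(axis=(0, 1))

region = np.ones((1792, 1792))
grid = encode_patches(region)
context_vector = aggregate(grid)
```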
R. M. S. Bashir, H. Mahmood, M. Shaban, S. E. A. Raza, M. M. Fraz, S. A. Khurram & N. M. Rajpoot, “Automated grade classification of oral epithelial dysplasia using morphometric analysis of histology images,” in Medical Imaging 2020: Digital Pathology, Houston, Texas, USA, vol. 11320, p. 1132011. [Abstract] [doi]
Oral dysplasia is a pre-malignant stage of oral epithelial carcinomas, e.g., oral squamous cell carcinoma, in which significant changes in tissue layers and cells can be observed under the microscope. The condition can be reversed or cured with appropriate medication or surgery, provided the grade of malignancy is assessed properly. Correct grading is therefore critical in patient management, as it can change the treatment decisions and prognosis for the dysplastic lesion. The assessment is highly challenging due to considerable inter- and intra-observer variability in pathologists’ agreement, which highlights the need for an automated grading system that can predict grades more accurately and reliably. Recent advancements in digital pathology (DP) and artificial intelligence (AI) have made it possible to join forces, from the digitization of tissue slides into images to the use of those images to train complex AI models that predict more accurate grades. In this regard, we propose a novel morphometric approach that exploits an architectural feature of dysplastic lesions, namely irregular epithelial stratification: from a clinically significant viewpoint, we measure the widths of the different layers of the epithelium, from the boundary (keratin) layer projecting inwards through the epithelial and basal layers to the rest of the tissue section.
S. Graham, Q. D. Vu, S. E. A. Raza, A. Azam, Y.-W. Tsang, J. T. Kwak, N. M. Rajpoot, “Hover-Net: Simultaneous segmentation and classification of nuclei in multi-tissue histology images,” Medical Image Analysis, Dec. 2019, vol. 58, p. 101563. [Abstract] [doi] [Data]
Nuclear segmentation and classification within Haematoxylin & Eosin stained histology images is a fundamental prerequisite in the digital pathology work-flow. The development of automated methods for nuclear segmentation and classification enables the quantitative analysis of tens of thousands of nuclei within a whole-slide pathology image, opening up possibilities of further analysis of large-scale nuclear morphometry. However, automated nuclear segmentation and classification is faced with a major challenge in that there are several different types of nuclei, some of them exhibiting large intra-class variability such as the nuclei of tumour cells. Additionally, some of the nuclei are often clustered together. To address these challenges, we present a novel convolutional neural network for simultaneous nuclear segmentation and classification that leverages the instance-rich information encoded within the vertical and horizontal distances of nuclear pixels to their centres of mass. These distances are then utilised to separate clustered nuclei, resulting in an accurate segmentation, particularly in areas with overlapping instances. Then, for each segmented instance the network predicts the type of nucleus via a devoted up-sampling branch. We demonstrate state-of-the-art performance compared to other methods on multiple independent multi-tissue histology image datasets. As part of this work, we introduce a new dataset of Haematoxylin & Eosin stained colorectal adenocarcinoma image tiles, containing 24,319 exhaustively annotated nuclei with associated class labels.
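The horizontal and vertical distance targets can be illustrated on a toy instance map. This is a simplified sketch of the idea (per-instance signed distances to the centre of mass, normalised per instance), not HoVer-Net's actual target-generation code:

```python
import numpy as np

def hover_maps(inst_map):
    """Per-pixel horizontal/vertical signed distances to the owning
    nucleus's centre of mass, normalised to [-1, 1] per instance; these
    are the kinds of regression targets the network learns."""
    h = np.zeros(inst_map.shape, float)
    v = np.zeros(inst_map.shape, float)
    for inst_id in np.unique(inst_map):
        if inst_id == 0:                 # 0 encodes background
            continue
        ys, xs = np.nonzero(inst_map == inst_id)
        dx = xs - xs.mean()
        dy = ys - ys.mean()
        if np.abs(dx).max() > 0:
            dx = dx / np.abs(dx).max()
        if np.abs(dy).max() > 0:
            dy = dy / np.abs(dy).max()
        h[ys, xs], v[ys, xs] = dx, dy
    return h, v

# toy map: two nuclei labelled 1 and 2 on a background of zeros
m = np.zeros((6, 6), int)
m[1:4, 1:4] = 1
m[4:6, 4:6] = 2
H, V = hover_maps(m)
```

Where the maps of adjacent instances jump sharply, a boundary between touching nuclei is marked, which is what enables their separation.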
M. Shaban, A. Mahmood, S. A. Al-Maadeed & N. M. Rajpoot, “An Information Fusion Framework for Person Localization Via Body Pose in Spectator Crowds,” Information Fusion, Nov. 2019, vol. 51, p. 178–188. [Abstract] [doi]
Person localization or segmentation in low-resolution crowded scenes is important for person tracking and recognition, action detection, and anomaly identification. Due to occlusion and the lack of inter-person space, person localization becomes a difficult task. In this work, we propose a novel information fusion framework that integrates a deep head detector with a body pose detector: a more accurate body pose showing limb positions yields more accurate person localization. We propose a novel Deep Head Detector (DHD) to detect person heads in crowds. The proposed DHD is a fully convolutional neural network and shows improved head detection performance in crowds. We modify the Deformable Parts Model (DPM) pose detector to detect multiple upper-body poses in crowds, and efficiently fuse the information obtained by the proposed DHD and the modified DPM to obtain a more accurate person pose detector. The proposed framework, named Fusion DPM (FDPM), exhibits improved body pose detection performance on spectator crowds. The detected body poses are then used for more accurate person localization by segmenting each person in the crowd.
S. Javed, A. Mahmood, N. Werghi & N. M. Rajpoot, “Deep Multiresolution Cellular Communities for Semantic Segmentation of Multi-Gigapixel Histology Images,” 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), Seoul, Korea (South), 2019, p. 342–351. [Abstract] [doi]
Tissue phenotyping in cancer histology images is a fundamental step in computational pathology. Automatic tools for tissue phenotyping assist pathologists in the digital profiling of the tumor microenvironment. Recently, deep learning and classical machine learning methods have been proposed for tissue phenotyping. However, these methods do not integrate cellular community interaction features, which carry biological significance in the tissue phenotyping context. In this paper, we propose to exploit deep multiresolution cellular communities for tissue phenotyping from multi-level cell graphs and show that such communities offer better performance compared to deep learning and texture-based methods. We propose to use deep features extracted from two distinct layers of a deep neural network at the cell level in order to construct cellular graphs encoding cellular interactions at multiple scales. From these graphs, we extract cellular interaction-based features, which are then employed to construct patch-level graphs. Multiresolution communities are detected by considering the patch-level graphs as layers of multi-level graphs and by proposing a novel objective function based on non-negative matrix factorization. We report results of our experiments on two datasets for colon cancer tissue phenotyping and demonstrate excellent performance of the proposed algorithm compared to current state-of-the-art methods.
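The non-negative matrix factorisation at the heart of the community-detection objective can be illustrated with plain Lee-Seung multiplicative updates; the paper's actual objective adds multi-level graph terms, so this is only the baseline building block:

```python
import numpy as np

def nmf(A, k, iters=200, seed=0):
    """Factorise a non-negative matrix A ~ W @ H (rank k) with
    Lee-Seung multiplicative updates, which keep W and H non-negative
    while monotonically decreasing the Frobenius reconstruction error."""
    rng = np.random.default_rng(seed)
    n, m = A.shape
    W = rng.random((n, k)) + 0.1
    H = rng.random((k, m)) + 0.1
    eps = 1e-10                     # guards against division by zero
    for _ in range(iters):
        H *= (W.T @ A) / (W.T @ W @ H + eps)
        W *= (A @ H.T) / (W @ H @ H.T + eps)
    return W, H

# toy non-negative "graph affinity" matrix
A = np.abs(np.random.default_rng(1).random((12, 8)))
W, H = nmf(A, k=3)
err = np.linalg.norm(A - W @ H)
```

In a community-detection reading, the rows of `W` give each node's soft membership over `k` communities.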
N. Kumar, R. Verma, D. Anand, Y. Zhou, O. F. Onder, E. Tsougenis, H. Chen, P. A. Heng, J. Li, Z. Hu, Y. Wang, N. A. Koohbanani, M. Jahanifar, N. Z. Tajeddin, A. Gooya, N. Rajpoot, X. Ren, S. Zhou, Q. Wang, D. Shen, C. K. Yang, C. H. Weng, W. H. Yu, C. Y. Yeh, S. Yang, S. Xu, P. H. Yeung, P. Sun, A. Mahbod, G. Schaefer, I. Ellinger, R. Ecker, O. Smedby, C. Wang, B. Chidester, T. V. Ton, M. T. Tran, J. Ma, M. N. Do, S. Graham, Q. D. Vu, J. T. Kwak, A. Gunda, R. Chunduri, C. Hu, X. Zhou, D. Lotfi, R. Safdari, A. Kascenas, A. O'Neil, D. Eschweiler, J. Stegmaier, Y. Cui, B. Yin, K. Chen, X. Tian, P. Gruening, E. Barth, E. Arbel, I. Remer, A. Ben-Dor, E. Sirazitdinova, M. Kohl, S. Braunewell, Y. Li, X. Xie, L. Shen, J. Ma, K. D. Baksi, M. A. Khan, J. Choo, A. Colomer, V. Naranjo, L. Pei, K. M. Iftekharuddin, K. Roy, D. Bhattacharjee, A. Pedraza, M. G. Bueno, S. Devanathan, S. Radhakrishnan, P. Koduganty, Z. Wu, G. Cai, X. Liu, Y. Wang, A. Sethi, “A Multi-organ Nucleus Segmentation Challenge,” IEEE Transactions on Medical Imaging (TMI), Oct. 2019, p. 1. [Abstract] [doi]
Generalized nucleus segmentation techniques can contribute greatly to reducing the time to develop and validate visual biomarkers for new digital pathology datasets. We summarize the results of MoNuSeg 2018 Challenge whose objective was to develop generalizable nuclei segmentation techniques in digital pathology. The challenge was an official satellite event of the MICCAI 2018 conference in which 32 teams with more than 80 participants from geographically diverse institutes participated. Contestants were given a training set with 30 images from seven organs with annotations of 21,623 individual nuclei. A test dataset with 14 images taken from seven organs, including two organs that did not appear in the training set was released without annotations. Entries were evaluated based on average aggregated Jaccard index (AJI) on the test set to prioritize accurate instance segmentation as opposed to mere semantic segmentation. More than half the teams that completed the challenge outperformed a previous baseline [1]. Among the trends observed that contributed to increased accuracy were the use of color normalization as well as heavy data augmentation. Additionally, fully convolutional networks inspired by variants of U-Net [2], FCN [3], and Mask-RCNN [4] were popularly used, typically based on ResNet [5] or VGG [6] base architectures. Watershed segmentation on predicted semantic segmentation maps was a popular post-processing strategy. Several of the top techniques compared favorably to an individual human annotator and can be used with confidence for nuclear morphometrics.
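The evaluation metric can be sketched as follows; this is a simplified implementation of the aggregated Jaccard index over labelled instance maps (background = 0), which, unlike a plain pixel-level Jaccard, penalises both missed nuclei and spurious predictions:

```python
import numpy as np

def aggregated_jaccard_index(gt, pred):
    """Each ground-truth nucleus is matched to the predicted nucleus with
    the largest IoU; matched intersections/unions are pooled, and
    unmatched ground-truth and predicted instances are added to the
    pooled union. Simplified for illustration."""
    gt_ids = [i for i in np.unique(gt) if i != 0]
    pr_ids = [i for i in np.unique(pred) if i != 0]
    used = set()
    inter_sum = union_sum = 0
    for g in gt_ids:
        g_mask = gt == g
        best_iou, best_p = 0.0, None
        for p in pr_ids:
            inter = np.logical_and(g_mask, pred == p).sum()
            if inter == 0:
                continue
            iou = inter / np.logical_or(g_mask, pred == p).sum()
            if iou > best_iou:
                best_iou, best_p = iou, p
        if best_p is None:
            union_sum += g_mask.sum()          # missed nucleus
        else:
            p_mask = pred == best_p
            inter_sum += np.logical_and(g_mask, p_mask).sum()
            union_sum += np.logical_or(g_mask, p_mask).sum()
            used.add(best_p)
    for p in pr_ids:
        if p not in used:
            union_sum += (pred == p).sum()     # spurious prediction
    return inter_sum / union_sum if union_sum else 1.0

m = np.zeros((8, 8), int)
m[1:4, 1:4] = 1
m[5:8, 5:8] = 2
perfect = aggregated_jaccard_index(m, m)
```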
M. Shaban, S. A. Khurram, M. M. Fraz, N. Alsubaie, I. Masood, S. Mushtaq, M. Hassan, A. Loya, N. M. Rajpoot, “A Novel Digital Score for Abundance of Tumour Infiltrating Lymphocytes Predicts Disease Free Survival in Oral Squamous Cell Carcinoma,” Nature Scientific Reports, Sep. 2019, vol. 9, p. 13341. [Abstract] [doi]
Oral squamous cell carcinoma (OSCC) is the most common type of head and neck (H&N) cancer, with an increasing worldwide incidence and a worsening prognosis. The abundance of tumour infiltrating lymphocytes (TILs) has been shown to be a key prognostic indicator in a range of cancers, with emerging evidence of its role in OSCC progression and treatment response. However, the current methods of TIL analysis are subjective and open to variability in interpretation. An automated method for quantification of TIL abundance has the potential to facilitate better stratification and prognostication of oral cancer patients. We propose a novel method for objective quantification of TIL abundance in OSCC histology images. The proposed TIL abundance (TILAb) score is calculated by first segmenting the whole slide images (WSIs) into underlying tissue types (tumour, lymphocytes, etc.) and then quantifying the co-localization of lymphocytes and tumour areas in a novel fashion. We investigate the prognostic significance of the TILAb score on digitized WSIs of Hematoxylin and Eosin (H&E) stained slides of OSCC patients. Our deep learning based tissue segmentation achieves a high accuracy of 96.31%, which paves the way for reliable downstream analysis. We show that the TILAb score is a strong prognostic indicator (p = 0.0006) of disease free survival (DFS) on our OSCC test cohort, with a significantly higher prognostic value than the manual TIL score (p = 0.0024). In summary, the proposed TILAb score is a digital biomarker that is based on accurate classification of tumour and lymphocytic regions, is motivated by the biological definition of TILs as tumour infiltrating lymphocytes, and has the added advantages of objective and reproducible quantification.
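One way to quantify the co-localisation of lymphocytes and tumour over a slide is an overlap index over per-patch counts. The sketch below uses the Morisita-Horn index as an illustrative stand-in; it is not the published TILAb formula:

```python
import numpy as np

def morisita_horn(x, y):
    """Morisita-Horn overlap between two spatial count vectors (e.g.
    per-patch lymphocyte and tumour pixel counts): 0 means no
    co-localisation, 1 means identical spatial distributions."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    X, Y = x.sum(), y.sum()
    num = 2 * np.sum(x * y)
    den = (np.sum(x**2) / X**2 + np.sum(y**2) / Y**2) * X * Y
    return num / den

# hypothetical per-patch counts over four patches
lymph  = np.array([10, 0, 5, 20])
tumour = np.array([20, 0, 10, 40])   # proportional -> perfect overlap
score = morisita_horn(lymph, tumour)
```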
H. Lin, H. Chen, S. Graham, Q. Dou, N. M. Rajpoot, P.-A. Heng, “Fast ScanNet: Fast and dense analysis of multi-gigapixel whole-slide images for cancer metastasis detection,” IEEE Transactions on Medical Imaging (TMI), Aug. 2019, vol. 38(8), p. 1948–1958. [Abstract] [doi]
Lymph node metastasis is one of the most important indicators in breast cancer diagnosis and is traditionally observed under the microscope by pathologists. In recent years, with the dramatic advance of high-throughput scanning and deep learning technology, automatic analysis of histology from whole-slide images has received a wealth of interest in the field of medical image computing, which aims to alleviate pathologists' workload and simultaneously reduce misdiagnosis rates. However, the automatic detection of lymph node metastases from whole-slide images remains a key challenge because such images are typically very large, often multiple gigabytes in size. Also, the presence of hard mimics may result in a large number of false positives. In this paper, we propose a novel method with anchor layers for model conversion, which not only leverages the efficiency of fully convolutional architectures to meet the speed requirement in clinical practice but also densely scans the whole-slide image to achieve accurate predictions on both micro- and macro-metastases. Incorporating the strategies of asynchronous sample prefetching and hard negative mining, the network can be effectively trained. The efficacy of our method is corroborated on the benchmark dataset of the 2016 Camelyon Grand Challenge. Our method achieved significant improvements over the state-of-the-art methods in tumor localization accuracy at a much faster speed, and even surpassed human performance on both challenge tasks.
T. Qaiser, Y.-W. Tsang, D. Taniyama, N. Sakamoto, K. Nakane, D. Epstein, N. M. Rajpoot, “Fast and Accurate Tumor Segmentation of Histology Images using Persistent Homology and Deep Convolutional Features,” Medical Image Analysis, Jul. 2019, vol. 55, p. 1–14. [Abstract] [doi]
Tumor segmentation in whole-slide images of histology slides is an important step towards computer-assisted diagnosis. In this work, we propose a tumor segmentation framework based on the novel concept of persistent homology profiles (PHPs). For a given image patch, the homology profiles are derived by efficient computation of persistent homology, which is an algebraic tool from homology theory. We propose an efficient way of computing topological persistence of an image, alternative to simplicial homology. The PHPs are devised to distinguish tumor regions from their normal counterparts by modeling the atypical characteristics of tumor nuclei. We propose two variants of our method for tumor segmentation: one that targets speed without compromising accuracy and the other that targets higher accuracy. The fast version is based on a selection of exemplar image patches from a convolutional neural network (CNN) and patch classification by quantifying the divergence between the PHPs of exemplars and the input image patch. Detailed comparative evaluation shows that the proposed algorithm is significantly faster than competing algorithms while achieving comparable results. The accurate version combines the PHPs and high-level CNN features and employs a multi-stage ensemble strategy for image patch labeling. Experimental results demonstrate that the combination of PHPs and CNN features outperforms competing algorithms. This study is performed on two independently collected colorectal datasets containing adenoma, adenocarcinoma, signet, and healthy cases. Collectively, the accurate tumor segmentation produces the highest average patch-level F1-score, as compared with competing algorithms, on malignant and healthy cases from both datasets. Overall the proposed framework highlights the utility of persistent homology for histopathology image analysis.
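The flavour of a persistence-based profile can be conveyed with a crude stand-in: counting connected components of sub-level sets across intensity thresholds (a 0-dimensional Betti curve). The paper computes genuine persistent homology; this sketch only shows how a topological summary varies along a filtration:

```python
import numpy as np
from scipy import ndimage

def betti0_profile(img, thresholds):
    """Number of connected components of the sub-level sets img <= t for
    each threshold t. A very rough stand-in for a persistent homology
    profile: tumour and normal patches tend to produce differently
    shaped curves as nuclei merge along the filtration."""
    return np.array([ndimage.label(img <= t)[1] for t in thresholds])

rng = np.random.default_rng(0)
img = rng.random((32, 32))            # stand-in for a grey-level patch
profile = betti0_profile(img, np.linspace(0.1, 1.0, 10))
```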
S. Graham, D. Epstein & N. M. Rajpoot, “Rota-Net: Rotation Equivariant Network for Simultaneous Gland and Lumen Segmentation in Colon Histology Images,” European Congress on Digital Pathology, Jul. 2019, vol. 11435, p. 109–116. [Abstract] [doi]
Analysis of the shape of glands and their lumen in digitised images of Haematoxylin & Eosin stained colon histology slides can provide insight into the degree of malignancy. Segmenting each glandular component is an essential prerequisite step for subsequent automatic morphological analysis. Current automated segmentation approaches typically do not take into account the inherent rotational symmetry within histology images. We incorporate this rotational symmetry into an encoder-decoder based network by utilising group equivariant convolutions, specifically using the symmetry group of rotations by multiples of 90 degrees. Our rotation equivariant network splits into two separate branches after the final up-sampling operation, where the output of a given branch achieves either gland or lumen segmentation. In addition, at the output of the gland branch, we use a multi-class strategy to assist with the separation of touching instances. We show that our proposed approach achieves state-of-the-art performance on the GlaS challenge dataset.
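The core idea, a "lifting" layer that correlates the input with all four 90-degree rotations of a kernel, can be checked numerically: rotating the input rotates each output map and cyclically shifts the rotation channels. This is a numpy sketch of the group-convolution property, not the authors' network:

```python
import numpy as np
from scipy.ndimage import correlate

def p4_lift(img, kernel):
    """Lifting layer of a rotation-equivariant network: correlate the
    image with the kernel rotated by 0, 90, 180 and 270 degrees,
    producing one output channel per rotation."""
    return np.stack([correlate(img, np.rot90(kernel, k), mode='constant')
                     for k in range(4)])

rng = np.random.default_rng(0)
img = rng.random((9, 9))        # square input so rotation preserves shape
ker = rng.random((3, 3))
out = p4_lift(img, ker)
rot_out = p4_lift(np.rot90(img), ker)
# equivariance: rot_out[k] == rot90(out[(k - 1) % 4])
```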
T. Qaiser, M. Pugh, S. Margielewska, R. Hollows, P. Murray, N. M. Rajpoot, “Digital Tumor-Collagen Proximity Signature Predicts Survival in Diffuse Large B-Cell Lymphoma,” European Congress on Digital Pathology, Jul. 2019, vol. 11435, p. 163–171. [Abstract] [doi]
Diffuse large B-cell lymphoma (DLBCL) is a heterogeneous tumor that originates from normal B-cells. A limited number of studies have investigated the role of acellular stromal microenvironment on outcome in DLBCL. Here, we propose a novel digital proximity signature (DPS) for predicting overall survival (OS) in DLBCL patients. We propose a novel end-to-end multi-task deep learning model for cell detection and classification and investigate the spatial proximity of collagen (type VI) and tumor cells for estimating the DPS. To the best of our knowledge, this is the first study that performs automated analysis of tumor and collagen on DLBCL to identify potential prognostic factors. Experimental results favor our cell classification algorithm over conventional approaches. In addition, our pilot results show that strongly associated tumor-collagen regions are statistically significant (p = 0.03) in predicting OS in DLBCL patients.
J. Gamper, N. A. Koohbanani, K. Benet, A. Khuram, N. M. Rajpoot, “PanNuke: An Open Pan-Cancer Histology Dataset for Nuclei Instance Segmentation and Classification,” European Congress on Digital Pathology, Jul. 2019, vol. 11435, p. 11–19. [Abstract] [doi]
In this work we present an experimental setup to semi-automatically obtain exhaustive nuclei labels across 19 different tissue types, and therefore construct a large pan-cancer dataset for nuclei instance segmentation and classification with minimal sampling bias. The dataset consists of 455 visual fields, of which 312 are randomly sampled from more than 20K whole slide images at different magnifications, from multiple data sources. In total the dataset contains 216.4K labeled nuclei, each with an instance segmentation mask. We independently pursue three separate streams to create the dataset: detection, classification, and instance segmentation, by ensembling a total of 34 models from already existing public datasets, thereby showing that the learnt knowledge can be efficiently transferred to create new datasets. All three streams are either validated on existing public benchmarks or validated by expert pathologists, and finally merged and validated once again to create a large, comprehensive pan-cancer nuclei segmentation and detection dataset, PanNuke.
T. Qaiser, M. Pugh, S. Margielewska, R. Hollows, P. Murray, N. M. Rajpoot, “Tumor-collagen digital proximity signature to indicate prognosis in diffuse large B-cell lymphoma,” Journal of Clinical Oncology, Jun. 2019, e19073. [Abstract] [doi]
Background: Diffuse large B-cell lymphoma (DLBCL) is a heterogeneous tumor that originates from normal B-cells. Despite the use of combination chemotherapy, around 40% of DLBCL patients die (de Jonge, et al. European Journal of Cancer, 2016). Limited studies have investigated the role of collagen in the acellular tumor microenvironment. In this study, we present a novel digital signature of the proximity of tumor cells and collagen-VI (COL6) that can predict overall survival (OS) in DLBCL patients. To the best of our knowledge, this is the first study of its kind to employ automated image analysis.
Methods: The proposed digital proximity signature (DPS) aggregates summary-level statistics from the entire whole slide image (WSI), categorizing regions of weak, moderate, significant, and strong tumor-collagen proximity; it can be regarded as a surrogate marker for tumor-collagen signaling. To accomplish this, we developed a novel artificial intelligence (AI) based multi-task model for simultaneous detection and classification of tumor cells, and another bespoke method for automatically identifying COL6 fibres. The tumor-collagen proximity analysis was then performed by aggregating tumor cell statistics within the vicinity of COL6 fibres. Finally, the prognostic significance of the DPS for OS in DLBCL was investigated with Kaplan-Meier analysis, stratifying patients into two groups based on the median of the DPS values.
Results: We analysed WSIs of DLBCL tissue slides from 32 cases, immunohistochemically stained with COL6 and a Hematoxylin counterstain. The AI model for tumor cell identification achieved a high F1-score of 0.84, outperforming recent single-task learning models. Our results show that strong proximity of COL6 and tumor cells is linked to better OS in DLBCL patients (p = 0.03). Conclusions: Our novel digitally computed COL6-tumor proximity signature shows prognostic significance for overall survival on a pilot dataset of 32 DLBCL patients. We are further validating the utility of this novel signature as a prognostic biomarker in larger cohorts of DLBCL patients.
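The survival-analysis step (dichotomising patients at the median DPS and comparing Kaplan-Meier curves) can be sketched as follows. The cohort numbers are made up, ties between event times are not grouped, and a real analysis would also run a log-rank test, which this sketch omits:

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier survival curve: S(t) is the product over observed
    deaths up to t of (1 - 1/n_at_risk); censored cases (event = 0)
    only shrink the risk set."""
    order = np.argsort(times)
    times, events = np.asarray(times)[order], np.asarray(events)[order]
    n_at_risk = len(times)
    curve, s = [], 1.0
    for t, d in zip(times, events):
        if d:                       # death observed at time t
            s *= 1 - 1 / n_at_risk
        n_at_risk -= 1
        curve.append((t, s))
    return curve

# hypothetical cohort: DPS values, follow-up months, 1 = death observed
dps    = np.array([0.2, 0.9, 0.4, 0.8, 0.1, 0.7])
times  = np.array([12, 60, 20, 55, 10, 48])
events = np.array([1, 0, 1, 1, 1, 0])
high = dps >= np.median(dps)        # median split into two groups
curve_high = kaplan_meier(times[high], events[high])
curve_low  = kaplan_meier(times[~high], events[~high])
```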
M. Veta, Y. J. Heng, N. Stathonikos, B. E. Bejnordi, F. Beca, T. Wollmann, K. Rohr, M. A. Shah, D. Wang, M. Rousson, M. Hedlund, D. Tellez, F. Ciompi, E. Zerhouni, D. Lanyi, M. Viana, V. Kovalev, V. Liauchuk, H. A. Phoulady, T. Qaiser, S. Graham, N. M. Rajpoot, E. Sjöblom, J. Molin, K. Paeng, S. Hwang, S. Park, Z. Jia, EI-C. Chang, Y. Xu, A. H. Beck, P. J. van Diest, J. P. W. Pluim, “Predicting breast tumor proliferation from whole-slide images: The TUPAC16 challenge,” Medical Image Analysis, May 2019, vol. 54, p. 111–121. [Abstract] [doi]
Tumor proliferation is an important biomarker indicative of the prognosis of breast cancer patients. Assessment of tumor proliferation in a clinical setting is a highly subjective and labor-intensive task. Previous efforts to automate tumor proliferation assessment by image analysis only focused on mitosis detection in predefined tumor regions. However, in a real-world scenario, automatic mitosis detection should be performed in whole-slide images (WSIs) and an automatic method should be able to produce a tumor proliferation score given a WSI as input. To address this, we organized the TUmor Proliferation Assessment Challenge 2016 (TUPAC16) on prediction of tumor proliferation scores from WSIs.
The challenge dataset consisted of 500 training and 321 testing breast cancer histopathology WSIs. In order to ensure fair and independent evaluation, only the ground truth for the training dataset was provided to the challenge participants. The first task of the challenge was to predict mitotic scores, i.e., to reproduce the manual method of assessing tumor proliferation by a pathologist. The second task was to predict the gene expression based PAM50 proliferation scores from the WSI.
The best performing automatic method for the first task achieved a quadratic-weighted Cohen's kappa score of κ = 0.567, 95% CI [0.464, 0.671] between the predicted scores and the ground truth. For the second task, the predictions of the top method had a Spearman's correlation coefficient of r = 0.617, 95% CI [0.581, 0.651] with the ground truth.
This was the first comparison study that investigated tumor proliferation assessment from WSIs. The achieved results are promising given the difficulty of the tasks and weakly-labeled nature of the ground truth. However, further research is needed to improve the practical utility of image analysis methods for this task.
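For reference, the quadratic-weighted kappa used to score the first task weights disagreements by the squared distance between grades; a minimal implementation (with hypothetical ratings) looks like:

```python
import numpy as np

def quadratic_weighted_kappa(a, b, n_classes):
    """Quadratic-weighted Cohen's kappa between two integer rating
    sequences: 1 - (weighted observed disagreement) / (weighted
    disagreement expected by chance)."""
    a, b = np.asarray(a), np.asarray(b)
    O = np.zeros((n_classes, n_classes))
    for i, j in zip(a, b):                  # observed confusion matrix
        O[i, j] += 1
    w = np.array([[(i - j) ** 2 for j in range(n_classes)]
                  for i in range(n_classes)], float)
    w /= (n_classes - 1) ** 2               # quadratic disagreement weights
    E = np.outer(O.sum(1), O.sum(0)) / O.sum()   # chance-expected matrix
    return 1 - (w * O).sum() / (w * E).sum()

truth = [0, 1, 2, 1, 0, 2, 1]               # hypothetical grades
pred  = [0, 1, 2, 1, 0, 2, 1]
kappa = quadratic_weighted_kappa(truth, pred, 3)
```

Perfect agreement gives κ = 1, chance-level agreement gives κ = 0, and systematic maximal disagreement drives κ negative.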
R. Colling, H. Pitman, K. Oien, N. M. Rajpoot, P. Macklin, CM-Path AI in Histopathology Working Group, D. Snead, T. Sackville, C. Verrill, “Artificial intelligence in digital pathology: a roadmap to routine use in clinical practice,” Journal of Pathology, May 2019, vol. 249(2), p. 143–150. [Abstract] [doi]
The use of artificial intelligence will transform clinical practice over the next decade and the early impact of this will likely be the integration of image analysis and machine learning into routine histopathology. In the UK and around the world, a digital revolution is transforming the reporting practice of diagnostic histopathology and this has sparked a proliferation of image analysis software tools. While this is an exciting development that could discover novel predictive clinical information and potentially address international pathology workforce shortages, there is a clear need for a robust and evidence‐based framework in which to develop these new tools in a collaborative manner that meets regulatory approval. With these issues in mind, the NCRI Cellular Molecular Pathology (CM‐Path) initiative and the British In Vitro Diagnostics Association (BIVDA) have set out a roadmap to help academia, industry, and clinicians develop new software tools to the point of approved clinical use.
R. Pell, K. Oien, M. Robinson, H. Pitman, N. M. Rajpoot, J. Rittscher, D. Snead, C. Verrill, CM-Path, “The use of digital pathology and image analysis in clinical trials,” The Journal of Pathology: Clinical Research, Apr. 2019, vol. 5(2), p. 81–90. [Abstract] [doi]
Digital pathology and image analysis potentially provide greater accuracy, reproducibility and standardisation of pathology‐based trial entry criteria and endpoints, alongside extracting new insights from both existing and novel features. Image analysis has great potential to identify, extract and quantify features in greater detail in comparison to pathologist assessment, which may produce improved prediction models or perform tasks beyond manual capability. In this article, we provide an overview of the utility of such technologies in clinical trials and provide a discussion of the potential applications, current challenges, limitations and remaining unanswered questions that require addressing prior to routine adoption in such studies. We reiterate the value of central review of pathology in clinical trials, and discuss inherent logistical, cost and performance advantages of using a digital approach. The current and emerging regulatory landscape is outlined. The role of digital platforms and remote learning to improve the training and performance of clinical trial pathologists is discussed. The impact of image analysis on quantitative tissue morphometrics in key areas such as standardisation of immunohistochemical stain interpretation, assessment of tumour cellularity prior to molecular analytical applications and the assessment of novel histological features is described. The standardisation of digital image production, establishment of criteria for digital pathology use in pre‐clinical and clinical studies, establishment of performance criteria for image analysis algorithms and liaison with regulatory bodies to facilitate incorporation of image analysis applications into clinical practice are key issues to be addressed to improve digital pathology incorporation into clinical trials.
Q. D. Vu, S. Graham, T. Kurc, M. N. To, M. Shaban, T. Qaiser, N. A. Koohbanani, S. A. Khurram, K. Farahani, T. Zhao, R. Gupta, J. T. Kwak, N. M. Rajpoot, J. Saltz, “Methods for segmentation and classification of digital microscopy tissue images,” Frontiers in Bioengineering and Biotechnology, Apr. 2019, vol. 7, p. 53. [Abstract] [doi]
High-resolution microscopy images of tissue specimens provide detailed information about the morphology of normal and diseased tissue. Image analysis of tissue morphology can help cancer researchers develop a better understanding of cancer biology. Segmentation of nuclei and classification of tissue images are two common tasks in tissue image analysis. Development of accurate and efficient algorithms for these tasks is a challenging problem because of the complexity of tissue morphology and tumor heterogeneity. In this paper we present two computer algorithms: one designed for segmentation of nuclei and the other for classification of whole slide tissue images. The segmentation algorithm implements a multiscale deep residual aggregation network to accurately segment nuclear material and then separate clumped nuclei into individual nuclei. The classification algorithm initially carries out patch-level classification via a deep learning method, then patch-level statistical and morphological features are used as input to a random forest regression model for whole slide image classification. The segmentation and classification algorithms were evaluated in the MICCAI 2017 Digital Pathology challenge. The segmentation algorithm achieved an accuracy score of 0.78. The classification algorithm achieved an accuracy score of 0.81. These scores were the highest in the challenge.
M. Shapcott, K. Hewitt & N. M. Rajpoot, “Deep Learning with Sampling for Colon Cancer Histology Images,” Frontiers in Bioengineering and Biotechnology, Mar. 2019, vol. 7, p. 52. [Abstract] [doi]
This study applied a deep-learning cell identification algorithm to diagnostic images from the colon cancer repository at The Cancer Genome Atlas (TCGA). Within-image sampling improved performance without loss of accuracy. The features thus derived were associated with various clinical variables including metastasis, residual tumor, venous invasion, and lymphatic invasion. The deep-learning algorithm was trained using images from a locally available data set and then applied to the TCGA images by tiling them and identifying cells in each patch defined by the tiling. In this application, the average number of patches containing tissue in an image was ~900. Processing a random sample of patches greatly reduced computation costs. The cell identification algorithm was applied directly to each sampled patch, resulting in a list of cells. Each cell was labeled with its location and classification (“epithelial,” “inflammatory,” “fibroblast,” or “other”). The number of cells of a given type in the patch was calculated, resulting in a patch profile containing four features. A morphological profile that applied to the entire image was obtained by averaging profiles over all patches. Two sampling policies were examined. The first policy was random sampling, which samples patches with uniform weighting. The second policy was systematic random sampling, which takes spatial dependencies into account. Compared with the processing of complete whole slide images there was a seven-fold improvement in performance when systematic random spatial sampling was used to select 100 tiles from the whole-slide image for processing, with very little loss of accuracy (~4% on average). We found links between the predicted features and clinical variables in the TCGA colon cancer data set.
Several significant associations were found: increased fibroblast numbers were associated with the presence of metastasis, venous invasion, lymphatic invasion and residual tumor while decreased numbers of inflammatory cells were associated with mucinous carcinomas. Regarding the four different types of cell, deep learning has generated morphological features that are indicators of cell density. The features are related to cellularity, the numbers, degree, or quality of cells present in a tumor. Cellularity has been reported to be related to patient survival and other diagnostic and prognostic indicators, indicating that the features calculated here may be of general usefulness.
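The systematic random sampling policy described above can be sketched in a few lines (an illustrative reconstruction, not the authors' code; the raster ordering and the `systematic_random_sample` helper are assumptions):

```python
import random

def systematic_random_sample(patches, n_samples, seed=None):
    """Systematic random sampling of image patches: order the patches
    spatially (raster order), pick a random starting offset, then take
    every k-th patch. Neighbouring patches are highly correlated, so a
    spatially spread-out sample loses little information."""
    rng = random.Random(seed)
    ordered = sorted(patches)               # raster order over (row, col)
    k = max(1, len(ordered) // n_samples)   # sampling interval
    start = rng.randrange(k)                # random phase
    return ordered[start::k][:n_samples]

# ~900 tissue-bearing tiles, matching the paper's average per image
tiles = [(r, c) for r in range(30) for c in range(30)]
sample = systematic_random_sample(tiles, 100, seed=0)
print(len(sample))  # 100
```

Plain random sampling would replace the stride with `rng.sample(patches, n_samples)`; the systematic variant trades that independence for better spatial coverage.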
T. Qaiser & N. M. Rajpoot, “Learning Where to See: A Novel Attention Model for Automated Immunohistochemical Scoring,” IEEE Transactions on Medical Imaging (TMI), Mar. 2019, p. 1–1. [Abstract] [doi] [Data]
Estimating over-amplification of human epidermal growth factor receptor 2 (HER2) on invasive breast cancer (BC) is regarded as a significant predictive and prognostic marker. We propose a novel deep reinforcement learning (DRL) based model that treats immunohistochemical (IHC) scoring of HER2 as a sequential learning task. For a given image tile sampled from multi-resolution giga-pixel whole slide image (WSI), the model learns to sequentially identify some of the diagnostically relevant regions of interest (ROIs) by following a parameterized policy. The selected ROIs are processed by recurrent and residual convolution networks to learn the discriminative features for different HER2 scores and predict the next location, without requiring to process all the sub-image patches of a given tile for predicting the HER2 score, mimicking the histopathologist who would not usually analyze every part of the slide at the highest magnification. The proposed model incorporates a task-specific regularization term and inhibition of return mechanism to prevent the model from revisiting the previously attended locations. We evaluated our model on two IHC datasets: a publicly available dataset from the HER2 scoring challenge contest and another dataset consisting of WSIs of gastroenteropancreatic neuroendocrine tumor sections stained with Glo1 marker. We demonstrate that the proposed model outperforms other methods based on state-of-the-art deep convolutional networks. To the best of our knowledge, this is the first study using DRL for IHC scoring and could potentially lead to wider use of DRL in the domain of computational pathology reducing the computational burden of the analysis of large multigigapixel histology images.
S. Graham, H. Chen, J. Gamper, Q. Dou, P.-A. Heng, D. Snead, Y.-W. Tsang, N. M. Rajpoot, “MILD-Net: Minimal information loss dilated network for gland instance segmentation in colon histology images,” Medical Image Analysis, Feb. 2019, vol. 52, p. 199–211. [Abstract] [doi] [Data]
The analysis of glandular morphology within colon histopathology images is an important step in determining the grade of colon cancer. Despite the importance of this task, manual segmentation is laborious, time-consuming and can suffer from subjectivity among pathologists. The rise of computational pathology has led to the development of automated methods for gland segmentation that aim to overcome the challenges of manual segmentation. However, this task is non-trivial due to the large variability in glandular appearance and the difficulty in differentiating between certain glandular and non-glandular histological structures. Furthermore, a measure of uncertainty is essential for diagnostic decision making. To address these challenges, we propose a fully convolutional neural network that counters the loss of information caused by max-pooling by re-introducing the original image at multiple points within the network. We also use atrous spatial pyramid pooling with varying dilation rates for preserving the resolution and multi-level aggregation. To incorporate uncertainty, we introduce random transformations during test time for an enhanced segmentation result that simultaneously generates an uncertainty map, highlighting areas of ambiguity. We show that this map can be used to define a metric for disregarding predictions with high uncertainty. The proposed network achieves state-of-the-art performance on the GlaS challenge dataset and on a second independent colorectal adenocarcinoma dataset. In addition, we perform gland instance segmentation on whole-slide images from two further datasets to highlight the generalisability of our method. As an extension, we introduce MILD-Net+ for simultaneous gland and lumen segmentation, to increase the diagnostic power of the network.
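The test-time transformation step can be illustrated with a minimal sketch (the `tta_mean_and_uncertainty` helper is hypothetical; a real pipeline would apply flips and rotations to the input, run the network, and map each prediction back to the original frame before combining):

```python
def tta_mean_and_uncertainty(prob_maps):
    """Combine probability maps from several test-time transformations
    (already mapped back to the original frame): the per-pixel mean is
    the enhanced segmentation, the per-pixel variance an uncertainty
    map highlighting ambiguous regions."""
    n = len(prob_maps)
    h, w = len(prob_maps[0]), len(prob_maps[0][0])
    mean = [[sum(p[r][c] for p in prob_maps) / n for c in range(w)]
            for r in range(h)]
    var = [[sum((p[r][c] - mean[r][c]) ** 2 for p in prob_maps) / n
            for c in range(w)] for r in range(h)]
    return mean, var

maps = [[[1.0, 0.0]], [[0.8, 0.0]], [[0.6, 0.0]]]  # 3 augmented predictions
mean, var = tta_mean_and_uncertainty(maps)
print(mean[0][0])  # ~0.8: foreground, but with non-zero variance
print(var[0][1])   # 0.0: the predictions agree perfectly on background
```

Thresholding the variance map then gives the paper's metric for disregarding high-uncertainty predictions.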
L. Maier-Hein, M. Eisenmann, A. Reinke, S. Onogur, M. Stankovic, P. Scholz, T. Arbel, H. Bogunovic, A. P. Bradley, A. Carass, C. Feldmann, A. F. Frangi, P. M. Full, B. van Ginneken, A. Hanbury, K. Honauer, M. Kozubek, B. A. Landman, K. März, O. Maier, K. Maier-Hein, B. H. Menze, H. Müller, P. F. Neher, W. Niessen, N. M. Rajpoot, G. C. Sharp, K. Sirinukunwattana, S. Speidel, C. Stock, D. Stoyanov, A. A. Taha, F. van der Sommen, C.-W Wang, M.-A Weber, G. Zheng, P. Jannin, A. Kopp-Schneider, “Why rankings of biomedical image analysis competitions should be interpreted with care,” Nature Communications, Dec. 2018, vol. 9, p. 5217. [Abstract] [doi]
International challenges have become the standard for validation of biomedical image analysis methods. Given their scientific impact, it is surprising that a critical analysis of common practices related to the organization of challenges has not yet been performed. In this paper, we present a comprehensive analysis of biomedical image analysis challenges conducted up to now. We demonstrate the importance of challenges and show that the lack of quality control has critical consequences. First, reproducibility and interpretation of the results are often hampered as only a fraction of relevant information is typically provided. Second, the rank of an algorithm is generally not robust to a number of variables such as the test data used for validation, the ranking scheme applied and the observers that make the reference annotations. To overcome these problems, we recommend best practice guidelines and define open research questions to be addressed in the future.
S. E. A. Raza, L. Cheung, M. Shaban, S. Graham, D. Epstein, S. Pelengaris, M. Khan, N. M. Rajpoot, “Micro-Net: A unified model for segmentation of various objects in microscopy images,” Medical Image Analysis, Dec. 2018, vol. 52, p. 160–173. [Abstract] [doi] [Data]
Object segmentation and structure localization are important steps in automated image analysis pipelines for microscopy images. We present a convolutional neural network (CNN)-based deep learning architecture for segmentation of objects in microscopy images. The proposed network can be used to segment cells, nuclei and glands in fluorescence microscopy and histology images after slight tuning of input parameters. The network trains at multiple resolutions of the input image, connects the intermediate layers for better localization and context and generates the output using multi-resolution deconvolution filters. The extra convolutional layers which bypass the max-pooling operation allow the network to train for variable input intensities and object sizes and make it robust to noisy data. We compare our results on publicly available data sets and show that the proposed network outperforms recent deep learning algorithms.
T.-H. Song, V. Sanchez, H. ElDaly & N. M. Rajpoot, “Simultaneous Cell Detection and Classification in Bone Marrow Histology Images,” IEEE Journal of Biomedical and Health Informatics, Oct. 2018, vol. 23(4), p. 1469–1476. [Abstract] [doi]
Recently, deep learning frameworks have been shown to be successful and efficient in processing digital histology images for various detection and classification tasks. Among these tasks, cell detection and classification are key steps in many computer-assisted diagnosis systems. Traditionally, cell detection and classification are performed as a sequence of two consecutive steps by using two separate deep learning networks: one for detection and the other for classification. This strategy inevitably increases the computational complexity of the training stage. In this paper, we propose a synchronized deep autoencoder network for simultaneous detection and classification of cells in bone marrow histology images. The proposed network uses a single architecture to detect the positions of cells and classify the detected cells, in parallel. It uses a curve-support Gaussian model to compute probability maps that allow irregularly shaped cells to be detected precisely. Moreover, the network includes a novel neighborhood selection mechanism to boost the classification accuracy. We show that the performance of the proposed network is superior to that of traditional deep learning detection methods and very competitive compared to traditional deep learning classification networks. Runtime comparison also shows that our network requires less time to be trained.
K. Sirinukunwattana, D. Snead, D. Epstein, Z. Aftab, I. Mujeeb, Y.-W. Tsang, I. Cree, N. M. Rajpoot, “Novel digital signatures of tissue phenotypes for predicting distant metastasis in colorectal cancer,” Nature Scientific Reports, Sep. 2018, vol. 8, p. 13692. [Abstract] [doi]
Distant metastasis is the major cause of death in colorectal cancer (CRC). Patients at high risk of developing distant metastasis could benefit from appropriate adjuvant and follow-up treatments if stratified accurately at an early stage of the disease. Studies have increasingly recognized the role of diverse cellular components within the tumor microenvironment in the development and progression of CRC tumors. In this paper, we show that automated analysis of digitized images from locally advanced colorectal cancer tissue slides can provide estimate of risk of distant metastasis on the basis of novel tissue phenotypic signatures of the tumor microenvironment. Specifically, we determine what cell types are found in the vicinity of other cell types, and in what numbers, rather than concentrating exclusively on the cancerous cells. We then extract novel tissue phenotypic signatures using statistical measurements about tissue composition. Such signatures can underpin clinical decisions about the advisability of various types of adjuvant therapy.
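The idea of recording which cell types occur in the vicinity of which others can be sketched as a simple pairwise co-occurrence count (a toy illustration: the function name, radius and type labels are invented, and the paper's signatures use richer statistical measurements of tissue composition):

```python
from itertools import combinations
from collections import Counter

def cooccurrence_signature(cells, radius):
    """Count, for each unordered pair of cell types, how often two cells
    of those types lie within `radius` of each other -- a crude
    tissue-composition signature built from cell positions and types."""
    counts = Counter()
    r2 = radius * radius
    for (p1, t1), (p2, t2) in combinations(cells, 2):
        dx, dy = p1[0] - p2[0], p1[1] - p2[1]
        if dx * dx + dy * dy <= r2:
            counts[tuple(sorted((t1, t2)))] += 1
    return counts

cells = [((0, 0), "tumour"), ((1, 0), "lymphocyte"),
         ((1, 1), "tumour"), ((10, 10), "stroma")]
sig = cooccurrence_signature(cells, radius=2)
print(sig[("lymphocyte", "tumour")])  # 2
```

Vectors of such counts, one per image region, are the kind of feature a downstream risk model could be trained on.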
S. Javed, M. M. Fraz, D. Epstein, D. Snead, & N. M. Rajpoot, “Cellular community detection for tissue phenotyping in histology images,” in MICCAI Workshop Computational Pathology and Ophthalmic Medical Image Analysis, Sep. 2018, pp. 120–129. [Abstract] [doi]
A primary aim of detailed analysis of multi-gigapixel histology images is assisting pathologists for better cancer grading and prognostication. Several methods have been proposed for the analysis of histology images in the literature. However, these methods are often limited to the classification of two classes, i.e., tumor and stroma. Also, most existing methods are based on fully supervised learning and require a large amount of annotations, which are very difficult to obtain. To alleviate these challenges, we propose a novel community detection algorithm for the classification of tissue in Whole-slide Images (WSIs). The proposed algorithm uses a novel graph-based approach to the problem of detecting prevalent communities in a collection of histology images in a semi-supervised manner, resulting in the identification of six distinct tissue phenotypes in the multi-gigapixel image data. We formulate the problem of identifying distinct tissue phenotypes as the problem of finding network communities using the geodesic density gradient in the space of potential interaction between different cellular components. We show that prevalent communities found in this way represent distinct and biologically meaningful tissue phenotypes. Experiments on two independent Colorectal Cancer (CRC) datasets demonstrate that the proposed algorithm outperforms current state-of-the-art methods.
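The geodesic-distance representation underlying the community detection can be sketched for an unweighted graph (a minimal illustration only; the paper's patch graphs are built from chi-square distances, and the density-gradient drift step is omitted here):

```python
from collections import deque

def geodesic_embedding(adj):
    """Represent each node as its vector of geodesic (shortest-path hop)
    distances to all other nodes -- the embedding the community-finding
    step then operates on. Unreachable nodes get inf."""
    nodes = sorted(adj)
    emb = {}
    for src in nodes:
        dist = {src: 0}
        q = deque([src])
        while q:                       # breadth-first search from src
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        emb[src] = [dist.get(n, float("inf")) for n in nodes]
    return emb

adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"], "d": []}
print(geodesic_embedding(adj)["a"])  # [0, 1, 2, inf]
```

Nodes with similar distance vectors sit in the same region of the network, which is what lets density maxima in this space correspond to communities.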
R. Awan & N. Rajpoot, “Deep Autoencoder Features for Registration of Histology Images,” Annual Conference on Medical Image Understanding and Analysis, Jun. 2018, p. 371–378. [Abstract] [doi]
Registration of histology whole slide images of consecutive sections of a tissue block is mandatory for cross-slide analysis. Due to stain variations, a feature-based method for deriving the transformation maps for these images is considered a more reasonable choice than methods that work directly on image intensities. Autoencoders have been employed in a wide variety of applications due to their potential for representation learning and transfer learning for deep architectures. Representation learned by autoencoders has been used for a number of challenging problems including classification and regression. In this study, we analyze deep autoencoder features for registering histology images by maximizing the feature similarities between the fixed and moving images, and demonstrate the capability of these features for histology image registration.
M. Shaban, S. A. Khurram, M. Hassan, S. Mushtaq, A. Loya, N. M. Rajpoot, “Prognostic significance of automated score of tumor infiltrating lymphocytes in oral cancer,” Journal of Clinical Oncology, Jun. 2018, p. e18036. [Abstract] [doi]
Oral squamous cell carcinoma (OSCC) is the most common malignancy of the head and neck region, with a rising incidence particularly in South Asian countries. Abundance of tumor infiltrating lymphocytes (TILs) in the tumor microenvironment has been associated with good prognosis and response to therapy in a variety of cancers (Ruiter et al., Oncoimmunol 2017). Our goal in this study was to explore whether novel artificial intelligence (AI) based automated quantification of TIL abundance carries any prognostic significance for disease-free survival for OSCC patients.
A total of 59 OSCC patients of South Asian origin were included in this study, including 19 patients with recurrent disease. A novel AI based algorithm was developed for recognition of tumor-rich and lymphocyte-rich regions in whole-slide images of Hematoxylin & Eosin (H&E) stained tissue slides from the OSCC patients after training the algorithm on just under half of the dataset (n = 27) with pathologist annotations. We then computed a statistical measure of co-localization of tumor and lymphocytic regions that we term here as the TIL Abundance score (or TILAb score). Finally, prognostic significance of the TILAb score for disease-free survival was investigated with the Cox proportional hazard analysis, using half of the dataset as a discovery subset to determine the best cutoff value of the TILAb score and the remaining half for validation purposes.
Our novel AI algorithm achieved a high accuracy of 90% for the recognition of co-localised tumor and lymphocytic regions, making the downstream analysis reliable. A higher TILAb score was significantly associated (p < 0.013) with better disease-free survival on completely unseen data, which is in agreement with previous findings based on manual TIL quantification. To the best of our knowledge, this is the first study to automate the quantification of TIL abundance from routine H&E slides of OSCC.
The automated TIL abundance score shows prognostic significance similar to the manual score, but with the added advantages of more rapid and objective quantification. Large-scale multi-centric validation is required to establish the TILAb score as a prognostic biomarker in OSCC.
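As a toy illustration of scoring tumour–lymphocyte co-localization on a grid of patch labels (this is deliberately not the published TILAb formula, only the general idea of rewarding lymphocytes adjacent to tumour):

```python
def til_colocalisation(grid):
    """Toy co-localisation score over a patch-label grid: the fraction
    of tumour patches ('T') with at least one lymphocyte patch ('L')
    among their 4-neighbours. Illustrative only -- the published TILAb
    score is a different statistical co-localisation measure."""
    rows, cols = len(grid), len(grid[0])
    tumour = coloc = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != "T":
                continue
            tumour += 1
            nbrs = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            if any(0 <= i < rows and 0 <= j < cols and grid[i][j] == "L"
                   for i, j in nbrs):
                coloc += 1
    return coloc / tumour if tumour else 0.0

print(til_colocalisation(["TL",
                          "TO"]))  # 0.5
```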
N. A. Koohababni, M. Jahanifar, A. Gooya, N. M. Rajpoot, “Nuclei detection using mixture density networks,” International Workshop on Machine Learning in Medical Imaging, Sep. 2018, vol. 11046, p. 241–248. [Abstract] [doi]
Nuclei detection is an important task in the histology domain as it is a main step toward further analysis such as cell counting, cell segmentation, study of cell connections, etc. This is a challenging task due to the complex texture of histology images, variation in shape, and touching cells. To tackle these hurdles, many approaches have been proposed in the literature, among which deep learning methods achieve the best performance. Hence, in this paper, we propose a novel framework for nuclei detection based on Mixture Density Networks (MDNs). These networks are suitable for mapping a single input to several possible outputs, and we utilize this property to detect multiple seeds in a single image patch. A new modified form of a cost function is proposed for training and for handling patches with missing nuclei. The probability maps of the nuclei in the individual patches are next combined to generate the final image-wide result. The experimental results show state-of-the-art performance on a complex colorectal adenocarcinoma dataset.
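The training objective of an MDN is the negative log-likelihood of the observed nucleus centres under the predicted mixture; a minimal sketch (1-D for brevity, with fixed mixture parameters standing in for the network's outputs):

```python
import math

def mixture_nll(pi, mu, sigma, targets):
    """Negative log-likelihood of observed nucleus centres under a
    Gaussian mixture with weights pi, means mu and spreads sigma (the
    quantities an MDN head would emit). 1-D for brevity; real nucleus
    centres are 2-D."""
    nll = 0.0
    for t in targets:
        density = sum(
            w * math.exp(-0.5 * ((t - m) / s) ** 2)
            / (s * math.sqrt(2 * math.pi))
            for w, m, s in zip(pi, mu, sigma))
        nll -= math.log(density)
    return nll

# centres near the two mixture means score a lower (better) NLL
near = mixture_nll([0.5, 0.5], [10.0, 20.0], [1.0, 1.0], [10.1, 19.8])
far = mixture_nll([0.5, 0.5], [10.0, 20.0], [1.0, 1.0], [15.0, 15.0])
print(near < far)  # True
```

Minimising this quantity pulls the mixture components toward the annotated seed points, one component per detected nucleus.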
S. Graham, H. Chen, P.-A. Heng, N. M. Rajpoot, “MILD-Net: Minimal Information Loss Dilated Network for Gland Instance Segmentation in Colon Histology Images,” Medical Imaging with Deep Learning (MIDL), Jul. 2018. [Abstract] [doi]
The analysis of glandular morphology within colon histopathology images is a crucial step in determining the stage of colon cancer. Despite the importance of this task, manual segmentation is laborious, time-consuming and can suffer from subjectivity among pathologists. The rise of computational pathology has led to the development of automated methods for gland segmentation that aim to overcome the challenges of manual segmentation. However, this task is non-trivial due to the large variability in glandular appearance and the difficulty in differentiating between certain glandular and non-glandular histological structures. Furthermore, within pathological practice, a measure of uncertainty is essential for diagnostic decision making. For example, ambiguous areas may require further examination from numerous pathologists. To address these challenges, we propose a fully convolutional neural network that counters the loss of information caused by max-pooling by re-introducing the original image at multiple points within the network. We also use atrous spatial pyramid pooling with varying dilation rates for resolution maintenance and multi-level aggregation. To incorporate uncertainty, we introduce random transformations during test time for an enhanced segmentation result that simultaneously generates an uncertainty map, highlighting areas of ambiguity. We show that this map can be used to define a metric for disregarding predictions with high uncertainty. The proposed network achieves state-of-the-art performance on the GlaS challenge dataset, as part of MICCAI 2015, and on a second independent colorectal adenocarcinoma dataset.
R. Awan, N. A. Koohbanani, M. Shaban, A. Lisowska, N. M. Rajpoot, “Context-aware learning using transferable features for classification of breast cancer histology images,” International Conference on Image Analysis and Recognition, 2018, p. 788–795. [Abstract] [doi]
Convolutional neural networks (CNNs) have recently been used for a variety of histology image analysis tasks. However, the availability of a large dataset is a major prerequisite for training a CNN, which limits its use by the computational pathology community. In previous studies, CNNs have demonstrated their potential in terms of feature generalizability and transferability, accompanied by better performance. Considering these traits of CNNs, we propose a simple yet effective method which leverages the strengths of a CNN combined with the advantages of including contextual information, particularly designed for a small dataset. Our method consists of two main steps: first, it uses the activation features of a CNN trained for patch-based classification; then it trains a separate classifier on features of overlapping patches to perform image-based classification using the contextual information. The proposed framework outperformed the state-of-the-art method for breast cancer classification.
S. Graham & N. M. Rajpoot, “SAMS-Net: Stain-aware multi-scale network for instance-based nuclei segmentation in histology images,” IEEE International Symposium on Biomedical Imaging, May 2018, p. 590–594. [Abstract] [doi]
Segmentation of nuclear material in histology slides is an important step in the digital pathology work-flow, due to the ability for nuclei to act as key diagnostic markers. Manual segmentation can be a laborious task, where pathologists are often required to analyse many nuclei within a whole slide image (WSI). The rise in digital pathology has been matched with an increase in interest for automated nuclei segmentation in Hematoxylin & Eosin (H&E) stained histology images, yet this remains a challenge due to the heterogeneous appearance of different types of nuclei. This heterogeneity can lead to nuclei having a variable Hematoxylin intensity, which often has detrimental effects on the success of current methods. We propose a deep multi-scale neural network, with a novel loss function that is sensitive to the Hematoxylin intensity, for precise object-level nuclei segmentation. We show that the proposed network outperforms all competing methods for the computational precision medicine (CPM) nuclei segmentation challenge dataset as part of MICCAI 2017.
S. Graham, M. Shaban, T. Qaiser, N. A. Koohababni, S. A. Khurram, N. M. Rajpoot, “Classification of lung cancer histology images using patch-level summary statistics,” SPIE Medical Imaging, Mar. 2018, vol. 10581, p. 1058119. [Abstract] [doi]
There are two main types of lung cancer: small cell lung cancer (SCLC) and non-small cell lung cancer (NSCLC), which are grouped accordingly due to similarity in behaviour and response to treatment. The main types of NSCLC are lung adenocarcinoma (LUAD), which accounts for about 40% of all lung cancers and lung squamous cell carcinoma (LUSC), which accounts for about 25-30% of all lung cancers. Due to their differences, automated classification of these two main subtypes of NSCLC is a critical step in developing a computer aided diagnostic system. We present an automated method for NSCLC classification, that consists of a two-part approach. Firstly, we implement a deep learning framework to classify input patches as LUAD, LUSC or non-diagnostic (ND). Next, we extract a collection of statistical and morphological measurements from the labeled whole-slide image (WSI) and use a random forest regression model to classify each WSI as lung adenocarcinoma or lung squamous cell carcinoma. This task is part of the Computational Precision Medicine challenge at the MICCAI 2017 conference, where we achieved the greatest classification accuracy with a score of 0.81.
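The patch-to-slide aggregation step can be sketched as computing summary statistics over the patch labels (an illustrative feature set; the paper additionally uses morphological measurements, and the random forest that consumes these features is omitted):

```python
from collections import Counter

def wsi_summary_features(patch_labels):
    """Slide-level summary statistics from patch-level predictions
    ('LUAD', 'LUSC', 'ND'), of the kind fed to a downstream
    random-forest model for whole-slide classification."""
    counts = Counter(patch_labels)
    diagnostic = counts["LUAD"] + counts["LUSC"]
    return {
        "luad_frac": counts["LUAD"] / diagnostic if diagnostic else 0.0,
        "lusc_frac": counts["LUSC"] / diagnostic if diagnostic else 0.0,
        "nd_frac": counts["ND"] / len(patch_labels),
    }

feats = wsi_summary_features(["LUAD"] * 6 + ["LUSC"] * 2 + ["ND"] * 2)
print(feats["luad_frac"])  # 0.75
```

Ignoring non-diagnostic patches when computing the subtype fractions keeps background tissue from diluting the slide-level signal.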
T. Qaiser, A. Mukherjee, C. Reddy Pb, S. D. Munugoti, V. Tallam, T. Pitkäaho, T. Lehtimäki, T. Naughton, M. Berseth, A. Pedraza, R. Mukundan, M. Smith, A. Bhalerao, E. Rodner, M. Simon, J. Denzler, C.-H Huang, G. Bueno, D. Snead, I. Ellis, M. Ilyas, N. M. Rajpoot, “HER2 challenge contest: a detailed assessment of automated HER2 scoring algorithms in whole slide images of breast cancer tissues,” Histopathology, Jan. 2018, vol. 72(2), p. 227–238. [Abstract] [doi][Data]
Evaluating expression of the human epidermal growth factor receptor 2 (HER2) by visual examination of immunohistochemistry (IHC) on invasive breast cancer (BCa) is a key part of the diagnostic assessment of BCa due to its recognized importance as a predictive and prognostic marker in clinical practice. However, visual scoring of HER2 is subjective, and consequently prone to interobserver variability. Given the prognostic and therapeutic implications of HER2 scoring, a more objective method is required. In this paper, we report on a recent automated HER2 scoring contest, held in conjunction with the annual PathSoc meeting held in Nottingham in June 2016, aimed at systematically comparing and advancing the state‐of‐the‐art artificial intelligence (AI)‐based automated methods for HER2 scoring. Methods and results: The contest data set comprised digitized whole slide images (WSI) of sections from 86 cases of invasive breast carcinoma stained with both haematoxylin and eosin (H&E) and IHC for HER2. The contesting algorithms predicted scores of the IHC slides automatically for an unseen subset of the data set and the predicted scores were compared with the ‘ground truth’ (a consensus score from at least two experts). We also report on a simple ‘Man versus Machine’ contest for the scoring of HER2 and show that the automated methods could beat the pathology experts on this contest data set. Conclusions: This paper presents a benchmark for comparing the performance of automated algorithms for scoring of HER2. It also demonstrates the enormous potential of automated algorithms in assisting the pathologist with objective IHC scoring.
A. Ahmad, A. Asif, N. M. Rajpoot, M. Arif, F. Minhas, “Correlation Filters for Detection of Cellular Nuclei in Histopathology Images,” Journal of Medical Systems, Jan. 2018, vol. 42, p. 7. [Abstract] [doi]
Nuclei detection in histology images is an essential part of computer aided diagnosis of cancers and tumors. It is a challenging task due to diverse and complicated structures of cells. In this work, we present an automated technique for detection of cellular nuclei in hematoxylin and eosin stained histopathology images. Our proposed approach is based on kernelized correlation filters. Correlation filters have been widely used in object detection and tracking applications, but their strength has not been explored in the medical imaging domain until now. Our experimental results show that the proposed scheme gives state-of-the-art accuracy and can learn complex nuclear morphologies. Like deep learning approaches, the proposed filters do not require engineering of image features as they can operate directly on histopathology images without significant preprocessing. However, unlike deep learning methods, the large-margin correlation filters developed in this work are interpretable, computationally efficient and do not require specialized or expensive computing hardware.
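The core mechanism, scoring image locations by correlation with a filter and taking the peak response, can be sketched as follows (a fixed hand-made template stands in for the learned filter; the paper learns large-margin kernelized filters instead):

```python
def correlation_peak(image, template):
    """Slide a template over the image and return the location of the
    highest cross-correlation response -- the bare detection mechanism
    behind correlation-filter methods."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best, pos = float("-inf"), None
    for r in range(ih - th + 1):           # valid-mode sliding window
        for c in range(iw - tw + 1):
            score = sum(image[r + i][c + j] * template[i][j]
                        for i in range(th) for j in range(tw))
            if score > best:
                best, pos = score, (r, c)
    return pos, best

img = [[0, 0, 0, 0],
       [0, 9, 8, 0],
       [0, 8, 9, 0],
       [0, 0, 0, 0]]   # one bright nucleus-like blob
pos, score = correlation_peak(img, [[1, 1], [1, 1]])
print(pos)  # (1, 1)
```

In practice the correlation is computed in the frequency domain for speed, and multiple peaks above a threshold give multiple detections.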
J. van der Laak, N. M. Rajpoot, D. Vossen, “The Promise of Computational Pathology,” The Pathologist, Jan. 2018, vol. 38, p. 16–26. [Magazine article]
B. E. Bejnordi, M. Veta, P. J. van Diest, B. van Ginneken, N. Karssemeijer, G. Litjens, J. A. W. M. van der Laak, M. Hermsen, Q. F. Manson, M. Balkenhol, O. Geessink, N. Stathonikos, M. C. R. F. van Dijk, P. Bult, F. Beca, A. H. Beck, D. Wang, A. Khosla, R. Gargeya, H. Irshad, A. Zhong, Q. Dou, Q. Li, H. Chen, H.-J. Lin, P.-A. Heng, C. Haß, E. Bruni, Q. Wong, U. Halici, M. Ü. Öner, R. Cetin-Atalay, M. Berseth, V. Khvatkov, A. Vylegzhanin, O. Kraus, M. Shaban, N. M. Rajpoot, R. Awan, K. Sirinukunwattana, T. Qaiser, Y.-W. Tsang, D. Tellez, J. Annuscheit, P. Hufnagl, M. Valkonen, K. Kartasalo, L. Latonen, P. Ruusuvuori, K. Liimatainen, S. Albarqouni, B. Mungal, A. George, S. Demirci, N. Navab, S. Watanabe, S. Seno, Y. Takenaka, H. Matsuda, H. A. Phoulady, V. Kovalev, A. Kalinovsky, V. Liauchuk, G. Bueno, M. M. Fernandez-Carrobles, I. Serrano, O. Deniz, D. Racoceanu, R. Venâncio, “Diagnostic Assessment of Deep Learning Algorithms for Detection of Lymph Node Metastases in Women With Breast Cancer,” Journal of the American Medical Association (JAMA), Dec. 2017, vol. 318(22), p. 2199–2210. [Abstract] [doi]
Importance Application of deep learning algorithms to whole-slide pathology images can potentially improve diagnostic accuracy and efficiency.
Objective To assess the performance of automated deep learning algorithms at detecting metastases in hematoxylin and eosin–stained tissue sections of lymph nodes of women with breast cancer and compare it with pathologists’ diagnoses in a diagnostic setting.
Design, Setting, and Participants Researcher challenge competition (CAMELYON16) to develop automated solutions for detecting lymph node metastases (November 2015-November 2016). A training data set of whole-slide images from 2 centers in the Netherlands with (n = 110) and without (n = 160) nodal metastases verified by immunohistochemical staining were provided to challenge participants to build algorithms. Algorithm performance was evaluated in an independent test set of 129 whole-slide images (49 with and 80 without metastases). The same test set of corresponding glass slides was also evaluated by a panel of 11 pathologists with time constraint (WTC) from the Netherlands to ascertain likelihood of nodal metastases for each slide in a flexible 2-hour session, simulating routine pathology workflow, and by 1 pathologist without time constraint (WOTC).
Exposures Deep learning algorithms submitted as part of a challenge competition or pathologist interpretation.
Main Outcomes and Measures The presence of specific metastatic foci and the absence vs presence of lymph node metastasis in a slide or image using receiver operating characteristic curve analysis. The 11 pathologists participating in the simulation exercise rated their diagnostic confidence as definitely normal, probably normal, equivocal, probably tumor, or definitely tumor.
Results The area under the receiver operating characteristic curve (AUC) for the algorithms ranged from 0.556 to 0.994. The top-performing algorithm achieved a lesion-level, true-positive fraction comparable with that of the pathologist WOTC (72.4% [95% CI, 64.3%-80.4%]) at a mean of 0.0125 false-positives per normal whole-slide image. For the whole-slide image classification task, the best algorithm (AUC, 0.994 [95% CI, 0.983-0.999]) performed significantly better than the pathologists WTC in a diagnostic simulation (mean AUC, 0.810 [range, 0.738-0.884]; P < .001). The top 5 algorithms had a mean AUC that was comparable with the pathologist interpreting the slides in the absence of time constraints (mean AUC, 0.960 [range, 0.923-0.994] for the top 5 algorithms vs 0.966 [95% CI, 0.927-0.998] for the pathologist WOTC).
Conclusions and Relevance In the setting of a challenge competition, some deep learning algorithms achieved better diagnostic performance than a panel of 11 pathologists participating in a simulation exercise designed to mimic routine pathology workflow; algorithm performance was comparable with an expert pathologist interpreting whole-slide images without time constraints. Whether this approach has clinical utility will require evaluation in a clinical setting.
R. Awan, K. Sirinukunwattana, D. Epstein, S. Jefferyes, U. Qidwai, Z. Aftab, I. Mujeeb, D. Snead, N. M. Rajpoot, “Glandular morphometrics for objective grading of colorectal adenocarcinoma histology images,” Nature Scientific Reports, Dec. 2017, vol. 7(1), p. 16852. [Abstract] [doi]
Determining the grade of colon cancer from tissue slides is a routine part of pathological analysis. In the case of colorectal adenocarcinoma (CRA), grading is partly determined by morphology and the degree of formation of glandular structures. Achieving consistency between pathologists is difficult due to the subjective nature of grading assessment. An objective grading using computer algorithms will be more consistent and able to analyse images in greater detail. In this paper, we measure the shape of glands with a novel metric that we call the Best Alignment Metric (BAM). We show a strong correlation between this measure of glandular shape and the grade of the tumour. We used shape-specific parameters to perform a two-class classification of images into normal or cancerous tissue and a three-class classification into normal, low grade cancer, and high grade cancer. The task of detecting gland boundaries, which is a prerequisite of shape-based analysis, was carried out using a deep convolutional neural network designed for segmentation of glandular structures. A support vector machine (SVM) classifier was trained using shape features derived from BAM. Through cross-validation, we achieved an accuracy of 97% for the two-class and 91% for the three-class classification.
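The final classification stage described above (an SVM over BAM-derived shape features) can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: the two features, the labels, and the plain sub-gradient training loop are all hypothetical stand-ins.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Minimal linear SVM trained by sub-gradient descent on the hinge loss.
    X: (n, d) feature matrix; y: labels in {-1, +1}."""
    rng = np.random.default_rng(0)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            margin = y[i] * (X[i] @ w + b)
            if margin < 1:                       # hinge-loss violation
                w += lr * (y[i] * X[i] - lam * w)
                b += lr * y[i]
            else:                                # only regularise
                w -= lr * lam * w
    return w, b

# Hypothetical 2-D shape features (say, a BAM-derived distance and a gland
# irregularity score); +1 = cancerous, -1 = normal.
X = np.array([[0.9, 0.2], [0.8, 0.3], [0.85, 0.25],
              [0.1, 0.8], [0.2, 0.9], [0.15, 0.85]])
y = np.array([1, 1, 1, -1, -1, -1])
w, b = train_linear_svm(X, y)
pred = np.sign(X @ w + b)
```

In practice one would use a mature SVM library with cross-validation, as the paper does; the loop above only shows the mechanics of hinge-loss training.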
T.-H. Song, V. Sanchez, H. Eldaly, N. M. Rajpoot, “Dual-Channel Active Contour Model for Megakaryocytic Cell Segmentation in Bone Marrow Trephine Histology Images,” IEEE Transactions on Biomedical Engineering, Dec. 2017, vol. 64(12), p. 2913–2923. [Abstract] [doi]
Assessment of the morphological features of megakaryocytes (MKs), a specialised type of cell, in bone marrow trephine biopsies plays an important role in the classification of different subtypes of Philadelphia-chromosome-negative myeloproliferative neoplasms (Ph-negative MPNs). In order to aid hematopathologists in the study of MKs, we propose a novel framework that can efficiently delineate the nuclei and cytoplasm of these cells in digitized images of bone marrow trephine biopsies. The framework first employs a supervised machine learning approach that utilizes color and texture features to delineate megakaryocytic nuclei. It then employs a novel dual-channel active contour model to delineate the boundary of the megakaryocytic cytoplasm by using different deconvolved stain channels. Compared to other recent models, the proposed framework achieves accurate results for both megakaryocytic nuclear and cytoplasmic delineation.
E. Rezk, Z. Awan, F. Islam, A. Jaoua, S. AlMaadeed, N. Zhang, G. Das, N. M. Rajpoot, “Conceptual data sampling for breast cancer histology image classification,” Computers in Biology and Medicine, Oct. 2017, vol. 89, p. 59–67. [Abstract] [doi]
Data analytics have become increasingly complicated as the amount of data has increased. One technique that is used to enable data analytics in large datasets is data sampling, in which a portion of the data is selected to preserve the data characteristics for use in data analytics. In this paper, we introduce a novel data sampling technique that is rooted in formal concept analysis theory. This technique is used to create samples reliant on the data distribution across a set of binary patterns. The proposed sampling technique is applied in classifying the regions of breast cancer histology images as malignant or benign. The performance of our method is compared to other classical sampling methods. The results indicate that our method is efficient and generates an illustrative sample of small size. It is also competing with other sampling methods in terms of sample size and sample quality represented in classification accuracy and F1 measure.
M. Xue, A. Shafie, T. Qaiser, N. M. Rajpoot, G. Kaltsas, S. James, K. Gopalakrishnan, A. Fisk, G. K. Dimitriadis, D. K. Grammatopoulos, N. Rabbani, P. J. Thornalley, M. O. Weickert, “Glyoxalase 1 copy number variation in patients with well differentiated gastro-entero-pancreatic neuroendocrine tumours (GEP-NET).,” Oncotarget, Sep. 2017, vol. 8(44), p. 76961–76973. [Abstract] [doi]
Background: The glyoxalase-1 gene (GLO1) is a hotspot for copy-number variation (CNV) in human genomes. Increased GLO1 copy-number is associated with multidrug resistance in tumour chemotherapy, but prevalence of GLO1 CNV in gastro-entero-pancreatic neuroendocrine tumours (GEP-NET) is unknown.
Methods: GLO1 copy-number variation was measured in 39 patients with GEP-NET (midgut NET, n = 25; pancreatic NET, n = 14) after curative or debulking surgical treatment. Primary tumour tissue, surrounding healthy tissue and, where applicable, additional metastatic tumour tissue were analysed, using real time qPCR. Progression and survival following surgical treatment were monitored over 4.2 ± 0.5 years.
Results: In the pooled GEP-NET cohort, GLO1 copy-number in healthy tissue was 2.0 in all samples but significantly increased in primary tumour tissue in 43% of patients with pancreatic NET and in 72% of patients with midgut NET, mainly driven by significantly higher GLO1 copy-number in midgut NET. In tissue from additional metastasis resections (18 midgut NET and one pancreatic NET), GLO1 copy-number was also increased compared with healthy tissue, but was not significantly different from primary tumour tissue. During a mean 3–5 years of follow-up, 8 patients died and 16 patients showed radiological progression. In midgut NET, a high GLO1 copy-number was associated with earlier progression. In NETs with increased GLO1 copy-number, there was increased Glo1 protein expression compared to non-malignant tissue.
Conclusions: GLO1 copy-number was increased in a large percentage of patients with GEP-NET and correlated positively with increased Glo1 protein in tumour tissue. Analysis of GLO1 copy-number variation particularly in patients with midgut NET could be a novel prognostic marker for tumour progression.
N. Trahearn, D. Epstein, I. Cree, D. R. J. Snead, N. M. Rajpoot, “Hyper-Stain Inspector: A Framework for Robust Registration and Localised Co-Expression Analysis of Multiple Whole-Slide Images of Serial Histology Sections,” Nature Scientific Reports, Jul. 2017, vol. 7, p. 5641. [Abstract] [doi]
In this paper, we present a fast method for registration of multiple large, digitised whole-slide images (WSIs) of serial histology sections. Through cross-slide WSI registration, it becomes possible to select and analyse a common visual field across images of several serial sections stained with different protein markers. It is, therefore, a critical first step for any downstream co-localised cross-slide analysis. The proposed registration method uses a two-stage approach, first estimating a fast initial alignment using the tissue sections’ external boundaries, followed by an efficient refinement process guided by key biological structures within the visual field. We show that this method is able to produce a high quality alignment in a variety of circumstances, and demonstrate that the refinement is able to quantitatively improve registration quality. In addition, we provide a case study that demonstrates how the proposed method for cross-slide WSI registration could be used as part of a specific co-expression analysis framework.
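The two-stage idea (fast global alignment, then local refinement) can be illustrated on toy binary tissue masks. This sketch is a heavily simplified assumption: translation-only, tiny arrays, nothing like WSI scale or the paper's boundary- and structure-guided stages.

```python
import numpy as np

def register_two_stage(fixed, moving, search=3):
    """Toy two-stage translation registration on binary tissue masks:
    (1) coarse alignment from mask centroids, then (2) local refinement by
    exhaustive search over small shifts maximising mask overlap."""
    def centroid(mask):
        ys, xs = np.nonzero(mask)
        return np.array([ys.mean(), xs.mean()])

    # Stage 1: coarse shift from centroid difference.
    coarse = np.round(centroid(fixed) - centroid(moving)).astype(int)

    # Stage 2: refine within +/- `search` pixels of the coarse estimate.
    best, best_score = coarse, -1
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shift = coarse + [dy, dx]
            shifted = np.roll(moving, shift, axis=(0, 1))
            score = np.logical_and(fixed, shifted).sum()
            if score > best_score:
                best, best_score = shift, score
    return best

# Two hypothetical 32x32 masks of the same tissue region, offset by (5, 8).
fixed = np.zeros((32, 32), bool); fixed[10:20, 12:22] = True
moving = np.zeros((32, 32), bool); moving[5:15, 4:14] = True
shift = register_two_stage(fixed, moving)
```

Real WSI registration must also handle rotation, deformation, and stain differences between sections; the sketch only conveys the coarse-then-refine structure.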
N. Trahearn, Y.-W. Tsang, I. Cree, D. R. J. Snead, D. B. A. Epstein, N. M. Rajpoot, “Simultaneous Automatic Scoring and Co-Registration of Hormone Receptors in Tumour Areas in Whole Slide Images of Breast Cancer Tissue Slides,” Cytometry: Part A special issue on Computer-Aided Diagnostics in Digital Pathology, Jun. 2017, vol. 91(6), p. 585–594. [Abstract] [doi]
Automation of downstream analysis may offer many potential benefits to routine histopathology. One area of interest for automation is in the scoring of multiple immunohistochemical markers to predict the patient's response to targeted therapies. Automated serial slide analysis of this kind requires robust registration to identify common tissue regions across sections. We present an automated method for co-localized scoring of Estrogen Receptor and Progesterone Receptor (ER/PR) in breast cancer core biopsies using whole slide images. Regions of tumor in a series of fifty consecutive breast core biopsies were identified by annotation on H&E whole slide images. Sequentially cut immunohistochemical stained sections were scored manually, before being digitally scanned and then exported into JPEG 2000 format. A two-stage registration process was performed to identify the annotated regions of interest in the immunohistochemistry sections, which were then scored using the Allred system. Overall correlation between manual and automated scoring for ER and PR was 0.944 and 0.883, respectively, with 90% of ER and 80% of PR scores within one point of agreement or less. This proof of principle study indicates slide registration can be used as a basis for automation of the downstream analysis for clinically relevant biomarkers in the majority of cases. The approach is likely to be improved by implementation of safeguarding analysis steps post registration.
T. Qaiser, Y.-W. Tsang, D. Epstein, N. M. Rajpoot, “Tumor segmentation in whole slide images using persistent homology and deep convolutional features.,” In Medical Image Understanding and Analysis (MIUA), Jul. 2017, vol. 723, p. 320–329. [Abstract] [doi]
This paper presents a novel automated tumor segmentation approach for Hematoxylin & Eosin stained histology images. The proposed method enhances segmentation performance by combining topological and convolutional neural network (CNN) features. Our approach is based on 3 steps: (1) construct enhanced persistent homology profiles by using topological features; (2) train a CNN to extract convolutional features; (3) employ a multi-stage ensemble strategy to combine Random Forest regression models. The experimental results demonstrate that the proposed method outperforms the conventional CNN.
S. E. A. Raza, L. Cheung, D. Epstein, S. Pelengaris, M. Khan, N. M. Rajpoot, “MIMO-Net: Gland segmentation using multi-input-multi-output convolutional neural network,” In Medical Image Understanding and Analysis (MIUA), Jul. 2017, pp. 698–706. [Abstract] [doi]
Morphological assessment of glands in histopathology images is very important in cancer grading. However, this is labour intensive, requires highly trained pathologists and has limited reproducibility. Digitisation of tissue slides provides us with the opportunity to employ computers, which are very efficient in repetitive tasks, allowing us to automate the morphological assessment with input from the pathologist. The first step in automated morphological assessment is the segmentation of these glandular regions. In this paper, we present a multi-input multi-output convolutional neural network for segmentation of glands in histopathology images. We test our algorithm on the publicly available GLaS data set and show that our algorithm produces competitive results compared to the state-of-the-art algorithms in terms of various quantitative measures.
S. E. A. Raza, L. Cheung, D. Epstein, S. Pelengaris, M. Khan, N. M. Rajpoot, “MIMO-Net: A multi-input multi-output convolutional neural network for cell segmentation in fluorescence microscopy images,” IEEE International Symposium on Biomedical Imaging (ISBI), Apr. 2017, p. 337-340. [Abstract] [doi]
We propose a novel multiple-input multiple-output convolutional neural network (MIMO-Net) for cell segmentation in fluorescence microscopy images. The proposed network trains its parameters using multiple resolutions of the input image, connects the intermediate layers for better localization and context, and generates the output using multi-resolution deconvolution filters. MIMO-Net allows us to deal with variable-intensity cell boundaries and highly variable cell size in mouse pancreatic tissue by adding extra convolutional layers which bypass the max-pooling operation. The results show that our method outperforms state-of-the-art deep learning based approaches for segmentation.
G. Li, S.E.A. Raza, N.M. Rajpoot, “Multi-Resolution Cell Orientation Congruence Descriptors for Epithelium Segmentation in Endometrial Histology Images,” Medical Image Analysis, Jan. 2017, vol. 37, p. 91–100. [Abstract] [doi]
It has been recently shown that recurrent miscarriage can be caused by abnormally high ratio of number of uterine natural killer (UNK) cells to the number of stromal cells in human female uterus lining. Due to high workload, the counting of UNK and stromal cells needs to be automated using computer algorithms. However, stromal cells are very similar in appearance to epithelial cells which must be excluded in the counting process. To exclude the epithelial cells from the counting process it is necessary to identify epithelial regions. There are two types of epithelial layers that can be encountered in the endometrium: luminal epithelium and glandular epithelium. To the best of our knowledge, there is no existing method that addresses the segmentation of both types of epithelium simultaneously in endometrial histology images. In this paper, we propose a multi-resolution Cell Orientation Congruence (COCo) descriptor which exploits the fact that neighbouring epithelial cells exhibit similarity in terms of their orientations. Our experimental results show that the proposed descriptors yield accurate results in simultaneously segmenting both luminal and glandular epithelium.
N. Alsubaie, N. Trahearn, S.E.A. Raza, D. R. J. Snead, N. M. Rajpoot, “Stain Deconvolution Using Statistical Analysis of Multi-Resolution Stain Colour Representation,” PLoS One, Jan. 2017, vol. 12, no. 1, p.e0169875. [Abstract] [doi]
Stain colour estimation is a prominent step in the analysis pipeline of most histology image processing algorithms. Providing a reliable and efficient stain colour deconvolution approach is fundamental for a robust algorithm. In this paper, we propose a novel method for stain colour deconvolution of histology images. This approach statistically analyses the multi-resolution representation of the image to separate the independent observations from the correlated ones. We then estimate the stain mixing matrix using the filtered uncorrelated data. We conducted an extensive set of experiments to compare the proposed method with recent state-of-the-art methods and demonstrate the robustness of this approach using three different datasets of scanned slides, prepared in different labs using different scanners.
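For context, the deconvolution step that follows stain-matrix estimation is the standard Beer-Lambert inversion. A minimal sketch using a fixed Ruifrok-style H&E matrix; the paper's actual contribution, estimating this matrix from the image itself via statistical analysis in the wavelet domain, is omitted here.

```python
import numpy as np

def deconvolve_stains(rgb, stain_matrix):
    """Separate stain concentrations via the Beer-Lambert model:
    OD = -log10(I / I0) = C @ M, so C = OD @ pinv(M)."""
    od = -np.log10(np.clip(rgb.reshape(-1, 3) / 255.0, 1e-6, 1.0))
    conc = od @ np.linalg.pinv(stain_matrix)
    return conc.reshape(rgb.shape[:-1] + (stain_matrix.shape[0],))

# Standard Ruifrok H&E stain vectors (rows = stains), unit-normalised.
he_matrix = np.array([[0.650, 0.704, 0.286],    # hematoxylin
                      [0.072, 0.990, 0.105]])   # eosin
he_matrix /= np.linalg.norm(he_matrix, axis=1, keepdims=True)

img = np.full((4, 4, 3), 200, dtype=np.uint8)   # hypothetical tissue tile
conc = deconvolve_stains(img, he_matrix)        # per-pixel H and E maps
```

Methods like the one above differ mainly in how `stain_matrix` is obtained; once it is known, the inversion itself is a fixed linear operation.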
K. Sirinukunwattana, J. P. W. Pluim, H. Chen, X. Qi, P-A Heng, Y. B. Guo, L. Y. Wang, B. J. Matuszewski, E. Bruni, U. Sanchez, A. Böhm, O. Ronneberger, B. B. Cheikh, D. Racoceanu, P. Kainz, M. Pfeiffer, M. Urschler, D. R. J. Snead, N. M. Rajpoot, “Gland Segmentation in Colon Histology Images: The GlaS Challenge Contest,” Medical Image Analysis, Jan. 2017, vol. 35, p. 489–502. [Abstract] [doi] [Data]
Colorectal adenocarcinoma originating in intestinal glandular structures is the most common form of colon cancer. In clinical practice, the morphology of intestinal glands, including architectural appearance and glandular formation, is used by pathologists to inform prognosis and plan the treatment of individual patients. However, achieving good inter-observer as well as intra-observer reproducibility of cancer grading is still a major challenge in modern pathology. An automated approach which quantifies the morphology of glands is a solution to the problem.
This paper provides an overview to the Gland Segmentation in Colon Histology Images Challenge Contest (GlaS) held at MICCAI’2015. Details of the challenge, including organization, dataset and evaluation criteria, are presented, along with the method descriptions and evaluation results from the top performing methods.
T. Qaiser, K. Sirinukunwattana, K. Nakane, Y.-W Tsang, D. B. A. Epstein, N. M. Rajpoot, “Persistent Homology for Fast Tumor Segmentation in Whole Slide Histology Images,” Procedia Computer Science, Dec. 2016, vol. 90, p.119-124. [Abstract] [doi]
Automated tumor segmentation in Hematoxylin & Eosin stained histology images is an essential step towards a computer-aided diagnosis system. In this work we propose a novel tumor segmentation approach for a histology whole-slide image (WSI) by exploring the degree of connectivity among nuclei using the novel idea of persistent homology profiles. Our approach is based on 3 steps: 1) selection of exemplar patches from the training dataset using convolutional neural networks (CNNs); 2) construction of persistent homology profiles based on topological features; 3) classification using a variant of k-nearest neighbors (k-NN). Extensive experimental results favor our algorithm over a conventional CNN.
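One ingredient of a persistent homology profile, the number of connected components (Betti-0) as the intensity threshold sweeps, can be sketched directly. This is a toy version under stated assumptions: 4-connectivity, a plain union-find, and nuclei assumed darker than background; it is not the authors' profile construction.

```python
import numpy as np

def betti0_profile(gray, thresholds):
    """Count connected components (Betti-0) of the thresholded image
    at each threshold level, giving a simple topological profile."""
    h, w = gray.shape
    profile = []
    for t in thresholds:
        mask = gray <= t                 # nuclei assumed darker than background
        parent = {}
        def find(a):
            while parent[a] != a:
                parent[a] = parent[parent[a]]  # path halving
                a = parent[a]
            return a
        for y in range(h):
            for x in range(w):
                if not mask[y, x]:
                    continue
                parent[(y, x)] = (y, x)
                # union with already-visited 4-neighbours
                for ny, nx in ((y - 1, x), (y, x - 1)):
                    if ny >= 0 and nx >= 0 and mask[ny, nx]:
                        ra, rb = find((y, x)), find((ny, nx))
                        if ra != rb:
                            parent[ra] = rb
        profile.append(len({find(p) for p in parent}))
    return profile

# Two dark blobs on a light background: Betti-0 is 2 once the threshold
# admits the blobs, and 1 when the background merges everything.
img = np.full((8, 8), 200)
img[1:3, 1:3] = 50
img[5:7, 5:7] = 50
prof = betti0_profile(img, [100, 255])
```

Tracking how such counts appear and disappear across the full threshold sweep is what yields the persistence information exploited by the paper.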
A. Mahmood, M. Small, S. Al-Maadeed, N. M. Rajpoot, “Using Geodesic Space Density Gradients for Network Community Detection,” IEEE Transactions on Knowledge and Data Engineering, Nov. 2016, vol. PP, no.99, p.1-1. [Abstract] [doi]
Many real world complex systems naturally map to network data structures instead of geometric spaces, because the only available information is the presence or absence of a link between two entities in the system. To enable data mining techniques to solve problems in the network domain, the nodes need to be mapped to a geometric space. We propose this mapping by representing each network node with its geodesic distances from all other nodes. The space spanned by the geodesic distance vectors is the geodesic space of that network. The positions of different nodes in the geodesic space encode the network structure. In this space, considering a continuous density field induced by each node, the density at a specific point is the summation of the density fields induced by all nodes. We drift each node in the direction of the positive density gradient using an iterative algorithm until each node reaches a local maximum. Due to the network structure captured by this space, nodes that drift to the same region of space belong to the same communities in the original network. We use the direction of movement and the final position of each node as important clues for community membership assignment. The proposed algorithm is compared with more than ten state-of-the-art community detection techniques on two benchmark networks with known communities, using the Normalized Mutual Information criterion. The proposed algorithm outperformed these methods by a significant margin. Moreover, it has also shown excellent performance on many real-world networks.
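The two core steps, embedding nodes by their geodesic distance vectors and drifting them up the density gradient, can be sketched as a toy mean-shift over a six-node graph. This illustrates the general idea only; the bandwidth, iteration count, and grouping tolerance are hypothetical choices, not the authors' algorithm or its convergence analysis.

```python
import numpy as np
from collections import deque

def geodesic_embedding(adj):
    """Represent each node by its vector of BFS geodesic distances to
    every other node (unweighted graph given as an adjacency dict)."""
    nodes = sorted(adj)
    emb = np.zeros((len(nodes), len(nodes)))
    for i, s in enumerate(nodes):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        emb[i] = [dist[n] for n in nodes]
    return nodes, emb

def mean_shift_communities(emb, bandwidth=1.5, iters=100):
    """Drift each embedded node up the Gaussian kernel density gradient;
    nodes converging to the same mode form one community."""
    pts = emb.astype(float).copy()
    for _ in range(iters):
        for i in range(len(pts)):
            w = np.exp(-np.sum((emb - pts[i]) ** 2, axis=1)
                       / (2 * bandwidth ** 2))
            pts[i] = (w[:, None] * emb).sum(0) / w.sum()
    labels, modes = [], []          # group coinciding modes
    for p in pts:
        for k, m in enumerate(modes):
            if np.linalg.norm(p - m) < 1e-3:
                labels.append(k)
                break
        else:
            modes.append(p)
            labels.append(len(modes) - 1)
    return labels

# Two triangles joined by a single bridge edge (hypothetical toy network).
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
nodes, emb = geodesic_embedding(adj)
labels = mean_shift_communities(emb)
```

On this toy graph the two triangles end up in two distinct density modes, mirroring the community structure of the original network.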
V. N. Kovacheva, N. M. Rajpoot, “Subcellular protein expression models for microsatellite instability in colorectal adenocarcinoma tissue images,” BMC Bioinformatics, Oct. 2016, vol. 17:430. [Abstract] [doi]
Background New bioimaging techniques capable of visualising the co-location of numerous proteins within individual cells have been proposed to study tumour heterogeneity of neighbouring cells within the same tissue specimen. These techniques have highlighted the need to better understand the interplay between proteins in terms of their colocalisation. Results We recently proposed a cellular-level model of the healthy and cancerous colonic crypt microenvironments. Here, we extend the model to include detailed models of protein expression to generate synthetic multiplex fluorescence data. As a first step, we present models for various cell organelles learned from real immunofluorescence data from the Human Protein Atlas. Comparison between the distribution of various features obtained from the real and synthetic organelles has shown very good agreement. This has included both features that have been used as part of the model input and ones that have not been explicitly considered. We then develop models for six proteins which are important colorectal cancer biomarkers and are associated with microsatellite instability, namely MLH1, PMS2, MSH2, MSH6, P53 and PTEN. The protein models include their complex expression patterns and which cell phenotypes express them. The models have been validated by comparing distributions of real and synthesised parameters and by application of frameworks for analysing multiplex immunofluorescence image data. Conclusions The six proteins have been chosen as a case study to illustrate how the model can be used to generate synthetic multiplex immunofluorescence data. Further proteins could be included within the model in a similar manner to enable the study of a larger set of proteins of interest and their interactions. To the best of our knowledge, this is the first model for expression of multiple proteins in anatomically intact tissue, rather than within cells in culture.
V. N. Kovacheva, D. Snead, N. M. Rajpoot, “A model of the spatial tumour heterogeneity in colorectal adenocarcinoma tissue,” BMC Bioinformatics, Jun. 2016, vol. 17:25. [Abstract] [doi]
Background There have been great advancements in the field of digital pathology. The surge in development of analytical methods for such data makes it crucial to develop benchmark synthetic datasets for objectively validating and comparing these methods. In addition, developing a spatial model of the tumour microenvironment can aid our understanding of the underpinning laws of tumour heterogeneity. Results We propose a model of the healthy and cancerous colonic crypt microenvironment. Our model is designed to generate synthetic histology image data with parameters that allow control over cancer grade, cellularity, cell overlap ratio, image resolution, and objective level. Conclusions To the best of our knowledge, ours is the first model to simulate histology image data at sub-cellular level for healthy and cancerous colon tissue, where the cells have different compartments and are organised to mimic the microenvironment of tissue in situ rather than dispersed cells in a cultured environment. Qualitative and quantitative validation has been performed on the model results demonstrating good similarity to the real data. The simulated data could be used to validate techniques such as image restoration, cell and crypt segmentation, and cancer grading.
T. Qaiser, K. Sirinukunwattana, N. M. Rajpoot, “An Integrated Environment for Tissue Morphometrics and Analytics,” Diagnostic Pathology, Jun. 2016, vol. 1(8). [Abstract] [doi]
Introduction/ Background: Attaining high reproducibility in cancer diagnosis is still one of the main challenges in modern pathology due to subjectivity. An integrated framework to extract quantitative morphological features from histology images and perform analytics that can lead to the identification of outcome-related features will provide a more accurate and reproducible means to assess cancer.
Aims: We propose an integrated environment which enables analytics of whole-slide tissue morphometry for a selected cohort of cancer patients. The proposed integrated environment includes three main components: (1) a core module comprising visualization of WSIs at multiple magnification levels, enabling display of clinical and imaging data for multiple cases simultaneously; (2) an analytical module containing an interactive tool for measuring dimensions of tissue components and an interactive annotation module; and (3) an analytics module applied to data from a selected subset of cases or all the cases using quantitative morphological measurements, including those derived from automatic phenotyping of cells. The proposed environment can be further extended by adding new analytical modules, and it can directly bring the benefits of quantitative analysis into pathological practice, thereby increasing the reproducibility of cancer diagnosis. Furthermore, it can facilitate studies of prognostic models, in which morphometric features strongly correlated with the outcome of cancers can be identified and used as image-based markers.
Methods: Our integrated environment for tissue analytics consists of core and analytical modules. The core module is the main portal to access the already available imaging and clinical data, as well as analytical data which comes from the integrated analytical modules. These data are interconnected, allowing users to query imaging data according to available variables in the clinical and/or analytical data. It also enables examination of the clinical and imaging data at the same time. We developed a WSI viewer for exploring tissue components at different magnification levels, supported by a multi-threaded architecture for decompressing image regions. A state-of-the-art algorithm for automatic phenotyping of all cells in WSIs is the analytical module that sets our interface apart from other existing software. The algorithm is capable of identifying multiple classes of cells with high accuracy, in terms of both qualitative and quantitative validation. This tool offers a fully automated analysis at the cell population level, in terms of the number and the spatial distribution of different cell types. This can, consequently, reduce the subjectivity and tediousness of the routine semi-quantitative analyses performed by pathologists. This analytical tool is applicable to many prognostic applications, such as identifying the incidence of metastasis in sentinel lymph nodes, and counting as well as locating tumor-infiltrating lymphocytes.
Results: In this study, we have presented a fully customizable interactive environment for tissue analytics, equipped with measuring, annotation, and automatic cell phenotyping tools. The proposed integrated environment has remarkable potential to assist researchers and pathologists in reducing human errors in diagnosing cancers. Further, this environment can serve as a benchmark for developing other morphometric measuring tools.
N. Alsubaie, S. E. A. Raza, and N. M. Rajpoot, “Stain Deconvolution of Histology Images via Independent Component Analysis in the Wavelet Domain,” IEEE International Symposium on Biomedical Imaging (ISBI), Apr. 2016, p. 803-806. [Abstract] [doi]
With the ubiquity of digital slide scanners, histology image analysis is rapidly emerging as an active area of research. Several histology image analysis algorithms such as those for mitotic cell detection, nuclei segmentation and hormone receptors scoring depend on colour information obtained from images of the scanned slides. However, different standards followed by different labs and the technical variation among different scanners result in stain inconsistency in histology images. Thus, applications that use colour information may fail when they are applied to images with different appearance of stain colours. In this paper, we propose a novel method to estimate the so called stain matrix via independent component analysis in the wavelet domain for stain deconvolution in histology images. Experimental results demonstrate stable and more accurate stain deconvolution results as compared to other recently proposed algorithms.
M. N. Kashif, S.E.A. Raza, K. Sirinukunwattana, M. Arif, N. M. Rajpoot, “Handcrafted features with convolutional neural networks for detection of tumor cells in histology images,” IEEE International Symposium on Biomedical Imaging (ISBI), Apr. 2016, p. 1029-1032. [Abstract] [doi]
Detection of tumor nuclei in cancer histology images requires sophisticated techniques due to the irregular shape, size and chromatin texture of the tumor nuclei. Some very recently proposed methods employ deep convolutional neural networks (CNNs) to detect cells in H&E stained images. However, all such methods use some form of raw pixel intensities as input and rely on the CNN to learn the deep features. In this work, we extend a recently proposed spatially constrained CNN (SC-CNN) by proposing features that capture texture characteristics and show that although CNN produces good results on automatically learned features, it can perform better if the input consists of a combination of handcrafted features and the raw data. The handcrafted features are computed through the scattering transform which gives non-linear invariant texture features. The combination of handcrafted features with raw data produces sharper proximity maps and better detection results than raw intensities alone with a similar CNN architecture.
S. E. A. Raza, D. Langenkämper, K. Sirinukunwattana, D. B. A. Epstein, T. W. Nattkemper, and N. M. Rajpoot, “Robust Normalization Protocols for Multiplexed Fluorescence Bioimage Analysis,” BMC Biodata Min., Mar. 2016, vol. 9:11. [Abstract] [doi]
The study of the mapping and interaction of co-localized proteins at a sub-cellular level is important for understanding complex biological phenomena. One of the recent techniques to map co-localized proteins is to use standard immunofluorescence microscopy in a cyclic manner. Unfortunately, these techniques suffer from variability in the intensity and positioning of signals from protein markers within a run and across different runs. Therefore, it is necessary to standardize protocols for preprocessing the multiplexed bioimaging (MBI) data from multiple runs to a comparable scale before any further analysis can be performed on the data. In this paper, we compare various normalization protocols and propose, on the basis of the obtained results, a robust normalization technique that produces consistent results on MBI data collected from different runs using the Toponome Imaging System (TIS). Normalization results produced by the proposed method on a sample TIS data set for colorectal cancer patients were ranked favorably by two pathologists and two biologists. We show that the proposed method produces higher between-class Kullback-Leibler (KL) divergence and lower within-class KL divergence on a distribution of cell phenotypes from colorectal cancer and histologically normal samples.
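The between-class versus within-class KL divergence used for evaluation above is straightforward to compute for discrete phenotype distributions. A minimal sketch with hypothetical frequencies (not data from the study):

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(p || q) between two discrete
    distributions, smoothed with eps to avoid log(0)."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

# Hypothetical cell-phenotype frequencies: a good normalisation should keep
# same-class samples close (low KL) and different classes apart (high KL).
tumour = [0.5, 0.3, 0.1, 0.1]
normal = [0.1, 0.2, 0.4, 0.3]
between = kl_divergence(tumour, normal)
within = kl_divergence(tumour, [0.45, 0.35, 0.1, 0.1])
```

Since KL divergence is asymmetric, the direction of comparison matters; evaluation protocols typically fix it or symmetrise it.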
K. Sirinukunwattana, S.E.A. Raza, Y.-W. Tsang, D. Snead, I. Cree, and N.M. Rajpoot, “Locality Sensitive Deep Learning for Detection and Classification of Nuclei in Routine Colon Cancer Histology Images,” IEEE Transactions on Medical Imaging (TMI), Jan. 2016. [Abstract] [doi] [Data]
Detection and classification of cell nuclei in histopathology images of cancerous tissue stained with the standard hematoxylin and eosin stain is a challenging task due to cellular heterogeneity. Deep learning approaches have been shown to produce encouraging results on histopathology images in various studies. In this paper, we propose a Spatially Constrained Convolutional Neural Network (SC-CNN) to perform nucleus detection. SC-CNN regresses the likelihood of a pixel being the center of a nucleus, where high probability values are spatially constrained to locate in the vicinity of the center of nuclei. For classification of nuclei, we propose a novel Neighboring Ensemble Predictor (NEP) coupled with CNN to more accurately predict the class label of detected cell nuclei. The proposed approaches for detection and classification do not require segmentation of nuclei. We have evaluated them on a large dataset of colorectal adenocarcinoma images, consisting of more than 20,000 annotated nuclei belonging to four different classes. Our results show that the joint detection and classification of the proposed SC-CNN and NEP produces the highest average F1 score as compared to other recently published approaches. Prospectively, the proposed methods could offer benefit to pathology practice in terms of quantitative analysis of tissue constituents in whole-slide images, and could potentially lead to a better understanding of cancer.
G. Li, V. Sanchez, P. C. S. B. Nagaraj, S. Khan, and N. M. Rajpoot, “A novel multitarget tracking algorithm for Myosin VI protein molecules on actin filaments in TIRFM sequences,” Journal of Microscopy, vol. 260, no. 3, pp. 312–325, Dec. 2015. [Abstract] [doi]
We propose a novel multitarget tracking framework for Myosin VI protein molecules in total internal reflection fluorescence microscopy sequences which integrates an extended Hungarian algorithm with an interacting multiple model filter. The extended Hungarian algorithm, a linear-assignment-based method, helps to solve the measurement assignment and spot association problems commonly encountered when dealing with multiple targets, while a two-motion-model interacting multiple model filter increases the tracking accuracy by modelling the nonlinear dynamics of Myosin VI protein molecules on actin filaments. The evaluation of our tracking framework is conducted on both real and synthetic total internal reflection fluorescence microscopy sequences. The results show that the framework achieves higher tracking accuracies compared to the state-of-the-art tracking methods, especially for sequences with high spot density.
K. Sirinukunwattana, S. E. A. Raza, Y.-W. Tsang, D. Snead, I. Cree, and N.M. Rajpoot, “A Spatially Constrained Deep Learning Framework for Detection of Epithelial Tumor Nuclei in Cancer Histology Images,” in 1st International Workshop on Patch-based Techniques in Medical Imaging, MICCAI, Oct. 2015, pp. 154–162. [Abstract] [doi]
Detection of epithelial tumor nuclei in standard Hematoxylin & Eosin stained histology images is an essential step for the analysis of tissue architecture. The problem is quite challenging due to the high chromatin texture of the tumor nuclei and their irregular size and shape. In this work, we propose a spatially constrained convolutional neural network (CNN) for the detection of malignant epithelial nuclei in histology images. Given an input patch, the proposed CNN is trained to regress, for every pixel in the patch, the probability of being the center of an epithelial tumor nucleus. The estimated probability values are topologically constrained such that high probability values are concentrated in the vicinity of the center of nuclei. The location of local maxima is then used as a cue for the final detection. Experimental results show that the proposed network outperforms the conventional CNN with center-pixel-only regression for the task of epithelial tumor nuclei detection.
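The final step described above, taking local maxima of the regressed probability map as detection cues, can be sketched as follows. The brute-force window scan and the threshold value are illustrative choices, not the paper's implementation.

```python
import numpy as np

def local_maxima(prob, threshold=0.5):
    """Return (row, col) coordinates of interior local maxima of a 2-D
    probability map: pixels above `threshold` that dominate their
    3x3 neighbourhood. A simple stand-in for the detection-cue step."""
    peaks = []
    h, w = prob.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = prob[i-1:i+2, j-1:j+2]
            if prob[i, j] >= threshold and prob[i, j] == window.max():
                peaks.append((i, j))
    return peaks

# Synthetic probability map with two nucleus-centre peaks.
prob = np.zeros((7, 7))
prob[2, 2] = 0.9
prob[5, 5] = 0.8
peaks = local_maxima(prob)
```

In practice one would use an efficient maximum filter and suppress near-duplicate peaks, but the idea is the same.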
G. Li, S. E. A. Raza, and N.M. Rajpoot, “A Novel Cell Orientation Congruence Descriptor for Superpixel Based Epithelium Segmentation in Endometrial Histology Images,” in 1st International Workshop on Patch-based Techniques in Medical Imaging, MICCAI, Oct. 2015, pp. 172–179. [Abstract] [doi]
Recurrent miscarriage can be caused by an abnormally high number of Uterine Natural Killer (UNK) cells in the lining of the human uterus. Recently, a diagnostic protocol has been developed based on the ratio of UNK cells to stromal cells in endometrial biopsy slides immunohistochemically stained with Haematoxylin for all cells and CD56 as a marker for the UNK cells. The counting of UNK cells and stromal cells is an essential step in the protocol. However, the cell counts must not include epithelial cells from glandular structures or UNK cells from the epithelium. In this paper, we propose a novel superpixel-based epithelium segmentation algorithm based on the observation that neighbouring epithelial cells packed at the boundary of glandular structures or background tend to have similar local orientations. Our main contribution is a novel cell orientation congruence descriptor used in a machine learning framework to differentiate between epithelial and non-epithelial cells.
D. R. Snead, Y.-W. Tsang, A. Meskiri, P. K. Kimani, R. Crossman, N. M. Rajpoot, E. Blessing, K. Chen, K. Gopalakrishnan, P. Matthews, N. Momtahan, S. Read-Jones, S. Sah, E. Simmons, B. Sinha, S. Suortamo, Y. Yeo, H. El Daly, and I. A. Cree, “Validation of digital pathology imaging for primary histopathological diagnosis,” Histopathology, Sep. 2015. [Abstract] [doi]
Aims Digital pathology (DP) offers advantages over glass slide microscopy (GS), but data demonstrating a statistically valid equivalent (i.e. non-inferior) performance of DP against GS are required to permit its use in diagnosis. The aim of this study is to provide evidence of non-inferiority. Methods and results Seventeen pathologists re-reported 3017 cases by DP. Of these, 1009 were re-reported by the same pathologist, and 2008 by a different pathologist. Re-examination of 10,138 scanned slides (2.22 terabytes) produced 72 variances between GS and DP reports, including 21 clinically significant variances. Ground truth lay with GS in 12 cases and with DP in nine cases. These results are within the 95% confidence interval for existing intraobserver and interobserver variability, proving that DP is non-inferior to GS. In three cases, the digital platform was deemed to be responsible for the variance, including a gastric biopsy, where Helicobacter pylori only became visible on slides scanned at the ×60 setting, and a bronchial biopsy and penile biopsy, where dysplasia was reported on DP but was not present on GS. Conclusions This is one of the largest studies proving that DP is equivalent to GS for the diagnosis of histopathology specimens. Error rates are similar in both platforms, although some problems, e.g. detection of bacteria, are predictable.
A. M. Khan, K. Sirinukunwattana, and N.M. Rajpoot, “A Global Covariance Descriptor for Nuclear Atypia Scoring in Breast Histopathology Images,” IEEE Journal of Biomedical and Health Informatics, vol. 19, no. 5, pp. 1637–1647, Sep. 2015. [Abstract] [doi]
Nuclear atypia scoring is a diagnostic measure commonly used to assess tumor grade of various cancers, including breast cancer. It provides a quantitative measure of deviation in visual appearance of cell nuclei from those in normal epithelial cells. In this paper, we present a novel image-level descriptor for nuclear atypia scoring in breast cancer histopathology images. The method is based on the region covariance descriptor that has recently become a popular method in various computer vision applications. The descriptor in its original form is not suitable for classification of histopathology images as cancerous histopathology images tend to possess diversely heterogeneous regions in a single field of view. Our proposed image-level descriptor, which we term as the geodesic mean of region covariance descriptors, possesses all the attractive properties of covariance descriptors lending itself to tractable geodesic-distance-based k-nearest neighbor classification using efficient kernels. The experimental results suggest that the proposed image descriptor yields high classification accuracy compared to a variety of widely used image-level descriptors.
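One common way to average covariance descriptors on the manifold of symmetric positive-definite matrices is the log-Euclidean mean, sketched below with numpy. This is a generic construction for illustration; the paper's exact geodesic-mean formulation and kernels may differ, and the toy matrices are hypothetical.

```python
import numpy as np

def spd_logm(c):
    """Matrix logarithm of a symmetric positive-definite matrix
    via eigendecomposition."""
    w, v = np.linalg.eigh(c)
    return (v * np.log(w)) @ v.T

def spd_expm(s):
    """Matrix exponential of a symmetric matrix via eigendecomposition."""
    w, v = np.linalg.eigh(s)
    return (v * np.exp(w)) @ v.T

def log_euclidean_mean(covs):
    """Log-Euclidean mean of SPD covariance descriptors: average the
    matrix logarithms, then map back with the matrix exponential."""
    return spd_expm(np.mean([spd_logm(c) for c in covs], axis=0))

# Toy 2x2 region covariance descriptors (hypothetical values).
covs = [np.array([[2.0, 0.3], [0.3, 1.0]]),
        np.array([[1.5, 0.1], [0.1, 0.8]])]
mean_cov = log_euclidean_mean(covs)
```

The result stays on the SPD manifold (symmetric with positive eigenvalues), which is what makes geodesic-distance-based k-NN classification tractable.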
K. Rajpoot, A. Riaz, W. Majeed, and N.M. Rajpoot, “Functional Connectivity Alterations in Epilepsy from Resting-State Functional MRI,” PLoS One, vol. 10, no. 8, p. e0134944, Aug. 2015. [Abstract] [doi]
The study of functional brain connectivity alterations induced by neurological disorders and their analysis from resting state functional Magnetic Resonance Imaging (rfMRI) is generally considered to be a challenging task. The main challenge lies in determining and interpreting the large-scale connectivity of brain regions when studying neurological disorders such as epilepsy. We tackle this challenging task by studying the cortical region connectivity using a novel approach for clustering the rfMRI time series signals and by identifying discriminant functional connections using a novel difference statistic measure. The proposed approach is then used in conjunction with the difference statistic to conduct automatic classification experiments for epileptic and healthy subjects using the rfMRI data. Our results show that the proposed difference statistic measure has the potential to extract promising discriminant neuroimaging markers. The extracted neuroimaging markers yield 93.08% classification accuracy on unseen data as compared to 80.20% accuracy on the same dataset by a recent state-of-the-art algorithm. The results demonstrate that for epilepsy the proposed approach confirms known functional connectivity alterations between cortical regions, reveals some new connectivity alterations, suggests potential neuroimaging markers, and predicts epilepsy with high accuracy from rfMRI scans.
S.E.A. Raza, V. Sanchez, G. Prince, J. Clarkson, and N. M. Rajpoot, “Registration of thermal and visible light images of diseased plants using silhouette extraction in the wavelet domain,” Pattern Recognition, vol. 48, pp. 2119–2128, Jul. 2015. [Abstract] [doi][Software]
The joint analysis of thermal and visible light images of plants can help to increase the accuracy of early disease detection. Registration of thermal and visible light images is an important pre-processing operation to perform this joint analysis correctly. In the case of diseased plants, registration using common methods based on mutual information is particularly challenging since the plant texture in the thermal image significantly differs from the corresponding texture in the visible light image. Registration methods based on silhouette extraction are therefore more appropriate. This paper proposes an algorithm for registration of thermal and visible light images of diseased plants based on silhouette extraction. The algorithm is based on a novel multi-scale method that employs the stationary wavelet transform to extract the silhouette of diseased plants in thermal images, in which common gradient-based methods usually fail due to the high noise content. Experimental results show that silhouettes extracted using this method can be used to register thermal and visible light images with high accuracy.
S. E. A. Raza and N. M. Rajpoot, “Cell Nuclei Segmentation in Variable Intensity Fluorescence Microscopy Images,” in Medical Image Understanding and Analysis, Jul. 2015, pp. 28–33. [Abstract] [doi]
We propose a method for automatic segmentation of variable intensity cell nuclei in the presence of highly variable noise in fluorescence microscopy images by adding novel texture information in the wavelet domain. The proposed method calculates the Hessian matrix using the stationary wavelet transform and uses eigenvalues of the Hessian matrix to obtain the underlying texture of nuclei and visual debris. The texture of chromatin nuclei helps to obtain the nucleus boundary in the presence of variable intensities and texture of the image noise helps to remove the noise. We demonstrate that our method produces better overlap with the hand-labelled ground truth on a publicly available data set with two different collections as compared to the state-of-the-art.
N. Alsubaie, N. Trahearn, S. E. A. Raza, and N. M. Rajpoot, “A Discriminative Framework for Stain Deconvolution of Histopathology Images in the Maxwellian Space,” in Medical Image Understanding and Analysis, Jul. 2015, pp. 132–137. [Abstract] [doi]
Histopathology image analysis has received a lot of attention since the advent of whole slide scanners. Digitisation of tissue slides lends itself to the automation of histopathology image analysis algorithms such as mitotic cell detection, nuclei segmentation and hormone receptors scoring. Most of these algorithms depend on the stain expression of scanned tissue slides. However, different standards followed by different labs and the technical variations among different scanners result in stain colour inconsistency in histopathology images across different labs. Thus, applications that rely on stain colour intensity might fail when they are applied to images with different colour appearance. In this paper, we present an effective method of stain deconvolution of histopathology images, which is a fast and reliable method of deriving the stain matrix. We propose a discriminative framework in the Maxwellian space to achieve reliable estimation of the stain matrix. We compare the proposed method with one of the state-of-the-art stain deconvolution methods and show that the proposed method estimates stain matrix with high accuracy.
K. Sirinukunwattana, A. M. Khan, and N. M. Rajpoot, “Cell words: Modelling the visual appearance of cells in histopathology images,” Computerized Medical Imaging and Graphics on Breakthrough Technologies in Digital Pathology, vol. 42, pp. 16–24, Jun. 2015. [Abstract] [doi][Software]
Detection and classification of cells in histological images is a challenging task because of the large intra-class variation in the visual appearance of various types of biological cells. In this paper, we propose a discriminative dictionary learning paradigm, termed as Cell Words, for modelling the visual appearance of cells which includes colour, shape, texture and context in a unified manner. The proposed framework is capable of distinguishing mitotic cells from non-mitotic cells (apoptotic, necrotic, epithelial) in breast histology images with high accuracy.
G. Li, V. Sanchez, G. Patel, S. Quenby, and N.M. Rajpoot, “Localisation of luminal epithelium edge in digital histopathology images of IHC stained slides of endometrial biopsies,” Comput. Med. Imaging Graph., vol. 42, pp. 56–64, Jun. 2015. [Abstract] [doi]
Diagnosis of recurrent miscarriage due to abnormally high number of uterine natural killer (uNK) cells has recently been made possible by a protocol devised by Quenby et al. Hum Reprod 2009;24(1):45–54. The diagnosis involves detection and counting of stromal and uNK cell nuclei in endometrial biopsy slides immunohistochemically stained with haematoxylin for staining cell nuclei and CD56 as a marker for the uNK cells. However, manual diagnosis is a laborious process, fraught with subjective errors. In this paper, we present a novel method for detection of uterine natural killer (uNK) cells in the human uterus lining and localisation of the luminal epithelium edge in endometrial biopsies. Specifically, we employ a local phase symmetry based method to detect stromal cell nuclei and propose an adaptive background removal method that significantly eases the segmentation of uNK cell nuclei regions. We also propose a novel method using alpha shapes for the identification of epithelial cell nuclei, with B-Spline curve fitting on the identified cell nuclei to localise the luminal epithelium edge. Edge localisation ensures that cell nuclei near the luminal epithelium are excluded from the count, as they are not relevant to the stromal-to-uNK cell ratio on which the diagnosis of recurrent miscarriage is ultimately based. The resulting algorithm offers promising potential for computer-assisted diagnosis of recurrent miscarriage due to its high accuracy.
K. Sirinukunwattana, D. Snead, and N.M. Rajpoot, “A Stochastic Polygons Model for Glandular Structures in Colon Histology Images,” IEEE Transactions on Medical Imaging (TMI), May 2015. [Abstract] [doi]
In this paper, we present a stochastic model for glandular structures in histology images of tissue slides stained with Hematoxylin and Eosin, choosing colon tissue as an example. The proposed Random Polygons Model (RPM) treats each glandular structure in an image as a polygon made of a random number of vertices, where the vertices represent approximate locations of epithelial nuclei. We formulate the RPM as a Bayesian inference problem by defining a prior for spatial connectivity and arrangement of neighboring epithelial nuclei and a likelihood for the presence of a glandular structure. The inference is made via a Reversible-Jump Markov chain Monte Carlo simulation. To the best of our knowledge, all existing published algorithms for gland segmentation are designed to mainly work on healthy samples, adenomas, and low grade adenocarcinomas. One of them has been demonstrated to work on intermediate grade adenocarcinomas at its best. Our experimental results show that the RPM yields favorable results, both quantitatively and qualitatively, for extraction of glandular structures in histology images of normal human colon tissues as well as benign and cancerous tissues, excluding undifferentiated carcinomas.
V. N. Kovacheva, D. Snead, and N. M. Rajpoot, “A model of the spatial microenvironment of the colonic crypt,” in IEEE 12th International Symposium on Biomedical Imaging (ISBI), Apr. 2015, pp. 172–176. [Abstract] [doi]
There have been great advancements in the field of immunofluorescence imaging. The surge in development of analytical methods for such data makes it crucial to develop benchmark synthetic datasets for objectively validating these methods. We propose a model of the healthy colonic crypt microenvironment. Our model can simulate immunofluorescence image data with parameters that allow control over cellularity, cell overlap ratio, image resolution, and objective level. To the best of our knowledge, ours is the first model to simulate immunofluorescence image data at subcellular level for healthy colon tissue, where the cells have several compartments and are organized to mimic the microenvironment of tissue in situ rather than dispersed cells in a cultured environment. Validation of the model has been performed by comparing morphological features of the tissue structure between real and simulated images. In addition, we compare the performance of two cell counting algorithms. The simulated data could also be used to validate techniques such as image restoration, cell segmentation, and crypt segmentation.
K. Sirinukunwattana, D. R. J. Snead, and N. M. Rajpoot, “A random polygons model of glandular structures in colon histology images,” in IEEE 12th International Symposium on Biomedical Imaging (ISBI), Apr. 2015, pp. 1526–1529. [Abstract] [doi]
In this paper, we present a stochastic model for glandular structures in Hematoxylin and Eosin stained histology images, choosing colon tissue as an example. The proposed Random Polygons Model (RPM) treats each glandular structure in an image as a polygon made of a random number of vertices, where the vertices represent approximate locations of epithelial nuclei. We formulate the RPM as a Bayesian inference problem by defining a prior for spatial connectivity and arrangement of neighboring epithelial nuclei and a likelihood for the presence of a glandular structure. The inference is made via a Reversible-Jump Markov Chain Monte Carlo simulation. Our experimental results show that the RPM yields favorable results, both quantitatively and qualitatively, for extraction of glandular regions in histology images of human colon tissue.
S. E. A. Raza, G. Prince, J. Clarkson, and N. M. Rajpoot, “Automatic Detection of Diseased Tomato Plants using Thermal and Stereo Visible Light Images,” PLoS One, Apr. 2015. [Abstract] [doi]
Accurate and timely detection of plant diseases can help mitigate the worldwide losses experienced by the horticulture and agriculture industries each year. Thermal imaging provides a fast and non-destructive way of scanning plants for diseased regions and has been used by various researchers to study the effect of disease on the thermal profile of a plant. However, the thermal image of a diseased plant is known to be influenced by environmental conditions, including leaf angles and the depth of the canopy areas accessible to the thermal imaging camera. In this paper, we combine thermal and visible light image data with depth information and develop a machine learning system to remotely detect plants infected with the tomato powdery mildew fungus Oidium neolycopersici. We extract a novel feature set from the image data using local and global statistics and show that by combining these with the depth information, we can considerably improve the accuracy of detection of the diseased plants. In addition, we show that our novel feature set is capable of identifying plants which were not originally inoculated with the fungus at the start of the experiment but which subsequently developed disease through natural transmission.
S. E. A. Raza, M. Q. Marjan, M. Arif, F. Butt, F. Sultan, and N. M. Rajpoot, “Anisotropic tubular filtering for automatic detection of acid-fast bacilli in Ziehl-Neelsen stained sputum smear samples,” in SPIE Medical Imaging, Feb. 2015, vol. 9420, p. 942005. [Abstract] [doi]
One of the main factors for high workload in pulmonary pathology in developing countries is the relatively large proportion of tuberculosis (TB) cases which can be detected with high throughput using automated approaches. TB is caused by Mycobacterium tuberculosis, which appears as thin, rod-shaped acid-fast bacillus (AFB) in Ziehl-Neelsen (ZN) stained sputum smear samples. In this paper, we present an algorithm for automatic detection of AFB in digitized images of ZN stained sputum smear samples under a light microscope. A key component of the proposed algorithm is the enhancement of raw input image using a novel anisotropic tubular filter (ATF) which suppresses the background noise while simultaneously enhancing strong anisotropic features of AFBs present in the image. The resulting image is then segmented using color features and candidate AFBs are identified. Finally, a support vector machine classifier using morphological features from candidate AFBs decides whether a given image is AFB positive or not. We demonstrate the effectiveness of the proposed ATF method with two different feature sets by showing that the proposed image analysis pipeline results in higher accuracy and F1-score than the same pipeline with standard median filtering for image enhancement.
N. Trahearn, D. Snead, I. Cree, and N.M. Rajpoot, “Multi-class stain separation using independent component analysis,” in SPIE Medical Imaging, Feb. 2015, vol. 9420, p. 94200J. [Abstract] [doi]
Stain separation is the process whereby a full colour histology section image is transformed into a series of single channel images, each corresponding to a given stain's expression. Many algorithms in the field of digital pathology are concerned with the expression of a single stain, thus stain separation is a key preprocessing step in these situations. We present a new versatile method of stain separation. The method uses Independent Component Analysis (ICA) to determine a set of statistically independent vectors, corresponding to the individual stain expressions. In comparison to other popular approaches, such as PCA and NNMF, we found that ICA gives a superior projection of the data with respect to each stain. In addition, we introduce a correction step to improve the initial results provided by the ICA coefficients. Many existing approaches only consider separation of two stains, with primary emphasis on Haematoxylin and Eosin. We show that our method is capable of making a good separation when there are more than two stains present. We also demonstrate our method's ability to achieve good separation on a variety of different stain types.
T.-H. Song, V. Sanchez, H. EIDaly, and N. M. Rajpoot, “A circumscribing active contour model for delineation of nuclei and membranes of megakaryocytes in bone marrow trephine biopsy images,” in SPIE Medical Imaging, Feb. 2015, vol. 9420, p. 94200T. [Abstract] [doi]
The assessment of megakaryocytes (MKs) in bone marrow trephine images is an important step in the classification of different subtypes of myeloproliferative neoplasms (MPNs). In general, bone marrow trephine images include several types of cells mixed together, which makes it quite difficult to visually identify MKs. In order to aid hematopathologists in the identification and study of MKs, we develop an image processing framework with supervised machine learning approaches and a novel circumscribing active contour model to identify potential MKs and then to accurately delineate the corresponding nucleus and membrane. Specifically, a number of color and texture features are used in a naïve Bayesian classifier and an Adaboost classifier to locate the regions with a high probability of depicting MKs. A region-based active contour is used on the candidate MKs to accurately delineate the boundaries of nucleus and membrane. The proposed circumscribing active contour model employs external forces based not only on pixel intensities, but also on the probabilities of depicting MKs as computed by the classifiers. Experimental results suggest that the machine learning approach can detect potential MKs with an accuracy of more than 75%. When our circumscribing active contour model is employed on the candidate MKs, the nucleus and membrane boundaries are segmented with an accuracy of more than 80% as measured by the Dice similarity coefficient. Compared to traditional region-based active contours, the use of additional external forces based on the probability of depicting MKs improves segmentation performance and computational time by an average of 5%.
K. Sirinukunwattana, D. R. Snead, and N. M. Rajpoot, “A novel texture descriptor for detection of glandular structures in colon histology images,” in SPIE Medical Imaging, Feb. 2015, vol. 9420, p. 94200S. [Abstract] [doi]
The first step in most analyses of histopathology images is the detection of areas of interest. In this work, we present a superpixel-based approach for glandular structure detection in colon histology images. An image is first segmented into superpixels with a constraint on the presence of glandular boundaries. Texture and color information is then extracted from each superpixel to calculate the probability of that superpixel belonging to glandular regions, resulting in a glandular probability map. In addition, we present a novel texture descriptor derived from a region covariance matrix of scattering coefficients. Our approach shows encouraging results for the detection of glandular structures in colon tissue samples.
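The region covariance descriptor underlying this work is simply the covariance matrix of per-pixel feature vectors within a region. In the paper the features are scattering coefficients; the sketch below uses generic random features as a stand-in.

```python
import numpy as np

def region_covariance(features):
    """Region covariance descriptor: the covariance matrix of per-pixel
    feature vectors (e.g. colour channels or filter responses) inside a
    superpixel. `features` has shape (n_pixels, n_features)."""
    return np.cov(features, rowvar=False)

# Hypothetical superpixel: 50 pixels, each with a 3-dimensional feature
# vector (a stand-in for scattering coefficients).
rng = np.random.default_rng(0)
feats = rng.normal(size=(50, 3))
desc = region_covariance(feats)   # (3, 3) symmetric descriptor
```

The appeal of this descriptor is that it is compact (independent of region size) and captures the pairwise correlations between features rather than just their means.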
M. Veta, P. J. van Diest, S. M. Willems, H. Wang, A. Madabhushi, A. Cruz-Roa, F. Gonzalez, A. B. L. Larsen, J. S. Vestergaard, A. B. Dahl, D. C. Cireşan, J. Schmidhuber, A. Giusti, L. M. Gambardella, F. B. Tek, T. Walter, C.-W. Wang, S. Kondo, B. J. Matuszewski, F. Precioso, V. Snell, J. Kittler, T. E. de Campos, A. M. Khan, N. M. Rajpoot, E. Arkoumani, M. M. Lacle, M. A. Viergever, J. P. W. Pluim, “Assessment of algorithms for mitosis detection in breast cancer histopathology images,” Medical Image Analysis, Feb. 2015, vol. 20, no. 1, pp. 237–248. [Abstract] [doi]
The proliferative activity of breast tumors, which is routinely estimated by counting of mitotic figures in hematoxylin and eosin stained histology sections, is considered to be one of the most important prognostic markers. However, mitosis counting is laborious, subjective and may suffer from low inter-observer agreement. With the wider acceptance of whole slide images in pathology labs, automatic image analysis has been proposed as a potential solution for these issues. In this paper, the results from the Assessment of Mitosis Detection Algorithms 2013 (AMIDA13) challenge are described. The challenge was based on a data set consisting of 12 training and 11 testing subjects, with more than one thousand mitotic figures annotated by multiple observers. Short descriptions and results from the evaluation of eleven methods are presented. The top performing method has an error rate that is comparable to the inter-observer agreement among pathologists.
A.M. Khan, N.M. Rajpoot, D. Treanor, D. Magee, “A Non-Linear Mapping Approach to Stain Normalisation in Digital Histopathology Images using Image-Specific Colour Deconvolution,” IEEE Transactions on Biomedical Engineering, Jun. 2014, vol. 61, no. 6, pp. 1729–1738. [Abstract] [doi][Software]
Histopathology diagnosis is based on visual examination of the morphology of histological sections under a microscope. With the increasing popularity of digital slide scanners, decision support systems based on the analysis of digital pathology images are in high demand. However, computerized decision support systems are fraught with problems that stem from color variations in tissue appearance due to variation in tissue preparation, variation in stain reactivity from different manufacturers/batches, user or protocol variation, and the use of scanners from different manufacturers. In this paper, we present a novel approach to stain normalization in histopathology images. The method is based on nonlinear mapping of a source image to a target image using a representation derived from color deconvolution. Color deconvolution is a method to obtain stain concentration values when the stain matrix, describing how the color is affected by the stain concentration, is given. Rather than relying on standard stain matrices, which may be inappropriate for a given image, we propose the use of a color-based classifier that incorporates a novel stain color descriptor to calculate image-specific stain matrix. In order to demonstrate the efficacy of the proposed stain matrix estimation and stain normalization methods, they are applied to the problem of tumor segmentation in breast histopathology images. The experimental results suggest that the paradigm of color normalization, as a preprocessing step, can significantly help histological image analysis algorithms to demonstrate stable performance which is insensitive to imaging conditions in general and scanner variations in particular.
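Colour deconvolution itself can be sketched as follows, using the widely cited fixed H&E stain vectors of Ruifrok and Johnston. Note that the paper's contribution is precisely to estimate an image-specific stain matrix with a colour-based classifier instead of using fixed vectors; this sketch omits that estimation step.

```python
import numpy as np

# Standard H&E stain vectors (Ruifrok & Johnston), rows = stains,
# columns = RGB optical-density directions. The paper would replace
# this fixed matrix with an image-specific estimate.
stain_matrix = np.array([
    [0.650, 0.704, 0.286],   # haematoxylin
    [0.072, 0.990, 0.105],   # eosin
])
stain_matrix /= np.linalg.norm(stain_matrix, axis=1, keepdims=True)

def deconvolve(rgb):
    """Map an RGB image (H, W, 3) with values in 1..255 to per-stain
    concentration maps via Beer-Lambert optical densities."""
    od = -np.log10(np.maximum(rgb, 1.0) / 255.0)        # optical density
    conc = od.reshape(-1, 3) @ np.linalg.pinv(stain_matrix)
    return conc.reshape(rgb.shape[:2] + (len(stain_matrix),))

# Tiny synthetic patch: a haematoxylin-like purple pixel next to white.
patch = np.array([[[120, 80, 160], [255, 255, 255]]], dtype=float)
conc = deconvolve(patch)
```

White background yields zero optical density, hence zero concentration for both stains, while the purple pixel produces a positive haematoxylin concentration.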
A.M. Khan, S.E.A. Raza, M. Khan, N. M. Rajpoot, “Cell Phenotyping in Multi-Tag Fluorescent Bioimages,” Neurocomputing, Jun. 2014, vol. 134, no. 1, pp. 254–261. [Abstract] [doi]
Multi-tag bioimaging systems have recently emerged as powerful tools which provide spatiotemporal localization of several different proteins in the same tissue specimen. The analysis of such multivariate bioimages requires sophisticated analytical methods that extract a molecular signature of various types of cells and assist in analyzing interaction behaviors of functional protein complexes. Previous studies mainly focused on pixel-level analysis, which essentially ignores cellular structures as units of analysis, a distinction that can be crucial when analyzing cancerous cells. In this paper, we present a framework that overcomes these limitations by incorporating cell-level analysis. We use this framework to identify cell phenotypes based on their high-dimensional co-expression profiles contained within the images generated by the robotically controlled TIS microscope installed at Warwick. The proposed paradigm employs a refined cell segmentation algorithm followed by a locality preserving nonlinear embedding algorithm which is shown to produce significantly better cell classification and phenotype distribution results as compared to its linear counterpart.
S.E.A. Raza, H. Smith, G.J.J. Clarkson, G. Taylor, A. J. Thompson, J. Clarkson, N. M. Rajpoot, “Automatic Detection of Regions in Spinach Canopies Responding to Soil Moisture Deficit Using Combined Visible and Thermal Imagery,” PLoS ONE, Jun. 2014, vol. 9 no. 6 p. e97612. [Abstract] [doi]
Thermal imaging has been used in the past for remote detection of regions of canopy showing symptoms of stress, including water deficit stress. Stress indices derived from thermal images have been used as an indicator of canopy water status, but these depend on the choice of reference surfaces and environmental conditions and can be confounded by variations in complex canopy structure. Therefore, in this work, instead of using stress indices, information from thermal and visible light imagery was combined with machine learning techniques to identify regions of canopy showing a response to soil water deficit. Thermal and visible light images of a spinach canopy with different levels of soil moisture were captured. Statistical measurements from these images were extracted and used to classify canopies as growing in well-watered soil or under soil moisture deficit, using Support Vector Machines (SVM), a Gaussian Process Classifier (GPC), and a combination of the two classifiers. The classification results show a high correlation with soil moisture. We demonstrate that regions of a spinach crop responding to soil water deficit can be identified by using machine learning techniques with a high accuracy of 97%. This method could, in principle, be applied to any crop at a range of scales.
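The per-region statistical measurements the abstract mentions can be sketched as below; the feature choices (mean/spread of temperature, green-pixel fraction) are illustrative assumptions rather than the paper's exact measurement set:

```python
import numpy as np

# Hypothetical feature extractor for a co-registered thermal/visible region;
# the specific statistics below are our illustrative choices.
def region_features(thermal, visible):
    """thermal: (H, W) temperatures; visible: (H, W, 3) RGB in [0, 255]."""
    # Excess-green index highlights canopy (vegetation) pixels
    green_excess = 2.0 * visible[..., 1] - visible[..., 0] - visible[..., 2]
    return np.array([
        thermal.mean(),              # canopy temperature rises under water deficit
        thermal.std(),               # spatial temperature heterogeneity
        (green_excess > 20).mean(),  # fraction of clearly green (canopy) pixels
    ])

thermal = np.full((8, 8), 24.5)               # a uniform 24.5 °C region
visible = np.zeros((8, 8, 3)); visible[..., 1] = 120.0   # fully green region
f = region_features(thermal, visible)
```

Feature vectors of this kind would then be fed to the SVM/GPC classifiers described in the abstract.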
V.N. Kovacheva, A.M. Khan, M. Khan, D. B. A. Epstein, N. M. Rajpoot, “DiSWOP: A Novel Measure for Cell-Level Protein Network Analysis in Localised Proteomics Image Data,” Bioinformatics, Mar. 2014, vol. 30 no. 3 p. 420-427. [Abstract] [doi]
Motivation: New bioimaging techniques have recently been proposed to visualize the colocation or interaction of several proteins within individual cells, displaying the heterogeneity of neighbouring cells within the same tissue specimen. Such techniques could hold the key to understanding complex biological systems such as the protein interactions involved in cancer. However, there is a need for new algorithmic approaches that analyze the large amounts of multi-tag bioimage data from cancerous and normal tissue specimens to begin to infer protein networks and unravel the cellular heterogeneity at a molecular level. Results: The proposed approach analyzes cell phenotypes in normal and cancerous colon tissue imaged using the robotically controlled Toponome Imaging System microscope. It involves segmenting the 4',6-diamidino-2-phenylindole-labelled image into cells and determining the cell phenotypes according to their protein–protein dependence profile. These were analyzed using two new measures, Difference in Sums of Weighted cO-dependence/Anti-co-dependence profiles (DiSWOP and DiSWAP) for overall co-expression and anti-co-expression, respectively. These novel quantities were extracted using 11 Toponome Imaging System image stacks from either cancerous or normal human colorectal specimens. This approach enables one to easily identify protein pairs that have significantly higher/lower co-expression levels in cancerous tissue samples when compared with normal colon tissue.
V.N. Kovacheva, D.B.A. Epstein, N.M. Rajpoot, “Advances in Discovery of Complex Biomarkers for Colorectal Cancer Using Multiplexed Proteomics Imaging,” Oncology News, Jan. 2014, vol. 8 no. 6 p. 191-193. [Abstract] [doi]
Multiplexed proteomics imaging techniques such as the Toponome Imaging System (TIS) can yield high-resolution images of multiple proteins co-localised within individual cells. This enables the study of protein interactions and tumour heterogeneity both within and between cancer samples. Our group has recently developed methods for cell-level analysis of the multiplexed proteomics image data obtained from colorectal cancer samples. These methods together with the highly informative multiplexed proteomics image data hold great promise for discovering complex biomarkers that can aid the development of personalised medicine.
K. Sirinukunwattana, R.S. Savage, M.F. Bari, D. R. J. Snead, N. M. Rajpoot, “Bayesian Hierarchical Clustering for Studying Cancer Gene Expression Data with Unknown Statistics,” PLoS ONE, Oct. 2013, vol. 8 no. 10 p. e75748. [Abstract] [doi][Software]
Clustering analysis is an important tool in studying gene expression data. The Bayesian hierarchical clustering (BHC) algorithm can automatically infer the number of clusters and uses Bayesian model selection to improve clustering quality. In this paper, we present an extension of the BHC algorithm. Our Gaussian BHC (GBHC) algorithm represents data as a mixture of Gaussian distributions. It uses a normal-gamma distribution as a conjugate prior on the mean and precision of each of the Gaussian components. We tested GBHC over 11 cancer and 3 synthetic datasets. The results on cancer datasets show that in sample clustering, GBHC on average produces a clustering partition that is more concordant with the ground truth than those obtained from other commonly used algorithms. Furthermore, GBHC frequently infers a number of clusters that is close to the ground truth. In gene clustering, GBHC also produces a clustering partition that is more biologically plausible than several other state-of-the-art methods. This suggests GBHC as an alternative tool for studying gene expression data.
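The normal-gamma conjugacy at the heart of GBHC admits a closed-form posterior update. A minimal sketch of the standard conjugate formulas (hyperparameter names and defaults are ours, not the paper's notation):

```python
import numpy as np

# Standard normal-gamma conjugate update for a Gaussian with unknown mean
# and precision; a sketch of the prior structure GBHC uses per component.
def normal_gamma_update(x, mu0=0.0, kappa0=1.0, alpha0=1.0, beta0=1.0):
    """Return posterior hyperparameters (mu_n, kappa_n, alpha_n, beta_n)."""
    n, xbar = len(x), np.mean(x)
    kappa_n = kappa0 + n
    mu_n = (kappa0 * mu0 + n * xbar) / kappa_n           # precision-weighted mean
    alpha_n = alpha0 + n / 2.0
    beta_n = (beta0 + 0.5 * np.sum((x - xbar) ** 2)      # within-sample scatter
              + kappa0 * n * (xbar - mu0) ** 2 / (2.0 * kappa_n))
    return mu_n, kappa_n, alpha_n, beta_n

mu_n, kappa_n, alpha_n, beta_n = normal_gamma_update(np.array([1.0, 1.0, 1.0, 1.0]))
print(mu_n, alpha_n)  # posterior mean pulled toward the sample mean
```

Conjugacy is what lets BHC-style algorithms evaluate cluster-merge marginal likelihoods analytically rather than by sampling.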
A.M. Khan, H. El-Daly, N.M. Rajpoot, “A Gamma-Gaussian Mixture Model for Detection of Mitotic Cells in Breast Cancer Histopathology Images,” Journal of Pathology Informatics, Mar. 2013, vol. 4 no. 11. [Abstract] [doi]
In this paper, we propose a statistical approach for mitosis detection in breast cancer histological images. The proposed algorithm models the pixel intensities in mitotic and non-mitotic regions by a Gamma-Gaussian mixture model (GGMM) and employs context-aware post-processing (CAPP) to reduce false positives. Experimental results demonstrate the ability of this simple, yet effective method to detect mitotic cells (MCs) in standard H&E breast cancer histology images. Context: Counting of MCs in breast cancer histopathology images is one of three components (the other two being tubule formation and nuclear pleomorphism) required for developing computer-assisted grading of breast cancer tissue slides. This is very challenging since the biological variability of the MCs makes their detection extremely difficult. In addition, when standard H&E staining is used (which stains chromatin-rich structures, such as nuclei, apoptotic cells, and MCs, dark blue), it becomes extremely difficult to detect MCs because the former two structures are densely localized in the tissue sections. Aims: In this paper, a robust MC detection technique is developed and tested on 35 breast histopathology images, belonging to five different tissue slides. Settings and Design: Our approach mimics a pathologist's approach to MC detection. The idea is to (1) isolate tumor areas from non-tumor areas (lymphoid/inflammatory/apoptotic cells), (2) search for MCs in the reduced space by statistically modeling the pixel intensities from mitotic and non-mitotic regions, and finally (3) evaluate the context of each potential MC in terms of its texture. Materials and Methods: Our experimental dataset consisted of 35 digitized images of breast cancer biopsy slides with paraffin-embedded sections stained with H&E and scanned at ×40 using an Aperio ScanScope slide scanner. Statistical Analysis Used: We propose GGMM for detecting MCs in breast histology images.
Image intensities are modeled as random variables sampled from one of two distributions, Gamma and Gaussian. Intensities from MCs are modeled by a Gamma distribution and those from non-mitotic regions are modeled by a Gaussian distribution. The choice of the Gamma-Gaussian pair is mainly due to the observation that the characteristics of each distribution match well with the data it models. The experimental results show that the proposed system achieves a high sensitivity of 0.82 with a positive predictive value (PPV) of 0.29. Employing CAPP on these results produces a 241% increase in PPV at the cost of less than a 15% decrease in sensitivity. Conclusions: In this paper, we presented a GGMM for detection of MCs in breast cancer histopathological images. In addition, we introduced CAPP as a tool to increase the PPV with a minimal loss in sensitivity. We evaluated the performance of the proposed detection algorithm in terms of sensitivity and PPV over a set of 35 breast histology images selected from five different tissue slides and showed that a reasonably high value of sensitivity can be retained while increasing the PPV. Our future work will aim at increasing the PPV further by modeling the spatial appearance of regions surrounding mitotic events.
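The pixel-level decision rule of a Gamma-Gaussian mixture can be sketched as a likelihood-ratio test. The parameter values below (gamma shape/scale for dark mitotic intensities, Gaussian mean/std for non-mitotic ones, and the prior) are illustrative placeholders; in practice the mixture is fitted to data, e.g. by expectation-maximization:

```python
import numpy as np
from math import gamma as gamma_fn

def gamma_pdf(x, k, theta):
    """Gamma density with shape k and scale theta."""
    return x ** (k - 1) * np.exp(-x / theta) / (gamma_fn(k) * theta ** k)

def gauss_pdf(x, mu, sigma):
    """Gaussian density with mean mu and standard deviation sigma."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Placeholder parameters, not the paper's fitted values.
def mitotic_mask(intensity, k=2.0, theta=20.0, mu=150.0, sigma=30.0, prior=0.1):
    """True where the prior-weighted gamma (mitotic) likelihood dominates."""
    return (prior * gamma_pdf(intensity, k, theta)
            > (1 - prior) * gauss_pdf(intensity, mu, sigma))

print(mitotic_mask(np.array([30.0, 150.0])))  # dark pixel flagged, bright one not
```

CAPP would then examine the texture context around each flagged pixel region to suppress false positives.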
A.M. Khan, H. El-Daly, E. Simmons, N. M. Rajpoot, “HyMaP: A hybrid magnitude-phase approach to unsupervised segmentation of tumor areas in breast cancer histology images,” Journal of Pathology Informatics, Mar. 2013, vol. 4 no. 1. [Abstract] [doi]
Background: Segmentation of areas containing tumor cells in standard H&E histopathology images of breast (and several other tissues) is a key task for computer-assisted assessment and grading of histopathology slides. Good segmentation of tumor regions is also vital for automated scoring of immunohistochemically stained slides to restrict the scoring or analysis to areas containing tumor cells only and avoid potentially misleading results from analysis of stromal regions. Furthermore, detection of mitotic cells is critical for calculating key measures such as the mitotic index, a key criterion for grading several types of cancer including breast cancer. We show that tumor segmentation can allow detection and quantification of mitotic cells from the standard H&E slides with a high degree of accuracy without the need for special stains, in turn making the whole process more cost-effective. Method: Based on the tissue morphology, breast histology image contents can be divided into four regions: Tumor, Hypocellular Stroma (HypoCS), Hypercellular Stroma (HyperCS), and tissue fat (Background). Background is removed during the preprocessing stage on the basis of color thresholding, while HypoCS and HyperCS regions are segmented by calculating features using magnitude and phase spectra in the frequency domain, respectively, and performing unsupervised segmentation on these features. Results: All images in the database were hand segmented by two expert pathologists. The algorithms considered here are evaluated on three pixel-wise accuracy measures: precision, recall, and F1-Score. The segmentation results obtained by combining HypoCS and HyperCS yield high F1-Scores of 0.86 and 0.89 with respect to the ground truth. Conclusions: In this paper, we show that segmentation of breast histopathology images into hypocellular stroma and hypercellular stroma can be achieved using magnitude and phase spectra in the frequency domain.
The segmentation demarcates tumor margins, leading to improved accuracy of mitotic cell detection.
R. Evans, U. Naidu, N.M. Rajpoot, et al. “Toponome imaging system: multiplex biomarkers in oncology,” Trends in Molecular Medicine, Dec. 2012, vol. 18 no. 12, p. 723-731. [Abstract] [doi]
Toponome imaging systems (TIS) can yield high-resolution subcellular colocalization images of multiple proteins within single cells and intact tissue sections, giving this technology significant potential for identifying multiplex biomarkers that simultaneously measure several aspects of a cell. The integral role of the microenvironment in malignant progression and the recently appreciated heterogeneity of cancer cells underscore the importance of characterizing complex molecular phenotypes and the large protein network structures of single cells within their preserved anatomical context. Here, we discuss the TIS technique and the potential for developing new sensitive and specific multiplex biomarkers for risk stratification and diagnosis, in addition to its utility for anticancer drug discovery by identifying ‘hub’ proteins that are essential regulators of protein networks.
N.M. Rajpoot, I.T. Butt, “A Multiresolution Framework for Local Similarity based Image Denoising,” Pattern Recognition, Aug. 2012, vol. 45 no. 8, p. 2938-2951. [Abstract] [doi]
In this paper, we present a generic framework for denoising of images corrupted with additive white Gaussian noise based on the idea of regional similarity. The proposed framework employs a similarity function using the distance between pixels in a multidimensional feature space, whereby multiple feature maps describing various local regional characteristics can be utilized, giving higher weight to pixels having similar regional characteristics. An extension of the proposed framework into a multiresolution setting using wavelets and scale space is presented. It is shown that the resulting multiresolution multilateral (MRM) filtering algorithm not only eliminates the coarse-grain noise but can also faithfully reconstruct anisotropic features, particularly in the presence of high levels of noise.
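A single-scale sketch of the regional-similarity weighting idea follows: neighbours are weighted by how close their regional feature is to the centre pixel's. Here the local mean stands in for the framework's richer feature maps, and the multiresolution wavelet extension is omitted; names and defaults are ours:

```python
import numpy as np

def local_mean(img, r):
    """Box-filter local mean with reflective borders (our simple feature map)."""
    pad = np.pad(img.astype(float), r, mode='reflect')
    out = np.zeros(img.shape, dtype=float)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += pad[r + dy:r + dy + img.shape[0], r + dx:r + dx + img.shape[1]]
    return out / (2 * r + 1) ** 2

def similarity_filter(img, r=2, sigma_f=10.0):
    """Average neighbours, weighting each by regional-feature similarity."""
    feat = local_mean(img, 1)                      # regional feature map
    pad_i = np.pad(img.astype(float), r, mode='reflect')
    pad_f = np.pad(feat, r, mode='reflect')
    num = np.zeros_like(feat)
    den = np.zeros_like(feat)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            fi = pad_f[r + dy:r + dy + img.shape[0], r + dx:r + dx + img.shape[1]]
            w = np.exp(-(fi - feat) ** 2 / (2 * sigma_f ** 2))  # similarity weight
            num += w * pad_i[r + dy:r + dy + img.shape[0],
                             r + dx:r + dx + img.shape[1]]
            den += w
    return num / den

print(np.allclose(similarity_filter(np.full((6, 6), 50.0)), 50.0))  # True
```

Because weights depend on regional characteristics rather than raw pixel values alone, pixels across an anisotropic feature contribute little to each other, which is how such filters preserve edges while averaging noise.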
L. Zhou, S. Pelengaris, S. Abouna, et al., “Re-expression of IGF-II is important for beta cell regeneration in adult mice.,” PloS one, Sep. 2012, vol. 7(9), p. e43623. [Abstract] [doi]
The key factors which support re-expansion of beta cell numbers after injury are largely unknown. Insulin-like growth factor II (IGF-II) plays a critical role in supporting cell division and differentiation during ontogeny but its role in the adult is not known. In this study we investigated the effect of IGF-II on beta cell regeneration.
We employed an in vivo model of ‘switchable’ c-Myc-induced beta cell ablation, pIns-c-MycERTAM, in which 90% of beta cells are lost following 11 days of c-Myc (Myc) activation in vivo. Importantly, such ablation is normally followed by beta cell regeneration once Myc is deactivated, enabling functional studies of beta cell regeneration in vivo. IGF-II was shown to be re-expressed in the adult pancreas of pIns-c-MycERTAM/IGF-II+/+ (MIG) mice, following beta cell injury. As expected in the presence of IGF-II beta cell mass and numbers recover rapidly after ablation. In contrast, in pIns-c-MycERTAM/IGF-II+/− (MIGKO) mice, which express no IGF-II, recovery of beta cell mass and numbers were delayed and impaired. Despite failure of beta cell number increase, MIGKO mice recovered from hyperglycaemia, although this was delayed.
Our results demonstrate that beta cell regeneration in adult mice depends on re-expression of IGF-II, and supports the utility of using such ablation-recovery models for identifying other potential factors critical for underpinning successful beta cell regeneration in vivo. The potential therapeutic benefits of manipulating the IGF-II signaling systems merit further exploration.
S. Khan, T.S. Reese, N.M. Rajpoot, et al., “Spatiotemporal Maps of CaMKII in Dendritic Spines,” Journal of Computational Neuroscience, Aug. 2012, vol. 33 no. 1, p. 123-139. [Abstract] [doi]
The calcium calmodulin dependent kinase (CaMKII) is important for long-term potentiation at dendritic spines. Photo-activatable GFP (PaGFP)–CaMKII fusions were used to map CaMKII movements between and within spines in dissociated hippocampal neurons. Photo-activated PaGFP (GFP*) generated in the shaft spread uniformly, but was retained for about 1 s in spines. The differential localization of GFP*-CaMKII isoforms was visualized with hundred-nanometer precision frame to frame using de-noising algorithms. GFP*-CaMKIIα localized to the tips of mushroom spines. The spatiotemporal profiles of native and kinase-defective GFP*-CaMKIIβ differed markedly from GFP*-CaMKIIα and mutant GFP*-CaMKIIβ lacking the association domain. CaMKIIβ bound to cortical actin in the dendrite and the stable actin network in spine bodies. Glutamate produced a transiently localized GFP*-CaMKIIα fraction and a soluble GFP*-CaMKIIβ fraction in spine bodies. Single molecule simulations of the interplay between diffusion and biochemistry of GFP* species were guided by the spatiotemporal maps and set limits on binding parameters. They highlighted the role of spine morphology in modulating bound CaMKII lifetimes. The long residence times of GFP*-CaMKIIβ relative to GFP*-CaMKIIα followed as a consequence of more binding sites on the actin cytoskeleton than the post-synaptic density. These factors combined to retain CaMKII for tens of seconds, sufficient to outlast the calcium transients triggered by glutamate, without invoking complex biochemistry.
S.J. McKenna, D. Magee, N.M. Rajpoot, “Special issue on microscopy image analysis for biomedical applications,” Machine Vision and Applications, Jun. 2012, vol. 23 no. 4, p. 603-605. [doi]
S.E.A. Raza, A. Humayun, S. Abouna, et al., “RAMTaB: Robust Alignment of Multi-Tag Bioimages,” PLoS ONE, Feb. 2012, vol. 7 no. 2, p. e30894. [Abstract] [doi][Software]
Background: In recent years, new microscopic imaging techniques have evolved to allow us to visualize several different proteins (or other biomolecules) in a visual field. Co-location of the proteins is necessary to analyze the molecular structure of a sample at each point under observation. We present a novel approach to align images in a multi-tag fluorescence image stack. The proposed approach is applicable to multi-tag bioimaging systems which (a) acquire fluorescence images by sequential staining and (b) simultaneously capture a phase contrast image corresponding to each of the fluorescence images. To the best of our knowledge, there is no existing method in the literature which addresses simultaneous registration of multi-tag bioimages and selection of the reference image in order to maximize the overall overlap between the images. Methodology/Principal Findings: We employ a block-based method for registration which yields a confidence measure to indicate the accuracy of our registration results. We derive a shift metric in order to select the Reference Image with Maximal Overlap (RIMO), in turn minimizing the total amount of non-overlapping signal for a given number of tags. Experimental results show that the RAMTaB framework is robust to variations in contrast and illumination, yields sub-pixel accuracy, and successfully selects the reference image resulting in maximum overlap. The registration results are also shown to significantly improve any follow-up protein co-localization studies. Conclusions: For the discovery of protein complexes and of functional protein networks within a cell, alignment of the tag images in the multi-tag fluorescence image stack is a key pre-processing step. The proposed framework is shown to produce accurate alignment results on both real and synthetic data. 
Our future work will use the aligned multi-channel fluorescent image data for normal and diseased tissue specimens to analyze molecular co-expression patterns and functional protein networks.
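Registration of tag images largely reduces to estimating translational shifts between image pairs. A minimal phase-correlation sketch of shift estimation in general (not RAMTaB's block-based method or its confidence measure) follows:

```python
import numpy as np

def estimate_shift(ref, mov):
    """Integer-pixel shift of `mov` relative to `ref` via phase correlation."""
    # Normalized cross-power spectrum -> a delta at the translation offset
    f = np.fft.fft2(mov) * np.conj(np.fft.fft2(ref))
    corr = np.fft.ifft2(f / (np.abs(f) + 1e-12)).real
    idx = np.unravel_index(np.argmax(corr), corr.shape)
    # Unwrap indices past the midpoint into negative shifts
    return tuple(int(i - s) if i > s // 2 else int(i)
                 for i, s in zip(idx, corr.shape))

rng = np.random.default_rng(0)
ref = rng.random((32, 32))
mov = np.roll(ref, (2, -3), axis=(0, 1))   # simulate a known misalignment
print(estimate_shift(ref, mov))  # (2, -3)
```

A block-based variant would apply such an estimator per block and aggregate the per-block agreement into a registration confidence, in the spirit of the confidence measure described above.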
M. Kuse, Y.F. Wang, V. Kalassannavar, et al., “Local Isotropic Phase Symmetry for Detection of beta Cells and Lymphocytes,” Journal of Pathology Informatics, Jan. 2012, vol. 2 no. 2. [Abstract] [doi][Software]
Diabetes can be associated with a reduction in functional β cell mass, which must be restored if the disease is to be cured or progress is to be arrested. To study the cell count, it is also necessary to determine the number of nuclei within the insulin-stained area. It can take a single experimentalist several months to complete a single study of this kind, results of which may still be quite subjective. In this paper, we propose a framework based on a novel measure of local symmetry for detection of cells. The local isotropic phase symmetry measure (LIPSyM) is designed to give high values at or near the cell centers. We demonstrate the effectiveness of our algorithm for detection of two specific cell types in histology images, β cells in mouse pancreatic sections and lymphocytes in human breast tissue. Experimental results for these two problems show that our algorithm performs better than human experts for the former problem, and outperforms the best reported results for the latter.
A. Humayun, S.E.A. Raza, C. Waddington, S. Abouna, M. Khan, N. M. Rajpoot, “A Framework for Molecular Co-Expression Pattern Analysis in Multi-Channel Toponome Fluorescence Images,” Proceedings Microscopy Image Analysis with Applications in Biology (MIAAB), Sep. 2011, Heidelberg, Germany. [Abstract] [doi]
Bioimage computing is rapidly emerging as an important area in image-based systems biology with an emphasis on spatiotemporal localization of subcellular bio-molecules, most importantly proteins. A key problem in this domain is analysis of protein co-localization or co-expression of protein molecules. Imaging techniques, such as the Toponome Imaging System (TIS), with the ability to localize several different proteins in the same tissue specimen have only recently become available. Traditional co-localization studies and some of the modern co-expression studies have serious limitations when analyzing this kind of data. Here we present a framework for the analysis of molecular co-expression patterns (MCEPs) in TIS image data.
S. Qureshi, A. Mirza, N.M. Rajpoot, M. Arif, “Hybrid Diversification Operator Based Evolutionary Approach towards Tomographic Image Reconstruction,” IEEE Transactions on Image Processing, Jul. 2011, vol. 20 no. 7, p. 1977–1990. [Abstract] [doi]
The proposed algorithm introduces a new and efficient hybrid diversification operator (HDO) in the evolution cycle to improve the tomographic image reconstruction and diversity in the population by using simulated annealing (SA), and the modified form of decreasing law of mutation probability. This evolutionary approach has been used for parallel-ray transmission tomography with the head and lung phantoms. The algorithm is designed to address the observation that the convergence of a genetic algorithm slows down as it evolves. The HDO is shown to yield a higher image quality as compared with the filtered back-projection (FBP), the multiscale wavelet transform, the SA, and the hybrid continuous genetic algorithm (HCGA) techniques. Various crossover operators including uniform, block, and image-row crossover operators have also been analyzed, and the latter has been generally found to give better image quality. The HDO is shown to yield improvements of up to 92% and 120% when compared with FBP in terms of PSNR, for 128 × 128 head and lung phantoms, respectively.
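The two ingredients named above, an SA-style acceptance test and a decreasing law of mutation probability, can be sketched generically; these are textbook forms with parameters of our choosing, not the paper's exact modified laws:

```python
import math
import random

# Generic simulated-annealing acceptance: always keep improvements, keep a
# worse reconstruction with probability exp(-delta/T).
def sa_accept(delta_error, temperature, rng=random.random):
    return delta_error <= 0 or rng() < math.exp(-delta_error / temperature)

# Generic exponentially decreasing mutation probability over generations,
# floored so the population never loses diversity entirely.
def mutation_prob(generation, p0=0.3, decay=0.05, p_min=0.01):
    return max(p_min, p0 * math.exp(-decay * generation))

print(round(mutation_prob(0), 3), round(mutation_prob(100), 3))  # 0.3 0.01
```

In an HDO-style loop these would govern, respectively, whether a diversified candidate replaces its parent and how aggressively offspring are perturbed as the population converges.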
C. Loyek, N.M. Rajpoot, M. Khan, T. W. Nattkemper, “BioIMAX: A Web 2.0 approach for easy exploratory and collaborative access to multivariate bioimage data,” BMC Bioinformatics, Jul. 2011, vol. 12, p. 297–297. [Abstract] [doi]
Background: Innovations in biological and biomedical imaging produce complex high-content and multivariate image data. For decision-making and generation of hypotheses, scientists need novel information technology tools that enable them to visually explore and analyze the data and to discuss and communicate results or findings with collaborating experts from various places. Results: In this paper, we present a novel Web 2.0 approach, BioIMAX, for the collaborative exploration and analysis of multivariate image data by combining the web's collaboration and distribution architecture with the interface interactivity and computation power of desktop applications, recently called rich internet applications. Conclusions: BioIMAX allows scientists to discuss and share data or results with collaborating experts and to visualize, annotate, and explore multivariate image data within one web-based platform from any location via a standard web browser requiring only a username and a password. BioIMAX can be accessed at http://ani.cebitec.uni-bielefeld.de/BioIMAX with the username "test" and the password "test1" for testing purposes.
M. Arif, N.M. Rajpoot, T.W. Nattkemper, U. Technow, T. Chakraborty, N. Fische, N. Jensen, K. Niehaus, “Quantification of Cell Infection Caused by Listeria monocytogenes Invasion,” Journal of Biotechnology, Jun. 2011, vol. 154, p. 76–83. [Abstract] [doi][Software]
Listeria monocytogenes causes a life-threatening food-borne disease known as Listeriosis. Elderly, immunocompromised, and pregnant women are primarily the victims of this facultative intracellular Gram-positive pathogen. Since the bacteria survive intracellularly within the human host cells they are protected against the immune system and poorly accessed by many antibiotics. In order to screen pharmaceutical substances for their ability to interfere with the infection, persistence and release of L. monocytogenes a high content assay is required. We established a high content screen (HCS) using the RAW 264.7 mouse macrophage cell line seeded into 96-well glass bottom microplates. Cells were infected with GFP-expressing L. monocytogenes and stained thereafter with Hoechst 33342. Automated image acquisition was carried out by the ScanR software.
D. Langenkamper, J. Kolling, A. Humayun, S. Abouna, D. Epstein, M. Khan, N. M. Rajpoot, T.W. Nattkemper, “Towards Protein Network Analysis Using TIS Imaging and Exploratory Data Analysis,” Proceedings Workshop on Computational Systems Biology (WCSB), Zürich, Switzerland, Jun. 2011. [Abstract] [doi]
Identification, analysis and visualization of functional molecular networks are key objectives in systems biology and the logical extension of existing molecular profiling techniques. Here we used TIS (toponome imaging system) imaging to visualize co-location of proteins in tissue samples, thereby integrating two distinct information domains, morphology and molecular interaction. Using a library of 13 selected dye-conjugated antibodies, TIS recorded a stack of 13 fluorescence images, each showing the same visual field, with high fluorescence values indicating the presence of the corresponding bio-molecule or protein. We show first results obtained using machine learning approaches that allow the identification and spatial analysis of co-location patterns without manual thresholding. The authors believe that TIS imaging in combination with advanced visual data mining methods can contribute substantially to addressing several outstanding issues in systems biology where molecular co-location is involved.
S. Bhattacharya, G. Mathew, E. Ruban, D. Epstein, A. Krusche, R. Hillert, W. Schubert, M. Khan, “Toponome imaging system: in situ protein network mapping in normal and cancerous colon from the same patient reveals more than five-thousand cancer specific protein clusters and their subcellular annotation by using a three symbol code.,” Journal of proteome research, Sep. 2010, vol. 9(12), p. 6112–6125. [Abstract] [doi]
In a proof of principle study, we have applied an automated fluorescence toponome imaging system (TIS) to examine whether TIS can find protein network structures, distinguishing cancerous from normal colon tissue present in a surgical sample from the same patient. By using a three symbol code and a power of combinatorial molecular discrimination (PCMD) of 2^21 per subcellular data point in one single tissue section, we demonstrate an in situ protein network structure, visualized as a mosaic of 6813 protein clusters (combinatorial molecular phenotypes or CMPs), in the cancerous part of the colon. By contrast, in the histologically normal colon, TIS identifies nearly 5 times the number of protein clusters as compared to the cancerous part (32,009). By subcellular visualization procedures, we found that many cell surface membrane molecules were closely associated with the cell cytoskeleton as unique CMPs in the normal part of the colon, while the same molecules were disassembled in the cancerous part, suggesting the presence of dysfunctional cytoskeleton-membrane complexes. As expected, glandular and stromal cell signatures were found, but interestingly also found were potential TIS signatures identifying a very restricted subset of cells expressing several putative stem cell markers, all restricted to the cancerous tissue. The detection of these signatures is based on the extreme searching depth, high degree of dimensionality, and subcellular resolution capacity of TIS. These findings provide the technological rationale for the feasibility of a complete colon cancer toponome to be established by massive parallel high throughput/high content TIS mapping.
S. Abouna, R. W. Old, S. Pelengaris, D. Epstein, V. Ifandi, I. Sweeney, M. Khan, “Non-β-cell progenitors of β-cells in pregnant mice.,” Organogenesis, Apr. 2010, vol. 6(2), p. 6112–6125. [Abstract] [doi]
Pregnancy is a normal physiological condition in which the maternal β-cell mass increases rapidly about two-fold to adapt to new metabolic challenges. We have used lineage tracing of β-cells to analyse the origin of new β-cells during this rapid expansion in pregnancy. Double transgenic mice bearing a tamoxifen-dependent Cre-recombinase construct under the control of a rat insulin promoter, together with a reporter Z/AP gene, were generated. Then, in response to a pulse of tamoxifen before pregnancy, β-cells in these animals were marked irreversibly and heritably with the human placental alkaline phosphatase (HPAP). First, we conclude that the lineage tracing system was highly specific for β-cells. Secondly, we scored the proportion of the β-cells marked with HPAP during a subsequent chase period in pregnant and non-pregnant females. We observed a dilution in this labelling index in pregnant animal pancreata, compared to non-pregnant controls, during a single pregnancy in the chase period. To extend these observations we also analysed the labelling index in pancreata of animals during the second of two pregnancies in the chase period. The combined data revealed statistically significant dilution during pregnancy, indicating a contribution to new beta cells from a non-β-cell source. Thus for the first time in a normal physiological condition, we have demonstrated not only β-cell duplication, but also the activation of a non-β-cell progenitor population. Further, there was no transdifferentiation of β-cells to other cell types in a two-and-a-half-month period following labelling, including the period of pregnancy.
L. Wang, G. Zhao, N. M. Rajpoot, M. Nixon, “Special issue on new advances in video-based gait analysis and applications: challenges and solutions,” IEEE Transactions on Systems, Man and Cybernetics, Part B: Cybernetics, Aug. 2010, vol. 40, p. 982-985. [doi]
- Our full list of publications is available here.
- Prof Rajpoot to lead the computational arm of the recently awarded £15m PathLAKE centre of excellence for AI in Pathology (Nov 2018)
- Prof Rajpoot and Prof Snead to co-chair the European Congress on Digital Pathology at Warwick (Apr 2019)
- Welcome to Rob Jewsbury, Muhammad Dawood and Ruoyu Wang, who join the lab as PhD students (Nov/Oct 2020)
- We welcome Adam Shephard as a new postdoc in the lab working on ANTICIPATE, our CRUK funded project on early detection of oral cancer (1 Oct 2020)
- Welcome to Amina Asif and Gozde Gunesli who have joined us as a postdoc Research Fellow (on the MRC funded project PredicTR2) and a PG research student, respectively (1 July 2020)
- Welcome to Dang Vu and Mieke Zwager, who joined our lab as PhD and Visiting students, respectively (1 Feb 2020)
- Congrats to Shaban on the acceptance of his paper on context-aware CNNs for histology image grading in IEEE TMI (23 Jan 2020)
- We extend a warm welcome to Dr Young Saeng Park, Dr Hadi Saki, and Dr Noorul Wahab who have joined the PathLAKE project (1 Nov 2019)
- We are delighted to welcome Dr Fayyaz Minhas as a new faculty in the group (11 Oct 2019)
- Welcome to Miss Wenqi Lu who started with us as a Research Fellow today (16 Sep 2019)
- Kudos to Simon on the publication of his jointly first-authored paper on HoVer-Net in Medical Image Analysis (16 Sep 2019)
- We welcome Dr Nima Hatami and Dr Mohsin Bilal as postdoc fellows in the lab (19 Aug 2019)
- Congrats to Shaban on the acceptance of his paper on a novel score of TIL abundance for predicting disease-free survival in oral cancer in Nature Sci Rep (31 July 2019)
- We welcome Dr Shan Raza as Assistant Professor in the group (1 Jun 2019)
- Congrats to Mary and Simon for the publication of their articles in Frontiers Bioengineering & Biotechnology (Apr 2019)
- Kudos to Talha on the acceptance of his paper on persistent homology for tumor segmentation in Medical Image Analysis (30 Mar 2019)
- Congrats to Talha on the acceptance of his paper on visual attention in computational pathology for publication in IEEE Transactions on Medical Imaging (13 Mar 2019)
- Congrats to Shan and Simon on the acceptance of their Micro-Net paper in Medical Image Analysis (Feb 2019)
- Kudos to Huangjing (CUHK) and Simon for their paper on ScanNet accepted for publication in IEEE TMI (2 Jan 2019)
- Kudos to Shan and Simon for the acceptance of their paper on Micro-Net for publication in Medical Image Analysis (16 Dec 2018)
- Congrats to Simon and Shaban for the acceptance of their papers on MILD-Net and Deep Head Detector in Medical Image Analysis and Information Fusion, respectively (14 Dec 2018)
- Vacancy for Assistant Prof in Computational Pathology (see here for details; deadline for applications 13 Dec)
- Our lab is proud to host the European Congress on Digital Pathology at Warwick in 2019 (previously held in Helsinki in 2018 and Berlin in 2016)
- Congrats to Najah on successfully defending her PhD thesis (Nov 2018)
- Two MRC-funded postdoc positions available in our lab - see the job ad (application deadline 20 Nov 2018)
- Congrats to Mike Song on the acceptance of his paper on simultaneous detection and classification of cells in bone marrow histology images in IEEE JBHI (Oct 2018)
- Kudos to Korsuk on the acceptance of his paper on digital tissue phenotype based signatures for distant metastasis in colorectal cancer in Nature Scientific Reports (Sep 2018)
- Congrats to Sajid, Moazam, Navid, and Saad on the acceptance of their papers in the MICCAI COMPAY workshop (20 July, 2018)
- Congratulations to Navid on the acceptance of his paper in the MICCAI MLMI workshop (18 July, 2018)
- Congrats to Simon on the acceptance of his paper for presentation at MIDL Amsterdam (16 May, 2018)
- Congrats to Najah, Ruqayya, and Moazam on the acceptance of their papers in MIUA (12 Apr, 2018)
- Kudos to Shaban on the acceptance of his abstract at the Annual ASCO meeting (2 Apr, 2018)
- Congrats to Ruqayya, Navid, Shaban, and Ania on the acceptance of their paper on breast cancer image classification in ICIAR (2 Apr, 2018)
- Congrats to Kat, Ruqayya, Najah, Ania, Navid, Shaban, Simon, and Talha for the acceptance of their abstracts for presentation at the ECDP (29 Mar, 2018)
- RA position in AI/ML for cancer pathology (24 months) - deadline 5 Mar 2018; Click here to apply [CLOSED]
- Congrats to Shaban, Ruqayya, Korsuk and Talha for the publication of the CAMELYON contest paper in JAMA (12 Dec, 2017)
- Kudos to Ruqayya & Korsuk for the acceptance of their paper in Nature Scientific Reports (Nov 14, 2017)
- Congrats to Najah and Simon on the acceptance of their papers on NSCLC image analysis for presentation at the SPIE Digital Pathology 2018 (Oct 6, 2017)
- Kudos to Talha on the acceptance of his paper on Her2 Scoring Contest (held with PathSoc 2016) in the Histopathology journal (July 30, 2017)
- Special congrats to Korsuk (now at the Harvard Med School) on the award of Science Faculty Best Thesis Prize 2017 (July 13, 2017)
- Kudos to Nick (now at the Inst Cancer Research) on the acceptance of his paper on a Hyper-Stain Inspector for publication in Nature Scientific Reports (May 31, 2017)
- Congrats to Mike Song, Talha Qaiser and Shan Raza on the acceptance of their papers in MIUA 2017 (Apr 24, 2017)
- Congrats to Mike Song on the acceptance of his paper in IEEE Transactions on Biomedical Engineering (Mar 24, 2017)
- Postdoc research fellow position available (deadline for applications Mar 30, 2017)
- Warwick to lead new research on cancer image analytics (Feb 2, 2017)
- Kudos to Guannan on his PhD graduation and to Guannan and Shan on the acceptance of their paper on luminal epithelium segmentation in the Medical Image Analysis journal (Jan 20, 2017)
- Congrats to Shan and Mike on the acceptance of their papers for presentation at IEEE ISBI 2017 (Jan 10, 2017)
- Congrats to Najah on the acceptance of her paper on wavelets based stain deconvolution in the PLoS ONE journal (Dec 30, 2016)
- Kudos to Nick on the successful defense of his PhD thesis (Nov 24, 2016)
- Congrats to Nick on the acceptance of his paper on simultaneous ER/PR scoring in Cytometry Part A (Nov 21, 2016)
- Warwick to conduct breakthrough research on oral cancers in Pakistan (Oct 6, 2016)
- Kudos to Violet on the acceptance of her paper on protein expression models in BMC Bioinformatics (Sep 7, 2016)
- Congrats to Korsuk on the acceptance of his GlaS challenge contest paper in the Medical Image Analysis journal (Aug 30, 2016)
- Funded PhD studentship available (deadline 1 Sep, 2016)
- Kudos to Talha on winning the Best Paper Award at the MIUA 2016 (July 8, 2016)
- Congrats to Nick & Violet on winning the Intel code acceleration contest held on campus (Apr 19, 2016)
- Congrats to Dr Korsuk Sirinukunwattana on successfully defending his PhD thesis (Mar 30, 2016)
- Kudos to Talha on the acceptance of his paper for Oral presentation at the European Congress on Digital Pathology to be held in Berlin in May (March 10, 2016)
- Congrats to Shan for the acceptance of his paper on normalization protocols for multiplexed bioimage data for publication in the BMC Biodata Mining (Feb 2, 2016)
- Congrats to Dr Guannan Li on successfully defending his PhD thesis (Jan 27, 2016)
- Kudos to Korsuk for the acceptance of his paper on cell detection and labeling in standard H&E images for publication in the IEEE Transactions on Medical Imaging (Jan 16, 2016)
- Congrats to Najah, Korsuk, and Shan for the acceptance of their papers for presentation at the IEEE ISBI (Dec 23, 2015)
- Congrats to Dr Violeta Kovacheva on the successful defense of her PhD thesis (Dec 2, 2015)
- Kudos to Korsuk for winning the Best Paper award at the MICCAI Patch-MI workshop (Oct 9, 2015)
- Congrats to Korsuk and Guannan for the acceptance of their papers on epithelium analysis for presentation at the MICCAI Patch-MI workshop (August 8, 2015)
- Kudos to Guannan for the acceptance of his paper on Myosin spot tracking in the Journal of Microscopy (July 10, 2015) [DOI]
- Kudos to Korsuk and Adnan for the acceptance of their paper on nuclear atypia scoring in the IEEE Journal of Biomedical and Health Informatics (June 13, 2015) [DOI]
- Kudos to Adnan for winning the Computer Science Faculty Thesis prize 2015 (May 28, 2015)
- Congratulations to Najah and Shan for the acceptance of their papers in MIUA'2015 (May 22, 2015)
- Kudos to Korsuk on the acceptance of his paper on gland segmentation in the IEEE Transactions on Medical Imaging (May 12, 2015) [DOI]
- We are organizing the Gland Segmentation (GlaS) challenge contest at MICCAI'2015. Check out the contest website here (Apr 21, 2015)
- Kudos to Violeta and Korsuk for the acceptance of their papers at the IEEE ISBI'2015 conference (Jan 24, 2015)
- Congrats to Dr Sam Jefferyes on the successful defense of his PhD thesis (Dec 8, 2014)
- Kudos to Shan, Korsuk, Mike Song, and Nick for the acceptance of their papers for presentation at the SPIE Digital Pathology 2015 conference (Oct 24, 2014)
- Congrats to Dr Adnan Khan on the successful defense of his PhD thesis (Sep 22, 2014)
- Kudos to Adnan Khan and Korsuk Sirinukunwattana for winning the MITOS-ATYPIA challenge contest held at ICPR'2014 in Stockholm (Aug 24, 2014)
- Kudos to Korsuk Sirinukunwattana and Guannan Li for the acceptance of extended versions of their papers presented at the ECDP'2014 (Paris) in a special issue of CMIG on Breakthrough Technologies in Digital Pathology (August, 2014)
- Congrats to Dr Shan-e-Ahmed Raza on the successful defense of his PhD thesis (Feb 18, 2014)
- BBSRC-funded postdoc position (deadline 22 Nov, 2013)
- Our submission to the AMIDA challenge contest held at MICCAI'2013 ranked 3rd in terms of the average F1-score (Sep 22, 2013)
- RA position (deadline expired)
- CASE PhD Studentship (deadline expired)