A. Pennycuick, V. H. Teixeira, K. AbdulJabbar, S. E. A. Raza, T. Lund, A. U. Akarca, R. Rosenthal, L. Kalinke, D. P. Chandrasekharan, C. P. Pipinikas, H. Lee-Six, R. E. Hynds, K. H. C. Gowers, J. Y. Henry, F. R. Millar, Y. B. Hagos, C. Denais, M. Falzon, D. A. Moore, S. Antoniou, P. F. Durrenberger, A. J. S. Furness, B. Carroll, C. Marceaux, M.-L. Asselin-Labat, W. Larson, C. Betts, L. M. Coussens, R. M. Thakrar, J. George, C. Swanton, C. Thirlwell, P. J. Campbell, T. Marafioti, Y. Yuan, S. A. Quezada, N. McGranahan, S. M. Janes, “Immune surveillance in clinical regression of pre-invasive squamous cell lung cancer,” Cancer Discovery, Jul. 2020. [Abstract] [doi]
Before squamous cell lung cancer develops, pre-cancerous lesions can be found in the airways. From longitudinal monitoring, we know that only half of such lesions become cancer, whereas a third spontaneously regress. While recent studies have described the presence of an active immune response in high-grade lesions, the mechanisms underpinning clinical regression of pre-cancerous lesions remain unknown. Here, we show that host immune surveillance is strongly implicated in lesion regression. Using bronchoscopic biopsies from human subjects, we find that regressive carcinoma in situ (CIS) lesions harbour more infiltrating immune cells than those that progress to cancer. Moreover, molecular profiling of these lesions identifies potential immune escape mechanisms specifically in those that progress to cancer: antigen presentation is impaired by genomic and epigenetic changes, CCL27/CCR10 signalling is upregulated, and the immunomodulator TNFSF9 is downregulated. Changes appear intrinsic to the CIS lesions, as the adjacent stroma of progressive and regressive lesions is transcriptomically similar.
K. AbdulJabbar*, S. E. A. Raza*, R. Rosenthal, M. Jamal-Hanjani, S. Veeriah, A. Akarca, T. Lund, D. Moore, R. Salgado, M. Al Bakir, L. Zapata, C. Hiley, L. Officer, M. Sereno, C. Smith, S. Loi, A. Hackshaw, T. Marafioti, S. Quezada, N. McGranahan, J. Le Quesne, C. Swanton† & Y. Yuan†, “Geospatial immune variability illuminates differential evolution of lung adenocarcinoma,” Nature Medicine, May 2020, p. 1-9. [Abstract] [doi]
Remarkable progress in molecular analyses has improved our understanding of the evolution of cancer cells toward immune escape. However, the spatial configurations of immune and stromal cells, which may shed light on the evolution of immune escape across tumor geographical locations, remain unaddressed. We integrated multiregion exome and RNA-sequencing (RNA-seq) data with spatial histology mapped by deep learning in 100 patients with non-small cell lung cancer from the TRACERx cohort. Cancer subclones derived from immune cold regions were more closely related in mutation space, diversifying more recently than subclones from immune hot regions. In TRACERx and in an independent multisample cohort of 970 patients with lung adenocarcinoma, tumors with more than one immune cold region had a higher risk of relapse, independently of tumor size, stage and number of samples per patient. In lung adenocarcinoma, but not lung squamous cell carcinoma, geometrical irregularity and complexity of the cancer–stromal cell interface significantly increased in tumor regions without disruption of antigen presentation. Decreased lymphocyte accumulation in adjacent stroma was observed in tumors with low clonal neoantigen burden. Collectively, immune geospatial variability elucidates tumor ecological constraints that may shape the emergence of immune-evading subclones and aggressive clinical phenotypes.
R. M. S. Bashir, H. Mahmood, M. Shaban, S. E. A. Raza, M. M. Fraz, S. A. Khurram & N. M. Rajpoot, “Automated grade classification of oral epithelial dysplasia using morphometric analysis of histology images,” in Medical Imaging 2020: Digital Pathology, Houston, Texas, USA, vol. 11320, p. 1132011. [Abstract] [doi]
Oral dysplasia is a pre-malignant stage of oral epithelial carcinomas, e.g., oral squamous cell carcinoma, in which significant changes in tissue layers and cells can be observed under the microscope. If the grade of the lesion is assessed properly, malignancy can be reverted or cured with appropriate medication or surgery. Assessment of the correct grade is therefore critical in patient management, as it can change treatment decisions and the prognosis for the dysplastic lesion. This assessment is highly challenging due to considerable inter- and intra-observer variability among pathologists, which highlights the need for an automated grading system that can predict grades more accurately and reliably. Recent advances have made it possible for digital pathology (DP) and artificial intelligence (AI) to join forces: tissue slides are digitised into images, which are then used to train complex AI models that predict more accurate grades. In this regard, we propose a novel morphometric approach exploiting an architectural feature of dysplastic lesions, namely irregular epithelial stratification: from a clinically significant viewpoint, we measure the widths of the different layers of the epithelium, from the boundary (keratin) layer projecting inwards through the epithelial and basal layers to the rest of the tissue section.
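The layer-width idea can be illustrated with a short sketch. It assumes a segmentation mask of a straightened epithelium strip with hypothetical label codes for the keratin, epithelial and basal layers; this is an illustration only, not the authors' implementation.

```python
import numpy as np

# Hypothetical label codes for a straightened epithelium strip, with rows
# running from the keratin surface inwards towards the stroma.
KERATIN, EPITHELIUM, BASAL = 1, 2, 3

def layer_widths(mask: np.ndarray, pixel_size_um: float = 0.5) -> dict:
    """Per-column widths (in microns) of each epithelial layer.

    `mask` is a 2D integer label image; every column is treated as a profile
    perpendicular to the epithelial surface.
    """
    widths = {}
    for name, code in [("keratin", KERATIN), ("epithelium", EPITHELIUM), ("basal", BASAL)]:
        widths[name] = (mask == code).sum(axis=0) * pixel_size_um
    return widths

# Summary statistics of these widths (e.g. mean and variance across columns)
# could then feed a conventional classifier for dysplasia grade.
```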
S. Graham, Q. D. Vu, S. E. A. Raza, J. T. Kwak, N. M. Rajpoot, “Hover-Net: Simultaneous segmentation and classification of nuclei in multi-tissue histology images,” Medical Image Analysis, Dec. 2019, vol. 58, p. 101563. [Abstract] [doi] [Data]
Nuclear segmentation and classification within Haematoxylin & Eosin stained histology images is a fundamental prerequisite in the digital pathology workflow. The development of automated methods for nuclear segmentation and classification enables the quantitative analysis of tens of thousands of nuclei within a whole-slide pathology image, opening up possibilities of further analysis of large-scale nuclear morphometry. However, automated nuclear segmentation and classification is faced with a major challenge in that there are several different types of nuclei, some of them exhibiting large intra-class variability, such as the nuclei of tumour cells. Additionally, some of the nuclei are often clustered together. To address these challenges, we present a novel convolutional neural network for simultaneous nuclear segmentation and classification that leverages the instance-rich information encoded within the vertical and horizontal distances of nuclear pixels to their centres of mass. These distances are then utilised to separate clustered nuclei, resulting in an accurate segmentation, particularly in areas with overlapping instances. Then, for each segmented instance, the network predicts the type of nucleus via a dedicated up-sampling branch. We demonstrate state-of-the-art performance compared to other methods on multiple independent multi-tissue histology image datasets. As part of this work, we introduce a new dataset of Haematoxylin & Eosin stained colorectal adenocarcinoma image tiles, containing 24,319 exhaustively annotated nuclei with associated class labels.
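To make the distance-map idea concrete, here is a minimal sketch of how horizontal/vertical training targets could be derived from an instance-labelled nucleus mask. It is a simplified illustration in the spirit of the HoVer-Net targets, not the published implementation.

```python
import numpy as np

def hv_maps(inst_mask: np.ndarray):
    """Horizontal/vertical distance-to-centre maps from an instance-labelled mask.

    Each nucleus pixel gets its x/y offset from that nucleus's centre of mass,
    normalised to [-1, 1] within the instance.
    """
    h_map = np.zeros(inst_mask.shape, dtype=np.float32)
    v_map = np.zeros(inst_mask.shape, dtype=np.float32)
    for inst_id in np.unique(inst_mask):
        if inst_id == 0:                      # 0 is assumed to be background
            continue
        ys, xs = np.nonzero(inst_mask == inst_id)
        dx = xs - xs.mean()
        dy = ys - ys.mean()
        h_map[ys, xs] = dx / max(np.abs(dx).max(), 1)
        v_map[ys, xs] = dy / max(np.abs(dy).max(), 1)
    return h_map, v_map

# At inference time, strong gradients of the predicted maps (e.g. Sobel filtered)
# mark boundaries between touching nuclei and can seed a watershed-style split.
```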
K. Zormpas-Petridis, H. Failmezger, S. E. A. Raza, et al., “Superpixel-based Conditional Random Fields (SuperCRF): Incorporating global and local context for enhanced deep learning in melanoma histopathology,” Frontiers in Oncology, Sep. 2019. [Abstract] [doi]
Computational pathology-based cell classification algorithms are revolutionizing the study of the tumor microenvironment and can provide novel predictive/prognostic biomarkers crucial for the delivery of precision oncology. Current algorithms used on hematoxylin and eosin slides are based on individual cell nuclei morphology with limited local context features. Here, we propose a novel multi-resolution hierarchical framework (SuperCRF) inspired by the way pathologists perceive regional tissue architecture to improve cell classification and demonstrate its clinical applications. We develop SuperCRF by training a state-of-the-art deep learning spatially constrained convolutional neural network (SC-CNN) to detect and classify cells from 105 high-resolution (20x) H&E-stained slides of The Cancer Genome Atlas melanoma dataset, and subsequently training a conditional random field (CRF) that combines the cellular neighborhood with tumor regional classification from lower-resolution images (5x, 1.25x) given by a superpixel-based machine learning framework. SuperCRF led to an 11.85% overall improvement in the accuracy of the state-of-the-art deep learning SC-CNN cell classifier. Consistent with a stroma-mediated immune suppressive microenvironment, SuperCRF demonstrated that (i) a high ratio of lymphocytes within the stromal compartment to all lymphocytes (p=0.026) and (ii) a high ratio of stromal cells to all cells (p<0.0001, compared to p=0.039 for SC-CNN only) are associated with poor survival in patients with melanoma. SuperCRF improves cell classification by introducing global and local context-based information and can be implemented in combination with any single-cell classifier. SuperCRF provides valuable tools to study the tumor microenvironment and identify predictors of survival and response to therapy.
S. E. A. Raza, K. AbdulJabbar, M. Jamal-Hanjani, et al., “Deconvolving convolution neural network for cell detection,” IEEE International Symposium on Biomedical Imaging (ISBI), Apr. 2019, p. 891–894. [Abstract] [doi]
Automatic cell detection in histology images is a challenging task due to the varying size, shape and features of cells and stain variations across large cohorts. Conventional deep learning methods regress the probability of each pixel belonging to the centre of a cell, followed by detection of local maxima. We propose a three-stage method (MapDe) to improve cell detection. (a) The dot annotations are convolved with a mapping filter to generate artificial labels. (b) A convolutional neural network (CNN) is modified to convolve its output with the same mapping filter. The mapping filter is fixed during training, forcing the network to generate better probability maps. (c) The output of the trained CNN is deconvolved to generate points as cell detections. The results show that (1) local maxima detection performs better on probability maps generated using the fixed convolution filter, and (2) the results can be further improved by deconvolving the output, with fewer parameters to tune.
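As a rough illustration of step (a) and the final detection step, the sketch below convolves sparse dot annotations with a fixed disk-shaped mapping filter to build training targets and recovers detections as local maxima. The filter shape and thresholds are assumptions, not the published configuration.

```python
import numpy as np
from scipy.ndimage import convolve
from skimage.feature import peak_local_max

def make_target(dot_map: np.ndarray, radius: int = 8) -> np.ndarray:
    """Convolve sparse dot annotations (1 at cell centres, 0 elsewhere) with a
    fixed disk-shaped mapping filter to produce extended training targets."""
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    mapping_filter = (xx ** 2 + yy ** 2 <= radius ** 2).astype(np.float32)
    return convolve(dot_map.astype(np.float32), mapping_filter, mode="constant")

def detect_cells(prob_map: np.ndarray, min_distance: int = 6, threshold: float = 0.5):
    """Cell detections as local maxima of the network's (de)convolved output."""
    return peak_local_max(prob_map, min_distance=min_distance, threshold_abs=threshold)
```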
S. E. A. Raza, L. Cheung, M. Shaban, et al., “Micro-Net: A unified model for segmentation of various objects in microscopy images,” Medical Image Analysis, Dec. 2018, vol. 52, p. 160–173. [Abstract] [doi] [Data]
Object segmentation and structure localization are important steps in automated image analysis pipelines for microscopy images. We present a convolutional neural network (CNN)-based deep learning architecture for segmentation of objects in microscopy images. The proposed network can be used to segment cells, nuclei and glands in fluorescence microscopy and histology images after slight tuning of input parameters. The network trains at multiple resolutions of the input image, connects the intermediate layers for better localization and context, and generates the output using multi-resolution deconvolution filters. The extra convolutional layers, which bypass the max-pooling operation, allow the network to train for variable input intensities and object sizes and make it robust to noisy data. We compare our results on publicly available data sets and show that the proposed network outperforms recent deep learning algorithms.
P. L. Narayanan, S. E. A. Raza, A. Dodson, et al., “DeepSDCS: Dissecting cancer proliferation heterogeneity in Ki67 digital whole slide images,” in Medical Imaging with Deep Learning (MIDL), 2018. [Abstract] [doi]
Ki67 is an important biomarker for breast cancer. Classification of positive and negative Ki67 cells in histology slides is a common approach to determine cancer proliferation status. However, there is a lack of generalizable and accurate methods to automate Ki67 scoring in large-scale patient cohorts. In this work, we have employed a novel deep learning technique based on hypercolumn descriptors for cell classification in Ki67 images. Specifically, we developed the Simultaneous Detection and Cell Segmentation (DeepSDCS) network to perform cell segmentation and detection. A VGG16 network was used for training and fine-tuning on the training data. We extracted the hypercolumn descriptors of each cell, forming a vector of activations from specific layers to capture features at different granularities. Features from these layers that correspond to the same pixel were propagated using a stochastic gradient descent optimizer to yield the detection of the nuclei and the final cell segmentations. Subsequently, seeds generated from cell segmentation were propagated to a spatially constrained convolutional neural network for the classification of the cells into stromal, lymphocyte, Ki67-positive cancer cell, and Ki67-negative cancer cell. We validated its accuracy in the context of a large-scale clinical trial of oestrogen-receptor-positive breast cancer, achieving 99.06% and 89.59% accuracy on two separate test sets of a Ki67-stained breast cancer dataset comprising biopsy and whole-slide images.
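A hypercolumn descriptor simply stacks activations from several network depths at the same pixel location. The sketch below shows one way to extract such descriptors from a VGG16 backbone in PyTorch; the tapped layer indices are illustrative assumptions, not the ones used in DeepSDCS.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Activations from several VGG16 layers are upsampled to the input resolution
# and stacked, so each pixel gets a single multi-scale descriptor.
vgg = models.vgg16(weights=None).features.eval()
TAP_LAYERS = {3, 8, 15, 22}  # conv blocks at increasing depth (assumed indices)

def hypercolumns(image: torch.Tensor) -> torch.Tensor:
    """image: (1, 3, H, W) tensor -> (1, C_total, H, W) hypercolumn stack."""
    h, w = image.shape[-2:]
    feats, x = [], image
    with torch.no_grad():
        for idx, layer in enumerate(vgg):
            x = layer(x)
            if idx in TAP_LAYERS:
                feats.append(F.interpolate(x, size=(h, w), mode="bilinear",
                                           align_corners=False))
    return torch.cat(feats, dim=1)
```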
N. Alsubaie, K. Sirinukunwattana, S. E. A. Raza, et al., “A bottom-up approach for tumour differentiation in whole slide images of lung adenocarcinoma,” in Medical Imaging 2018: Digital Pathology, Mar. 2018, vol. 10581, p. 105810E. [Abstract] [doi]
Analysis of tumour cells is essential for morphological characterisation, which is useful for disease prognosis and survival prediction. Visual assessment of tumour cell morphology by expert human observers for prognostic purposes is subjective and potentially a tedious process. In this paper, we propose an automated and objective method for tumour cell analysis in whole slide images (WSI) of lung adenocarcinoma. Tumour cells are first extracted at higher magnification, and then morphological, texture and spatial distribution features are computed for each cell. We investigated the biological impact of the nuclear features in the context of tumour grading. Results show that some of these features are correlated with tumour grade. We also examine some of these features across the WSI, where their distributions differ depending on the tumour grade.
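Per-cell morphological measurements of this kind are typically computed from a labelled segmentation mask. The following sketch uses scikit-image region properties as a stand-in for the paper's feature set; the specific features chosen here are illustrative.

```python
import numpy as np
from skimage.measure import regionprops

def nuclear_features(label_mask: np.ndarray, intensity: np.ndarray) -> np.ndarray:
    """Per-nucleus morphology measurements from a labelled mask.

    Returns one row per segmented tumour cell with area, eccentricity,
    solidity and mean intensity (an illustrative subset of possible features).
    """
    feats = []
    for region in regionprops(label_mask, intensity_image=intensity):
        feats.append([region.area,
                      region.eccentricity,
                      region.solidity,
                      region.mean_intensity])
    return np.asarray(feats)
```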
S. E. A. Raza, L. Cheung, D. Epstein, et al., “Mimonet: Gland segmentation using multi-input-multi-output convolutional neural network,” in Medical Image Understanding and Analysis (MIUA), Jul. 2017, pp. 698–706. [Abstract] [doi]
Morphological assessment of glands in histopathology images is very important in cancer grading. However, this is labour intensive, requires highly trained pathologists and has limited reproducibility. Digitisation of tissue slides provides us with the opportunity to employ computers, which are very efficient in repetitive tasks, allowing us to automate the morphological assessment with input from the pathologist. The first step in automated morphological assessment is the segmentation of these glandular regions. In this paper, we present a multi-input multi-output convolutional neural network for segmentation of glands in histopathology images. We test our algorithm on the publicly available GLaS data set and show that our algorithm produces competitive results compared to the state-of-the-art algorithms in terms of various quantitative measures.
S. E. A. Raza, L. Cheung, D. Epstein, et al., “MIMO-Net: A multi-input multi-output convolutional neural network for cell segmentation in fluorescence microscopy images,” IEEE International Symposium on Biomedical Imaging (ISBI), Apr. 2017, p. 337-340. [Abstract] [doi]
We propose a novel multiple-input multiple-output convolutional neural network (MIMO-Net) for cell segmentation in fluorescence microscopy images. The proposed network trains its parameters using multiple resolutions of the input image, connects the intermediate layers for better localization and context, and generates the output using multi-resolution deconvolution filters. MIMO-Net allows us to deal with variable-intensity cell boundaries and highly variable cell sizes in mouse pancreatic tissue by adding extra convolutional layers which bypass the max-pooling operation. The results show that our method outperforms state-of-the-art deep learning based approaches for segmentation.
G. Li, S.E.A. Raza, N.M. Rajpoot, “Multi-Resolution Cell Orientation Congruence Descriptors for Epithelium Segmentation in Endometrial Histology Images,” Medical Image Analysis, Jan. 2017, vol. 37, p. 91–100. [Abstract] [doi]
It has recently been shown that recurrent miscarriage can be caused by an abnormally high ratio of uterine natural killer (UNK) cells to stromal cells in the lining of the human uterus. Due to high workload, the counting of UNK and stromal cells needs to be automated using computer algorithms. However, stromal cells are very similar in appearance to epithelial cells, which must be excluded in the counting process. To exclude the epithelial cells from the counting process it is necessary to identify epithelial regions. There are two types of epithelial layers that can be encountered in the endometrium: luminal epithelium and glandular epithelium. To the best of our knowledge, there is no existing method that addresses the segmentation of both types of epithelium simultaneously in endometrial histology images. In this paper, we propose a multi-resolution Cell Orientation Congruence (COCo) descriptor which exploits the fact that neighbouring epithelial cells exhibit similarity in terms of their orientations. Our experimental results show that the proposed descriptors yield accurate results in simultaneously segmenting both luminal and glandular epithelium.
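The core intuition, that neighbouring epithelial cells share similar orientations, can be sketched as a toy per-cell congruence score. The neighbourhood size and the plain k-nearest-neighbour graph below are illustrative assumptions, not the published multi-resolution descriptor.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def orientation_congruence(centroids: np.ndarray, angles: np.ndarray, k: int = 5):
    """Mean orientation agreement of each cell with its k nearest neighbours.

    `centroids` is (N, 2) and `angles` holds cell orientations in radians.
    Epithelial cells lining a gland should score high; stromal cells low.
    """
    nbrs = NearestNeighbors(n_neighbors=k + 1).fit(centroids)
    _, idx = nbrs.kneighbors(centroids)          # idx[:, 0] is the cell itself
    diffs = angles[idx[:, 1:]] - angles[:, None]
    # |cos| treats orientations as axial (theta and theta + pi are identical).
    return np.abs(np.cos(diffs)).mean(axis=1)
```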
N. Alsubaie, N. Trahearn, S. E. A. Raza, et al., “Stain Deconvolution Using Statistical Analysis of Multi-Resolution Stain Colour Representation,” PLoS One, Jan. 2017, vol. 12, no. 1, p. e0169875. [Abstract] [doi]
Stain colour estimation is a prominent factor of the analysis pipeline in most histology image processing algorithms. Providing a reliable and efficient stain colour deconvolution approach is fundamental for a robust algorithm. In this paper, we propose a novel method for stain colour deconvolution of histology images. This approach statistically analyses the multi-resolution representation of the image to separate the independent observations from the correlated ones. We then estimate the stain mixing matrix using the filtered, uncorrelated data. We conducted an extensive set of experiments to compare the proposed method to recent state-of-the-art methods and demonstrate the robustness of this approach using three different datasets of scanned slides, prepared in different labs using different scanners.
N. Alsubaie, S. E. A. Raza, and N. M. Rajpoot, “Stain Deconvolution of Histology Images via Independent Component Analysis in the Wavelet Domain,” IEEE International Symposium on Biomedical Imaging (ISBI), Apr. 2016, p. 803-806. [Abstract] [doi]
With the ubiquity of digital slide scanners, histology image analysis is rapidly emerging as an active area of research. Several histology image analysis algorithms, such as those for mitotic cell detection, nuclei segmentation and hormone receptor scoring, depend on colour information obtained from images of the scanned slides. However, different standards followed by different labs and the technical variation among different scanners result in stain inconsistency in histology images. Thus, applications that use colour information may fail when they are applied to images with a different appearance of stain colours. In this paper, we propose a novel method to estimate the so-called stain matrix via independent component analysis in the wavelet domain for stain deconvolution in histology images. Experimental results demonstrate stable and more accurate stain deconvolution results as compared to other recently proposed algorithms.
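The sketch below shows one way the wavelet-domain ICA idea could look in code: convert RGB to optical density, take a single-level stationary wavelet transform per channel, and run FastICA on a detail subband to estimate unit-norm stain vectors. The choice of wavelet, subband and single decomposition level are simplifying assumptions, not the published method.

```python
import numpy as np
import pywt
from sklearn.decomposition import FastICA

def estimate_stain_matrix(rgb: np.ndarray, n_stains: int = 2) -> np.ndarray:
    """Estimate unit-norm stain colour vectors via ICA on wavelet detail
    coefficients of the optical-density image.

    `rgb` is an H&E image in [0, 255] whose height and width are even
    (required by the single-level stationary wavelet transform used here).
    """
    od = -np.log((rgb.astype(np.float64) + 1.0) / 256.0)           # Beer-Lambert OD
    # Single-level SWT per OD channel; the horizontal detail subband is taken
    # as the (approximately) decorrelated observation used for ICA.
    details = [pywt.swt2(od[..., c], "db1", level=1)[0][1][0] for c in range(3)]
    X = np.stack([d.ravel() for d in details], axis=1)              # (pixels, 3)
    ica = FastICA(n_components=n_stains, random_state=0).fit(X)
    stains = ica.mixing_                                            # (3, n_stains)
    return stains / np.linalg.norm(stains, axis=0)                  # unit columns
```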
M.N. Kashif, S.E.A. Raza, K. Sirinukunwattana, et al., “Handcrafted features with convolutional neural networks for detection of tumor cells in histology images,” IEEE International Symposium on Biomedical Imaging (ISBI), Apr. 2016, p. 1029-1032. [Abstract] [doi]
Detection of tumor nuclei in cancer histology images requires sophisticated techniques due to the irregular shape, size and chromatin texture of the tumor nuclei. Some very recently proposed methods employ deep convolutional neural networks (CNNs) to detect cells in H&E stained images. However, all such methods use some form of raw pixel intensities as input and rely on the CNN to learn the deep features. In this work, we extend a recently proposed spatially constrained CNN (SC-CNN) by proposing features that capture texture characteristics and show that although the CNN produces good results with automatically learned features, it can perform better if the input consists of a combination of handcrafted features and the raw data. The handcrafted features are computed through the scattering transform, which gives non-linear, invariant texture features. The combination of handcrafted features with raw data produces sharper proximity maps and better detection results than raw intensities alone with a similar CNN architecture.
S. E. A. Raza, D. Langenkämper, K. Sirinukunwattana, D. B. A. Epstein, T. W. Nattkemper, and N. M. Rajpoot, “Robust Normalization Protocols for Multiplexed Fluorescence Bioimage Analysis,” BioData Mining, Mar. 2016, vol. 9:11. [Abstract] [doi]
The study of the mapping and interaction of co-localized proteins at a sub-cellular level is important for understanding complex biological phenomena. One of the recent techniques to map co-localized proteins is to use standard immunofluorescence microscopy in a cyclic manner. Unfortunately, these techniques suffer from variability in the intensity and positioning of signals from protein markers within a run and across different runs. Therefore, it is necessary to standardize protocols for preprocessing of the multiplexed bioimaging (MBI) data from multiple runs to a comparable scale before any further analysis can be performed on the data. In this paper, we compare various normalization protocols and propose, on the basis of the obtained results, a robust normalization technique that produces consistent results on MBI data collected from different runs using the Toponome Imaging System (TIS). Normalization results produced by the proposed method on a sample TIS data set for colorectal cancer patients were ranked favorably by two pathologists and two biologists. We show that the proposed method produces higher between-class Kullback-Leibler (KL) divergence and lower within-class KL divergence on a distribution of cell phenotypes from colorectal cancer and histologically normal samples.
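The evaluation criterion mentioned at the end, comparing within-class and between-class KL divergence of phenotype distributions, can be sketched directly. The symmetrised form below is an assumption for illustration; the paper may use a different variant.

```python
import numpy as np
from scipy.stats import entropy

def kl_between_runs(counts_a: np.ndarray, counts_b: np.ndarray) -> float:
    """Symmetrised KL divergence between two cell-phenotype histograms.

    After normalising two TIS runs, a good protocol should give low divergence
    within a class (e.g. normal vs. normal) and high divergence between classes.
    """
    eps = 1e-12
    p = counts_a / counts_a.sum() + eps
    q = counts_b / counts_b.sum() + eps
    return 0.5 * (entropy(p, q) + entropy(q, p))
```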
K. Sirinukunwattana, S. E. A. Raza, Y.-W. Tsang, D. Snead, I. Cree, and N. M. Rajpoot, “Locality Sensitive Deep Learning for Detection and Classification of Nuclei in Routine Colon Cancer Histology Images,” IEEE Trans. Med. Imaging, Jan. 2016. [Abstract] [doi] [Data]
Detection and classification of cell nuclei in histopathology images of cancerous tissue stained with the standard hematoxylin and eosin stain is a challenging task due to cellular heterogeneity. Deep learning approaches have been shown to produce encouraging results on histopathology images in various studies. In this paper, we propose a Spatially Constrained Convolutional Neural Network (SC-CNN) to perform nucleus detection. SC-CNN regresses the likelihood of a pixel being the center of a nucleus, where high probability values are spatially constrained to lie in the vicinity of nucleus centers. For classification of nuclei, we propose a novel Neighboring Ensemble Predictor (NEP) coupled with the CNN to more accurately predict the class label of detected cell nuclei. The proposed approaches for detection and classification do not require segmentation of nuclei. We have evaluated them on a large dataset of colorectal adenocarcinoma images, consisting of more than 20,000 annotated nuclei belonging to four different classes. Our results show that the joint detection and classification of the proposed SC-CNN and NEP produces the highest average F1 score as compared to other recently published approaches. Prospectively, the proposed methods could offer benefit to pathology practice in terms of quantitative analysis of tissue constituents in whole-slide images, and could potentially lead to a better understanding of cancer.
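The ensemble idea behind NEP, classifying a detected nucleus together with patches around it and combining the predictions, can be illustrated in a few lines. The patch classifier, offsets and simple averaging below are assumptions, not the exact published scheme.

```python
import numpy as np

def neighboring_ensemble_predict(classify_patch, image, centre, offsets=None):
    """Ensemble a patch classifier over a detected nucleus and its neighbourhood.

    `classify_patch(image, y, x)` is an assumed callable returning a
    class-probability vector for a patch centred at (y, x).
    """
    if offsets is None:
        offsets = [(0, 0), (-4, 0), (4, 0), (0, -4), (0, 4)]   # illustrative
    y, x = centre
    probs = [classify_patch(image, y + dy, x + dx) for dy, dx in offsets]
    return int(np.argmax(np.mean(probs, axis=0)))              # averaged class label
```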
K. Sirinukunwattana, S. E. A. Raza, Y.-W. Tsang, D. Snead, I. Cree, and N. M. Rajpoot, “A Spatially Constrained Deep Learning Framework for Detection of Epithelial Tumor Nuclei in Cancer Histology Images,” in 1st International Workshop on Patch-based Techniques in Medical Imaging, MICCAI, Oct. 2015, pp. 154–162. [Abstract] [doi]
Detection of epithelial tumor nuclei in standard Hematoxylin & Eosin stained histology images is an essential step for the analysis of tissue architecture. The problem is quite challenging due to the high chromatin texture of the tumor nuclei and their irregular size and shape. In this work, we propose a spatially constrained convolutional neural network (CNN) for the detection of malignant epithelial nuclei in histology images. Given an input patch, the proposed CNN is trained to regress, for every pixel in the patch, the probability of being the center of an epithelial tumor nucleus. The estimated probability values are topologically constrained such that high probability values are concentrated in the vicinity of the center of nuclei. The location of local maxima is then used as a cue for the final detection. Experimental results show that the proposed network outperforms the conventional CNN with center-pixel-only regression for the task of epithelial tumor nuclei detection.
G. Li, S. E. A. Raza, and N.M. Rajpoot, “A Novel Cell Orientation Congruence Descriptor for Superpixel Based Epithelium Segmentation in Endometrial Histology Images,” in 1st International Workshop on Patch-based Techniques in Medical Imaging, MICCAI, Oct. 2015, pp. 172–179. [Abstract] [doi]
Recurrent miscarriage can be caused by an abnormally high number of Uterine Natural Killer (UNK) cells in human female uterus lining. Recently a diagnosis protocol has been developed based on the ratio of UNK cells to stromal cells in endometrial biopsy slides immunohistochemically stained with Haematoxylin for all cells and CD56 as a marker for the UNK cells. The counting of UNK cells and stromal cells is an essential process in the protocol. However, the cell counts must not include epithelial cells from glandular structures and UNK cells from epithelium. In this paper, we propose a novel superpixel based epithelium segmentation algorithm based on the observation that neighbouring epithelial cells packed at the boundary of glandular structures or background tend to have similar local orientations. Our main contribution is a novel cell orientation congruence descriptor in a machine learning framework to differentiate between epithelial and non-epithelial cells.
S.E.A. Raza, V. Sanchez, G. Prince, J. Clarkson, and N. M. Rajpoot, “Registration of thermal and visible light images of diseased plants using silhouette extraction in the wavelet domain,” Pattern Recognit., vol. 48, pp. 2119–2128, Jul. 2015. [Abstract] [doi] [Software]
The joint analysis of thermal and visible light images of plants can help to increase the accuracy of early disease detection. Registration of thermal and visible light images is an important pre-processing operation to perform this joint analysis correctly. In the case of diseased plants, registration using common methods based on mutual information is particularly challenging since the plant texture in the thermal image significantly differs from the corresponding texture in the visible light image. Registration methods based on silhouette extraction are therefore more appropriate. This paper proposes an algorithm for registration of thermal and visible light images of diseased plants based on silhouette extraction. The algorithm is based on a novel multi-scale method that employs the stationary wavelet transform to extract the silhouette of diseased plants in thermal images, in which common gradient-based methods usually fail due to the high noise content. Experimental results show that silhouettes extracted using this method can be used to register thermal and visible light images with high accuracy.
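A toy version of the silhouette-extraction step might look like the sketch below: a single-level stationary wavelet transform of the noisy thermal image followed by thresholding of the detail-coefficient magnitude. The single level, wavelet choice and threshold are simplifications; the paper's method is multi-scale.

```python
import numpy as np
import pywt

def thermal_silhouette(thermal: np.ndarray, wavelet: str = "db2") -> np.ndarray:
    """Rough plant silhouette from a noisy thermal image.

    A single-level stationary wavelet transform is applied and the magnitude of
    the detail coefficients is thresholded; image height and width must be even.
    """
    cA, (cH, cV, cD) = pywt.swt2(thermal.astype(np.float64), wavelet, level=1)[0]
    magnitude = np.sqrt(cH ** 2 + cV ** 2 + cD ** 2)
    return magnitude > (magnitude.mean() + 2.0 * magnitude.std())
```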
S. E. A. Raza and N. M. Rajpoot, “Cell Nuclei Segmentation in Variable Intensity Fluorescence Microscopy Images,” in Medical Image Understanding and Analysis, Jul. 2015, pp. 28–33. [Abstract] [doi]
We propose a method for automatic segmentation of variable-intensity cell nuclei in the presence of highly variable noise in fluorescence microscopy images by adding novel texture information in the wavelet domain. The proposed method calculates the Hessian matrix using the stationary wavelet transform and uses the eigenvalues of the Hessian matrix to obtain the underlying texture of nuclei and visual debris. The chromatin texture of nuclei helps to obtain the nucleus boundary in the presence of variable intensities, and the texture of the image noise helps to remove the noise. We demonstrate that our method produces better overlap with the hand-labelled ground truth on a publicly available data set with two different collections as compared to the state-of-the-art.
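The Hessian-eigenvalue texture cue can be sketched with scikit-image. Note that this sketch uses Gaussian-derivative Hessians rather than the paper's stationary-wavelet construction, and the blobness measure is an illustrative choice.

```python
import numpy as np
from skimage.feature import hessian_matrix, hessian_matrix_eigvals

def nuclear_texture(image: np.ndarray, sigma: float = 2.0):
    """Hessian-eigenvalue texture measures for separating nuclei from noise.

    Bright blob-like chromatin gives two strongly negative eigenvalues, whereas
    flat noise gives small, mixed-sign ones.
    """
    H_elems = hessian_matrix(image, sigma=sigma, order="rc")
    eigvals = hessian_matrix_eigvals(H_elems)      # shape (2, H, W), sorted desc.
    blobness = np.where(eigvals[1] < 0, -eigvals[1], 0.0)
    return eigvals, blobness
```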
N. Alsubaie, N. Trahearn, S. E. A. Raza, and N. M. Rajpoot, “A Discriminative Framework for Stain Deconvolution of Histopathology Images in the Maxwellian Space,” in Medical Image Understanding and Analysis, Jul. 2015, pp. 132–137. [Abstract] [doi]
Histopathology image analysis has received a lot of attention since the advent of whole slide scanners. Digitisation of tissue slides lends itself to the automation of histopathology image analysis algorithms such as mitotic cell detection, nuclei segmentation and hormone receptor scoring. Most of these algorithms depend on the stain expression of scanned tissue slides. However, different standards followed by different labs and the technical variations among different scanners result in stain colour inconsistency in histopathology images across different labs. Thus, applications that rely on stain colour intensity might fail when they are applied to images with a different colour appearance. In this paper, we present an effective method of stain deconvolution of histopathology images, which is a fast and reliable method of deriving the stain matrix. We propose a discriminative framework in the Maxwellian space to achieve reliable estimation of the stain matrix. We compare the proposed method with one of the state-of-the-art stain deconvolution methods and show that the proposed method estimates the stain matrix with high accuracy.
S. E. A. Raza, G. Prince, J. Clarkson, and N. M. Rajpoot, “Automatic Detection of Diseased Tomato Plants using Thermal and Stereo Visible Light Images,” PLoS One, Apr. 2015. [Abstract] [doi]
Accurate and timely detection of plant diseases can help mitigate the worldwide losses experienced by the horticulture and agriculture industries each year. Thermal imaging provides a fast and non-destructive way of scanning plants for diseased regions and has been used by various researchers to study the effect of disease on the thermal profile of a plant. However, the thermal image of a diseased plant is known to be affected by environmental conditions, including leaf angles and the depth of the canopy areas accessible to the thermal imaging camera. In this paper, we combine thermal and visible light image data with depth information and develop a machine learning system to remotely detect plants infected with the tomato powdery mildew fungus Oidium neolycopersici. We extract a novel feature set from the image data using local and global statistics and show that by combining these with the depth information, we can considerably improve the accuracy of detection of the diseased plants. In addition, we show that our novel feature set is capable of identifying plants which were not originally inoculated with the fungus at the start of the experiment but which subsequently developed disease through natural transmission.
S. E. A. Raza, M. Q. Marjan, M. Arif, F. Butt, F. Sultan, and N. M. Rajpoot, “Anisotropic tubular filtering for automatic detection of acid-fast bacilli in Ziehl-Neelsen stained sputum smear samples,” in SPIE Medical Imaging, Feb. 2015, vol. 9420, p. 942005. [Abstract] [doi]
One of the main factors for the high workload in pulmonary pathology in developing countries is the relatively large proportion of tuberculosis (TB) cases, which can be detected with high throughput using automated approaches. TB is caused by Mycobacterium tuberculosis, which appears as thin, rod-shaped acid-fast bacilli (AFB) in Ziehl-Neelsen (ZN) stained sputum smear samples. In this paper, we present an algorithm for automatic detection of AFB in digitized images of ZN stained sputum smear samples under a light microscope. A key component of the proposed algorithm is the enhancement of the raw input image using a novel anisotropic tubular filter (ATF) which suppresses the background noise while simultaneously enhancing the strong anisotropic features of AFBs present in the image. The resulting image is then segmented using color features and candidate AFBs are identified. Finally, a support vector machine classifier using morphological features from candidate AFBs decides whether a given image is AFB positive or not. We demonstrate the effectiveness of the proposed ATF method with two different feature sets by showing that the proposed image analysis pipeline results in higher accuracy and F1-score than the same pipeline with standard median filtering for image enhancement.
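The enhancement step, preferring elongated rod-like responses over isotropic noise, can be approximated with an oriented filter bank. The sketch below substitutes Gabor filters for the paper's anisotropic tubular filter, so the frequency and number of orientations are illustrative assumptions.

```python
import numpy as np
from skimage.filters import gabor

def enhance_bacilli(gray: np.ndarray, frequency: float = 0.25, n_orient: int = 8):
    """Enhance thin, rod-like structures via the maximum response of an
    oriented filter bank.

    The per-pixel maximum over orientations keeps elongated (bacillus-like)
    structures while suppressing isotropic background noise.
    """
    responses = []
    for theta in np.linspace(0, np.pi, n_orient, endpoint=False):
        real, _ = gabor(gray, frequency=frequency, theta=theta)
        responses.append(real)
    return np.max(responses, axis=0)
```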
A.M. Khan, S.E.A. Raza, M. Khan, et al., “Cell Phenotyping in Multi-Tag Fluorescent Bioimages,” Neurocomputing, Jun. 2014, vol. 134 no. 1 p. 254-261. [Abstract] [doi]
Multi-tag bioimaging systems have recently emerged as powerful tools which provide spatiotemporal localization of several different proteins in the same tissue specimen. The analysis of such multivariate bioimages requires sophisticated analytical methods that extract a molecular signature of various types of cells and assist in analyzing the interaction behaviors of functional protein complexes. Previous studies mainly focused on pixel-level analysis, which essentially ignores cellular structures as units; these can be crucial when analyzing cancerous cells. In this paper, we present a framework that overcomes these limitations by incorporating cell-level analysis. We use this framework to identify cell phenotypes based on their high-dimensional co-expression profiles contained within the images generated by the robotically controlled TIS microscope installed at Warwick. The proposed paradigm employs a refined cell segmentation algorithm followed by a locality preserving nonlinear embedding algorithm, which is shown to produce significantly better cell classification and phenotype distribution results as compared to its linear counterpart.
S. E. A. Raza, H. Smith, G. J. J. Clarkson, et al., “Automatic Detection of Regions in Spinach Canopies Responding to Soil Moisture Deficit Using Combined Visible and Thermal Imagery,” PLoS ONE, Jun. 2014, vol. 9, no. 6, p. e97612. [Abstract] [doi]
Thermal imaging has been used in the past for the remote detection of regions of a canopy showing symptoms of stress, including water deficit stress. Stress indices derived from thermal images have been used as an indicator of canopy water status, but these depend on the choice of reference surfaces and environmental conditions and can be confounded by variations in complex canopy structure. Therefore, in this work, instead of using stress indices, information from thermal and visible light imagery was combined with machine learning techniques to identify regions of canopy showing a response to soil water deficit. Thermal and visible light images of a spinach canopy with different levels of soil moisture were captured. Statistical measurements were extracted from these images and used to classify canopies as growing in well-watered soil or under soil moisture deficit, using Support Vector Machines (SVM), a Gaussian Process Classifier (GPC), and a combination of both classifiers. The classification results show a high correlation with soil moisture. We demonstrate that regions of a spinach crop responding to soil water deficit can be identified using machine learning techniques with a high accuracy of 97%. This method could, in principle, be applied to any crop at a range of scales.
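A minimal sketch of this pipeline, simple statistics from co-registered thermal and visible patches fed to an SVM and a Gaussian Process Classifier whose probabilities are averaged, is shown below. The feature list and the averaging rule are assumptions, not the paper's exact feature set or combination strategy.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.gaussian_process import GaussianProcessClassifier

def region_features(thermal_patch: np.ndarray, visible_patch: np.ndarray) -> np.ndarray:
    """Simple local statistics from co-registered thermal and visible patches."""
    def stats(a):
        return [a.mean(), a.std(), a.min(), a.max()]
    return np.array(stats(thermal_patch) + stats(visible_patch))

def train_combined(X: np.ndarray, y: np.ndarray):
    """Train an SVM and a GPC on region features and combine their probabilities."""
    svm = SVC(probability=True).fit(X, y)
    gpc = GaussianProcessClassifier().fit(X, y)

    def combined(X_new: np.ndarray) -> np.ndarray:
        # Average the two classifiers' class probabilities.
        return (svm.predict_proba(X_new) + gpc.predict_proba(X_new)) / 2.0

    return svm, gpc, combined
```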
S. E. A. Raza, A. Humayun, S. Abouna, et al., “RAMTaB: Robust Alignment of Multi-Tag Bioimages,” PLoS ONE, Feb. 2012, vol. 7, no. 2, p. e30894. [Abstract] [doi] [Software]
Background: In recent years, new microscopic imaging techniques have evolved to allow us to visualize several different proteins (or other biomolecules) in a visual field. Co-location of the proteins is necessary to analyze the molecular structure of a sample at each point under observation. We present a novel approach to align images in a multi-tag fluorescence image stack. The proposed approach is applicable to multi-tag bioimaging systems which (a) acquire fluorescence images by sequential staining and (b) simultaneously capture a phase contrast image corresponding to each of the fluorescence images. To the best of our knowledge, there is no existing method in the literature which addresses simultaneous registration of multi-tag bioimages and selection of the reference image in order to maximize the overall overlap between the images. Methodology/Principal Findings: We employ a block-based method for registration which yields a confidence measure to indicate the accuracy of our registration results. We derive a shift metric in order to select the Reference Image with Maximal Overlap (RIMO), in turn minimizing the total amount of non-overlapping signal for a given number of tags. Experimental results show that the RAMTaB framework is robust to variations in contrast and illumination, yields sub-pixel accuracy, and successfully selects the reference image resulting in maximum overlap. The registration results are also shown to significantly improve any follow-up protein co-localization studies. Conclusions: For the discovery of protein complexes and of functional protein networks within a cell, alignment of the tag images in the multi-tag fluorescence image stack is a key pre-processing step. The proposed framework is shown to produce accurate alignment results on both real and synthetic data. Our future work will use the aligned multi-channel fluorescent image data for normal and diseased tissue specimens to analyze molecular co-expression patterns and functional protein networks.
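A whole-image simplification of the registration and reference-selection steps is sketched below: translational shifts between phase-contrast images are estimated with phase correlation, and the reference is chosen to minimise the total shift magnitude (a rough analogue of RIMO). RAMTaB's block-based estimation and per-block confidence measure are omitted.

```python
import numpy as np
from skimage.registration import phase_cross_correlation

def pairwise_shifts(phase_images):
    """Estimate translational shifts between every pair of phase-contrast images."""
    n = len(phase_images)
    shifts = np.zeros((n, n, 2))
    for i in range(n):
        for j in range(n):
            if i != j:
                shift, _, _ = phase_cross_correlation(phase_images[i], phase_images[j])
                shifts[i, j] = shift
    return shifts

def select_reference(shifts: np.ndarray) -> int:
    """Pick the image whose total shift magnitude to all other images is smallest."""
    totals = np.abs(shifts).sum(axis=(1, 2))
    return int(np.argmin(totals))
```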
A. M. Khan, A. Humayun, S. E. A. Raza, et al., “A Novel Paradigm for Mining Cell Phenotypes in Multi-tag Bioimages Using a Locality Preserving Nonlinear Embedding,” in Proceedings of Neural Information Processing (ICONIP), Lecture Notes in Computer Science, vol. 7666, 2012. [Abstract] [doi]
Multi-tag bioimaging systems such as the toponome imaging system (TIS) require sophisticated analytical methods to extract molecular signatures of various types of cells. In this paper, we present a novel paradigm for mining cell phenotypes based on their high-dimensional co-expression profiles contained within the images generated by the robotically controlled TIS microscope installed at Warwick. The proposed paradigm employs a refined cell segmentation algorithm followed by a locality preserving nonlinear embedding algorithm which is shown to produce significantly better cell classification and phenotype distribution results as compared to its linear counterpart.
A. Humayun, S. E. A. Raza, C. Waddington, et al., “A Framework for Molecular Co-Expression Pattern Analysis in Multi-Channel Toponome Fluorescence Images,” in Proceedings of Microscopy Image Analysis with Applications in Biology (MIAAB), Sep. 2011, Heidelberg, Germany. [Abstract] [doi]
Bioimage computing is rapidly emerging as an important area in image-based systems biology, with an emphasis on the spatiotemporal localization of subcellular bio-molecules, most importantly proteins. A key problem in this domain is the analysis of protein co-localization or co-expression of protein molecules. Imaging techniques, such as the Toponome Imaging System (TIS), with the ability to localize several different proteins in the same tissue specimen, have only recently become available. Traditional co-localization studies and some of the modern co-expression studies have serious limitations when analyzing this kind of data. Here we present a framework for the analysis of molecular co-expression patterns (MCEPs) in TIS image data.