
Micro-Net: Fluorescence Cell Segmentation Dataset

Abstract

Object segmentation and structure localization are important steps in automated image analysis pipelines for microscopy images. We present a convolutional neural network (CNN) based deep learning architecture for segmentation of objects in microscopy images. The proposed network can be used to segment cells, nuclei, and glands in fluorescence microscopy and histology images after slight tuning of input parameters. The network trains at multiple resolutions of the input image, connects the intermediate layers for better localization and context, and generates the output using multi-resolution deconvolution filters. The extra convolutional layers, which bypass the max-pooling operation, allow the network to train for variable input intensities and object sizes, and make it robust to noisy data. We compare our results on publicly available datasets and show that the proposed network outperforms recent deep learning algorithms.
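One idea the abstract describes is feeding the network the same image at multiple resolutions through parallel input branches. A minimal sketch of how such a multi-resolution input pyramid could be built is shown below, using simple average pooling in NumPy; the function names and the pooling choice are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def downsample(img, factor):
    """Average-pool a 2-D image by an integer factor -- a simple stand-in
    for the resizing used to build multi-resolution inputs."""
    h, w = img.shape
    h2, w2 = h // factor, w // factor
    cropped = img[:h2 * factor, :w2 * factor]
    return cropped.reshape(h2, factor, w2, factor).mean(axis=(1, 3))

def multi_resolution_inputs(img, n_levels=4):
    """Return the image at successively halved resolutions, one per
    input branch of a hypothetical multi-resolution network."""
    return [downsample(img, 2 ** k) for k in range(n_levels)]

img = np.random.rand(256, 256)
pyramid = multi_resolution_inputs(img)
print([p.shape for p in pyramid])  # [(256, 256), (128, 128), (64, 64), (32, 32)]
```

In practice a framework's built-in resize or pooling layers would be used instead; the point is only that each branch receives the same field of view at a coarser scale, giving the network context at several object sizes.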

Highlights

  • A unified deep learning framework for segmentation of objects (cell nuclei, cells, and multi-cellular objects such as glandular structures) in two main types of microscopy images: fluorescence and histology.
  • The proposed Micro-Net is aimed at better object localization in the face of varying intensities and is robust to noise.
  • Detailed experimentation and comparative evaluation on publicly available datasets and a new image dataset that is made public with this paper.
  • Demonstration of robustness of the algorithm to high levels of noise.

Keywords

Cell segmentation; Nuclear segmentation; Gland segmentation; Convolutional neural networks; Microscopy image analysis; Digital pathology

Publication

S.E.A. Raza, L. Cheung, M. Shaban, S. Graham, D. Epstein, S. Pelengaris, M. Khan, and N. M. Rajpoot. "Micro-Net: A unified model for segmentation of various objects in microscopy images." Medical Image Analysis vol. 52, pp. 160–173, Feb. 2019. [doi]

Dataset

Please download the dataset from this link.