Review

Machine Learning Methods for Histopathological Image Analysis: A Review

by Jonathan de Matos 1,2,*, Steve Tsham Mpinda Ataky 1, Alceu de Souza Britto 2,3, Luiz Eduardo Soares de Oliveira 4 and Alessandro Lameiras Koerich 1

1 École de Technologie Supérieure, Université du Québec, Montréal, QC H3C 1K3, Canada
2 Department of Informatics, State University of Ponta Grossa, Ponta Grossa, PR 84030-900, Brazil
3 Department of Computer Science, Pontifical Catholic University of Paraná, Curitiba, PR 80215-901, Brazil
4 Department of Informatics, Federal University of Paraná, Curitiba, PR 81531-990, Brazil
* Author to whom correspondence should be addressed.
Electronics 2021, 10(5), 562; https://doi.org/10.3390/electronics10050562
Submission received: 1 January 2021 / Revised: 15 February 2021 / Accepted: 19 February 2021 / Published: 27 February 2021

Abstract:
Histopathological images (HIs) are the gold standard for evaluating some types of tumors for cancer diagnosis. The analysis of such images is time- and resource-consuming and very challenging even for experienced pathologists, resulting in inter-observer and intra-observer disagreements. One way to accelerate such analysis is to use computer-aided diagnosis (CAD) systems. This paper presents a review of machine learning methods for histopathological image analysis, including shallow and deep learning methods. We also cover the most common tasks in HI analysis, such as segmentation and feature extraction. In addition, we present a list of publicly available and private datasets that have been used in HI research.

1. Introduction

Current hardware capabilities and computing technologies enable computers to solve problems in many fields. The medical field is a notable example, employing these technologies to improve population health and quality of life; medical computer-aided diagnosis is one such application. Among image-based diagnostic modalities, magnetic resonance imaging (MRI), X-rays, computed tomography (CT), and ultrasound have been attracting growing interest from scientists and academics. Likewise, histopathological images (HIs) are another kind of medical image, obtained through microscopy of tissue from biopsies, which allows specialists to observe tissue characteristics at the cell level (Figure 1).
Cancer is a disease with high mortality rates in both developed and developing countries. In addition to causing deaths, its treatment costs are high and impact public and private healthcare systems, penalizing both governments and populations. According to Torre et al. [1], the mortality rate in high-income countries is stabilizing or even decreasing thanks to programs that reduce risk factors (e.g., smoking, excess weight, physical inactivity) and to treatment improvements. In low- and middle-income countries, mortality rates are rising due to an increase in risk factors. One of the critical points of progress in treatment is the early detection of tumors. In fact, in 140 out of 184 countries, breast cancer is the most prevalent type of cancer among women [2]. Imaging exams like mammography, ultrasound, or CT can detect masses growing in breast tissue, but confirmation of the tumor type can only be accomplished through a biopsy. Biopsies, in turn, take longer to provide a result because of the acquisition procedure (e.g., fine-needle aspiration or open surgical biopsy), tissue processing (slide preparation and staining), and, finally, the pathologist's visual analysis. Pathologist analysis is a highly specialized and time-consuming task prone to inter- and intra-observer discordance [3].
Furthermore, the staining process can add variance to the analysis. Hematoxylin and eosin (H&E) is the most common and accessible type of stain, but it can produce different color intensities depending on the brand, storage time, and temperature. Therefore, computer-aided diagnosis (CAD) can increase pathologists' throughput and improve the confidence of results by adding reproducibility to the diagnosis process and reducing observer subjectivity.
The observation of nuclei is essential in cancer diagnosis. Tumors like ductal carcinoma and lobular carcinoma present irregular growth of epithelial cells. A high number of nuclei or mitotic cells in a small region can indicate uneven tissue growth, which may represent a tumor. An HI can capture this feature, but besides the nuclei, it also captures other healthy tissue that can appear in images of benign tumors. Stroma, for instance, is a type of tissue that shows the same characteristics in parts of both malignant and benign images. Selecting more relevant patches could therefore improve the classification process.
In recent years, we have seen increasing use of machine learning (ML) methods in CAD and HI analysis. ML methods have been used to diagnose cancer in different tissues and organs, such as the breast, prostate, skin, brain, bones, and liver, and they have been widely applied to the segmentation, feature extraction, and classification of HIs. HIs have rich geometric structures and complex textures, which differ from the visual characteristics of the macro-vision images used in other ML tasks such as object recognition, face recognition, scene reconstruction, or event detection.
This review attempts to capture the last decade's most relevant works that employ ML methods for HI analysis. We present a comprehensive overview of ML methods for HI analysis, including segmentation, feature extraction, and classification. The motivation is to understand the development and use of ML methods in HI analysis and to discover their future potential. Furthermore, this review aims to address the following three research questions:
  • Which ML methods have been used for HI classification, and how are HIs provided to them (raw images, preprocessed images, or extracted features)? This question aims at identifying which monolithic classifiers, ensembles of classifiers, or DL methods have been frequently used to classify HIs.
  • Which elements of HIs are considered the most important, and how are they obtained? This question aims at identifying which types of tissues or structures can be identified using ML methods.
  • What trends have been dominating HI analysis? This question aims to identify the most promising ML methods for HI analysis in the near future.
The main contributions of this paper are: (i) it covers a period of exponential change in computer vision, from handcrafted features to representation learning methods; (ii) it is a comprehensive review that does not focus on HIs of specific tissues or organs; (iii) it categorizes the works according to the task: segmentation, feature extraction, classification, and representation learning and classification. Several survey and review papers related to HI analysis are presented in Section 7. Unlike previous studies and surveys that focus only on HIs of specific tissues or organs or on a single learning modality (supervised, unsupervised, or DL techniques), this review covers different approaches, methodologies, datasets, and experimental results so that readers can identify possible opportunities for future research in HI analysis.
This paper is organized as follows. Section 2 proposes a taxonomy to categorize the ML methods used on HIs and gives an overview of the process of selecting journals and proceedings. Section 3 presents the segmentation methods that attempt to identify important structures in HIs, which may aid diagnosis. Section 4 presents the feature extraction methods that have been used to represent HIs for further classification. Section 5 presents the shallow methods that have been used for classifying the main types of tissues and tumors in HIs. Given the importance of and growing interest in DL methods, Section 6 is devoted to presenting the recent approaches for HI analysis that employ such methods. Section 7 brings together other studies and survey papers that have been published recently, together with a compilation of several HI datasets used in the last decade. Finally, the last section presents the conclusions and perspectives for future work.

2. Taxonomy and Overview

Based on the three research questions presented previously, we created the search query ((histology AND image) or (histopathology AND image) or (eosin AND hematoxylin)) and (("machine learning") or ("artificial intelligence") or ("image processing")), which was slightly adapted to each search engine. We searched for references covering the period between 2008 and 2020 in five research portals (engines): IEEE Xplore, ACM Digital Library, Science Direct, Web of Science, and Scopus. Table 1 presents the number of results obtained with the search query. We searched the title, abstract, and keywords in all search engines except Science Direct, where we added a full-text search because the number of relevant works was otherwise very low.
Based on these results, the first exclusion criterion was applied to the title and abstract. Most of the exclusions in this step were papers that mentioned "image processing" in the text but used the term in the sense of digitizing HIs for visual analysis by pathologists. The presence of the terms eosin, hematoxylin, and histopathology was another criterion used to exclude other types of medical images, such as CT, MRI, or radiology images, which are out of the scope of this review. Finally, we eliminated duplicated articles and ended up with 363 articles. The second exclusion criterion was based on a full-text reading to evaluate each paper's adherence to the goal of this review. This criterion excluded almost 50% of the papers retained by the first filtering, leaving 185 articles. Besides the papers selected using the search query, we also included several papers related to the HI datasets used in many of the references cited in this review, as well as some other references that discuss specific ML methods and techniques referred to in many of the selected references.
This review focuses on ML methods for HI analysis. Therefore, we categorized the ML methods according to the most common ML tasks, as shown in Figure 2. The top-level categories are segmentation, feature extraction, shallow methods, and deep methods. Although DL approaches can be employed for both segmentation and classification, we propose this division to highlight how the recent advances in DL have impacted HI analysis research, causing a paradigm shift from traditional ML methods to DL methods.
Segmentation of HIs was a trendy category during the first years covered by this review. Most of the works were based on image processing techniques, such as filtering, thresholding, and contour detection, while others rely on ML methods, such as classification and unsupervised learning at the pixel level. Besides, since annotation for segmentation is a very time-consuming task, it is common to find unsupervised methods alongside supervised ones. Most of the early works used segmentation to highlight information in HIs for specialists. Feature extraction aims at finding discriminative characteristics in HIs and aggregating them into a feature vector used to train ML algorithms. Most shallow classifiers and ensemble methods use such a vector representation to learn linear or non-linear decision boundaries. We divided the category of shallow methods into two subcategories: monolithic classifiers and ensemble methods. Ensemble methods combine several diverse base models to reduce bias and variance and improve prediction accuracy. The works in both subcategories require a previous feature extraction step.
Finally, deep methods also include works focused on supervised and unsupervised learning of different architectures of deep neural networks. Most of the works within this category are end-to-end learning approaches, which integrate representation learning and decision-making.
The number of publications related to the field of this research is presented in Figure 3, from which it is possible to notice that research on the topic has been increasing in the last few years. It is also possible to note a significant increase in DL methods, while ensembles and feature extraction kept steady rates. Table 2 shows the number of publications per journal between 2008 and 2020, and Table 3 shows the top 15 journals in terms of the number of publications.

3. Segmentation Methods for HIs

Typically, pathologists look for tissue regions relevant to the diagnosis of diseases. HI segmentation usually aims to label regions of pixels according to the structure they represent. For instance, the identification of nucleus structures can be used to extract morphological features, such as the number of nuclei per region and their size and shape, which may help diagnose a tumor. The main challenges in HI segmentation lie in the segmentation of low-level and high-level structures. The former focuses on nuclei and was the focus of early works, which usually aimed to identify mitosis and pleomorphism, mainly due to hardware limitations in loading and processing high-resolution HIs. On the other hand, recent works have focused on high-level segmentation, aiming to identify tissue types in high-resolution HIs. Besides, larger datasets focusing on high-level structures have appeared in recent years, such as the ICIAR BACH Challenge dataset. Finally, we can also highlight segmentation based on the stain color, usually employing color space manipulation, image processing methods, and low-cost machine learning algorithms.
This section presents several approaches for segmenting HIs, most of which are based on either supervised or unsupervised ML methods. The former requires HI datasets with region annotations, while the latter does not require any annotation.

3.1. Unsupervised Approaches

The k-means algorithm is an unsupervised ML method for clustering that has been used for the segmentation of pixel regions. In the context of this review, it represents the core of fourteen segmentation methods, as shown in Table 4. Fatakdawala et al. [5] proposed a methodology based on expectation-maximization of geodesic active contours for detecting lymphocyte nuclei, which can identify four structures: lymphocyte nuclei, stroma, cancer nuclei, and background. The process starts with segmentation by a k-means algorithm, which clusters pixels of similar intensities; such clusters are then refined with an expectation-maximization algorithm. The contours are identified based on magnetic interaction theory. After the definition of contours, an algorithm searches for contour concavities, which indicate overlapping nuclei. The experiments were conducted on a breast cancer dataset. Multiscale segmentation with k-means is the subject of study of Roullier et al. [6]. This work mimics how a pathologist analyzes a whole slide image (WSI): the segmentation starts at a lower magnification factor and finishes at a higher magnification, where it is easier to identify mitotic cells. The clustering at each magnification aims to identify regions of interest. Rahmadwati et al. [7] employed the k-means algorithm to help classify HIs; although the focus is not on k-means but on Gabor filters, this clustering method is essential in their segmentation process. Peng et al. [8] used k-means and principal component analysis (PCA) to split HIs into four types of structures: glandular lumen, stroma, epithelial-cell cytoplasm, and cell nuclei. Subsequently, morphological closing and filling operations are performed. He et al. [9] used a mixture of local region-scalable fitting and k-means to segment cervix HIs. Fatima et al. [10] used k-means for segmentation, followed by skeletonization and shock graphs to identify nuclei in the segmented image. If the shock graph provides a confidence value smaller than 0.5 for nucleus identification, a second identification attempt is made using a multilayer perceptron (MLP). This hybrid approach achieves an accuracy of 92.5% in nucleus identification.
Mazo et al. [11] also used k-means, to segment cardiac images into three classes: connective tissues, light areas, and epithelial tissue. A flooding algorithm processes light areas and merges its result with epithelial regions to improve the final result. Finally, the plurality rule was used to classify cells as flat, cubic, or cylindrical. This method achieved a sensitivity of 85% and was extended in Mazo et al. [12]. Tosun et al. [13] proposed a segmentation based on k-means that clusters all pixels into three categories (purple, pink, white), which are further divided into three subcategories each. The object-level segmentation based on clustering achieved an accuracy of 94.89% against 86.78% for pixel-level segmentation. Nativ et al. [14] presented a k-means clustering based on morphological features of lipid droplets previously segmented using active contour models. A decision tree (DT) was used to verify the rules that lead to the classes obtained by the clustering. The correlation with pathologist evaluations reached 97%. A two-step k-means is used by Shi et al. [15] to segment follicular lymphoma HIs. The first step segments nuclei and other types of tissue into two clusters. The next step segments the "other tissue" area from the previous step into three classes (nuclei, cytoplasm, and extracellular spaces). The final step is a watershed algorithm that extracts better nucleus contours. The difference between manual and automated segmentation was around 1%. Brieu et al. [16] presented a segmentation approach based on k-means whose result is refined and simplified using a sequence of thresholds that attempt to preserve object shapes; the key point of the method is not the segmentation itself but nucleus detection. Shi et al. [17] used k-means to cluster pixels represented in the CIELAB color space using pixel neighborhood statistics. A thresholding step improves the contour detection of fat droplets, and human specialists analyze morphological information related to the droplets to reach a diagnosis. Shi et al. [15] also proposed a segmentation method that considers the local correlation of each pixel. A first clustering performed by k-means generates a poorly segmented cytoplasm, so a second clustering that excludes the nuclei identified by the first one is performed. Finally, a watershed transform is applied to complete the segmentation.
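To make the core of these approaches concrete, the sketch below clusters HI pixels in the CIELAB color space with k-means, in the spirit of the pixel-level clustering used by several of the works above (e.g., [8,17]). It is a minimal illustration, not the pipeline of any specific paper: the file name, the choice of color space, and the number of clusters are assumptions.

```python
# Minimal sketch: k-means clustering of HI pixels in CIELAB space.
# The image path and k=4 (e.g., lumen, stroma, cytoplasm, nuclei)
# are illustrative assumptions, not parameters from a cited work.
import numpy as np
from skimage import io, color
from sklearn.cluster import KMeans

img = io.imread("he_patch.png")[..., :3]          # RGB H&E patch (hypothetical file)
lab = color.rgb2lab(img)                          # perceptually more uniform space
pixels = lab.reshape(-1, 3)

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(pixels)
segmentation = km.labels_.reshape(img.shape[:2])  # one cluster id per pixel

# Each cluster can then be post-processed (morphology, watershed, etc.)
# to isolate structures such as nuclei before feature extraction.
```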
Other clustering algorithms have also been used to segment HIs. The work proposed by Liu et al. [18] used the iterative self-organizing data analysis technique (ISODATA) to cluster cell images and create prototypes. Hafiane et al. [19] studied two strategies for initializing clustering methods: geodesic active contours and multi-phase vector level sets. The latter proved more efficient when using spatially constrained fuzzy c-means, with accuracy values of 68.1% and 67.9%, respectively, while k-means achieved 60.6% in this case. He et al. [20] presented a segmentation based on Gaussian mixture models. Their methodology uses the stain color features (hematoxylin in blue, eosin in pink and red) to apply two segmentation steps, first on the red channel and then on the other channels. No ground-truth comparison is presented, only visual results compared to k-means. The work in [21] presented a quasi-supervised approach based on nearest neighbors to cluster an unlabeled dataset using the dataset itself and another labeled dataset. A comparison between the quasi-supervised approach and a support vector machine (SVM) showed that the SVM performs better, but it requires labeled data. Yang et al. [22] proposed a content retrieval system based on a three-step method that uses histogram features. The first two steps use histogram dissimilarity measures to find candidate images; the last step uses mean shift clustering. The area under the curve (AUC) of the proposed method is 0.87, better than the 0.84 achieved by a method based on local binary pattern (LBP) features. A mitotic cell detection system using a dictionary of cells is presented by Sirinukunwattana et al. [23]. A shrinkage/thresholding method groups intensity features represented by sparse coding to create a dictionary. This method achieved F-scores of 80.5% and 77.9% on the Aperio and Hamamatsu subsets of the MITOS dataset, respectively. Huang [24] proposed a semi-supervised method based on exclusive component analysis (XCA) that uses stain separation to improve performance. This method needs a small amount of user interaction: the user must provide a set of reference pixels from nuclei and from the cytoplasm. Finally, it is worth mentioning that unsupervised methods based on DL approaches have also been proposed for segmenting HIs; we present some recent works in Section 6.
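A Gaussian mixture can replace k-means when soft assignments are desired, loosely in the spirit of [20]. The sketch below fits a two-component mixture to the red channel; the two-component assumption (hematoxylin-rich versus eosin-rich pixels) and the file name are illustrative, not details from the cited work.

```python
# Sketch: Gaussian-mixture segmentation on the red channel.
# Two components (hematoxylin-rich vs. eosin-rich) are an assumption.
import numpy as np
from skimage import io
from sklearn.mixture import GaussianMixture

img = io.imread("he_patch.png")[..., :3].astype(float) / 255.0
red = img[..., 0].reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(red)
labels = gmm.predict(red).reshape(img.shape[:2])
posteriors = gmm.predict_proba(red)  # soft assignments, unlike k-means
```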

3.2. Supervised Approaches

In this section, we present works related to HI segmentation that are based on supervised ML approaches. Most of the works presented in this section are based on classification algorithms and therefore require labeled datasets in which pixels or pixel regions are annotated. Table 5 summarizes the recent publications on supervised ML methods used for segmentation; eight out of fourteen works are based on SVM classifiers.
Yu and Ip [25] presented an approach that encodes HIs using a patching procedure and a method called the spatial hidden Markov model (SHMM). Each patch is represented by a feature vector that mixes Gabor energy and gray-level features. The SHMM showed improvements of 4% to 17% on multiple tissues in comparison with a hidden Markov model. The work of Arteta et al. [26] uses the concept of extremal regions on gray-scale images to identify nuclei in HIs. To identify the threshold of extremal regions, which are organized in an overlap tree, they used an SVM classifier. This approach achieved an F1-score of 88.5% against 69.8% for the state of the art, considering the number of cells found after segmentation. Janssens et al. [27] presented a segmentation procedure to identify muscular cells. First, a threshold-based segmentation identifies connective tissue and cells. Then, an SVM receives the segmented regions and classifies them recursively into three classes (connective tissue, clumps of cells, and cells) until only connective and cell tissues remain. This approach achieved an F-score of 62%, which was the state of the art at that time. Saraswat and Arya [28] proposed a segmentation procedure with a non-dominated sorting genetic algorithm (NSGA-II) and a threshold classifier. The NSGA-II generates thresholds for feature values from ground-truth images, and the comparison between the learned thresholds and feature values generates the segmentation. Breast cancer prognosis is the subject of the study by Qu et al. [29]. They used an SVM to perform pixel-wise classification to separate nuclei from stroma; a second step based on a watershed algorithm then identifies nuclei. The approach achieved an accuracy of 72% using pixel-level, object-level, and semantic-level features. Salman et al. [30] proposed a segmentation method based on k-NN to analyze WSIs. The method computes histograms from patches of 64 × 64 pixels extracted from the H&E channels obtained by color deconvolution. The best accuracy was 73.2%, using histograms of both H&E channels. Chen et al. [31] proposed a method based on pixel-wise SVM to identify stroma and tumor nests. Nucleus segmentation is carried out by a watershed algorithm, which results in 314 object-level features and 16 semantic-level features; the feature dimensionality was reduced by analyzing feature importance. Geessink et al. [32] used a normal density-based quadratic discriminant classifier (QDA) to segment colorectal images. The segmentation uses the CIELAB color space with a threshold to eliminate background pixels and the HSV color space to classify the remaining pixels. After classification, errors are corrected based on histological constraints. The algorithm produced an error rate of 0.6% for tumor quantification, which, according to the authors, is lower than the error of pathologists (4.4%).
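The color deconvolution plus patch histogram idea used by Salman et al. [30] can be sketched with scikit-image's built-in H&E deconvolution. The patch size matches the 64 × 64 pixels reported above, but the bin count and the non-overlapping tiling are assumptions of ours, and the resulting vectors would feed a k-NN or any shallow classifier.

```python
# Sketch: H&E color deconvolution followed by per-patch channel
# histograms, in the spirit of [30]; bin count and tiling are
# illustrative assumptions.
import numpy as np
from skimage import io
from skimage.color import rgb2hed

def patch_histograms(img, patch=64, bins=32):
    hed = rgb2hed(img[..., :3])   # hematoxylin, eosin, DAB channels
    h, w = hed.shape[:2]
    feats = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            block = hed[y:y + patch, x:x + patch]
            hh, _ = np.histogram(block[..., 0], bins=bins)  # hematoxylin
            he, _ = np.histogram(block[..., 1], bins=bins)  # eosin
            feats.append(np.concatenate([hh, he]))
    return np.asarray(feats)

X = patch_histograms(io.imread("wsi_tile.png"))  # hypothetical tile
```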
Zarella et al. [33] trained an SVM to distinguish stained from unstained pixels. For this purpose, they manually selected positively and negatively stained pixels from a set of representative images in HSV color space. The SVM identifies regions of interest for further analysis. Santamaria-Pang et al. [34] proposed an algorithm to enhance general segmentation methods by using a cell-shape ranking function. The shapes of the cells detected by the watershed transform are used to train an SVM, which discriminates real cells from false positives.
Wang et al. [35] proposed the use of wavelet decomposition, region growing, a double-strategy splitting model, and curvature scale space to highlight nucleus regions for further classification. Textural and shape features are extracted from nuclei, and feature selection is carried out based on genetic algorithms and an SVM. The best results were 91.5% sensitivity and 91.6% specificity. Arteta et al. [36] improved the post-processing step of the method proposed in Arteta et al. [26]: nucleus regions are refined using a surface, and two nucleus regions have their optimal area defined by a smoothness factor. The improvement yielded an F1-score of 91% on the same dataset. A nucleus segmentation was proposed by Brieu and Schmidt [37] based on an adaptive neighborhood provided by a regression tree. A comparison showed an improvement of 9% relative to nucleus segmentation without adaptive thresholding. Finally, Song et al. [38] presented nuclear segmentation as a cascade of two-class classification problems. An effective learning formulation was proposed by adapting sparse convolutional models across the different layers to estimate latent morphology information. To improve the region probabilities, low-level appearance and high-level contextual features, extracted from the original images and the estimated probability maps, respectively, are integrated into a new sequence of probabilistic binary DTs. The outcome is a reliable contour set for each nucleus and complete final contour inferences. Experimental results over 26,500 nuclei from the Farsight, KIRC, and Kumar datasets showed that the proposed method outperformed other automated segmentation approaches. Again, it is worth mentioning that supervised methods based on DL approaches have also been proposed for segmenting HIs; we present such recent works in Section 6.
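Since several works above use a watershed step to separate touching nuclei, the following sketch shows the classic Otsu plus distance-transform plus watershed pipeline. It is a generic building block under assumptions of ours (all thresholds, minimum sizes, and the file name are illustrative), not the exact procedure of any cited paper.

```python
# Sketch: Otsu thresholding + distance transform + marker-based
# watershed for nucleus separation; parameters are illustrative.
import numpy as np
from scipy import ndimage as ndi
from skimage import io, color, filters, morphology, segmentation
from skimage.feature import peak_local_max

gray = color.rgb2gray(io.imread("he_patch.png")[..., :3])
nuclei = gray < filters.threshold_otsu(gray)        # nuclei stain darker
nuclei = morphology.remove_small_objects(nuclei, min_size=30)

distance = ndi.distance_transform_edt(nuclei)
coords = peak_local_max(distance, min_distance=7, labels=nuclei)
markers = np.zeros(distance.shape, dtype=int)
markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)

labels = segmentation.watershed(-distance, markers, mask=nuclei)
# `labels` assigns one integer id per separated nucleus.
```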
Despite intense research efforts, segmentation of high-resolution HIs, such as WSIs, is still challenging. However, increasing computational power, especially the massive parallelism of graphics cards that made deep learning models possible, will also drive new segmentation algorithms. Progress also depends on the increasing availability of public datasets, which allows researchers to work on the topic and even enables the transfer of knowledge about tissues and other structures between datasets.

4. Feature Extraction for HIs

Supervised shallow methods depend on features extracted from raw data before performing classification. Feature extraction methods process images and provide a reasonable number of features summarizing the image’s information. Such methods aim to reduce the dimensionality of the image and highlight relevant information related to the problem, such as the presence or absence of specific structures, number of individual elements, texture, and shapes of structures. Ideally, features should be independent of translation, scale, and rotation.
The main challenges in extracting features from HIs are the extraction of morphological characteristics from the structures present in such images and the search for higher-level representations that capture information relevant to medical diagnosis. The morphological characteristics are associated with identifying cellular changes, such as nuclei deformed by disease or undergoing mitosis, or tissue changes, such as abnormal cell density or quantity. They mirror the way pathologists analyze HIs, looking for specific justifications for categorizing an HI. On the other hand, high-level features are abstractions of all structures in HIs, not only the cell structures; for this reason, researchers usually exploit texture descriptors or representations in the frequency domain. Several kinds of features have been used with HIs, such as shape, size, texture, and fractal features, or even combinations of these. Table 6 summarizes the articles related to feature extraction.
Object-level and morphometric features like shape and size are essential for disease grading and diagnosis. Ballaro et al. [39] proposed the segmentation of HIs to identify unhealthy or healthy megakaryocytes, the structures from which morphometric features are extracted. Petushi et al. [40] employed the Otsu algorithm to highlight nuclei and then extracted features such as inside radial contact, inside line contact, area, perimeter, area-perimeter ratio, curvature, aspect ratio, and major axis alignment. Feature vectors are built by concatenating histograms of all these features. Madabhushi et al. [41] presented an approach for predicting disease outcome from multiple modalities, including MRI, digital pathology, and protein expression. For HIs, they used graph-based features such as the Voronoi diagram (total area of all polygons, polygon area, polygon perimeter, polygon chord length), Delaunay triangulation (triangle side length, triangle area), minimum spanning tree (edge length), and nuclear statistics (density of nuclei, distance to the nearest nuclei within different pixel radii) to represent the spatial arrangement of nuclei. Song et al. [42] applied thresholding and the watershed transform to extract features like cystic cytoplasm length, cystic mucin production, and cystic cell density, which are used to train different classifiers. The experimental results showed that these three features outperformed morphological features (shape and size), achieving an accuracy of 90% against 64%; notably, combining them with the morphological features achieved only 85%. Gorelick et al. [43] used a segmentation step to identify superpixels for prostate cancer detection and classification; morphometric and geometric features represent the segmented images. The cytological analysis and breast cancer diagnosis framework presented by Filipczuk et al. [44] employed morphometric features: after isolating nuclei from the images, they calculated, for each nucleus, the area, perimeter, eccentricity, major and minor axis lengths, luminance mean and variance, and distance to the centroid of all nuclei. Ozolek et al. [45] performed the classification of follicular lesions in thyroid tissue. After a preprocessing step for nucleus segmentation, the chromatin texture of nuclei, represented with linear optimal transport, provides features for the final classification.
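Most of these morphometric descriptors can be computed directly from a binary nucleus mask with scikit-image's region properties. The sketch below mirrors the properties listed for [44], but the exact property set is our assumption for illustration.

```python
# Sketch: object-level morphometric features from a binary nucleus
# mask (e.g., the watershed output above); property set is illustrative.
import numpy as np
from skimage.measure import label, regionprops

def morphometric_features(nuclei_mask):
    feats = []
    for p in regionprops(label(nuclei_mask)):
        feats.append([p.area, p.perimeter, p.eccentricity,
                      p.major_axis_length, p.minor_axis_length])
    return np.asarray(feats)   # one row per detected nucleus
```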
Fukuma et al. [46] compared spatial-level and object-level descriptors such as Voronoi tessellation, Delaunay triangulation, minimum spanning tree, elliptical, convex hull, bounding box, and boundary features. Object-level features reached an accuracy of 99.07% in the best case against 82.88% achieved by the spatial ones. Morphometric features can also be extracted from other structures like glands, which are easier to identify due to the difference between the lumen and other cellular structures. This is the subject of the work presented by Loeffler et al. [47], which uses inverse compactness and inverse solidity as measures of gland alteration in prostate cancer. The features were obtained from the area (object and convex hull area) and perimeter of threshold-highlighted objects. Marugame et al. [48] used morphometric features extracted from image objects indicating nuclear aggregations to represent three categories of ductal carcinomas in breast HIs; the number of pixels, length, and thickness of the objects reflect their size and shape. Osborne et al. [49] employed four geometrical features extracted from nuclei after segmentation for melanoma diagnosis in skin HIs: the ratio of the area of nuclei to the area of cytoplasm, the ratio of the perimeter of a nucleus to its area, the ratio of the major axis length of a nucleus to its minor axis length, and the ratio of the number of nuclei to the area of cytoplasm. The multi-view approach to detect prostate cancer presented by Kwak and Hewitt [50] extracted morphological and intensity features at multiple resolutions. Features like area, compactness, smoothness, roundness, convex hull ratio, major-minor axis ratio, extent, bounding circle ratio, distortion, and shape context are extracted from lumens and epithelial nuclei, as well as relational features between them. Olgun et al. [51] introduced a feature extractor for HIs based on the local distributions of objects segmented by color intensity; it measures the distance between an object and its neighborhood. The proposed method outperformed thirteen other methods that use textural and structural features.
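Graph-based spatial descriptors like those compared in [46] can be sketched from nucleus centroids with a Delaunay triangulation. The edge-length statistics chosen below are an assumption for illustration; the cited works use richer feature sets.

```python
# Sketch: spatial features from nucleus centroids via Delaunay
# triangulation (cf. [41,46]); summary statistics are illustrative.
import numpy as np
from scipy.spatial import Delaunay

def delaunay_edge_stats(centroids):
    tri = Delaunay(centroids)         # needs >= 3 non-collinear points
    edges = set()
    for s in tri.simplices:           # each simplex is a triangle
        for i in range(3):
            edges.add(tuple(sorted((s[i], s[(i + 1) % 3]))))
    lengths = [np.linalg.norm(centroids[a] - centroids[b])
               for a, b in edges]
    return np.array([np.mean(lengths), np.std(lengths),
                     np.min(lengths), np.max(lengths)])
```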
Texture descriptors have become quite popular in HI analysis due to the different types of textures found in HIs: for instance, high and low concentrations of nuclei and stroma present quite different texture patterns. For this reason, several researchers have investigated a broad spectrum of textural descriptors for HI classification. Several authors have used descriptors based on the gray-level co-occurrence matrix (GLCM) to represent textures in HIs. Kuse et al. [52] used GLCM features with a pre-segmentation process based on unsupervised mean-shift clustering, which reduces color variety so that the image can be segmented using thresholds. After this process, nuclei are identified, and overlaps are removed through contour and area restrictions. Finally, GLCM features are extracted from the segmented image and used for classification. Caicedo et al. [53] combined seven feature extraction methods, including GLCM, and created a kernel-based representation of the data for each feature type. Kernels are used inside an SVM to find similarities between data and to implement a content retrieval mechanism. Fernández-Carrobles et al. [54] presented a feature extraction method based on frequency and spatial textons, which implies that a reduced vocabulary of textures represents the images. The features used for classification are histograms of textons and GLCM features extracted from texton maps. They also evaluated the impact of different colormaps on these features; the combination of six color models and GLCM for textons achieved the best classification results (98.1%). Note that the conversion of H&E color images to gray level is affected by staining color variability, which in turn affects the GLCM.
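A minimal GLCM descriptor can be computed with scikit-image as sketched below; the distances, angles, and the four Haralick-style properties are standard but illustrative choices of ours, not the settings of any cited work.

```python
# Sketch: GLCM texture features with scikit-image; distances, angles,
# and property set are illustrative assumptions.
import numpy as np
from skimage import io, color, img_as_ubyte
from skimage.feature import graycomatrix, graycoprops

gray = img_as_ubyte(color.rgb2gray(io.imread("he_patch.png")[..., :3]))
glcm = graycomatrix(gray, distances=[1, 2],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=256, symmetric=True, normed=True)
feats = np.hstack([graycoprops(glcm, prop).ravel()
                   for prop in ("contrast", "homogeneity",
                                "energy", "correlation")])
```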
Another descriptor very often used to represent texture is the local binary pattern (LBP). Mazo et al. [12] proposed the classification of cardiac tissues into five categories using a patching approach that aims to optimize the patch size to improve the representation. Texture in HIs was described using LBP, rotation-invariant LBP (LBPri), Haralick features, and different concatenations thereof. Haralick features include contrast, angular second moment, homogeneity, correlation, entropy, and the first and second correlation measures. Peyret et al. [55] applied LBP to multispectral HIs. They used an SVM to evaluate the proposed LBP operator, which aligns all spectra and uses pixels from all other bands, and also uses a multiscale kernel size. This feature extractor reached 99% of accuracy, compared with 88.3% for the standard LBP and 95.8% for the concatenated-spectra LBP. Bruno et al. [56] used a curvelet transform to handle multiscale HIs; the LBP algorithm extracts features from curvelet coefficients, which are reduced by an ANOVA analysis. The algorithm proposed by Phoulady et al. [57] uses adaptive and iterative thresholding to find the nuclei area and extracts texture information using LBP and histograms of oriented gradients (HOG). The proposed method achieved 93.3% of accuracy against 92.3% for the second-best method. The work presented by Reis et al. [58] focused on stroma maturity to evaluate breast cancer. The stroma features are basic image features (BIFs), obtained by convolving images with a bank of derivative-of-Gaussian filters, and LBP with multiple neighborhood scales. Gertych et al. [59] presented a system for prostate cancer classification that also uses LBP features; the best accuracy was 68.4% for cancer detection. Balazsi et al. [60] presented an invasive ductal breast carcinoma detector that extracts patches by tessellation without the square-shape constraint. A set of 16,128 features derived from multiple histograms and LBP (multiple radii) using the CIELAB, gray-scale, and RGB color spaces represents each patch. Atupelage et al. [61] extracted features using fractal geometry analysis and compared them with Gabor filter bank, Leung-Malik filter bank, LBP, and GLCM features. The proposed approach outperformed the other methods, achieving 95% of accuracy.
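The standard LBP baseline that many of these works compare against can be sketched in a few lines; P, R, and the uniform method (which yields P + 2 histogram bins) are conventional but illustrative choices.

```python
# Sketch: uniform LBP histogram descriptor; P, R, and bin count
# are standard but illustrative choices.
import numpy as np
from skimage import io, color
from skimage.feature import local_binary_pattern

gray = color.rgb2gray(io.imread("he_patch.png")[..., :3])
P, R = 8, 1
lbp = local_binary_pattern(gray, P=P, R=R, method="uniform")
hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2))
feat = hist / hist.sum()   # normalized texture descriptor
```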
Huang et al. [62] proposed a two-step feature extraction approach composed of a receptive field for detecting regions of interest and sparse coding, which groups features from patches of the same region; the means and covariance matrices of the receptive fields and sparse codes form the final descriptors. Noroozi and Zakerolhosseini [63] proposed an automated method for discriminating basal cell carcinoma from squamous cell carcinoma in skin HIs using Z-transform features, which are obtained from combinations of Fourier transform features. Wan et al. [64] used a dual-tree complex wavelet transform (DT-CWT) to represent images for mitosis detection in breast cancer; generalized Gaussian distribution and symmetric alpha-stable distribution parameters were used as features. Chan and Tuszynski [65] also used fractal dimension features for breast cancer detection; these features perform well at a magnification of 40× for distinguishing between malignant and benign tumors.
Recently, deep features have become very popular in several image classification tasks, including HI analysis. Niazi et al. [66] presented a CAD system for bladder cancer that focuses on extracting epithelium features, with segmentation based on an automatically constructed color deconvolution matrix. Spanhol et al. [67] used deep features from a pre-trained AlexNet to classify benign and malignant breast tumors. Vo et al. [68] presented a feature extraction method based on the combination of CNNs and boosting tree classifiers (BTCs). This method uses an ensemble of Inception CNNs to extract visual features from multiscale images: in the first stage, data augmentation methods are employed; afterward, ensembles of CNNs are trained to extract multi-context information from the multiscale images; the latter stage extracts both global and local features of breast cancer tumors. George et al. [69] proposed an approach for breast cancer diagnosis that extracts nucleus features based on CNNs. The methodology consists of different approaches for extracting nucleus features from HIs and selecting the most discriminative spatially sparse nucleus patches. A set of pre-trained CNNs extracts features from such patches, and the features belonging to individual images are then fused using 3-norm pooling to obtain image-level features.
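Extracting deep features from a pre-trained CNN is straightforward with torchvision, as the sketch below shows. For brevity it uses a ResNet-18 with standard ImageNet preprocessing rather than the AlexNet of [67]; the file name is hypothetical.

```python
# Sketch: deep-feature extraction with a pre-trained CNN (ResNet-18
# here instead of the AlexNet of [67]; ImageNet preprocessing values
# are the standard torchvision ones).
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = torch.nn.Identity()   # expose the 512-d penultimate features
model.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

img = Image.open("he_patch.png").convert("RGB")
with torch.no_grad():
    feat = model(preprocess(img).unsqueeze(0)).squeeze(0).numpy()
# `feat` can then train a shallow classifier, e.g., logistic regression
# as in [67].
```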
Finally, several works use or combine different feature categories to capture information from both the textures and the geometrical structures found in HIs. Leo et al. [70] presented a method for quantifying the instability of features across four prostate cancer datasets with known variations due to staining, preparation, and scanning platforms. They evaluated five families of features: graph-based features, which include first- and second-order descriptors of Voronoi diagrams, Delaunay triangulations, minimum spanning trees, and gland density; gland shape features, which measure the average shape of all the glands in an image and include the lumen boundaries and the resulting area, perimeter, distance, smoothness, and Fourier descriptors; co-occurring gland tensor features, which capture the disorder of gland neighborhoods as measured by the entropy of orientation of the major axes of glands within a local neighborhood; subgraph features, which describe the connectivity and clustering of small gland neighborhoods using gland centroids; and Haralick texture features. Yu et al. [71] investigated the best features for characterizing lung cancer. The authors extracted objective quantitative image features such as Haralick texture features of the nuclei (sum entropy, InfoMeas, difference variance, angular second moment), edge intensity of the nuclei, texture features and intensity distribution of the cytoplasm, Zernike shape, and texture and radial distribution of intensity. Caicedo et al. [72] proposed a low-level to high-level mapping to facilitate image retrieval; this mapping consists of gray and color histograms, LBP, Tamura texture histograms, Sobel histograms, and invariant feature histograms. Pang et al. [73] proposed a CAD system for lung cancer detection, which uses textural features such as LBP, GLCM, and Tamura, shape features such as SIFT, global features, and morphological features. Kruk et al. [74] used morphometric, textural, and statistical (histogram) features to describe nuclei for clear-cell renal carcinoma grading; a genetic algorithm and Fisher discriminant were used to select the essential features. Basavanhally et al. [75] proposed a multi-field-of-view (FOV) classification scheme to recognize low- versus high-grade ductal carcinoma in breast HIs. It uses a multiple patch-size procedure for WSIs to analyze whether morphological, textural, or graph-based features are the most relevant to each patch size. Tashk et al. [76] presented a complete framework for breast HI classification that estimates mitotic pixels in the CIELAB color space; a combination of LBP, morphometric, and statistical features is extracted from the mitotic candidates. Cruz-Roa et al. [77] proposed a patching method on HI slides to create small regions and extract scale-invariant feature transform (SIFT), luminance-level, and discrete cosine transform features to create a bag of words. Semantic features, in turn, are high-level information that can be associated with HIs to aid their classification.
Orlov et al. [78] compared three color spaces (RGB, CIELAB, and gray scale) with the H&E representation and eleven feature types, such as Zernike, Chebyshev, Chebyshev-Fourier, color histograms, GLCM, Tamura, Gabor, Haralick, and edge statistics, to represent lymph node HIs. De et al. [79] proposed a fusion of several feature types for uterine cervical cancer HI classification, using a 62-dimensional feature vector based on GLCM, Delaunay triangulation, and weighted density distribution. Vanderbeck et al. [80] used morphological, textural, and pixel-neighborhood statistics features to represent seven categories of white regions in liver HIs. Kandemir et al. [81] proposed a MIL approach to detect Barrett's cancer in HIs. They used cell-level morphometric features, such as central power sums, area, radius, perimeter, and roundness of segments, maximum, mean, and minimum intensity, and intensity covariance, variance, skewness, and kurtosis within regions, and patch-level features, such as LBP, SIFT, and color histograms, from images segmented with the watershed algorithm. Coatelen et al. [82,83] proposed a feature selection method for liver HI classification based on morphometric features (area, compactness, perimeter, aspect ratio, Zernike moment, etc.), textural features (GLCM, LBP, fractal dimension, Fourier distance, etc.), and structural or graph-based features (number of nodes/edges, modularity, and the pi, eta, theta, beta, alpha, gamma, and Shimbel indexes, etc.). Two greedy algorithms (fselector and an in-house recursive one) selected features from a pool of 200, with an SVM classifier implementing the fitness function. Michail et al. [84] highlighted nuclei using connected-component labeling to classify centroblast and non-centroblast cells, using morphometric, textural, and color features. Das et al. [85] proposed so-called geometric- and texture-aware features based on Hu moments and fractal dimension, respectively; this feature set was applied to detect geometric and textural changes in nuclei to discriminate between mitotic and non-mitotic cells. The method proposed by Kong et al. [86] classifies neuroblastomas using textural and morphological features, considering that pathologists use morphological features in their analysis and that textural features can be easily extracted. They also use GLCM features and sequential floating forward selection to select features.
In summary, given the rich geometric structures and complex textures found in HIs, most works combine different types of features. Morphometric features are essential to characterize geometric structures, but they are more complex to obtain since they require elaborate preprocessing, e.g., finding the contours of nuclei in order to count them. On the other hand, textural features such as LBP and GLCM usually do not require a previous segmentation of HIs. Finally, the most recent feature extraction methods focus on deep features, which can be interpreted as a sequence of learned filters able to detect both geometric structures and textures. Therefore, deep features and deep methods appear to be very promising for HI analysis.
The future of feature extraction methods is detaching from morphological information and tending toward high-level representations, mainly due to the challenge posed by the availability of high-resolution images such as WSIs. Among high-level approaches, there is growing interest in using deep learning methods as feature extractors. In addition to allowing knowledge transfer, they make extensive use of parallelism, performing well on large images. Another advantage of these methods is that they can be trained to adapt to the specific characteristics of the problem at hand, leading to representation learning.

5. Shallow Methods for HI Classification

ML algorithms trained in a supervised fashion can accomplish different HI analyses, such as identifying types of tumors and tissues, nucleus features (e.g., mitosis phases), or specific characteristics of some organs (e.g., fat inside the liver or the size of epithelial tissue in the cervix). The main challenges for HI classification methods are the classification of tumors, the identification of tissues, the identification of mitosis, and the assessment of tumor grade. Tumor classification involves distinguishing benign from malignant cases and identifying tumor types. Tumor grade assessment is less frequent, but it represents a real challenge because it depends heavily on pathologists' labeling. Tissue type identification can be used both for selecting parts of WSIs and for assisting segmentation. Mitosis identification is related to the presence of tumors, which are characterized by extensive cell proliferation. There is no consensus among the studies that would support a general claim that one algorithm is better than another; besides, classification performance is strongly tied to the quality of the feature vectors. For this reason, ensemble methods are usually preferred for these problems, using both homogeneous and heterogeneous systems and extensive feature vectors built from multiple extractors. Another challenge in HI classification is the reduced amount of training data, which favors shallow methods over deep learning methods.
This section presents ML methods based on shallow classifiers. We start by introducing works that employ single (monolithic) classifiers, followed by classification methods based on ensembles of classifiers. Both monolithic and ensemble methods depend on a previous feature extraction stage because they rely on handcrafted feature vectors to learn discriminant functions. Therefore, most of the feature extraction methods presented in Section 4 can be used together with the methods presented in this section.

5.1. Monolithic Classifiers

Different ML methods for supervised learning have been employed in HI analysis, such as support vector machines (SVM), decision trees (DT), naïve Bayes (NB), k-nearest neighbors (k-NN), multilayer perceptron (MLP), among others. Table 7 summarizes the works reviewed in this section in terms of the classification algorithm, tissue, or organ from where the HI was obtained and the publication year.
SVMs are the classification algorithm most used with HIs, and several works have employed them with different feature categories. Mazo et al. [12] proposed the classification of cardiac tissues into five categories using a patching approach that aims to optimize the patch size to improve the representation. A cascade of linear SVMs separates tissues into four classes, followed by a polynomial SVM that splits one of these four classes into two sub-classes. Osborne et al. [49] employed segmentation and morphological features with an SVM classifier for melanoma diagnosis in skin HIs; the proposed approach achieved 90% of accuracy. Malon et al. [87] compared the agreement of three pathologists and an ML method that uses deep features to train an SVM classifier to locate mitotic nuclei in HIs. The accuracies achieved by the SVM were 63.6% and 98.6% for positive and negative cases, respectively, close to the performance of two of the pathologists; only one pathologist performed better, with 99.2% and 94.5% on positive and negative samples, respectively. Atupelage et al. [61] used fractal features and an SVM to classify non-neoplastic tissues and tumors and to grade hepatocellular carcinoma HIs into five classes. The proposed approach achieved a correct classification rate of 95% for five classes and outperformed other methods that use texture features. Olgun et al. [51] introduced an approach for the representation and classification of colon tissue HIs based on the local distributions of objects. This approach was evaluated using an SVM and compared with 13 other methods that use textural and structural features, outperforming all of them with an accuracy of 93%.
Wan et al. [64] used a dual-tree complex wavelet transform (DT-CWT) to represent breast HIs for mitosis detection. Generalized Gaussian distribution and symmetric alpha-stable distribution parameters were the features used for classification with an SVM. The proposed method achieved an F-score of 73%, outperforming most of the other methods compared in their study. Chan and Tuszynski [65] used fractal features and an SVM classifier for breast cancer detection. They achieved an F-score of 97.9% at a magnification of 40× for distinguishing between malignant and benign tumors; on a multiclass problem, however, they reached an F-score of only 55.6%. Caicedo et al. [72] proposed a low-level to high-level mapping to facilitate image retrieval, using color, texture, and shape features to train 18 SVMs. The experimental results compared the low-level and high-level (semantic) features, which obtained 67% and 80% of precision, respectively, showing that mapping low-level to semantic-level features contributes favorably to the classification process. Vanderbeck et al. [80] used an SVM to classify white regions of liver HIs among seven classes. The combination of all features into a 413-dimensional feature vector achieved the best accuracy of 93.5%; they also compared results based on images labeled by different pathologists. In an extension of the work of De et al. [79], Guo et al. [88] presented automatic orientation detection for the epithelium with more features and an SVM classifier. Harai and Tanaka [89] proposed a colorectal CAD system, which starts with Otsu thresholding of the red channel to separate nuclei, background, and stroma. An SVM classifier achieves 78.3% of accuracy against 67% for a method based on texture features.
Peikari et al. [90] proposed a nucleus segmentation pipeline based on multi-level thresholding and the watershed algorithm in the CIELAB color space. Nucleus classification uses a cascade of SVMs, which initially separates lymphocytes from epithelial tissue and then classifies the epithelial tissue as benign or malignant. An interesting comparison with two pathologists' evaluations shows that the agreement between pathologists was 89%, while the agreement between the automated system and each pathologist was 74% and 75%. The classification of ovarian cancer is the subject of study of BenTaieb et al. [91]. The proposed method localizes cancer regions in WSIs using a multiscale mechanism, considering that each tumor type has specific characteristics that are better detected at particular scales; the method automatically selects an ROI based on multiple scales. The latent variable of the latent SVM (LSVM) used for classification is the presence of a patch at a particular scale in that region's classification. The LSVM approach achieved an accuracy of 76.2%, outperforming CNNs by 26%. Zhang et al. [92] proposed a multiscale classification that uses sparse coding with Fisher discriminant analysis to construct a visual dictionary of SIFT features. The multiscale approach using SVMs achieved an accuracy of 81.6%, outperforming the state of the art (79.5%). Korkmaz and Poyraz [93] proposed a classification framework based on minimum-redundancy maximum-relevance feature selection and a least squares SVM (LSSVM). They claimed an accuracy of approximately 100%, with only four false negatives for benign tumors in a three-class problem; no further comparisons were performed.
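As a concrete baseline for the SVM-based works above, the sketch below trains an RBF SVM on precomputed feature vectors such as those from Section 4. The hyperparameters, the 5-fold protocol, and the file names are illustrative assumptions, not the settings of any cited study.

```python
# Sketch: standard SVM baseline on handcrafted HI features
# (e.g., GLCM/LBP/morphometric vectors); all settings illustrative.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: (n_images, n_features) matrix, y: labels (hypothetical files).
X, y = np.load("features.npy"), np.load("labels.npy")

clf = make_pipeline(StandardScaler(),
                    SVC(kernel="rbf", C=10, gamma="scale"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```

Scaling before the SVM matters in practice because handcrafted features such as areas and histogram counts live on very different ranges.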
Bayesian classifiers, DTs, neural networks, k-NN, and other supervised ML algorithms have also been used to classify HIs with different feature categories. Marugame et al. [48] proposed a simple classifier based on Bayes theory to classify ductal carcinomas into three categories. Specialists consulted by the authors stated that this simple classifier, together with the morphological features, provides a better way to understand the results. Spanhol et al. [67] used deep features from an AlexNet to classify benign and malignant breast tumors; the deep features were used with a logistic regression classifier. This approach achieved a correct classification rate of 90.3% at the 200× magnification factor and outperformed a baseline (87.8%) that used texture features. De et al. [79] proposed grading uterine cervical cancer using an LDA classifier. A specialist manually segmented the images to identify tumors and split them into ten segments for feature extraction; a voting strategy combined the results from the segments. The best grading result was 70.5% for the whole epithelium against 62.3% for the vertically segmented epithelium. Mete and Topaloglu [94] evaluated eleven different color spaces for representing HIs; the combination of a spherical coordinate transform and a DT achieved the best accuracy, outperforming SVM and NB classifiers.
Sidiropoulos et al. [95] proposed a classification algorithm based on a probabilistic neural network (PNN) implemented on GPUs to grade rare brain tumor cases. The advantage is the reduced processing time, which allows an exhaustive search over feature combinations. For demonstration purposes, a comparison of CPU- and GPU-based implementations showed that the GPU version takes 278 times less computation time than the CPU version for a feature vector with 20 attributes. The work presented by Michail et al. [96] classifies follicular lymphomas using a preprocessing step that segments images based on intensity thresholds and an expectation-maximization algorithm. The segmented cells are classified by LDA, achieving a detection rate of 82.6%. A random kitchen sinks (RKS) classifier is used by Beevi et al. [97] to identify mitotic nuclei in breast cancer HIs. Nuclei are identified using thresholding in the red channel and modeled by local active contours. The approach achieved an F-score of 96.0% for RKS against 83.4% for RFs on the MITOS 2014 dataset. A CAD system proposed by Jothi and Rajam [98] converts HIs to gray scale, giving priority to the red channel. Otsu thresholding guided by particle swarm optimization segments the gray-scale images, and noisy segments are removed using area constraints based on nucleus size. The closest matching rule (CMR) classifier achieved an accuracy of 100% against 99.5% for NB. Awan et al. [99] studied the classification of mitosis in breast cancer using a dataset labeled with the four major mitosis phases. The classes are imbalanced, posing a challenge for classification; they proposed a data augmentation method based on PCA and its eigenvectors and compared it with the synthetic minority over-sampling technique. Barker et al. [100] used a patching procedure based on a grid over the WSI to grade brain tumor types. Each patch yields general features clustered using k-means, and the final classification is performed over nucleus features identified in the clustering step using an elastic net model. The proposed model outperformed the methods from the 2014 MICCAI Pathology Classification Challenge.
Multiple instance learning (MIL) has also been used for the classification of HIs, with several works employing MIL methods with different classifiers and feature categories. MIL is a weakly supervised learning paradigm in which instances are naturally grouped into labeled bags, without requiring individual labels for every instance in each bag. Kandemir et al. [81] compared three MIL SVMs, SIL-SVM, MI-SVM, and mi-SVM, with mi-Graph. mi-Graph achieved an accuracy of 87% against 69% for mi-SVM. The proposed methodology is based on patching, and all images are previously segmented using the watershed algorithm. Another work from the same research group carried out a benchmark of MIL SVM methods, finding that MILBoost gives the best accuracy for the instance-level approach (66.7%) and mi-Graph performs best in bag-level prediction (72.5%) [101]. Cosatto et al. [102] performed stain separation using a support vector regressor (SVR) to identify regions of interest (ROIs), defined as regions with a high occurrence of hematoxylin at low magnification. This work uses a MIL approach because the WSIs are labeled but the ROIs are not, so all ROIs from a positive slide receive a positive label. An MLP performs the classification, but with a modified loss function that encodes the one-positive rule for a slide: if at least one ROI is predicted positive, the entire slide is positive. In the comparison between the MIL approach and SVM classification, the SVM required ROI labeling. The AUC of MIL was 0.95 against 0.94 for the SVM, with the advantage of reducing labeling efforts. Xu et al. [103] introduced MCIL, a MIL-based method that uses a patching procedure to create instance-level Gaussian classifiers, which are clustered using the k-means algorithm. The work performs comparisons with standard image-level classification methods and MIL methods. The fully supervised method presented an F-score of 76.6% (using patch labeling), while the proposed method achieved 69.9%. On another dataset (without patch labels), MCIL achieved 71.7% and 60.1% with constrained and unconstrained MCIL, respectively, against 25.3% for MIL-Boosting. Sudharshan et al. [104] compared different MIL approaches for the diagnosis of breast cancer patients. In this approach, every patient is seen as a bag, which is labeled with her diagnosis. Therefore, HIs do not need to be individually labeled, as they can share the bag label. Instances are patches extracted from the corresponding HIs, considering different magnification factors (40×, 100×, 200×, and 400×). The hypothesis is that a bag-based (patch) analysis is more valuable for analyzing HIs than a single-instance (entire image) analysis. The experiments were carried out on the BreakHis database using CNN, 1-NN, QDA, RF, and SVM classifiers, and the best accuracy of 92.1% was achieved for the 40× magnification by non-parametric MIL.
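As a concrete illustration of the MIL setting described above, the following is a minimal single-instance-learning baseline (in the spirit of SIL-SVM), assuming patch-level feature vectors are already available; it propagates each bag label to the bag's instances and applies the one-positive (max) rule at prediction time.

```python
import numpy as np
from sklearn.svm import SVC

def train_sil(bags, bag_labels):
    # bags: list of (n_instances_i, n_features) arrays; bag_labels: 0/1.
    # Every instance inherits its bag's label (the SIL assumption).
    X = np.vstack(bags)
    y = np.concatenate([[label] * len(bag)
                        for bag, label in zip(bags, bag_labels)])
    return SVC(kernel="rbf", probability=True).fit(X, y)

def predict_bag(clf, bag):
    # One-positive rule: a bag is positive if any instance looks positive.
    return int(clf.predict_proba(bag)[:, 1].max() > 0.5)
```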
Finally, several works compared the performance of different monolithic classifiers on HIs without combining their predictions. Ballarò et al. [39] proposed the segmentation of HIs to identify healthy and unhealthy megakaryocytes, with a k-NN and DTs carrying out the classification. Song et al. [42] trained different classifiers such as SVM, k-NN, neural networks, and Naïve Bayes (NB) on morphometric features. The experimental results showed that these features outperformed morphological features, achieving an accuracy of 90% against 64%, while the combination of both feature types achieved only 85%. Bruno et al. [56] used a curvelet transform to handle multiscale HIs. Texture features extracted from the curvelet coefficients are reduced through ANOVA and evaluated using DTs, SVM, RF, and polynomial classifiers. They achieved an AUC of 1.00, which is higher than the best previous method (0.986). Pang et al. [73] proposed a CAD system for lung cancer detection. Sparse contribution analysis selects non-redundant features, which are used to train SVM, RF, and extreme learning machine (ELM) classifiers. Another contribution is the concave-convex variation, which measures the concavity of all nuclei in an image and uses this measurement to weight the classifiers' probabilities. This method achieved an accuracy of 98.74%, which is slightly better than RFs (97.68%).
Orlov et al. [78] compared three color spaces (RGB, CIELAB, and gray-scale) with an H&E representation. A weighted k-NN achieved the best results (99%), followed by an RBF network (99%) and NB (90%). The best results were obtained with the eosin channel representation. Irshad et al. [105] presented a multimodal approach with multispectral images, focusing on selecting the best spectral bands to classify mitotic cells on the MITOS 2012 dataset. SVM, DT, and MLP are used for classification. SVM achieved the best F-score (63.7%) using only the eight best bands, which is higher than the state-of-the-art (59%). WSIs are the core of the work proposed by Homeyer et al. [106], which compares k-NN, NB, and RFs for the classification of slides based on a patching procedure with textural and intensity features. RFs with the group of all features achieved the best result (94.7%). Khan et al. [107] proposed a framework for malignant cell classification in breast cytology images. Selected features train SVM, NB, and RF classifiers, which are then combined in an ensemble based on majority voting. The experiments showed an accuracy of 98.0% in the detection and classification of malignant cells. Kurmi et al. [108] presented an approach consisting of nucleus localization in HIs followed by classification as benign or malignant using MLP and SVM models; the MLP achieved the best average accuracy of 95.03%.
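For reference, the following is a minimal sketch of the kind of monolithic classifier comparison performed in the works above, using scikit-learn; the synthetic feature matrix, classifier choices, hyperparameters, and 5-fold protocol are illustrative assumptions, not drawn from any specific work reviewed here.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for a real HI feature matrix (e.g., GLCM/LBP descriptors).
X, y = make_classification(n_samples=400, n_features=60, random_state=0)

classifiers = {
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "k-NN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
    "NB": GaussianNB(),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```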

5.2. Ensemble Approaches

Ensemble approaches combine the predictions of multiple base classifiers in an attempt to improve generalization and robustness over a single classifier. Several researchers have proposed combining classifiers to enhance the performance of HI approaches. Table 8 summarizes the works reviewed in this section in terms of the type of base classifier and combination strategy, the tissue or organ from which the HI was obtained, and the publication year.
Zarella et al. [33] classified ROIs segmented from WSIs using an ensemble of SVMs. Multiple "weak" classifiers, trained with subsets of features and different parameters and combined with a weighted sum (WS) function, achieved an accuracy of 88.6%. Daskalakis et al. [109] used a segmentation preprocessing step to enhance nuclei and extract morphological and textural features. A multiclassifier approach combines k-NN, linear least squares minimum distance (LLSMD), statistical quadratic Bayesian, SVM, and PNN classifiers using the majority vote, minimum, maximum, average, and product rules. The PNN achieved the best base-classifier accuracy of 89.6%, while the ensemble achieved 95.7% with the majority vote rule. The method proposed by Kong et al. [86] classifies neuroblastomas using textural and morphological features. An ensemble combining k-NN, LDA, NB, and SVM classifiers using the weighted voting (WV) rule achieved an accuracy of 87.8%. Meng et al. [110] proposed an ensemble of principal component classifiers (PCC). This ensemble classified 25 patches of each image, each represented by 50 features. The accuracy achieved on a liver dataset was 96.41% using the majority vote (MV) rule, compared to 95.09% for a 3-NN. The same approach achieved an accuracy of 99.4% on lymphoma classification against 92.08% for AdaBoost. A CAD system composed of a stain separation module, densitometric and texture feature extraction, and an AdaBoost algorithm was proposed by Wang and Yu [111]. The proposed method achieved an accuracy of 94.37% against 86.44% for the best base classifier (k-NN) trained on raw H&E images.
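The fixed combination rules mentioned above (majority vote, minimum, maximum, average, and product) are simple to implement; the following is a minimal sketch operating on per-classifier class-probability arrays, where the input shapes are assumptions for illustration.

```python
import numpy as np

def combine(probas, rule="majority"):
    # probas: list of (n_samples, n_classes) arrays, one per base classifier.
    P = np.stack(probas)                   # (n_classifiers, n_samples, n_classes)
    if rule == "majority":
        votes = P.argmax(axis=2)           # each classifier's predicted class
        n_classes = P.shape[2]
        # Count votes per class for every sample, then take the plurality.
        counts = np.apply_along_axis(np.bincount, 0, votes, minlength=n_classes)
        return counts.argmax(axis=0)
    rules = {"average": P.mean, "product": P.prod, "min": P.min, "max": P.max}
    return rules[rule](axis=0).argmax(axis=1)
```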
The system described by Gorelick et al. [43] uses a segmentation step to identify superpixels. An AdaBoost algorithm classifies the segmented images, represented by morphometric and geometric features, and the system achieved an accuracy of 85%. A framework for cytological analysis is presented by Filipczuk et al. [44], where morphometric features represent nuclei obtained after segmentation. The proposed method combines random subspaces with perceptrons to create an ensemble. The comparison showed that the ensemble approach achieved an accuracy of 96.0% compared to 90% for a boosting algorithm. Vink et al. [112] proposed a nucleus detection method based on two AdaBoost stages. The first stage is based on features extracted from stain-separated images, and the second AdaBoost refines the result of the first with line-based features. An optimal active contour further refines the results of the second ensemble, achieving an accuracy of 95.02%. Phoulady et al. [113] proposed an ensemble of Otsu thresholding algorithms with specific constraints and morphological operations. Four segmentation algorithms perform the segmentation, but each image may have characteristics that require different parameter settings. The final result is selected from among 18 parameterizations of the segmentation algorithms, choosing the output that differs least from those produced by the others. This approach achieved an accuracy of 84.3% compared to 77.4% for other methods.
Di Franco et al. [114] used an ensemble of SVM classifiers, where each model is trained on a variation of the images preprocessed with Gaussian filters and different color spaces. The classifiers are combined using the average rule, and the best AUC achieved was 0.978. Albashish et al. [115] proposed a feature selection method that uses the entropy of a feature with respect to a class as a selection criterion, imposing constraints on this value and on the inter-feature entropy to reduce redundancy. SVM classifiers, each specialized in one subtype of tissue derived from a prior segmentation, are combined using the sum rule. The performance of the ensemble using 37 features (94.08%) is only 0.2% better than the best SVM with recursive feature elimination (RFE) using 46 features. A comparison of multiple classifiers and features is presented by Huang and Kalaw [116]: a set of monolithic classifiers is compared with AdaBoost implemented with SVM, DT, and RF, and AdaBoost achieved an accuracy of 97.8%. Fernández-Carrobles et al. [117] presented a classification framework for WSIs with a bagging of DTs and GLCM features, which achieved an AUC of 0.995 and a true positive rate of 98.13%. The multi-view approach presented by Kwak and Hewitt [50] extracts features at multiple resolutions. A boosting algorithm combining linear SVMs and the features from multiple views achieved an AUC of 0.98, compared to 0.96 for the simple concatenation of views. Kruk et al. [74] used morphometric, textural, and statistical features to describe nuclei for classification. An ensemble made up of SVM and RF classifiers, trained on a subset of features resulting from feature selection, achieved an accuracy of 96.7%, higher than both the state-of-the-art (93.1%) and the best single SVM classifier (91.1%). An AdaBoost ensemble is used by Romo-Bucheli et al. [118] to grade skin cancer. The ensemble classifies images described by graph-based features that represent the nucleus distribution and achieved an accuracy of 72%. A multi field-of-view (FOV) classification scheme is proposed by Basavanhally et al. [75]. It uses a multiple-patch-size procedure for WSIs that analyzes which features are the most relevant to each patch size, and then uses an RF to aggregate multiple FOV patches. They report only AUC results, without an accuracy baseline, showing that nuclear architecture features perform best at distinguishing low-grade from high-grade ductal carcinoma.
Tashk et al. [76] presented a complete framework for HI classification. They employ maximum likelihood estimation to obtain the mitotic pixels in the CIELAB color space. A cascaded classification is performed, first with an SVM and then with RFs. A comparison shows that this method achieves an accuracy of 96.5% against 82.4% for the best previous result on the MITOS 2012 dataset. Gertych et al. [59] presented a system for prostate cancer classification consisting of SVM and RF classifiers: the SVM separates stroma from epithelium, and the RF identifies benign/normal and carcinogenic tissue. The best accuracy was 68.4% for cancer detection. Balazsi et al. [60] extended the work described in [122]. The authors used simple linear iterative clustering (SLIC) to extract patches by tessellation. Multiple histogram and texture features are extracted from the CIELAB, gray-scale, and RGB color spaces of each patch. The resulting feature set suits an RF classifier, which achieved an F-score of 79.51% for tessellated patches, compared to 77.57% for square patches and 71.80% for the baseline. SLIC is also applied by Wright et al. [120] in a colorectal cancer pipeline to initially segment images. Histogram and texture features were extracted from the HSV color space, together with statistics from the H&E channels and GLCM features. A comparison showed that the proposed work achieved an accuracy of 79% against 75% for their previous work with RFs. Valkonen et al. [121] presented a system for the classification of WSIs. The segmentation step uses Otsu thresholding, morphological operations, and histological constraints. Classification algorithms such as RF, SVM, k-NN, and logistic regression were trained with textural, morphometric, and statistical features extracted from random patches of the segmented images; the RF achieved the best accuracy (93%). A comparison between different ensemble approaches to classify WSI patches is presented in [119]. A set of 114 features was selected and ranked using RFs, and multiple linear SVM, RBF SVM, and RF classifiers were built on the selected features. Aggregation with the majority vote rule achieved AUCs of 0.955, 0.951, and 0.948 for the RBF SVM, RF, and linear SVM, respectively; the best previous result was 0.935.
There is a clear tendency toward using multiple classifiers, as well as combining them with feature extractors based on deep learning. Multiple instance learning methods have also been explored, many related to WSI classification and patching methods. The steady growth in the availability of HIs and the progress of classification methods make diagnosis with less human intervention a future challenge. This requires algorithms that are highly accurate from the early stages of biopsy and diagnosis, processing WSIs directly from microscopy without manual intervention or preprocessing by specialists.

6. Methods Based on Deep Learning (DL)

DL methods are gaining the scientific community's attention due to recent achievements in solving complex machine learning problems on large datasets. A convolutional neural network (CNN) can learn both a representation and a decision boundary in a single optimization process. However, CNNs usually require a massive amount of data for adequate training to avoid overfitting. Still, most HI datasets have only a few patients and hundreds of images, which can limit the use of DL. Data augmentation [123,124] and transfer learning [125] are two possible approaches to circumvent the lack of data in HI datasets. Data augmentation generates new HIs from existing ones by using affine transformations or morphological operations. Patching HIs is another common form of data augmentation; it selects pieces of an HI that share the same structure but may belong to different classes. Transfer learning, on the other hand, reuses CNNs previously trained on large datasets, which usually belong to a different domain than the target problem; for instance, ImageNet, with more than 14 million images, is one of the most common datasets used for pre-training CNNs for object recognition. Pre-trained CNNs can be used in two ways: to extract features from HIs and use these features with shallow classifiers, as already described in Section 4 and Section 5, or to fine-tune the CNN on an HI dataset, adapting the filters learned on the large dataset to the HI domain.
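The following is a minimal sketch of the two data augmentation strategies just described, affine transforms and patching, assuming torchvision is available; the transform parameters, patch size, and stride are illustrative.

```python
import torchvision.transforms as T

# Affine augmentation pipeline applied to PIL images before tensor conversion.
affine_augment = T.Compose([
    T.RandomHorizontalFlip(),
    T.RandomVerticalFlip(),
    T.RandomAffine(degrees=90, translate=(0.1, 0.1)),
    T.ToTensor(),
])

def extract_patches(image_tensor, patch_size=224, stride=224):
    # Slide a window over the (C, H, W) image; each patch inherits the
    # image-level label, which is the usual patching assumption.
    c, h, w = image_tensor.shape
    return [image_tensor[:, i:i + patch_size, j:j + patch_size]
            for i in range(0, h - patch_size + 1, stride)
            for j in range(0, w - patch_size + 1, stride)]
```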
Despite the success of DL methods in image classification, the literature has shown that CNNs are not well suited to classifying textures, achieving only moderate accuracy, and texture is one of the structures present in HIs. Recently, several works have attempted to overcome these challenges and employ DL methods in HI analysis. Table 9 summarizes the works reviewed in this section in terms of network architecture, the tissue or organ from which the HI was obtained, and the publication year.
Malon et al. [87] were among the first to employ DL methods in HI analysis. They used a classical LeNet-5, a 7-layer CNN architecture proposed by Lecun et al. [126] in 1998, to learn a representation from HIs previously segmented with an SVR. An SVM classified the features extracted by the CNN to find mitotic nuclei. The remarkable aspect of this work is the comparison between machines and three pathologists: the pathologists showed Cohen's kappa values of 0.13 and, in the best case, 0.44, emphasizing the inter-observer problem. Kainz et al. [127] presented two CNNs based on the LeNet-5 architecture for the segmentation and classification of glands in benign and malignant colorectal cancer tissue. The first CNN separates glands from the background, while the second identifies gland-separating structures. Experimental results on the Warwick-QU colon adenocarcinoma and GlaS@MICCAI2015 challenge datasets showed tissue classification accuracies of 98% and 95%, respectively.
Some works used CNNs based on the AlexNet architecture proposed by Krizhevsky et al. [128] in 2012. AlexNet follows the same principles as LeNet-5 but is deeper, with more filters per layer and stacked convolutional layers. Stanitsas et al. [129] employed the AlexNet CNN to classify breast cancer HIs. They compared the CNN results with several handcrafted feature extractors and shallow classifiers and concluded that the CNN was not able to outperform the shallow methods. Spanhol et al. [130] evaluated architectures based on the AlexNet CNN for the problem of breast cancer HI classification. The experimental results on the BreaKHis dataset showed that the CNN achieved mean patient-level accuracy rates between 81.7% and 88.6%, depending on the magnification, which is better than other shallow ML approaches with textural features. Sharma et al. [131] also used an AlexNet CNN and other custom CNN architectures to classify benign and malignant tumors. Due to the small sample size, the authors carried out data augmentation by patching and affine transforms: for cancer classification, 11 WSIs produced 231,000 images, and for necrosis detection, four WSIs produced 47,130 training images. Both the AlexNet and custom CNN architectures compared favorably to most handcrafted features and an RF classifier. Budak et al. [132] proposed an end-to-end model based on a pre-trained AlexNet CNN and a bidirectional LSTM (BLSTM) for detecting breast cancer in HIs. The convolutional layers encode HIs into a high-level representation, which is flattened and fed into the BLSTM. Experimental results on the BreaKHis dataset showed that the proposed model achieved the best average accuracy of 96.32% for the 200× magnification factor; for the 40×, 100×, and 400× magnification factors, the average accuracy was 95.69%, 93.61%, and 94.29%, respectively.
Some works use CNNs based on the inception architecture proposed by Szegedy et al. [133]. The inception modules have parallel paths in which the image passes through filters of different dimensions (1×1, 3×3, 5×5), and max pooling is also performed; the outputs are concatenated and sent to the next inception module. GoogLeNet, a.k.a. Inception-V1 [133], has nine such inception modules stacked linearly and employs global average pooling after the last inception module. Inception-V2 and Inception-V3 [134] use an upgraded inception module and auxiliary outputs, increasing accuracy and reducing computational complexity. Another architecture is Inception-ResNet, which combines the inception model with the ResNet model [135]. Li et al. [136] compared handcrafted features with an SVM against features extracted by AlexNet and Inception-V1 CNNs to classify regions of colon histology images as either gland or non-gland. The combination of handcrafted features with an SVM and the prediction of a CNN showed the best results. They used data augmentation with rotations and mirroring for both the handcrafted features and the CNNs. Yan et al. [137] integrated a pre-trained Inception-V3 with a BLSTM for classifying breast cancer HIs into normal, benign, in situ carcinoma, or invasive carcinoma. The method divides HIs into 12 small patches on average. Afterward, a fine-tuned Inception-V3 CNN extracts features from the patches, where a 5,376-dimensional feature vector is made up of the concatenation of the outputs of the last three layers of the CNN. Such feature vectors are the input of a 4-layer BLSTM that fuses features from the 12 small patches and produces an image-wise classification. The experiments show that this approach achieved an average accuracy of 91.3%. de Matos et al. [125] proposed a classification approach for breast cancer HIs that uses transfer learning to extract features with an Inception-V3 CNN pre-trained on the ImageNet dataset. The proposed approach improved the classification accuracy by 3.7% using transfer-learning-based feature extraction and by an additional 0.7% using irrelevant patch elimination.
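For clarity, the following is a minimal sketch of an Inception-V1-style module in PyTorch; the branch channel counts are illustrative assumptions, and the original modules also place 1×1 bottleneck convolutions before the 3×3 and 5×5 filters.

```python
import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    def __init__(self, in_ch):
        super().__init__()
        # Parallel paths with filters of different receptive fields.
        self.branch1 = nn.Conv2d(in_ch, 64, kernel_size=1)
        self.branch3 = nn.Conv2d(in_ch, 128, kernel_size=3, padding=1)
        self.branch5 = nn.Conv2d(in_ch, 32, kernel_size=5, padding=2)
        self.pool = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                  nn.Conv2d(in_ch, 32, kernel_size=1))

    def forward(self, x):
        # Same spatial size on every path, so outputs concatenate cleanly
        # along the channel axis before feeding the next module.
        return torch.cat([self.branch1(x), self.branch3(x),
                          self.branch5(x), self.pool(x)], dim=1)
```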
The deep residual network (ResNet) [138] is another architecture that has been used in the classification of HIs; its residual block alleviates the problem of training very deep networks. Khosravi et al. [139] evaluated the versatility of CNNs on eight different datasets of breast, lung, and bladder tissues with H&E and immunohistochemistry (IHC) images. The evaluation included Inception and ResNet CNNs, the combination of both, and the concept of transfer learning. Results showed good performance despite using raw images without any preprocessing. Vizcarra et al. [140] fused CNN and SVM outputs for HI classification. The pipeline consists of extracting SURF features for the shallow learner (SVM), and of color normalization (Reinhard method) and image resizing (downsampling) for fine-tuning Inception-V3 and Inception-ResNet-V2 CNNs pre-trained on the ImageNet dataset. The outputs from the shallow and deep learners are fused for the final prediction. Experimental results on the BACH dataset showed moderate accuracies of 79% and 81% for the SVM and the CNN, respectively; the fusion of SVM and CNN outputs outperformed the individual learners, achieving an accuracy of 92%. Zerhouni et al. [141] proposed a wide residual CNN to classify mitotic and non-mitotic pixels in breast HIs. The CNN is trained on mitotic and non-mitotic patches extracted from the ground-truth images. Experimental results on the MICCAI TUPAC Challenge dataset showed that the wide residual CNN outperformed most of the other approaches.
Gandomkar et al. [142] proposed the MuDeRN framework to classify HIs first as benign or malignant and then into four subtypes of each. In the first stage, a ResNet with 152 layers is trained to classify HI patches at different magnification factors as benign or malignant. Afterward, the images are subdivided into four malignant and four benign subcategories. Lastly, the diagnosis for each patient is obtained by combining the ResNet outputs using a meta-DT. In the first stage, MuDeRN achieved accuracies of 98.52%, 97.90%, 98.33%, and 97.66% for the 40×, 100×, 200×, and 400× magnification factors, respectively; in the second stage, it achieved 95.40%, 94.90%, 95.70%, and 94.60% for the same factors. For patient-level diagnosis, MuDeRN achieved an accuracy of 96.25% considering the eight classes. Brancati et al. [143] also used a ResNet to detect invasive ductal carcinoma and to classify lymphoma subtypes. The convolutional layers are first trained without supervision to learn a latent representation that reconstructs the input image, while the fully connected layers are trained in a supervised way. In both cases, a softmax classifier produces the probability of the input image belonging to a given class. Talo [144] presented an approach based on pre-trained ResNet-50 and DenseNet-161 CNN models for the automatic classification of gray-scale and color HIs. The results achieved by both CNNs outperformed existing studies in the literature, with a total accuracy of 95.79% for gray-scale images, while ResNet-50 achieved a total accuracy of 97.77% for color HIs.
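For clarity, the following is a minimal sketch of a residual block in PyTorch; it shows the identity shortcut that eases the training of very deep networks such as the 152-layer ResNet used in MuDeRN, without reproducing any specific architecture from the works above.

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Two convolutions form the residual path.
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # The identity shortcut lets gradients bypass the convolutional path.
        return self.relu(self.body(x) + x)
```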
Another CNN architecture used in HI classification is VGG-net, a very uniform architecture with 16 weight layers based on a large number of small 3 × 3 filters. Bejnordi et al. [145] used a VGG-net CNN [146] for the classification of tissue into epithelium, stroma, and fat, followed by a VGG16 CNN for classifying stroma as normal or tumor-associated. The first CNN achieved a pixel-level accuracy of 95.5%, while the second achieved a pixel-level binary accuracy of 92.0%. The authors employed data augmentation by randomly rotating and flipping patches and randomly jittering the hue and saturation of pixels in the HSV color space. Xu et al. [147] used three CNNs to segment and distinguish glands. The approach combines a fully convolutional network (FCN) for the foreground segmentation channel, a faster region-based CNN (R-CNN) for the object detection channel, and a holistically-nested edge detector CNN for the edge detection channel, all based on the VGG16 CNN. The results of these three CNNs feed another CNN that outputs a segmented image. Data augmentation by affine and elastic transformations is carried out to enhance performance and avoid overfitting. The proposed approach achieved state-of-the-art results on the dataset from the MICCAI 2015 Gland Segmentation Challenge. Kumar et al. [148] developed a variant of the VGG16 architecture that replaces the fully connected layers with different classifiers. The approach consists of stain normalization and data augmentation, using images with and without normalized stain. The augmented dataset is fed to the fused VGG16, where features are taken at the global average pooling layer; the binary classification is then carried out by SVM and RF classifiers. Experiments were conducted on the canine mammary tumor (CMTHis) and breast cancer HI (BreakHis) datasets, both randomly split into training (70%) and test (30%) sets. The approach achieved accuracies of 97% and 93% on the BreakHis and CMTHis datasets, respectively.
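The feature-extraction use of a pre-trained VGG16, loosely following the scheme of Kumar et al. [148], can be sketched as follows; the torchvision weights, the pooling point, and the SVM head are assumptions for illustration, and the training tensors are random placeholders standing in for preprocessed HI patches.

```python
import torch
import torchvision.models as models
from sklearn.svm import SVC

# Pre-trained VGG16 from torchvision (weights API of torchvision >= 0.13).
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()

@torch.no_grad()
def extract_features(batch):
    # batch: (N, 3, 224, 224) normalized image tensor.
    fmap = vgg.features(batch)            # (N, 512, 7, 7) conv feature maps
    return fmap.mean(dim=(2, 3)).numpy()  # global average pooling -> (N, 512)

# Placeholder data; in practice these are preprocessed HI patches and labels.
X_train = torch.randn(8, 3, 224, 224)
y_train = [0, 1, 0, 1, 0, 1, 0, 1]
clf = SVC(kernel="rbf").fit(extract_features(X_train), y_train)
```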
Other CNN architectures have also been used in HI classification, such as DenseNet [149] and MobileNet [150]. Kassani et al. [151] proposed an approach for the classification of breast cancer HIs based on an ensemble of three pre-trained CNNs: VGG19, MobileNet, and DenseNet. Stain normalization, data augmentation, fine-tuning, and hyperparameter tuning were used to improve the performance of the CNNs. The multi-model ensemble achieved better performance than the single classifiers, with accuracies of 98.13%, 95.00%, 94.64%, and 83.10% on the BreakHis, ICIAR, PatchCamelyon, and Bioimaging datasets, respectively. Yang et al. [152] introduced additional region-level supervision for classifying breast cancer HIs with a DenseNet-169 CNN pre-trained on ImageNet. For this purpose, ROIs are localized and used to guide the attention of the classification network. This process activates neurons in regions relevant to diagnosis while suppressing activation in irrelevant and noisy areas, so the network's prediction is based on the regions on which a pathologist would expect the network to focus. The approach achieved an accuracy of 93% on the BACH dataset.
Finally, several works proposed custom CNN architectures for HI classification, usually based on well-known architectures, mainly optimizing the number and dimension of kernels and the number of layers. Bayramoglu et al. [153] proposed two different CNN architectures, both with ten layers, for breast cancer HI classification. The first CNN predicts only malignancy, while the second predicts both malignancy and the image magnification level simultaneously. Experimental results on the BreaKHis dataset showed that the magnification-independent CNN approach improved on the magnification-specific model's performance and that the results are comparable with previous state-of-the-art results obtained with handcrafted features. They also used data augmentation based on affine transformations.
Albarqouni et al. [154] introduced a CNN for aggregating annotations from crowds in conjunction with learning a model for a challenging classification task. During the learning-from-crowds phase, the CNN architecture is augmented with an aggregation layer that aggregates the ground truth from the crowd vote matrix. Experimental results on the AMIDA13 dataset showed that the proposed CNN architecture is robust to noisy labels, which positively influenced performance. Cruz-Roa et al. [122] proposed a custom 3-layer CNN to classify patches of WSIs as invasive ductal carcinoma (breast cancer) or not. Patches inherit their labels from region-level annotations, and some WSI regions, such as background and adipose cells, were manually excluded and not patched. Patches were preprocessed using color normalization and the YUV color space. The CNN outperformed an RF trained on the best handcrafted features by 4%. Compared to other works, this one has a simple protocol and uses a small network, but it was one of the precursors of CNN-based HI analysis. Ciompi et al. [155] proposed an 11-layer CNN to analyze the impact of stain normalization in the training and evaluation pipeline of an automatic system for CRC tissue classification. Experimental results on the CRC dataset validated the performance of the proposed CNN and the role of stain normalization in CRC tissue classification. Kwak and Hewitt [156] proposed a 6-layer CNN to identify prostate cancer and compared it with other CNNs (AlexNet, VGG, GoogLeNet, ResNet) as well as with shallow classifiers such as SVM, RF, k-NN, and NB. Experimental results on four tissue microarrays showed that the 6-layer CNN achieved an AUC of 0.974, outperforming all other approaches, whether based on handcrafted features with shallow classifiers or on different CNN architectures.
Roy et al. [157] proposed five custom CNN architectures to classify patches of breast cancer HIs. The approach consists of extracting patches, classifying them, and comparing the results of the individual patches with that of the whole image: the output is considered correct only if the class labels of all extracted patches agree with the HI's class label. They also increased the number of training samples per class using affine transformations for data augmentation. Experimental results on the ICIAR 2018 challenge dataset showed that a 14-layer CNN achieved the best results: patch-wise classification accuracies of 77.4% and 84.7% for four and two classes, respectively, and image-wise classification accuracies of 90.0% and 92.5% for four and two classes, respectively. de Matos et al. [123] proposed a 7-layer CNN architecture based on texture filters that has fewer parameters than traditional CNNs but can capture the difference between malignant and benign tissues with relative accuracy. The experimental results on the BreakHis dataset showed that the proposed texture CNN achieves an accuracy of 85% for classifying benign and malignant tissues. The authors also employed data augmentation based on composed random affine transforms, including flipping, rotation, and translation. Ataky et al. [124] proposed a novel approach for augmenting an HI dataset that accounts for inter-patient variability through image blending using the Gaussian-Laplacian pyramid. Experimental results on the BreakHis dataset with a texture CNN [123] showed promising gains over the majority of data augmentation techniques presented in the literature. The research carried out by Gecer et al. [158] presented a method for breast diagnosis based on WSIs that classifies images into five categories. First, saliency detection is performed by a pipeline of four sequential 9-layer CNNs based on the VGG-net [146] architecture for multiscale processing of the WSIs, considering different magnifications to localize diagnostically pertinent ROIs. Afterward, a patch-based multiclass CNN is trained on representative ROIs resulting from the consensus of three experienced pathologists. The final slide-level diagnosis is obtained by fusing the saliency detector and the classification CNN for pixel-wise labeling of the WSIs using a majority vote rule. They claimed that the CNNs used for both detection and classification outperformed competing methods based on handcrafted features and statistical classifiers. Moreover, the proposed method achieved results comparable to the diagnoses provided by 45 pathologists on the same dataset. Experiments using 240 WSIs showed a five-class slide-level accuracy of 55%.
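The patch-to-image aggregation described for [157] can be sketched as follows; `patch_preds` is an assumed array of per-patch predicted labels for one image, and the strict-agreement criterion mirrors the correctness rule stated above, alongside plain majority voting for comparison.

```python
import numpy as np

def image_level_prediction(patch_preds):
    # Majority class over the patches of a single image.
    values, counts = np.unique(patch_preds, return_counts=True)
    return values[counts.argmax()]

def strict_agreement_correct(patch_preds, image_label):
    # Stricter criterion: the output counts as correct only if every
    # patch label matches the image-level label.
    return bool(np.all(patch_preds == image_label))
```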
Wang et al. [159] employed a bilinear CNN (BCNN), which consists of two individual CNNs whose convolutional-layer outputs are multiplied with an outer product at each corresponding spatial location, resulting in a quadratic number of feature maps. The input of both CNNs is H&E images with the H and E channels separated in a preprocessing stage by a color decomposition algorithm. The proposed BCNN-based algorithm achieves the best performance, with a mean classification accuracy of 92.6%, improving on other CNN-based algorithms by at least 2.4% on the CRC dataset. Li et al. [160] presented an automatic method for mitosis detection based on semantic segmentation with a CNN. The CNN uses a novel label with concentric circles instead of a single-pixel representation of mitosis: the inner circle represents a mitotic region, whereas the ring around it is a "middle ground." This concentric loss allows training the semantic segmentation CNN with weakly annotated mitosis data. The semantic segmentation applied to breast cancer HIs to detect mitotic cells achieved F-scores of 0.562, 0.673, and 0.669 on the ICPR 2014 MITOSIS, AMIDA13, and TUPAC16 datasets, respectively. Hou et al. [161] proposed a semi-supervised approach that uses a sparse convolutional autoencoder (CAE). The CAE has a crosswise constraint that decomposes patches from HIs into foreground (e.g., nuclei) and background (e.g., cytoplasm). The CAE initializes a supervised CNN, which carries out nucleus detection, feature extraction, and classification/segmentation in an end-to-end fashion. The experimental results showed that the proposed approach outperformed other approaches and highlighted the importance of the crosswise constraint in boosting performance; the proposed CAE-CNN achieved results comparable to the state-of-the-art using only 5% of the training data needed by other methods. Sheikh et al. [162] proposed a four-input, 24-layer custom CNN for the classification of HIs that fuses multi-resolution hierarchical feature maps at different layers. The proposed model learns from image patches at different scales to account for both the overall structure of cells and texture features. Experimental results on the ICIAR2018 and BreaKHis datasets showed that the proposed model outperformed existing state-of-the-art models.
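Bilinear pooling as used in a BCNN can be sketched as follows; the signed square-root and L2 normalization steps are standard practice in BCNNs but are an assumption with respect to [159], which may differ in detail.

```python
import torch

def bilinear_pool(feat_a, feat_b):
    # feat_a: (N, Ca, H, W), feat_b: (N, Cb, H, W) from two CNN streams.
    n, ca, h, w = feat_a.shape
    cb = feat_b.shape[1]
    a = feat_a.reshape(n, ca, h * w)
    b = feat_b.reshape(n, cb, h * w)
    # Outer product at each location, summed (averaged) over locations.
    outer = torch.bmm(a, b.transpose(1, 2)) / (h * w)   # (N, Ca, Cb)
    x = outer.reshape(n, ca * cb)
    # Signed square-root and L2 normalization of the quadratic descriptor.
    x = torch.sign(x) * torch.sqrt(torch.abs(x) + 1e-8)
    return torch.nn.functional.normalize(x, dim=1)
```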

7. Reviews, Surveys and Datasets

This section summarizes the reviews and surveys related to HIs and ML methods. As shown in Table 10, we found nineteen works in this category. Reviews and surveys published between 2012 and 2015 highlight mainly approaches for nucleus segmentation and classification, while recent publications focus on the classification of whole medical images. The reviews presented by Saha et al. [163], Nawaz and Yuan [164], Chen et al. [165], and Robertson et al. [166] were published in medical journals and provide a deeper view of the histology information; however, they overlook aspects related to ML methods. For instance, Nawaz and Yuan [164] analyzed the characteristics of tumors and presented a brief study on how computational methods can deal with HIs. Komura and Ishikawa [167] presented the use of ML methods in HI analysis and several HI datasets. Litjens et al. [168] reviewed DL methods applied to a variety of medical images, including HIs. Zhou et al. [169] presented a comprehensive overview of breast HI analysis techniques based on classical and DL methods, as well as publicly available HI datasets. Finally, Krithiga and Geetha [170] presented a systematic review of breast cancer detection, segmentation, and classification on HIs, focusing on the performance evaluation of ML and DL techniques for predicting breast cancer recurrence rates.
Given the importance of datasets for HI research, we have also compiled in Table 11 and Table 12 a list of the datasets that have been used in the experiments of several works covered in this review. We include the dataset reference, year of creation, contents in terms of the number of images and patients, and references to the related papers.

8. Conclusions

In this paper, we have presented a review of the ML methods usually employed to analyze HIs. This review revealed an increasing interest in the classification task, while interest in other tasks such as segmentation and feature extraction has evidently declined in recent years, as shown in Table 4, Table 5, Table 6, Table 7 and Table 8, where the related works are arranged in ascending chronological order. We point out that the main reason for such a change is the introduction of DL methods, which can deal with raw HIs with little or no preprocessing. Normalization is one of the most used preprocessing steps. Still, in the early years, other preprocessing methods such as thresholding, filtering, and color-model conversion were also used to improve the quality of HIs for subsequent tasks such as segmentation, feature extraction, and classification.
In the years preceding the broad adoption of DL methods, several works focused on identifying nuclei in HIs, which are important structures for cancer diagnosis. This led to the exploration of different segmentation approaches, as reviewed in Section 3. Some works used the concept of semantic features, based on, e.g., nucleus counts, their relation to the stroma, and the distance between nuclei. Stain normalization is also a recurrent topic that has shown up in several works across the years covered by this review. Such an image processing method, which reduces the color and intensity variations present in stained images, has been widely used even in conjunction with DL methods. Feature extraction methods were the focus of interest of researchers between 2008 and 2016. Morphometric features and textural features such as GLCM, LBP, and their variants have been the most frequent features used in HI analysis, either alone or in combination with other feature types. It is important to note that shallow classifiers require a feature extraction method. Again, the adoption of DL methods, which can learn representations and decision boundaries in a single optimization process, is probably the leading cause of the declining interest in feature extraction methods since 2016. Furthermore, pre-trained CNNs can also be used as feature extractors for HIs: several works removed the fully connected layers of pre-trained CNNs and used the output of the last convolutional layer as feature vectors to feed shallow classifiers. Comparing Table 7, Table 8 and Table 9, we can say that DL approaches have become prevalent over shallow approaches in the last five years, although studies are still necessary to understand how these networks learn data representations, especially concerning HIs.
Finally, Table 11 and Table 12 also help us understand the increasing interest in HI analysis in recent years. We found that most of the early works are based on small private datasets, making it difficult for other researchers who do not have access to such HI datasets to carry out research in this area and reproduce the scientific results. On the other hand, most recent works are based on public HI datasets, which contribute to science by providing a way for researchers to develop new methods and compare their performance with existing ones. However, there is still a lack of large-scale supervised WSI datasets.
In conclusion, this review shows the evolution of HI analysis and the recent shift toward DL methods. It also provides valuable information to researchers in the field about datasets and other reviews and surveys.

Author Contributions

Conceptualization, J.d.M., A.d.S.B.J., L.E.S.d.O. and A.L.K.; Methodology, J.d.M. and S.T.M.A.; Writing–original draft preparation, J.d.M. and S.T.M.A.; Writing–review and editing, A.L.K.; Supervision, A.d.S.B.J., L.E.S.d.O. and A.L.K.; Funding acquisition, A.d.S.B.J. and A.L.K.; All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially funded by Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery grant number RGPIN-2016-04855 and by École de Technologie Supérieure, grant Développement de Collaborations Internationales de Recherche.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AUC: Area under the curve
CAD: Computer-aided diagnosis
CNN: Convolutional neural network
CT: Computed tomography
DL: Deep learning
DNN: Deep neural network
DT: Decision tree
ELM: Extreme learning machine
GLCM: Gray-level co-occurrence matrix
HI: Histopathological image
H&E: Hematoxylin and eosin
HOG: Histogram of oriented gradients
IHC: Immunohistochemistry images
Img: Images
LBP: Local binary patterns
ML: Machine learning
MIL: Multiple instance learning
MLP: Multilayer perceptron
MRI: Magnetic resonance imaging
NSGA: Non-dominated sorted genetic algorithm
Pat: Patients
PCA: Principal component analysis
RCNN: Recurrent convolutional neural network
RF: Random forest
ROI: Region of interest
SHMM: Spatial hidden Markov model
SIFT: Scale-invariant feature transform
SNN: Synergistic neural network
SVM: Support vector machine
WSI: Whole slide image
XCA: Exclusive component analysis

References

  1. Torre, L.A.; Bray, F.; Siegel, R.L.; Ferlay, J.; Lortet-Tieulent, J.; Jemal, A. Global cancer statistics. CA Cancer J. Clin. 2015, 65, 87–108. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  2. Torre, L.A.; Islami, F.; Siegel, R.L.; Ward, E.M.; Jemal, A. Global Cancer in Women: Burden and Trends. CEBP Focus Glob. Cancer Women 2017, 26, 444–457. [Google Scholar] [CrossRef] [Green Version]
  3. Bellocq, J.P.; Anger, E.; Camparo, P.; Capron, F.; Chenard, M.P.; Chetritt, J.; Chigot, J.P.; Cochand-Priollet, B.; Coindre, J.M.; Copin, M.C.; et al. Sécuriser le diagnostic en anatomie et cytologie pathologiques en 2011. L’erreur diagnostique: Entre discours et réalité. Ann. Pathol. 2011, 31, S92–S94. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Aresta, G.; Araújo, T.; Kwok, S.; Chennamsetty, S.S.; Safwan, M.; Alex, V.; Marami, B.; Prastawa, M.; Chan, M.; Donovan, M.; et al. BACH: Grand challenge on breast cancer histology images. Med. Image Anal. 2019, 56, 122–139. [Google Scholar] [CrossRef]
  5. Fatakdawala, H.; Xu, J.; Basavanhally, A.; Bhanot, G.; Ganesan, S.; Feldman, M.; Tomaszewski, J.E.; Madabhushi, A. Expectation-Maximization-Driven Geodesic Active Contour With Overlap Resolution (EMaGACOR): Application to Lymphocyte Segmentation on Breast Cancer Histopathology. IEEE Trans. Biomed. Eng. 2010, 57, 1676–1689. [Google Scholar] [CrossRef] [PubMed]
  6. Roullier, V.; Lézoray, O.; Ta, V.T.; Elmoataz, A. Multi-resolution graph-based analysis of histopathological whole slide images: Application to mitotic cell extraction and visualization. Comput. Med. Imaging Graph. 2011, 35, 603–615. [Google Scholar] [CrossRef]
  7. Rahmadwati; Naghdy, G.; Ros, M.; Todd, C.; Norahmawati, E. Cervical Cancer Classification Using Gabor Filters. In Proceedings of the IEEE 1st International Conference on Healthcare Informatics, Imaging and Systems Biology, San Jose, CA, USA, 26–29 July 2011; pp. 48–52. [Google Scholar] [CrossRef]
  8. Peng, Y.; Jiang, Y.; Eisengart, L.; Healy, M.A.; Straus, F.H.; Yang, X.J. Segmentation of prostatic glands in histology images. In Proceedings of the IEEE International Symposium on Biomedical Imaging: From Nano to Macro, Chicago, IL, USA, 30 March–2 April 2011; pp. 2091–2094. [Google Scholar] [CrossRef]
  9. He, L.; Long, L.R.; Antani, S.; Thoma, G.R. Multiphase Level Set Model with Local K-means Energy for Histology Image Segmentation. In Proceedings of the IEEE 1st International Conference on Healthcare Informatics, Imaging and Systems Biology, San Jose, CA, USA, 26–29 July 2011; pp. 32–39. [Google Scholar] [CrossRef]
  10. Fatima, K.; Arooj, A.; Majeed, H. A New Texture and Shape Based Technique for Improving Meningioma Classification. Microsc. Res. Tech. 2014, 77, 862–873. [Google Scholar] [CrossRef]
  11. Mazo, C.; Trujillo, M.; Alegre, E.; Salazar, L. Automatic recognition of fundamental tissues on histology images of the human cardiovascular system. Micron 2016, 89, 1–8. [Google Scholar] [CrossRef] [PubMed]
  12. Mazo, C.; Alegre, E.; Trujillo, M. Classification of cardiovascular tissues using LBP based descriptors and a cascade SVM. Comput. Methods Programs Biomed. 2017, 147, 1–10. [Google Scholar] [CrossRef]
  13. Tosun, A.B.; Kandemir, M.; Sokmensuer, C.; Gunduz-Demir, C. Object-oriented texture analysis for the unsupervised segmentation of biopsy images for cancer detection. Pattern Recognit. 2009, 42, 1104–1112. [Google Scholar] [CrossRef] [Green Version]
  14. Nativ, N.I.; Chen, A.I.; Yarmush, G.; Henry, S.D.; Lefkowitch, J.H.; Klein, K.M.; Maguire, T.J.; Schloss, R.; Guarrera, J.V.; Berthiaume, F.; et al. Automated image analysis method for detecting and quantifying macrovesicular steatosis in hematoxylin and eosin-stained histology images of human livers. Liver Transplant. 2014, 20, 228–236. [Google Scholar] [CrossRef] [PubMed]
  15. Shi, P.; Zhong, J.; Huang, R.; Lin, J. Automated quantitative image analysis of hematoxylin-eosin staining slides in lymphoma based on hierarchical Kmeans clustering. In Proceedings of the 8th International Conference on Information Technology in Medicine and Education, Fuzhou, China, 23–25 December 2016; pp. 99–104. [Google Scholar] [CrossRef]
  16. Brieu, N.; Pauly, O.; Zimmermann, J.; Binnig, G.; Schmidt, G. Slide-Specific Models for Segmentation of Differently Stained Digital Histopathology Whole Slide Images. In Proceedings of the SPIE Medical Imaging 2016: Image Processing, San Diego, CA, USA, 21 March 2016; Volume 9784. [Google Scholar] [CrossRef]
  17. Shi, P.; Chen, J.; Lin, J.; Zhang, L. High-throughput fat quantifications of hematoxylin-eosin stained liver histopathological images based on pixel-wise clustering. Sci. China Inf. Sci. 2017, 60. [Google Scholar] [CrossRef]
  18. Liu, B.; Liu, Y.; Zhang, J.; Zeng, Y.; Wang, W. Application of the synergetic algorithm on the classification of lymph tissue cells. Comput. Biol. Med. 2008, 38, 650–658. [Google Scholar] [CrossRef] [PubMed]
  19. Hafiane, A.; Bunyak, F.; Palaniappan, K. Evaluation of level set-based histology image segmentation using geometric region criteria. In Proceedings of the IEEE International Symposium on Biomedical Imaging: From Nano to Macro, Boston, MA, USA, 28 June–1 July 2009; pp. 1–4. [Google Scholar] [CrossRef]
  20. He, L.; Long, L.R.; Antani, S.; Thoma, G.R. Local and global Gaussian mixture models for hematoxylin and eosin stained histology image segmentation. In Proceedings of the 10th International Conference on Hybrid Intelligent Systems, Atlanta, GA, USA, 23–25 August 2010; pp. 223–228. [Google Scholar] [CrossRef]
  21. Onder, D.; Sarioglu, S.; Karacali, B. Automated labelling of cancer textures in colorectal histopathology slides using quasi-supervised learning. Micron 2013, 47, 33–42. [Google Scholar] [CrossRef] [Green Version]
  22. Yang, L.; Qi, X.; Xing, F.; Kurc, T.; Saltz, J.; Foran, D.J. Parallel content-based sub-image retrieval using hierarchical searching. Bioinformatics 2014, 30, 996–1002. [Google Scholar] [CrossRef] [Green Version]
  23. Sirinukunwattana, K.; Khan, A.M.; Rajpoot, N.M. Cell words: Modelling the visual appearance of cells in histopathology images. Comput. Med. Imaging Graph. 2015, 42, 16–24. [Google Scholar] [CrossRef] [PubMed]
  24. Huang, C.H. Semi-supervised color decomposition for histopathological images using exclusive component analysis. In Proceedings of the IEEE 25th International Workshop on Machine Learning for Signal Processing (MLSP), Boston, MA, USA, 17–20 September 2015. [Google Scholar] [CrossRef]
  25. Yu, F.; Ip, H.H.S. Semantic content analysis and annotation of histological images. Comput. Biol. Med. 2008, 38, 635–649. [Google Scholar] [CrossRef]
  26. Arteta, C.; Lempitsky, V.; Noble, J.A.; Zisserman, A. Learning to Detect Cells Using Non-Overlapping Extremal Regions; Lecture Notes in Computer Science; Medical Image Computing and Computer-Assisted Intervention: Nice, France, 2012; Volume 7510, pp. 348–356. [Google Scholar]
  27. Janssens, T.; Antanas, L.; Derde, S.; Vanhorebeek, I.; den Berghe, G.V.; Grandas, F.G. Charisma: An integrated approach to automatic H&E-stained skeletal muscle cell segmentation using supervised learning and novel robust clump splitting. Med. Image Anal. 2013, 17, 1206–1219. [Google Scholar] [CrossRef] [Green Version]
  28. Saraswat, M.; Arya, K.V. Supervised leukocyte segmentation in tissue images using multi-objective optimization technique. Eng. Appl. Artif. Intell. 2014, 31, 44–52. [Google Scholar] [CrossRef]
  29. Qu, A.; Chen, J.; Wang, L.; Yuan, J.; Yang, F.; Xiang, Q.; Maskey, N.; Yang, G.; Liu, J.; Li, Y. Two-step segmentation of Hematoxylin-Eosin stained histopathological images for prognosis of breast cancer. In Proceedings of the IEEE International Conference on Bioinformatics and Biomedicine, Belfast, UK, 2–5 November 2014; pp. 218–223. [Google Scholar] [CrossRef]
  30. Salman, S.; Ma, Z.; Mohanty, S.; Bhele, S.; Chu, Y.T.; Knudsen, B.; Gertych, A. A machine learning approach to identify prostate cancer areas in complex histological images. Adv. Intell. Syst. Comput. 2014, 283, 295–306. [Google Scholar] [CrossRef]
  31. Chen, J.M.; Qu, A.P.; Wang, L.W.; Yuan, J.P.; Yang, F.; Xiang, Q.M.; Maskey, N.; Yang, G.F.; Liu, J.; Li, Y. New breast cancer prognostic factors identified by computer-aided image analysis of HE stained histopathology images. Sci. Rep. 2015, 5. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  32. Geessink, O.G.F.; Baidoshvili, A.; Freling, G.; Klaase, J.M.; Slump, C.H.; Van Der Heijden, F. Toward automatic segmentation and quantification of tumor and stroma in whole-slide images of H&E stained rectal carcinomas. In Proceedings of the Progress in Biomedical Optics and Imaging—Proceedings of SPIE, Orlando, FL, USA, 17 March 2015; Volume 9420. [Google Scholar] [CrossRef]
  33. Zarella, M.D.; Breen, D.E.; Reza, M.A.; Milutinovic, A.; Garcia, F.U. Lymph Node Metastasis Status in Breast Carcinoma Can Be Predicted via image Analysis of Tumor Histology. Anal. Quant. Cytopathol. Histopathol. 2015, 37, 273–285. [Google Scholar]
  34. Santamaria-Pang, A.; Rittscher, J.; Gerdes, M.; Padfield, D. Cell Segmentation and Classification by Hierarchical Supervised Shape Ranking. In Proceedings of the IEEE 12th International Symposium on Biomedical Imaging, Brooklyn, NY, USA, 16–19 April 2015; pp. 1296–1299. [Google Scholar]
  35. Wang, P.; Hu, X.; Li, Y.; Liu, Q.; Zhu, X. Automatic cell nuclei segmentation and classification of breast cancer histopathology images. Signal Process. 2016, 122, 1–13. [Google Scholar] [CrossRef]
  36. Arteta, C.; Lempitsky, V.; Noble, J.A.; Zisserman, A. Detecting overlapping instances in microscopy images using extremal region trees. Med. Image Anal. 2016, 27, 3–16. [Google Scholar] [CrossRef]
  37. Brieu, N.; Schmidt, G. Learning Size Adaptive Local Maxima Selection for Robust Nuclei Detection in Histopathology Images. In Proceedings of the IEEE 14th International Symposium on Biomedical Imaging, Melbourne, VIC, Australia, 18–21 April 2017; pp. 937–941. [Google Scholar]
  38. Song, J.; Xiao, L.; Molaei, M.; Lian, Z. Multi-layer boosting sparse convolutional model for generalized nuclear segmentation from histopathology images. Knowl. Based Syst. 2019, 176, 40–53. [Google Scholar] [CrossRef]
  39. Ballarò, B.; Florena, A.M.; Franco, V.; Tegolo, D.; Tripodo, C.; Valenti, C. An automated image analysis methodology for classifying megakaryocytes in chronic myeloproliferative disorders. Med. Image Anal. 2008, 12, 703–712. [Google Scholar] [CrossRef]
  40. Petushi, S.; Zhang, J.; Milutinovic, A.; Breen, D.E.; Garcia, F.U. Image-based histologic grade estimation using stochastic geometry analysis. In Proceedings of the Progress in Biomedical Optics and Imaging- Proceedings of SPIE, Orlando, FL, USA, 9 March 2011; Volume 7963. [Google Scholar] [CrossRef]
  41. Madabhushi, A.; Agner, S.; Basavanhally, A.; Doyle, S.; Lee, G. Computer-aided prognosis: Predicting patient and disease outcome via quantitative fusion of multi-scale, multi-modal data. Comput. Med. Imaging Graph. 2011, 35, 506–514. [Google Scholar] [CrossRef]
  42. Song, J.W.; Lee, J.H.; Choi, J.H.; Chun, S.J. Automatic differential diagnosis of pancreatic serous and mucinous cystadenomas based on morphological features. Comput. Biol. Med. 2013, 43, 1–15. [Google Scholar] [CrossRef]
  43. Gorelick, L.; Veksler, O.; Gaed, M.; Gomez, J.A.; Moussa, M.; Bauman, G.; Fenster, A.; Ward, A.D. Prostate histopathology: Learning tissue component histograms for cancer detection and classification. IEEE Trans. Med. Imaging 2013, 32, 1804–1818. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  44. Filipczuk, P.; Krawczyk, B.; Woźniak, M. Classifier ensemble for an effective cytological image analysis. Pattern Recognit. Lett. 2013, 34, 1748–1757. [Google Scholar] [CrossRef]
  45. Ozolek, J.A.; Tosun, A.B.; Wang, W.; Chen, C.; Kolouri, S.; Basu, S.; Huang, H.; Rohde, G.K. Accurate diagnosis of thyroid follicular lesions from nuclear morphology using supervised learning. Med. Image Anal. 2014, 18, 772–780. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  46. Fukuma, K.; Prasath, V.B.S.; Kawanaka, H.; Aronow, B.J.; Takase, H. A Study on Nuclei Segmentation, Feature Extraction and Disease Stage Classification for Human Brain Histopathological Images. Procedia Comput. Sci. 2016, 96, 1202–1210. [Google Scholar] [CrossRef] [Green Version]
  47. Loeffler, M.; Greulich, L.; Scheibe, P.; Kahl, P.; Shaikhibrahim, Z.; Braumann, U.D.; Kuska, J.P.; Wernert, N. Classifying Prostate Cancer Malignancy by Quantitative Histomorphometry. J. Urol. 2012, 187, 1867–1875. [Google Scholar] [CrossRef]
  48. Marugame, A.; Kiyuna, T.; Ogura, M.; Saito, A. Categorization of HE stained breast tissue samples at low magnification by nuclear aggregations. In Proceedings of the IFMBE Proceedings, Munich, Germany, 7–12 September 2009; Volume 25, pp. 173–176. [Google Scholar] [CrossRef]
  49. Osborne, J.D.; Gao, S.; Chen, W.b.; Andea, A.; Zhang, C. Machine Classification of Melanoma and Nevi from Skin Lesions. In Proceedings of the ACM Symposium on Applied Computing, TaiChung, Taiwan, 21–24 March 2011; pp. 100–105. [Google Scholar] [CrossRef]
  50. Kwak, J.T.; Hewitt, S.M. Multiview boosting digital pathology analysis of prostate cancer. Comput. Methods Programs Biomed. 2017, 142, 91–99. [Google Scholar] [CrossRef] [PubMed]
  51. Olgun, G.; Sokmensuer, C.; Gunduz-Demir, C. Local object patterns for the representation and classification of colon tissue images. IEEE J. Biomed. Health Inform. 2014, 18, 1390–1396. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  52. Kuse, M.; Sharma, T.; Gupta, S. A classification scheme for lymphocyte segmentation in H&E stained histology images. Lect. Notes Comput. Sci. 2010, 6388, 235–243. [Google Scholar] [CrossRef]
  53. Caicedo, J.C.; González, F.A.; Romero, E. Content-based histopathology image retrieval using a kernel-based semantic annotation framework. J. Biomed. Inform. 2011, 44, 519–528. [Google Scholar] [CrossRef] [Green Version]
54. Fernández-Carrobles, M.M.; Bueno, G.; Déniz, O.; Salido, J.; García-Rojo, M.; González-López, L. Frequential versus spatial colour textons for breast TMA classification. Comput. Med. Imaging Graph. 2015, 42, 25–37. [Google Scholar] [CrossRef]
  55. Peyret, R.; Bouridane, A.; Khelifi, F.; Tahir, M.A.; Al-Maadeed, S. Automatic classification of colorectal and prostatic histologic tumor images using multiscale multispectral local binary pattern texture features and stacked generalization. Neurocomputing 2018, 275, 83–93. [Google Scholar] [CrossRef]
  56. Bruno, D.O.T.; do Nascimento, M.Z.; Ramos, R.P.; Batista, V.R.; Neves, L.A.; Martins, A.S. LBP operators on curvelet coefficients as an algorithm to describe texture in breast cancer tissues. Expert Syst. Appl. 2016, 55, 329–340. [Google Scholar] [CrossRef] [Green Version]
  57. Phoulady, H.A.; Zhou, M.; Goldgof, D.B.; Hall, L.O.; Mouton, P.R. Automatic quantification and classification of cervical cancer via Adaptive Nucleus Shape Modeling. In Proceedings of the IEEE International Conference on Image Processing, Phoenix, AZ, USA, 25–28 September 2016; pp. 2658–2662. [Google Scholar] [CrossRef]
  58. Reis, S.; Gazinska, P.; Hipwell, J.H.; Mertzanidou, T.; Naidoo, K.; Williams, N.; Pinder, S.; Hawkes, D.J. Automated Classification of Breast Cancer Stroma Maturity from Histological Images. IEEE Trans. Biomed. Eng. 2017, 64, 2344–2352. [Google Scholar] [CrossRef]
  59. Gertych, A.; Ing, N.; Ma, Z.; Fuchs, T.J.; Salman, S.; Mohanty, S.; Bhele, S.; Velásquez-Vacca, A.; Amin, M.B.; Knudsen, B.S. Machine learning approaches to analyze histological images of tissues from radical prostatectomies. Comput. Med. Imaging Graph. 2015, 46, 197–208. [Google Scholar] [CrossRef] [Green Version]
  60. Balazsi, M.; Blanco, P.; Zoroquiain, P.; Levine, M.D.; Burnier, M.N., Jr. Invasive ductal breast carcinoma detector that is robust to image magnification in whole digital slides. J. Med. Imaging 2016, 3. [Google Scholar] [CrossRef] [Green Version]
  61. Atupelage, C.; Nagahashi, H.; Yamaguchi, M.; Abe, T.; Hashiguchi, A.; Sakamoto, M. Computational grading of hepatocellular carcinoma using multifractal feature description. Comput. Med. Imaging Graph. 2013, 37, 61–71. [Google Scholar] [CrossRef]
62. Huang, C.H.; Veillard, A.; Roux, L.; Loménie, N.; Racoceanu, D. Time-efficient sparse analysis of histopathological whole slide images. Comput. Med. Imaging Graph. 2011, 35, 579–591. [Google Scholar] [CrossRef] [Green Version]
  63. Noroozi, N.; Zakerolhosseini, A. Computer assisted diagnosis of basal cell carcinoma using Z-transform features. J. Vis. Commun. Image Represent. 2016, 40, 128–148. [Google Scholar] [CrossRef]
  64. Wan, T.; Zhang, W.; Zhu, M.; Chen, J.; Achim, A.; Qin, Z. Automated mitosis detection in histopathology based on non-gaussian modeling of complex wavelet coefficients. Neurocomputing 2017, 237, 291–303. [Google Scholar] [CrossRef] [Green Version]
  65. Chan, A.; Tuszynski, J.A. Automatic prediction of tumour malignancy in breast cancer with fractal dimension. R. Soc. Open Sci. 2016, 3. [Google Scholar] [CrossRef] [Green Version]
  66. Niazi, M.K.K.; Parwani, A.V.; Gurcan, M.N. Computer-Assisted bladder cancer grading: α-shapes for color space decomposition. In Proceedings of the Progress in Biomedical Optics and Imaging—Proceedings of SPIE, San Diego, CA, USA, 23 March 2016; Volume 9791. [Google Scholar] [CrossRef]
  67. Spanhol, F.A.; Oliveira, L.S.; Cavalin, P.R.; Petitjean, C.; Heutte, L. Deep features for breast cancer histopathological image classification. In Proceedings of the 2017 IEEE International Conference on Systems, Man, and Cybernetics, Banff, AB, Canada, 5–8 October 2017; pp. 1868–1873. [Google Scholar]
  68. Vo, D.M.; Nguyen, N.Q.; Lee, S.W. Classification of breast cancer histology images using incremental boosting convolution networks. Inf. Sci. 2019, 482, 123–138. [Google Scholar] [CrossRef]
  69. George, K.; Faziludeen, S.; Sankaran, P.; Paul, J.K. Deep Learned Nucleus Features for Breast Cancer Histopathological Image Analysis based on Belief Theoretical Classifier Fusion. In Proceedings of the IEEE Region 10 Conference (TENCON), Kochi, India, 17–20 October 2019; pp. 344–349. [Google Scholar]
  70. Leo, P.; Lee, G.; Shih, N.N.C.; Elliott, R.; Feldman, M.D.; Madabhushi, A. Evaluating stability of histomorphometric features across scanner and staining variations: Prostate cancer diagnosis from whole slide images. J. Med. Imaging 2016, 3. [Google Scholar] [CrossRef] [Green Version]
  71. Yu, K.H.; Zhang, C.; Berry, G.J.; Altman, R.B.; Re, C.; Rubin, D.L.; Snyder, M. Predicting non-small cell lung cancer prognosis by fully automated microscopic pathology image features. Nat. Commun. 2016, 7. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  72. Caicedo, J.C.; Gonzalez, F.A.; Romero, E. A semantic content-based retrieval method for histopathology images. Inf. Retr. Technol. 2008, 4993, 51–60. [Google Scholar]
  73. Pang, W.; Jiang, H.; Li, S. Sparse Contribution Feature Selection and Classifiers Optimized by Concave-Convex Variation for HCC Image Recognition. BioMed Res. Int. 2017, 2017. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  74. Kruk, M.; Kurek, J.; Osowski, S.; Koktysz, R.; Swiderski, B.; Markiewicz, T. Ensemble of classifiers and wavelet transformation for improved recognition of Fuhrman grading in clear-cell renal carcinoma. Biocybern. Biomed. Eng. 2017, 37, 357–364. [Google Scholar] [CrossRef]
  75. Basavanhally, A.; Ganesan, S.; Feldman, M.; Shih, N.; Mies, C.; Tomaszewski, J.; Madabhushi, A. Multi-Field-of-View Framework for Distinguishing Tumor Grade in ER+ Breast Cancer From Entire Histopathology Slides. IEEE Trans. Biomed. Eng. 2013, 60, 2089–2099. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  76. Tashk, A.; Helfroush, M.S.; Danyali, H.; Akbarzadeh-jahromi, M. Automatic detection of breast cancer mitotic cells based on the combination of textural, statistical and innovative mathematical features. Appl. Math. Model. 2015, 39, 6165–6182. [Google Scholar] [CrossRef]
  77. Cruz-Roa, A.; Caicedo, J.C.; González, F.A. Visual pattern mining in histology image collections using bag of features. Artif. Intell. Med. 2011, 52, 91–106. [Google Scholar] [CrossRef]
  78. Orlov, N.V.; Chen, W.W.; Eckley, D.M.; Macura, T.J.; Shamir, L.; Jaffe, E.S.; Goldberg, I.G. Automatic Classification of Lymphoma Images With Transform-Based Global Features. IEEE Trans. Inf. Technol. Biomed. 2010, 14, 1003–1013. [Google Scholar] [CrossRef] [Green Version]
  79. De, S.; Stanley, R.J.; Lu, C.; Long, R.; Antani, S.; Thoma, G.; Zuna, R. A fusion-based approach for uterine cervical cancer histology image classification. Comput. Med. Imaging Graph. 2013, 37, 475–487. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  80. Vanderbeck, S.; Bockhorst, J.; Komorowski, R.; Kleiner, D.E.; Gawrieh, S. Automatic classification of white regions in liver biopsies by supervised machine learning. Hum. Pathol. 2014, 45, 785–792. [Google Scholar] [CrossRef] [PubMed]
  81. Kandemir, M.; Feuchtinger, A.; Walch, A.; Hamprecht, F.A. Digital pathology: Multiple instance learning can detect Barrett’s cancer. In Proceedings of the IEEE 11th International Symposium on Biomedical Imaging, Beijing, China, 29 April–2 May 2014; pp. 1348–1351. [Google Scholar] [CrossRef]
82. Coatelen, J.; Albouy-Kissi, A.; Albouy-Kissi, B.; Coton, J.P.; Sifre, L.; Joubert-Zakeyh, J.; Dechelotte, P.; Abergel, A. A feature selection based framework for histology image classification using global and local heterogeneity quantification. In Proceedings of the 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Chicago, IL, USA, 26–30 August 2014; pp. 1937–1940. [Google Scholar] [CrossRef]
  83. Coatelen, J.; Albouy-Kissi, A.; Albouy-Kissi, B.; Coton, J.P.; Maunier-Sifre, L.; Joubert-Zakeyh, J.; Dechelotte, P.; Abergel, A. A subset-search and ranking based feature-selection for histology image classification using global and local quantification. In Proceedings of the International Conference on Image Processing Theory, Tools and Applications (IPTA), Orleans, France, 10–13 November 2015; pp. 313–318. [Google Scholar] [CrossRef]
  84. Michail, E.; Dimitropoulos, K.; Koletsa, T.; Kostopoulos, I.; Grammalidis, N. Morphological and textural analysis of centroblasts in low-thickness sliced tissue biopsies of follicular lymphoma. In Proceedings of the 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Chicago, IL, USA, 26–30 August 2014; pp. 3374–3377. [Google Scholar]
  85. Das, D.K.; Mitra, P.; Chakraborty, C.; Chatterjee, S.; Maiti, A.K.; Bose, S. Computational approach for mitotic cell detection and its application in oral squamous cell carcinoma. Multidimens. Syst. Signal Process. 2017, 28, 1031–1050. [Google Scholar] [CrossRef]
  86. Kong, J.; Sertel, O.; Shimada, H.; Boyer, K.L.; Saltz, J.H.; Gurcan, M.N. Computer-aided evaluation of neuroblastoma on whole-slide histology images: Classifying grade of neuroblastic differentiation. Pattern Recognit. 2009, 42, 1080–1092. [Google Scholar] [CrossRef] [Green Version]
  87. Malon, C.; Brachtel, E.; Cosatto, E.; Graf, H.P.; Kurata, A.; Kuroda, M.; Meyer, J.S.; Saito, A.; Wu, S.; Yagi, Y. Mitotic figure recognition: Agreement among pathologists and computerized detector. Anal. Cell. Pathol. 2012, 35, 97–100. [Google Scholar] [CrossRef]
  88. Guo, P.; Banerjee, K.; Stanley, R.J.; Long, R.; Antani, S.; Thoma, G.; Zuna, R.; Frazier, S.R.; Moss, R.H.; Stoecker, W.V. Nuclei-Based Features for Uterine Cervical Cancer Histology Image Analysis With Fusion-Based Classification. IEEE J. Biomed. Health Inform. 2016, 20, 1595–1607. [Google Scholar] [CrossRef]
  89. Harai, Y.; Tanaka, T. Automatic Diagnosis Support System Using Nuclear and Luminal Features. In Proceedings of the International Conference on Digital Image Computing: Techniques and Applications, Adelaide, SA, Australia, 23–25 November 2015; pp. 1–8. [Google Scholar] [CrossRef]
  90. Peikari, M.; Salama, S.; Nofech-Mozes, S.; Martel, A.L. Automatic cellularity assessment from post-treated breast surgical specimens. Cytom. Part A 2017, 91, 1078–1087. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  91. BenTaieb, A.; Li-Chang, H.; Huntsman, D.; Hamarneh, G. A structured latent model for ovarian carcinoma subtyping from histopathology slides. Med. Image Anal. 2017, 39, 194–205. [Google Scholar] [CrossRef] [PubMed]
  92. Zhang, R.; Shen, J.; Wei, F.; Li, X.; Sangaiah, A.K. Medical image classification based on multi-scale non-negative sparse coding. Artif. Intell. Med. 2017, 83, 44–51. [Google Scholar] [CrossRef] [PubMed]
  93. Korkmaz, S.A.; Poyraz, M. Least Square Support Vector Machine and Minumum Redundacy Maximum Relavance for Diagnosis of Breast Cancer from Breast Microscopic Images. Procedia Soc. Behav. Sci. 2015, 174, 4026–4031. [Google Scholar] [CrossRef] [Green Version]
  94. Mete, M.; Topaloglu, U. Statistical comparison of color model-classifier pairs in hematoxylin and eosin stained histological images. In Proceedings of the IEEE Symposium on Computational Intelligence in Bioinformatics and Computational Biology, Nashville, TN, USA, 30 March–2 April 2009; pp. 284–291. [Google Scholar] [CrossRef]
  95. Sidiropoulos, K.; Glotsos, D.; Kostopoulos, S.; Ravazoula, P.; Kalatzis, I.; Cavouras, D.; Stonham, J. Real time decision support system for diagnosis of rare cancers, trained in parallel, on a graphics processing unit. Comput. Biol. Med. 2012, 42, 376–386. [Google Scholar] [CrossRef]
  96. Michail, E.; Kornaropoulos, E.N.; Dimitropoulos, K.; Grammalidis, N.; Koletsa, T.; Kostopoulos, I. Detection of centroblasts in H&E stained images of follicular lymphoma. In Proceedings of the 22nd Signal Processing and Communications Applications Conference, Trabzon, Turkey, 23–25 April 2014; pp. 2319–2322. [Google Scholar] [CrossRef]
  97. Beevi, S.K.; Nair, M.S.; Bindu, G.R. Detection of Mitotic Nuclei in Breast Histopathology Images using Localized ACM and Random Kitchen Sink based Classifier. In Proceedings of the 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Orlando, FL, USA, 16–20 August 2016; pp. 2435–2439. [Google Scholar]
  98. Jothi, J.A.A.; Rajam, V.M.A. Effective segmentation and classification of thyroid histopathology images. Appl. Soft Comput. 2016, 46, 652–664. [Google Scholar] [CrossRef]
  99. Awan, R.; Aloraidi, N.; Qidwai, U.; Rajpoot, N. How divided is a cell? Eigenphase nuclei for classification of mitotic phase in cancer histology images. In Proceedings of the IEEE-EMBS International Conference on Biomedical and Health Informatics, Las Vegas, NV, USA, 24–27 February 2016; pp. 70–73. [Google Scholar] [CrossRef]
  100. Barker, J.; Hoogi, A.; Depeursinge, A.; Rubin, D.L. Automated classification of brain tumor type in whole-slide digital pathology images using local representative tiles. Med. Image Anal. 2016, 30, 60–71. [Google Scholar] [CrossRef] [Green Version]
  101. Kandemir, M.; Hamprecht, F.A. Computer-aided diagnosis from weak supervision: A benchmarking study. Comput. Med. Imaging Graph. 2015, 42, 44–50. [Google Scholar] [CrossRef]
102. Cosatto, E.; Laquerre, P.F.; Malon, C.; Graf, H.P.; Saito, A.; Kiyuna, T.; Marugame, A.; Kamijo, K. Automated gastric cancer diagnosis on H&E-stained sections; training a classifier on a large scale with multiple instance machine learning. In Proceedings of the Progress in Biomedical Optics and Imaging—Proceedings of SPIE, Orlando, FL, USA, 29 March 2013; Volume 8676. [Google Scholar] [CrossRef]
  103. Xu, Y.; Zhu, J.Y.; Chang, E.I.C.; Lai, M.; Tu, Z. Weakly supervised histopathology cancer image segmentation and classification. Med. Image Anal. 2014, 18, 591–604. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  104. Sudharshan, P.; Petitjean, C.; Spanhol, F.; Oliveira, L.E.; Heutte, L.; Honeine, P. Multiple instance learning for histopathological breast cancer image classification. Expert Syst. Appl. 2019, 117, 103–111. [Google Scholar] [CrossRef]
  105. Irshad, H.; Gouaillard, A.; Roux, L.; Racoceanu, D. Multispectral band selection and spatial characterization: Application to mitosis detection in breast cancer histopathology. Comput. Med. Imaging Graph. 2014, 38, 390–402. [Google Scholar] [CrossRef] [Green Version]
  106. Homeyer, A.; Schenk, A.; Arlt, J.; Dahmen, U.; Dirsch, O.; Hahn, H.K. Practical quantification of necrosis in histological whole-slide images. Comput. Med. Imaging Graph. 2013, 37, 313–322. [Google Scholar] [CrossRef] [PubMed]
  107. Khan, S.U.; Islam, N.; Jan, Z.; Din, I.U.; Khan, A.; Faheem, Y. An e-health care services framework for the detection and classification of breast cancer in breast cytology images as an IoMT application. Future Gener. Comput. Syst. 2019, 98, 286–296. [Google Scholar] [CrossRef]
  108. Kurmi, Y.; Chaurasia, V.; Ganesh, N.; Kesharwani, A. Microscopic images classification for cancer diagnosis. Signal Image Video Process. 2019, 14, 665–673. [Google Scholar] [CrossRef]
  109. Daskalakis, A.; Kostopoulos, S.; Spyridonos, P.; Glotsos, D.; Ravazoula, P.; Kardari, M.; Kalatzis, I.; Cavouras, D.; Nikiforidis, G. Design of a multi-classifier system for discriminating benign from malignant thyroid nodules using routinely H&E-stained cytological images. Comput. Biol. Med. 2008, 38, 196–203. [Google Scholar] [CrossRef]
  110. Meng, T.; Lin, L.; Shyu, M.L.; Chen, S.C. Histology Image Classification Using Supervised Classification and Multimodal Fusion. In Proceedings of the IEEE International Symposium on Multimedia, Taichung, Taiwan, 13–15 December 2010; pp. 145–152. [Google Scholar] [CrossRef] [Green Version]
  111. Wang, C.W.; Yu, C.P. Automated morphological classification of lung cancer subtypes using H&E tissue images. Mach. Vis. Appl. 2013, 24, 1383–1391. [Google Scholar] [CrossRef]
  112. Vink, J.P.; Van Leeuwen, M.B.; Van Deurzen, C.H.M.; De Haan, G. Efficient nucleus detector in histopathology images. J. Microsc. 2013, 249, 124–135. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  113. Phoulady, H.A.; Chaudhury, B.; Goldgof, D.; Hall, L.O.; Mouton, P.R.; Hakam, A.; Siegel, E.M. Experiments with large ensembles for segmentation and classification of cervical cancer biopsy images. In Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics, San Diego, CA, USA, 5–8 October 2014; pp. 870–875. [Google Scholar] [CrossRef]
  114. Di Franco, M.D.; Reynolds, H.L.M.; Mitchell, C.; Williams, S.; Allan, P.; Haworth, A. Performance assessment of automated tissue characterization for prostate H&E stained histopathology. In Proceedings of the Medical Imaging 2015: Digital Pathology-Proceedings of SPIE, Orlando, FL, USA, 27 April 2015; Volume 9420. [Google Scholar] [CrossRef]
  115. Albashish, D.; Sahran, S.; Abdullah, A.; Adam, A.; Abd Shukor, N.; Pauzi, S.H.M. Multi-scoring Feature selection method based on SVM-RFE for prostate cancer diagnosis. In Proceedings of the 5th International Conference on Electrical Engineering and Informatics, Bali, Indonesia, 10–11 August 2015; pp. 682–686. [Google Scholar]
  116. Huang, C.H.; Kalaw, E.M. Automated classification for pathological prostate images using AdaBoost-based Ensemble Learning. In Proceedings of the IEEE Symposium Series on Computational Intelligence, Athens, Greece, 6–9 December 2016; pp. 1–4. [Google Scholar] [CrossRef]
  117. Fernández-Carrobles, M.M.; Serrano, I.; Bueno, G.; Déniz, O. Bagging Tree Classifier and Texture Features for Tumor Identification in Histological Images. Procedia Comput. Sci. 2016, 90, 99–106. [Google Scholar] [CrossRef] [Green Version]
  118. Romo-Bucheli, D.; Corredor, G.; Garcia-Arteaga, J.D.; Arias, V.; Romero, E. Nuclei Graph Local Features for Basal Cell Carcinoma Classification in Whole Slide Images. In Proceedings of the 12th International Symposium on Medical Information Processing and Analysis, Tandil, Argentina, 26 January 2017; Volume 10160. [Google Scholar] [CrossRef]
  119. DiFranco, M.D.; O’Hurley, G.; Kay, E.W.; Watson, R.W.G.; Cunningham, P. Ensemble based system for whole-slide prostate cancer probability mapping using color texture features. Comput. Med. Imaging Graph. 2011, 35, 629–645. [Google Scholar] [CrossRef]
  120. Wright, A.I.; Magee, D.; Quirke, P.; Treanor, D. Incorporating Local and Global Context for Better Automated Analysis of Colorectal Cancer on Digital Pathology Slides. Procedia Comput. Sci. 2016, 90, 125–131. [Google Scholar] [CrossRef] [Green Version]
  121. Valkonen, M.; Kartasalo, K.; Liimatainen, K.; Nykter, M.; Latonen, L.; Ruusuvuori, P. Metastasis detection from whole slide images using local features and random forests. Cytom. Part A 2017, 91A, 555–565. [Google Scholar] [CrossRef]
  122. Cruz-Roa, A.; Basavanhally, A.; Gonzalez, F.; Gilmore, H.; Feldman, M.; Ganesan, S.; Shih, N.; Tomaszewski, J.; Madabhushi, A. Automatic detection of invasive ductal carcinoma in whole slide images with Convolutional Neural Networks. In Proceedings of the Medical Imaging 2014: Digital Pathology-Proceedings of SPIE, San Diego, CA, USA, 20 March 2014; Volume 9041. [Google Scholar] [CrossRef]
  123. de Matos, J.; de Souza Britto, A., Jr.; de Oliveira, L.E.S.; Koerich, A.L. Texture CNN for Histopathological Image Classification. In Proceedings of the 32nd IEEE International Symposium on Computer-Based Medical Systems (CBMS), Cordoba, Spain, 5–7 June 2019; pp. 580–583. [Google Scholar] [CrossRef] [Green Version]
  124. Ataky, S.T.M.; de Matos, J.; de Souza Britto, A., Jr.; de Oliveira, L.E.S.; Koerich, A.L. Data Augmentation for Histopathological Images Based on Gaussian-Laplacian Pyramid Blending. In Proceedings of the International Joint Conference on Neural Networks, Glasgow, UK, 19–24 July 2020; pp. 1–8. [Google Scholar]
  125. De Matos, J.; de Souza Britto, A., Jr.; Oliveira, L.E.S.; Koerich, A.L. Double Transfer Learning for Breast Cancer Histopathologic Image Classification. In Proceedings of the International Joint Conference on Neural Networks, Budapest, Hungary, 14–19 July 2019; pp. 1–8. [Google Scholar] [CrossRef] [Green Version]
  126. Lecun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef] [Green Version]
127. Kainz, P.; Pfeiffer, M.; Urschler, M. Segmentation and classification of colon glands with deep convolutional neural networks and total variation regularization. PeerJ 2017, 5, e3874. [Google Scholar] [CrossRef]
  128. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. In Advances in Neural Information Processing Systems 25; Curran Associates, Inc.: Red Hook, NY, USA, 2012; pp. 1097–1105. [Google Scholar]
  129. Stanitsas, P.; Cherian, A.; Li, X.; Truskinovsky, A.; Morellas, V.; Papanikolopoulos, N. Evaluation of feature descriptors for cancerous tissue recognition. In Proceedings of the 23rd International Conference on Pattern Recognition, Cancun, Mexico, 4–8 December 2016; pp. 1490–1495. [Google Scholar] [CrossRef]
  130. Spanhol, F.A.; Oliveira, L.S.; Petitjean, C.; Heutte, L. Breast cancer histopathological image classification using Convolutional Neural Networks. In Proceedings of the International Joint Conference on Neural Networks, Vancouver, BC, Canada, 24–29 July 2016; pp. 2560–2567. [Google Scholar] [CrossRef]
  131. Sharma, H.; Zerbe, N.; Klempert, I.; Hellwich, O.; Hufnagl, P. Deep convolutional neural networks for automatic classification of gastric carcinoma using whole slide images in digital histopathology. Comput. Med. Imaging Graph. 2017, 61, 2–13. [Google Scholar] [CrossRef]
  132. Budak, Ü.; Cömert, Z.; Rashid, Z.N.; Şengür, A.; Çıbuk, M. Computer-aided diagnosis system combining FCN and Bi-LSTM model for efficient breast cancer detection from histopathological images. Appl. Soft Comput. 2019, 85, 105765. [Google Scholar] [CrossRef]
  133. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.E.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, CVPR, Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar] [CrossRef] [Green Version]
134. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the Inception Architecture for Computer Vision. arXiv 2015, arXiv:1512.00567. [Google Scholar]
  135. Szegedy, C.; Ioffe, S.; Vanhoucke, V.; Alemi, A.A. Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; pp. 4278–4284. [Google Scholar]
  136. Li, W.; Manivannan, S.; Akbar, S.; Zhang, J.; Trucco, E.; McKenna, S.J. Gland segmentation in colon histology images using hand-crafted features and convolutional neural networks. In Proceedings of the IEEE 13th International Symposium on Biomedical Imaging, Prague, Czech Republic, 13–16 April 2016; pp. 1405–1408. [Google Scholar] [CrossRef] [Green Version]
  137. Yan, R.; Ren, F.; Wang, Z.; Wang, L.; Zhang, T.; Liu, Y.; Rao, X.; Zheng, C.; Zhang, F. Breast cancer histopathological image classification using a hybrid deep neural network. Methods 2019, 173, 52–60. [Google Scholar] [CrossRef] [PubMed]
  138. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar] [CrossRef] [Green Version]
  139. Khosravi, P.; Kazemi, E.; Imielinski, M.; Elemento, O.; Hajirasouliha, I. Deep Convolutional Neural Networks Enable Discrimination of Heterogeneous Digital Pathology Images. EBioMedicine 2018, 27, 317–328. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  140. Vizcarra, J.; Place, R.; Tong, L.; Gutman, D.; Wang, M.D. Fusion In Breast Cancer Histology Classification. In Proceedings of the 10th ACM International Conference on Bioinformatics, Computational Biology and Health Informatics, Niagara Falls, NY, USA, 7–10 September 2019; pp. 485–493. [Google Scholar]
  141. Zerhouni, E.; Lányi, D.; Viana, M.; Gabrani, M. Wide residual networks for mitosis detection. In Proceedings of the IEEE 14th International Symposium on Biomedical Imaging, Melbourne, VIC, Australia, 18–21 April 2017; pp. 924–928. [Google Scholar] [CrossRef]
  142. Gandomkar, Z.; Brennan, P.C.; Mello-Thoms, C. MuDeRN: Multi-category classification of breast histopathological image using deep residual networks. Artif. Intell. Med. 2018, 88, 14–24. [Google Scholar] [CrossRef] [PubMed]
  143. Brancati, N.; De Pietro, G.; Frucci, M.; Riccio, D. A Deep Learning Approach for Breast Invasive Ductal Carcinoma Detection and Lymphoma Multi-Classification in Histological Images. IEEE Access 2019, 7, 44709–44720. [Google Scholar] [CrossRef]
  144. Talo, M. Automated classification of histopathology images using transfer learning. Artif. Intell. Med. 2019, 101, 101743. [Google Scholar] [CrossRef] [Green Version]
  145. Bejnordi, B.E.; Lin, J.; Glass, B.; Mullooly, M.; Gierach, G.L.; Sherman, M.E.; Karssemeijer, N.; van der Laak, J.; Beck, A.H. Deep learning-based assessment of tumor-associated stroma for diagnosing breast cancer in histopathology images. In Proceedings of the IEEE 14th International Symposium on Biomedical Imaging, Melbourne, VIC, Australia, 18–21 April 2017; pp. 929–932. [Google Scholar] [CrossRef]
146. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. In Proceedings of the 3rd International Conference on Learning Representations, San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
  147. Xu, Y.; Li, Y.; Wang, Y.; Liu, M.; Fan, Y.; Lai, M.; Chang, E.I.C. Gland Instance Segmentation Using Deep Multichannel Neural Networks. IEEE Trans. Biomed. Eng. 2017, 64, 2901–2912. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  148. Kumar, A.; Singh, S.K.; Saxena, S.; Lakshmanan, K.; Sangaiah, A.K.; Chauhan, H.; Shrivastava, S.; Singh, R.K. Deep feature learning for histopathological image classification of canine mammary tumors and human breast cancer. Inf. Sci. 2020, 508, 405–421. [Google Scholar] [CrossRef]
  149. Huang, G.; Liu, Z.; van der Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2261–2269. [Google Scholar] [CrossRef] [Green Version]
  150. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv 2017, arXiv:1704.04861. [Google Scholar]
  151. Kassani, S.H.; Kassani, P.H.; Wesolowski, M.J.; Schneider, K.A.; Deters, R. Classification of Histopathological Biopsy Images Using Ensemble of Deep Learning Networks. In Proceedings of the 29th Annual International Conference on Computer Science and Software Engineering (CASCON ’19), Markham, ON, Canada, 4–6 November 2019; pp. 92–99. [Google Scholar]
  152. Yang, H.; Kim, J.Y.; Kim, H.; Adhikari, S.P. Guided Soft Attention Network for Classification of Breast Cancer Histopathology Images. IEEE Trans. Med. Imaging 2019, 39, 1306–1315. [Google Scholar] [CrossRef]
  153. Bayramoglu, N.; Kannala, J.; Heikkila, J. Deep Learning for Magnification Independent Breast Cancer Histopathology Image Classification. In Proceedings of the 23rd International Conference on Pattern Recognition, Cancun, Mexico, 4–8 December 2016; pp. 2440–2445. [Google Scholar]
  154. Albarqouni, S.; Baur, C.; Achilles, F.; Belagiannis, V.; Demirci, S.; Navab, N. AggNet: Deep Learning From Crowds for Mitosis Detection in Breast Cancer Histology Images. IEEE Trans. Med. Imaging 2016, 35, 1313–1321. [Google Scholar] [CrossRef]
  155. Ciompi, F.; Geessink, O.; Bejnordi, B.E.; de Souza, G.S.; Baidoshvili, A.; Litjens, G.; van Ginneken, B.; Nagtegaal, I.; van der Laak, J. The importance of stain normalization in colorectal tissue classification with convolutional networks. In Proceedings of the IEEE 14th International Symposium on Biomedical Imaging, Melbourne, VIC, Australia, 18–21 April 2017; pp. 160–163. [Google Scholar] [CrossRef] [Green Version]
  156. Kwak, J.T.; Hewitt, S.M. Nuclear Architecture Analysis of Prostate Cancer via Convolutional Neural Networks. IEEE Access 2017, 5, 18526–18533. [Google Scholar] [CrossRef]
  157. Roy, K.; Banik, D.; Bhattacharjee, D.; Nasipuri, M. Patch-based system for Classification of Breast Histology images using deep learning. Comput. Med. Imaging Graph. 2019, 71, 90–103. [Google Scholar] [CrossRef]
  158. Gecer, B.; Aksoy, S.; Mercan, E.; Shapiro, L.G.; Weaver, D.L.; Elmore, J.G. Detection and classification of cancer in whole slide breast histopathology images using deep convolutional networks. Pattern Recognit. 2018, 84, 345–356. [Google Scholar] [CrossRef]
159. Wang, C.; Shi, J.; Zhang, Q.; Ying, S. Histopathological image classification with bilinear convolutional neural networks. In Proceedings of the 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Jeju, Korea, 11–15 July 2017; pp. 4050–4053. [Google Scholar] [CrossRef]
  160. Li, C.; Wang, X.; Liu, W.; Latecki, L.J.; Wang, B.; Huang, J. Weakly supervised mitosis detection in breast histopathology images using concentric loss. Med. Image Anal. 2019, 53, 165–178. [Google Scholar] [CrossRef]
  161. Hou, L.; Nguyen, V.; Kanevsky, A.B.; Samaras, D.; Kurc, T.M.; Zhao, T.; Gupta, R.R.; Gao, Y.; Chen, W.; Foran, D.; et al. Sparse autoencoder for unsupervised nucleus detection and representation in histopathology images. Pattern Recognit. 2019, 86, 188–200. [Google Scholar] [CrossRef] [PubMed]
  162. Sheikh, T.S.; Lee, Y.; Cho, M. Histopathological Classification of Breast Cancer Images Using a Multi-Scale Input and Multi-Feature Network. Cancers 2020, 12, 2031. [Google Scholar] [CrossRef] [PubMed]
  163. Saha, M.; Mukherjee, R.; Chakraborty, C. Computer-aided diagnosis of breast cancer using cytological images: A systematic review. Tissue Cell 2016, 48, 461–474. [Google Scholar] [CrossRef] [PubMed]
  164. Nawaz, S.; Yuan, Y. Computational pathology: Exploring the spatial dimension of tumor ecology. Cancer Lett. 2016, 380, 296–303. [Google Scholar] [CrossRef] [Green Version]
  165. Chen, J.M.; Li, Y.; Xu, J.; Gong, L.; Wang, L.W.; Liu, W.L.; Liu, J. Computer-aided prognosis on breast cancer with hematoxylin and eosin histopathology images: A review. Tumor Biol. 2017, 39. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  166. Robertson, S.; Azizpour, H.; Smith, K.; Hartman, J. Digital image analysis in breast pathology—From image processing techniques to artificial intelligence. Transl. Res. 2017. [Google Scholar] [CrossRef] [Green Version]
  167. Komura, D.; Ishikawa, S. Machine Learning Methods for Histopathological Image Analysis. Comput. Struct. Biotechnol. J. 2018. [Google Scholar] [CrossRef]
  168. Litjens, G.; Kooi, T.; Bejnordi, B.E.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; van der Laak, J.A.W.M.; van Ginneken, B.; Sánchez, C.I. A survey on deep learning in medical image analysis. Med. Image Anal. 2017, 42, 60–88. [Google Scholar] [CrossRef] [Green Version]
  169. Zhou, X.; Li, C.; Rahaman, M.M.; Yao, Y.; Ai, S.; Sun, C.; Wang, Q.; Zhang, Y.; Li, M.; Li, X.; et al. A Comprehensive Review for Breast Histopathology Image Analysis Using Classical and Deep Neural Networks. IEEE Access 2020, 8, 90931–90956. [Google Scholar] [CrossRef]
  170. Krithiga, R.; Geetha, P. Breast Cancer Detection, Segmentation and Classification on Histopathology Images Analysis: A Systematic Review. Arch. Computat. Methods Eng. 2020. [Google Scholar] [CrossRef]
  171. He, L.; Long, L.R.; Antani, S.; Thoma, G.R. Histology image analysis for carcinoma detection and grading. Comput. Methods Programs Biomed. 2012, 107, 538–556. [Google Scholar] [CrossRef] [Green Version]
  172. Irshad, H.; Veillard, A.; Roux, L.; Racoceanu, D. Methods for Nuclei Detection, Segmentation, and Classification in Digital Histopathology: A Review—Current Status and Future Potential. IEEE Rev. Biomed. Eng. 2014, 7, 97–114. [Google Scholar] [CrossRef]
  173. Deshmukh, B.S.; Mankar, V.H. Segmentation of Microscopic Images: A Survey. In Proceedings of the International Conference on Electronic Systems, Signal Processing and Computing Technologies, Nagpur, India, 9–11 January 2014; pp. 362–364. [Google Scholar] [CrossRef]
  174. Akhila, E.; Preethymol, B. Detection of malignant tissues: Analysis on segmentation of histology images. In Proceedings of the International Conference on Innovations in Information, Embedded and Communication Systems, Coimbatore, India, 19–20 March 2015; pp. 1–4. [Google Scholar] [CrossRef]
  175. Veta, M.; van Diest, P.J.; Willems, S.M.; Wang, H.; Madabhushi, A.; Cruz-Roa, A.; Gonzalez, F.; Larsen, A.B.L.; Vestergaard, J.S.; Dahl, A.B.; et al. Assessment of algorithms for mitosis detection in breast cancer histopathology images. Med. Image Anal. 2015, 20, 237–248. [Google Scholar] [CrossRef] [Green Version]
  176. Madabhushi, A.; Lee, G. Image analysis and machine learning in digital pathology: Challenges and opportunities. Med. Image Anal. 2016, 33, 170–175. [Google Scholar] [CrossRef] [Green Version]
  177. Cosma, G.; Brown, D.; Archer, M.; Khan, M.; Pockley, A.G. A survey on computational intelligence approaches for predictive modeling in prostate cancer. Expert Syst. Appl. 2017, 70, 1–19. [Google Scholar] [CrossRef] [Green Version]
  178. Tosta, T.A.A.; Neves, L.A.; do Nascimento, M.Z. Segmentation methods of H&E-stained histological images of lymphoma: A review. Inform. Med. Unlocked 2017, 9, 35–43. [Google Scholar]
  179. Cataldo, S.D.; Ficarra, E. Mining textural knowledge in biological images: Applications, methods and trends. Comput. Struct. Biotechnol. J. 2017, 15, 56–67. [Google Scholar] [CrossRef] [Green Version]
  180. Aswathy, M.A.; Jagannath, M. Detection of breast cancer on digital histopathology images: Present status and future possibilities. Inform. Med. Unlocked 2017, 8, 74–79. [Google Scholar] [CrossRef] [Green Version]
  181. Li, Z.; Zhang, X.; Müller, H.; Zhang, S. Large-scale retrieval for medical image analytics: A comprehensive review. Med. Image Anal. 2018, 43, 66–84. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  182. Gurcan, M.N.; Madabhushi, A.; Rajpoot, N. Pattern Recognition in Histopathological Images: An ICPR 2010 Contest. In Recognizing Patterns in Signals, Speech, Images and Videos; Springer: Berlin/Heidelberg, Germany, 2010; pp. 226–234. [Google Scholar]
  183. Shamir, L.; Orlov, N.; Eckley, D.; Macura, T.; Goldberg, I. IICBU Biological Image Repository. 2008. Available online: https://ome.grc.nia.nih.gov/iicbu2008/ (accessed on 16 August 2019).
184. Roux, L.; Racoceanu, D.; Loménie, N.; Kulikova, M.; Irshad, H.; Klossa, J.; Capron, F.; Genestie, C.; Le Naour, G.; Gurcan, M.N. Mitosis detection in breast cancer histological images: An ICPR 2012 contest. J. Pathol. Inform. 2013, 4, 8. [Google Scholar] [CrossRef]
  185. TCGA. The Cancer Genome Atlas Program. Available online: http://cancergenome.nih.gov/ (accessed on 16 August 2019).
  186. Roux, L. MITOS-ATYPIA-14-MITOS & ATYPIA 14 Contest Home Page. 2014. Available online: https://mitos-atypia-14.grand-challenge.org/ (accessed on 16 August 2019).
  187. Cancer Genome Atlas Research Network. Comprehensive molecular profiling of lung adenocarcinoma. Nature 2014, 511, 543–550. [Google Scholar] [CrossRef]
  188. Marinelli, R.J.; Montgomery, K.; Liu, C.L.; Shah, N.H.; Prapong, W.; Nitzberg, M.; Zachariah, Z.K.; Sherlock, G.J.; Natkunam, Y.; West, R.B.; et al. The Stanford Tissue Microarray Database. Nucleic Acids Res. 2008, 36, D871–D877. [Google Scholar] [CrossRef] [Green Version]
  189. Spanhol, F.A.; Oliveira, L.S.; Petitjean, C.; Heutte, L. A Dataset for Breast Cancer Histopathological Image Classification. IEEE Trans. Biomed. Eng. 2016, 63, 1455–1462. [Google Scholar] [CrossRef]
  190. Medical Image Computing and Computer-Assisted Intervention (MICCAI); Lecture Notes in Computer Science; Springer: Berlin, Germany, 2014; Volume 8673. [CrossRef] [Green Version]
  191. Saifuddin, S.R.; Devlies, W.; Santaolalla, A.; Cahill, F.; George, G.; Enting, D.; Rudman, S.; Cathcart, P.; Challacombe, B.; Dasgupta, P.; et al. King’s Health Partners’ Prostate Cancer Biobank (KHP PCaBB). BMC Cancer 2017, 17, 784. [Google Scholar] [CrossRef] [PubMed] [Green Version]
192. Mazo, C.; Trujillo, M.; Alegre, E.; Salazar, L. Banco de Imágenes Histológicas sobre el Sistema Cardiovascular Humano [Histological Image Bank of the Human Cardiovascular System]. 2016. Available online: http://biscar.univalle.edu.co/ (accessed on 16 August 2019).
  193. Bejnordi, B.E.; Veta, M.; van Diest, P.J.; van Ginneken, B.; Karssemeijer, N.; Litjens, G.; van der Laak, J.A.W.M.; The CAMELYON16 Consortium. Diagnostic Assessment of Deep Learning Algorithms for Detection of Lymph Node Metastases in Women With Breast Cancer. JAMA 2017, 318, 2199–2210. [Google Scholar] [CrossRef]
194. Kruk, M. Fuhrman Grades Nuclei. 2016. Available online: http://michalkruk.pl/FDataset.zip (accessed on 16 August 2019).
195. Kruk, M. Fuhrman Grades Images. 2016. Available online: http://michalkruk.pl/Images.zip (accessed on 16 August 2019).
  196. Araujo, T.; Aresta, G.; Castro, E.; Rouco, J.; Aguiar, P.; Eloy, C.; Polonia, A.; Campilho, A. Classification of breast cancer histology images using convolutional neural networks. PLoS ONE 2017, 12, e0177544. [Google Scholar] [CrossRef]
  197. Hafiane, A.; Bunyak, F.; Palaniappan, K. Clustering initiated multiphase active contours and robust separation of nuclei groups for tissue segmentation. In Proceedings of the 19th International Conference on Pattern Recognition, Tampa, FL, USA, 8–11 December 2008; pp. 1–4. [Google Scholar] [CrossRef]
  198. Doyle, S.; Feldman, M.; Tomaszewski, J.; Madabhushi, A. A Boosted Bayesian Multiresolution Classifier for Prostate Cancer Detection From Digitized Needle Biopsies. IEEE Trans. Biomed. Eng. 2012, 59, 1205–1218. [Google Scholar] [CrossRef]
  199. Monaco, J.P.; Tomaszewski, J.E.; Feldman, M.D.; Hagemann, I.; Moradi, M.; Mousavi, P.; Boag, A.; Davidson, C.; Abolmaesumi, P.; Madabhushi, A. High-throughput detection of prostate cancer in histological sections using probabilistic pairwise Markov models. Med. Image Anal. 2010, 14, 617–629. [Google Scholar] [CrossRef] [Green Version]
  200. Lee, G.; Doyle, S.; Monaco, J.; Madabhushi, A.; Feldman, M.D.; Master, S.R.; Tomaszewski, J.E. A knowledge representation framework for integration, classification of multi-scale imaging and non-imaging data: Preliminary results in predicting prostate cancer recurrence by fusing mass spectrometry and histology. In Proceedings of the 2009 IEEE International Symposium on Biomedical Imaging: From Nano to Macro, Boston, MA, USA, 28 June–1 July 2009; pp. 77–80. [Google Scholar] [CrossRef]
  201. Caicedo, J.C.; González, F.A.; Triana, E.; Romero, E. Design of a Medical Image Database with Content-Based Retrieval Capabilities. In Advances in Image and Video Technology; Springer: Berlin/Heidelberg, Germany, 2007; pp. 919–931. [Google Scholar]
  202. Basavanhally, A.; Ganesan, S.; Shih, N.; Mies, C.; Feldman, M.; Tomaszewski, J.; Madabhushi, A. A boosted classifier for integrating multiple fields of view: Breast cancer grading in histopathology. In Proceedings of the 2011 IEEE International Symposium on Biomedical Imaging: From Nano to Macro, Chicago, IL, USA, 30 March–2 April 2011; pp. 125–128. [Google Scholar] [CrossRef]
  203. Basavanhally, A.; Feldman, M.; Shih, N.; Mies, C.; Tomaszewski, J.; Ganesan, S.; Madabhushi, A. Multi-field-of-view strategy for image-based outcome prediction of multi-parametric estrogen receptor-positive breast cancer histopathology: Comparison to Oncotype DX. J. Pathol. Inform. 2011, 2, 1. [Google Scholar] [CrossRef]
  204. Saraswat, M.; Arya, K.; Sharma, H. Leukocyte segmentation in tissue images using differential evolution algorithm. Swarm Evol. Comput. 2013, 11, 46–54. [Google Scholar] [CrossRef]
205. Wang, L.W.; Yang, G.F.; Chen, J.W.; Yang, F.; Yuan, J.P.; Sun, S.; Chen, C.Q.; Hu, M.B.; Li, Y. A clinical database of breast cancer patients reveals distinctive clinico-pathological characteristics: A study from central China. Asian Pac. J. Cancer Prev. 2014, 15, 1621–1626. [Google Scholar] [CrossRef] [Green Version]
  206. Lezoray, O.; Cardot, H. Cooperation of color pixel classification schemes and color watershed: A study for microscopic images. IEEE Trans. Image Process. 2002, 11, 783–789. [Google Scholar] [CrossRef]
  207. Yang, L.; Qi, X.; Xing, F.; Kurc, T.; Saltz, J.; Foran, D.J. Center for Biomedical Imaging & Informatics. 2013. Available online: http://pleiad.umdnj.edu/CBII/Bioinformatics/ (accessed on 16 August 2019).
  208. Langer, R.; Rauser, S.; Feith, M.; Nährig, J.M.; Feuchtinger, A.M.E.; Frieß, H.; Hoefler, H.; Walch, A.K. Assessment of ErbB2 (Her2) in oesophageal adenocarcinomas: Summary of a revised immunohistochemical evaluation system, bright field double in situ hybridisation and fluorescence in situ hybridisation. Mod. Pathol. 2011, 24, 908–916. [Google Scholar] [CrossRef] [Green Version]
  209. Cheng, L.; Jones, T.D.; Pan, C.X.; Barbarin, A.; Eble, J.N.; Koch, M.O. Anatomic distribution and pathologic characterization of small-volume prostate cancer (<0.5 ml) in whole-mount prostatectomy specimens. Mod. Pathol. 2005, 18, 1022–1026. [Google Scholar] [CrossRef]
  210. Drelie Gelasca, E.; Obara, B.; Fedorov, D.; Kvilekval, K.; Manjunath, B.S. A biosegmentation benchmark for evaluation of bioimage analysis methods. BMC Bioinform. 2009, 10, 368. [Google Scholar] [CrossRef] [PubMed] [Green Version]
211. QUASAR Collaborative Group. Adjuvant chemotherapy versus observation in patients with colorectal cancer: A randomised study. Lancet 2007, 370, 2020–2029. [Google Scholar] [CrossRef]
212. Jantzen, J.; Norup, J.; Dounias, G.; Bjerregaard, B. Pap-smear Benchmark Data For Pattern Classification. In Proceedings of the NiSIS 2005, Albufeira, Portugal, 1 January 2005; pp. 1–9. [Google Scholar]
  213. Chaddad, A.; Tanougast, C.; Dandache, A.; Al Houseini, A.; Bouridane, A. Improving of colon cancer cells detection based on Haralick’s features on segmented histopathological images. In Proceedings of the IEEE International Conference on Computer Applications and Industrial Electronics (ICCAIE), Penang, Malaysia, 4–7 December 2011; pp. 87–90. [Google Scholar] [CrossRef]
  214. Roula, M.; Diamond, J.; Bouridane, A.; Miller, P.; Amira, A. A multispectral computer vision system for automatic grading of prostatic neoplasia. In Proceedings of the IEEE International Symposium on Biomedical Imaging, Washington, DC, USA, 7–10 July 2002; pp. 193–196. [Google Scholar] [CrossRef]
  215. Roula, M.A.; Bouridane, A.; Kurugollu, F.; Amira, A. A quadratic classifier based on multispectral texture features for prostate cancer diagnosis. In Proceedings of the 7th International Symposium on Signal Processing and Its Applications, Paris, France, 4 July 2003; Volume 2, pp. 37–40. [Google Scholar] [CrossRef]
Figure 1. Example of (a) benign and (b) malignant HIs [4].
Figure 2. Taxonomy used to classify HI works in this review.
Figure 3. Number of articles per year after filtering and organizing according to the main subjects.
Table 1. Number of results without exclusion criteria, and after the application of the first and second exclusion criteria.

Search Engine | Papers Returned by Query | After 1st Filter | After 2nd Filter
IEEE Xplore | 102 | 70 | -
ACM Digital Library | 5 | 4 | -
Science Direct | 1753 | 163 | -
Web of Science | 410 | 55 | -
Scopus | 254 | 70 | -
Total | 2524 | 363 | 185
Table 2. Top 20 journals by number of publications between 2008 and 2020.

Journal Title | Area | # of Publications
Computerized Medical Imaging and Graphics | CHM | 14
Medical Image Analysis | CHM | 14
Computers in Biology and Medicine | C | 6
IEEE Transactions on Biomedical Engineering | E | 6
Artificial Intelligence in Medicine | CM | 4
Pattern Recognition | C | 4
Computer Methods and Programs in Biomedicine | CM | 3
Expert Systems with Applications | CE | 3
IEEE Access | CE | 3
IEEE Transactions on Medical Imaging | CHE | 3
Procedia Computer Science | C | 3
Applied Soft Computing | C | 2
Computational and Structural Biotechnology Journal | BC | 2
Cytometry Part A | M | 2
IEEE Journal of Biomedical and Health Informatics | BCE | 2
Informatics in Medicine Unlocked | M | 2
Information Sciences | C | 2
Journal of Medical Imaging | M | 2
Journal of Pathology Informatics | CM | 2
Micron | B | 2
Neurocomputing | CN | 2
B: Biochemistry, C: Computing, E: Engineering, H: Health Sciences, M: Medicine, N: Neuroscience.
Table 3. Top 15 conferences by number of publications between 2008 and 2020.

Conference | # of Publications
IEEE International Symposium on Biomedical Imaging | 12
IEEE International Conference Engineering in Medicine and Biology Society | 4
IEEE Conference on Computer Vision and Pattern Recognition (CVPR) | 3
IEEE International Joint Conference on Neural Networks (IJCNN) | 3
International Conference on Pattern Recognition (ICPR) | 3
IEEE International Conference on Healthcare Informatics, Imaging and Systems Biology | 2
International Conference on Information Technology in Medicine and Education | 2
International Symposium on Medical Information Processing and Analysis | 2
Medical Image Computing and Computer-Assisted Intervention | 2
International Conference on Bioinformatics, Computational Biology and Health Informatics | 1
Symposium on Applied Computing | 1
Advances in Image and Video Technology | 1
Advances in Neural Information Processing Systems (NeurIPS) | 1
International Conference on Computer Science and Software Engineering | 1
IEEE International Conference on Systems, Man, and Cybernetics | 1
Table 4. Summary of publications on unsupervised ML methods for HI segmentation.

Reference | Year | Tissue/Organ | Method
Liu et al. [18] | 2008 | Lymph nodes | ISODATA
Tosun et al. [13] | 2009 | Colorectal | k-means
Hafiane et al. [19] | 2009 | Prostate | Spatial constraint fuzzy c-means
He et al. [20] | 2010 | Cervix | Gaussian mixture models
Fatakdawala et al. [5] | 2010 | Breast | k-means
Roullier et al. [6] | 2011 | Breast | k-means
Rahmadwati et al. [7] | 2011 | Uterus | k-means
Peng et al. [8] | 2011 | Prostate | k-means
He et al. [9] | 2011 | Uterus | k-means
Onder et al. [21] | 2013 | Colorectal | Quasi-supervised nearest neighbors
Fatima et al. [10] | 2014 | Brain | k-means
Nativ et al. [14] | 2014 | Liver | k-means
Yang et al. [22] | 2014 | Prostate | Mean shift, Similarity
Sirinukunwattana et al. [23] | 2015 | Breast | Dictionary, Thresholding
Huang [24] | 2015 | Breast | XCA
Mazo et al. [11] | 2016 | Cardiac | k-means
Shi et al. [15] | 2016 | Lymph nodes | k-means
Brieu et al. [16] | 2016 | Lung | k-means
Shi et al. [17] | 2017 | Liver | k-means
Shi et al. [15] | 2017 | Lymph nodes | k-means
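As Table 4 shows, k-means clustering on pixel colors is by far the most common unsupervised route to HI segmentation. The sketch below illustrates that pattern in Python; the cluster count, the Lab color space, and the file name are illustrative choices, not the configuration of any specific work listed above.

from skimage import io, color
from sklearn.cluster import KMeans

def kmeans_segment(path, n_clusters=3):
    """Cluster the pixels of an H&E image into n_clusters tissue classes."""
    rgb = io.imread(path)[..., :3]        # drop an alpha channel if present
    lab = color.rgb2lab(rgb)              # Lab distances track perceived color better than RGB
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=0).fit_predict(lab.reshape(-1, 3))
    return labels.reshape(rgb.shape[:2])  # one cluster id per pixel

# mask = kmeans_segment("he_patch.png")   # "he_patch.png" is a placeholder path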
Table 5. Summary of publications on supervised ML methods for HI segmentation.

Reference | Year | Tissue/Organ | Classifier
Yu and Ip [25] | 2008 | Gastric | SHMM
Arteta et al. [26] | 2012 | Breast | Structured SVM
Janssens et al. [27] | 2013 | Muscle | SVM
Saraswat and Arya [28] | 2014 | Skin | NSGA-II, Threshold
Qu et al. [29] | 2014 | Breast | SVM
Salman et al. [30] | 2014 | Prostate | k-NN
Chen et al. [31] | 2015 | Breast | SVM
Geessink et al. [32] | 2015 | Colorectal | QDA
Zarella et al. [33] | 2015 | Breast | SVM
Santamaria-Pang et al. [34] | 2015 | Epithelium | SVM
Wang et al. [35] | 2016 | Breast | GA + SVM
Arteta et al. [36] | 2016 | Breast | Structured SVM
Brieu and Schmidt [37] | 2017 | NA | Regression tree
Song et al. [38] | 2019 | Breast, prostate, kidney, liver, stomach, bladder | DT
NA: Not available.
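The supervised segmenters in Table 5, most of them SVM-based, follow a different recipe: an expert annotates regions, descriptors are computed for pixels or superpixels, and a classifier assigns a tissue class to each unit. A minimal sketch of that recipe follows, assuming annotated superpixels are available for training; the SLIC parameters and the mean-color descriptor are deliberately simple placeholders for the richer descriptors used in practice.

import numpy as np
from skimage.segmentation import slic
from sklearn.svm import SVC

def superpixel_features(rgb, segments):
    """Mean color per superpixel -- a toy stand-in for richer descriptors."""
    return np.array([rgb[segments == s].mean(axis=0)
                     for s in np.unique(segments)])

def predict_mask(rgb, clf):
    """Assign a tissue label to every superpixel of an image."""
    segments = slic(rgb, n_segments=400, compactness=10)
    preds = clf.predict(superpixel_features(rgb, segments))
    mask = np.zeros(segments.shape, dtype=int)
    for seg_id, label in zip(np.unique(segments), preds):
        mask[segments == seg_id] = label
    return mask

# clf = SVC(kernel="rbf").fit(train_features, train_labels)  # annotated superpixels
# mask = predict_mask(image, clf)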
Table 6. Summary of publications devoted to feature extraction from HIs.

Reference | Year | Tissue/Organ | Feature
Caicedo et al. [72] | 2008 | Skin | Color and gray histograms, LBP, Tamura
Ballaro et al. [39] | 2008 | Bone | Morphometric
Marugame et al. [48] | 2009 | Breast | Morphometric
Kong et al. [86] | 2009 | Brain | Textural, morphological
Kuse et al. [52] | 2010 | Lymph nodes | GLCM
Orlov et al. [78] | 2010 | Lymph nodes | Zernike, Chebyshev, Chebyshev-Fourier, color histograms, GLCM, Tamura, Gabor, Haralick, edge statistics
Petushi et al. [40] | 2011 | Breast | Morphometric
Madabhushi et al. [41] | 2011 | Prostate | Voronoi diagram, Delaunay triangulation, minimum spanning tree, nuclear statistics
Osborne et al. [49] | 2011 | Skin | Morphometric
Caicedo et al. [53] | 2011 | Skin | Gray, color, invariant feature, Sobel, Tamura, LBP, SIFT
Huang et al. [62] | 2011 | Breast | Receptive field, sparse coding
Cruz-Roa et al. [77] | 2011 | Skin | SIFT, luminance, DCT
Loeffler et al. [47] | 2012 | Prostate | Morphometric
Song et al. [42] | 2013 | Pancreas | Morphometric
Gorelick et al. [43] | 2013 | Prostate | Morphometric, geometric
Filipczuk et al. [44] | 2013 | Breast | Morphometric
Atupelage et al. [61] | 2013 | Blood | Fractal dimension
Basavanhally et al. [75] | 2013 | Breast | Morphological, textural, graph-based
De et al. [79] | 2013 | Uterus | GLCM, Delaunay triangulation, weighted density distribution
Ozolek et al. [45] | 2014 | Thyroid | Linear optimal transport
Olgun et al. [51] | 2014 | Colorectal | Local object pattern
Michail et al. [84] | 2014 | Lymph nodes | Morphometric, texture
Vanderbeck et al. [80] | 2014 | Liver | Morphological, textural, pixel neighboring statistics
Kandemir et al. [81] | 2014 | Esophagus | Morphometric, LBP, SIFT, color histograms
Fernández-Carrobles et al. [54] | 2015 | Breast | Textons
Gertych et al. [59] | 2015 | Prostate | LBP
Tashk et al. [76] | 2015 | Breast | LBP, morphometric, statistical
Coatelen et al. [82], Coatelen et al. [83] | 2015 | Liver | Morphometric, GLCM, LBP, fractal dimension, graph-based
Balazsi et al. [60] | 2016 | Breast | LBP
Fukuma et al. [46] | 2016 | Brain | Object, spatial
Leo et al. [70] | 2016 | Prostate | Graph-based, shape, entropy, subgraph connectivity, texture
Phoulady et al. [57] | 2016 | Uterus | HOG, LBP
Bruno et al. [56] | 2016 | Breast | Curvelet transform, LBP
Noroozi and Zakerolhosseini [63] | 2016 | Skin | Z-transform coefficients
Niazi et al. [66] | 2016 | Bladder | Morphometric
Yu et al. [71] | 2016 | Lung | Quantitative, texture
Chan and Tuszynski [65] | 2016 | Breast | Fractal dimension
Kwak and Hewitt [50] | 2017 | Prostate | Morphometric
Reis et al. [58] | 2017 | Breast | BIF, LBP
Mazo et al. [12] | 2017 | Cardiac | LBP, Haralick
Wan et al. [64] | 2017 | Breast | Wavelet transform, Gaussian distribution, symmetric alpha-stable
Spanhol et al. [67] | 2017 | Breast | Deep
Das et al. [85] | 2017 | Oral | Hu's moment, fractal dimension, entropy
Pang et al. [73] | 2017 | Lung | LBP, GLCM, Tamura, SIFT, global, morphometric
Kruk et al. [74] | 2017 | Kidney | Morphometric, textural, and statistical
Peyret et al. [55] | 2018 | Prostate | LBP
Vo et al. [68] | 2019 | Breast | Deep
George et al. [69] | 2019 | Breast | Deep
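Two descriptors dominate Table 6: LBP histograms and GLCM (Haralick) statistics. The sketch below computes both with scikit-image (version 0.19 or later is assumed for the graycomatrix/graycoprops spelling); the neighborhood size, distances, and angles are illustrative defaults rather than values taken from the surveyed papers.

import numpy as np
from skimage import io, color
from skimage.feature import local_binary_pattern, graycomatrix, graycoprops

def texture_features(path, P=8, R=1):
    """Concatenate a uniform LBP histogram with four GLCM statistics."""
    gray = color.rgb2gray(io.imread(path)[..., :3])
    img8 = (gray * 255).astype(np.uint8)
    # Uniform LBP codes take values 0..P+1, hence P+2 histogram bins.
    lbp = local_binary_pattern(img8, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    # Gray-level co-occurrence matrix at distance 1 over four orientations.
    glcm = graycomatrix(img8, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    stats = [graycoprops(glcm, p).mean()
             for p in ("contrast", "homogeneity", "energy", "correlation")]
    return np.concatenate([hist, stats])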
Table 7. Summary of publications focusing on HI classification based on monolithic classifiers.

Reference | Year | Tissue/Organ | Classifier
Caicedo et al. [72] | 2008 | Skin | SVM
Ballaro et al. [39] | 2008 | Bone | DT, k-NN
Mete and Topaloglu [94] | 2009 | Skin | DT, NB, SVM
Marugame et al. [48] | 2009 | Breast | Bayes
Orlov et al. [78] | 2010 | Lymph nodes | k-NN, NB, RBF
Osborne et al. [49] | 2011 | Skin | SVM
Malon et al. [87] | 2012 | Breast | SVM
Sidiropoulos et al. [95] | 2012 | Brain | PNN
De et al. [79] | 2013 | Uterus | LDA
Atupelage et al. [61] | 2013 | Liver | SVM
Cosatto et al. [102] | 2013 | Gastric | MLP (MIL)
Homeyer et al. [106] | 2013 | Liver | k-NN, NB, RF
Song et al. [42] | 2013 | Pancreas | k-NN, NB, NN, SVM
Irshad et al. [105] | 2014 | Breast | DT, MLP, SVM
Xu et al. [103] | 2014 | Colorectal | Gaussian (MIL)
Kandemir et al. [81] | 2014 | Gastric | SVM (MIL)
Olgun et al. [51] | 2014 | Colon | SVM
Vanderbeck et al. [80] | 2014 | Liver | SVM
Coatelen et al. [82], Coatelen et al. [83] | 2014 | Liver | SVM
Michail et al. [96] | 2014 | Lymph nodes | LDA
Harai and Tanaka [89] | 2015 | Colorectal | k-NN
Korkmaz and Poyraz [93] | 2015 | Breast | SVM
Kandemir and Hamprecht [101] | 2015 | Gastric | SVM (MIL)
Chan and Tuszynski [65] | 2016 | Breast | SVM
Guo et al. [88] | 2016 | Uterus | SVM
Beevi et al. [97] | 2016 | Breast | RKS
Jothi and Rajam [98] | 2016 | Thyroid | VPRS + CMR
Barker et al. [100] | 2016 | Brain | Elastic net
Bruno et al. [56] | 2016 | Breast | DT, Polynomial, RF, SVM
Pang et al. [73] | 2017 | Liver | ELM, RF, SVM
Wan et al. [64] | 2017 | Breast | SVM
Mazo et al. [12] | 2017 | Cardiac | SVM
Peikari et al. [90] | 2017 | Breast | SVM
BenTaieb et al. [91] | 2017 | Ovary | SVM
Zhang et al. [92] | 2017 | Lung | SVM
Spanhol et al. [67] | 2017 | Breast | Logistic regression
Sudharshan et al. [104] | 2019 | Breast | SVM, k-NN, QDA, RF, CNN (MIL)
Khan et al. [107] | 2019 | Breast | NB, RF, SVM
Kurmi et al. [108] | 2019 | Breast | SVM, MLP
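Most entries in Table 7 share one pipeline: hand-crafted feature vectors feed a single classifier, very often an SVM, evaluated with cross-validation. A generic sketch of that pipeline follows; X and y stand for any feature matrix and label vector, for instance the texture features computed in the previous sketch.

from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def evaluate_svm(X, y, folds=5):
    """Mean cross-validated accuracy of an RBF SVM on feature vectors."""
    # Scaling matters: LBP histograms and GLCM statistics live on very
    # different numeric ranges.
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    return cross_val_score(clf, X, y, cv=folds).mean()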
Table 8. Summary of recent publications on ensemble approaches for HI analysis.

Reference | Year | Tissue/Organ | Base Classifier | Combination Rule/Function
Daskalakis et al. [109] | 2008 | Thyroid | k-NN, LLSMD, SQ-Bayes, SVM, PNN | Vot, Min, Max, Sum, Prod
Kong et al. [86] | 2009 | Neuroblastoma | k-NN, LDA, Bayesian, SVM | WV
Meng et al. [110] | 2010 | Liver, Lymphocytes | PCC | WV
DiFranco et al. [119] | 2011 | Prostate | SVM and RF | MV
Wang and Yu [111] | 2013 | Lung | DT | Adaboost
Gorelick et al. [43] | 2013 | Prostate | DT | Adaboost
Filipczuk et al. [44] | 2013 | Breast | SVM, Perceptron | Perceptron
Vink et al. [112] | 2013 | Breast | DT, Stumps | Adaboost
Basavanhally et al. [75] | 2013 | Breast | RF | MV
Phoulady et al. [113] | 2014 | Uterus | Otsu segmentors | Similarity
Zarella et al. [33] | 2015 | Lymphoma | SVM | WS
Di Franco et al. [114] | 2015 | Prostate | SVM | Avg
Gertych et al. [59] | 2015 | Prostate | SVM, RF | MV
Tashk et al. [76] | 2015 | Breast | SVM, RF | MV
Albashish et al. [115] | 2015 | Prostate | SVM | Sum
Huang and Kalaw [116] | 2016 | Prostate | k-NN, SVM, DT, RF, LDA, QDA, NB | Adaboost
Balazsi et al. [60] | 2016 | Breast | RF | MV
Wright et al. [120] | 2016 | Colorectal | RF | MV
Fernández-Carrobles et al. [117] | 2016 | Breast | DT | Sum, Variance
Kwak and Hewitt [50] | 2017 | Prostate | SVM | Boosting
Kruk et al. [74] | 2017 | Kidney | SVM + RF | MV
Valkonen et al. [121] | 2017 | Breast | RF | MV
Romo-Bucheli et al. [118] | 2017 | Skin | NA | Adaboost
NA: Not available, WV: Weighted vote, MV: Majority vote, Avg: Average, Min: Minimum, Max: Maximum, Sum: Summation, Prod: Product.
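Majority voting (MV) is the most frequent combination rule in Table 8. With scikit-learn it reduces to a VotingClassifier over a pool of base learners; the SVM/RF/k-NN pool below is illustrative, not the pool of any particular study.

from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

ensemble = VotingClassifier(
    estimators=[
        ("svm", SVC(kernel="rbf")),
        ("rf", RandomForestClassifier(n_estimators=200)),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
    ],
    voting="hard",  # plain majority vote; "soft" would average class probabilities
)
# ensemble.fit(X_train, y_train); y_pred = ensemble.predict(X_test)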
Table 9. Summary of publications using DL methods in HI analysis.

Reference | Year | Tissue/Organ | Network Architecture
Malon et al. [87] | 2012 | Breast | LeNet-5
Cruz-Roa et al. [122] | 2014 | Breast | 3-layer Custom
Stanitsas et al. [129] | 2016 | Breast | AlexNet
Spanhol et al. [130] | 2016 | Breast | AlexNet
Bayramoglu et al. [153] | 2016 | Breast | 10-layer Custom
Albarqouni et al. [154] | 2016 | Breast | AggNet Custom
Li et al. [136] | 2016 | Gland | AlexNet, Inception-V1
Zerhouni et al. [141] | 2017 | Breast | Wide ResNet
Bejnordi et al. [145] | 2017 | Breast | VGG-net, VGG16
Wang et al. [159] | 2017 | Colorectal | Bilinear Custom
Ciompi et al. [155] | 2017 | Colorectal | 11-layer Custom
Kainz et al. [127] | 2017 | Colorectal | LeNet
Sharma et al. [131] | 2017 | Gastric | AlexNet, Custom
Xu et al. [147] | 2017 | Gland | VGG16
Kwak and Hewitt [156] | 2017 | Prostate | 6-layer Custom
Khosravi et al. [139] | 2018 | Breast, Lung, Bladder | Inception-V1, ResNet
Gandomkar et al. [142] | 2018 | Breast | ResNet
Hou et al. [161] | 2019 | Gland, Breast | CAE+CNN Custom
Li et al. [160] | 2019 | Breast | FCN Custom
Vizcarra et al. [140] | 2019 | Breast | Inception-V3, Inception-ResNet-V2
Brancati et al. [143] | 2019 | Breast | ResNet
Budak et al. [132] | 2019 | Breast | AlexNet, BLSTM
Kassani et al. [151] | 2019 | Breast | VGG19, MobileNet, DenseNet
Yang et al. [152] | 2019 | Breast | DenseNet-169
Roy et al. [157] | 2019 | Breast | 11-layer to 14-layer Custom
Gecer et al. [158] | 2019 | Breast | 9-layer Custom
Yan et al. [137] | 2019 | Breast | Inception-V3, BLSTM
Talo [144] | 2019 | Breast | ResNet-50, DenseNet-161
de Matos et al. [123] | 2019 | Breast | 7-layer Texture Custom
de Matos et al. [125] | 2019 | Breast | Inception-V3
Kumar et al. [148] | 2020 | Breast | VGG16
Ataky et al. [124] | 2020 | Breast | 7-layer Texture Custom
Sheikh et al. [162] | 2020 | Breast | 24-layer Custom
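Many of the Table 9 entries reuse an ImageNet-pretrained backbone (AlexNet, VGG, ResNet, Inception, DenseNet) and retrain only a new classification head on HI patches. The PyTorch sketch below shows that transfer-learning pattern; the ResNet-50 backbone, the frozen feature extractor, and the two-class head are illustrative choices, and torchvision 0.13 or later is assumed for the weights API.

import torch.nn as nn
from torchvision import models

def build_model(num_classes=2):
    """ImageNet-pretrained ResNet-50 with a new head for HI classes."""
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    for p in model.parameters():      # freeze the pretrained feature extractor
        p.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # trainable head
    return model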
Table 10. Summary of the reviews and surveys on HIs and ML approaches.

Reference | Year | Image Type | Subject | Journal or Conference
He et al. [171] | 2012 | HI | Segmentation, feature extraction, classification | Comp Methods Progr Biomed
Irshad et al. [172] | 2014 | HI, IHC | Nuclei extraction, segmentation, feature extraction, classification | IEEE Reviews Biomed Eng
Deshmukh and Mankar [173] | 2014 | HI, IHC, Other | Segmentation | International Conference Electr Syst Sig Proc Comp Techn
Akhila and Preethymol [174] | 2015 | HI | Nuclei segmentation, classification | International Conference Innov Inform Emb Comm Sys
Veta et al. [175] | 2015 | HI | Results of MITOS2013 Challenge | Medical Image Analysis
Nawaz and Yuan [164] | 2016 | Various | Tumor ecology | Cancer Letters
Madabhushi and Lee [176] | 2016 | HI | Detection, segmentation, feature extraction, classification | Medical Image Analysis
Saha et al. [163] | 2016 | HI | Slide preparation, staining, microscopic imaging, preprocessing, segmentation, feature extraction, classification | Tissue and Cell
Chen et al. [165] | 2017 | HI | Image analysis of H&E slides | Tumor Biology
Robertson et al. [166] | 2017 | Various | DL | Translat Research
Cosma et al. [177] | 2017 | HI, Other | Deep and shallow methods | Expert Sys App
Tosta et al. [178] | 2017 | HI | Segmentation for lymphocytes | Inform Medicine Unlocked
Litjens et al. [168] | 2017 | MI | DL for medical images | Medical Image Analysis
Cataldo and Ficarra [179] | 2017 | HI | Feature extraction | Comput Struct Biotechn J
Aswathy and Jagannath [180] | 2017 | HI | Image processing, classification | Inform Medicine Unlocked
Li et al. [181] | 2018 | MI | Content retrieval | Medical Image Analysis
Komura and Ishikawa [167] | 2018 | HI | Datasets and ML methods | Comput Struct Biotechn J
Zhou et al. [169] | 2020 | HI | Classical and deep neural networks, classification | IEEE Access
Krithiga and Geetha [170] | 2020 | HI | Image enhancement, segmentation, feature extraction, classification | Archives Comput Methods Eng
MI: Medical images; IHC: Immunohistochemistry images.
Table 11. Summary of publicly available HI datasets.
Year | Reference | Dataset Reference | Dataset Size
2010 | Kuse et al. [52] | [182] | 20 Img
2010 | Meng et al. [110] | [183] | 528 Img, 265 Img, 376 Img
2012 | Arteta et al. [26] | [182] | 20 Img
2014 | Irshad et al. [105] | [184] | 200 Img
2015 | Sirinukunwattana et al. [23] | [184] | 50 WSI
2015 | Huang [24] | [185] | NA
2016 | Beevi et al. [97] | [186] | 96 Img
2016 | Arteta et al. [36] | [182] | 20 Img
2016 | Yu et al. [71] | [187,188] | 2186 WSI, 294 Img
2016 | Chan and Tuszynski [65] | [189] | 82 Pat, 7909 Img
2016 | Barker et al. [100] | [185,190] | 45 Img, 604 Img
2016 | Huang and Kalaw [116] | [185] | 682 Img
2017 | Das et al. [85] | [184] | 15 Img
2017 | Reis et al. [58] | [191] | 55 WSI
2017 | Mazo et al. [12] | [192] | 3000 Img
2017 | Valkonen et al. [121] | [193] | 170 WSI, 100 WSI
2017 | Wan et al. [64] | [184] | 50 Img
2017 | Kruk et al. [74] | [194,195] | 70 Pat, 62 Pat, 94 Img
2018 | Sudharshan et al. [104] | [189] | 82 Pat, 7909 Img
2018 | Hou et al. [161] | [185,190] | NA
2019 | Gandomkar et al. [142] | [189] | 82 Pat, 7909 Img
2019 | Kumar et al. [148] | [189], CMTHis | 82 Pat, 7909 Img
2019 | Vo et al. [68] | [189,196] | 82 Pat, 7909 Img, 269 Img
2019 | Vizcarra et al. [140] | [4] | 400 Img, 30 WSI
2019 | Yan et al. [137] | NA | 249 Img
2019 | George et al. [69] | [189,196] | 82 Pat, 7909 Img, 269 Img
2019 | Budak et al. [132] | [189] | 82 Pat, 7909 Img
2019 | Kurmi et al. [108] | [189] | 82 Pat, 7909 Img
2019 | Kassani et al. [151] | [189,193,196] | 82 Pat, 7909 Img, 269 Img
2019 | Yang et al. [152] | [4] | 400 Img, 30 WSI
2019 | Roy et al. [157] | [4] | 400 Img, 30 WSI
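Several entries in Table 11 report both patient and image counts (e.g., 82 patients versus 7909 images) because each patient typically contributes many patches or slides. When such data are split at the image level, patches from the same patient can land in both training and test sets and inflate accuracy, so results are usually reported with patient-wise splits. The sketch below illustrates such a split, assuming scikit-learn's GroupShuffleSplit; all arrays are dummies standing in for real features, labels, and patient identifiers.

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

# Dummy data: 200 images drawn from 20 hypothetical patients.
rng = np.random.default_rng(0)
n_images = 200
patient_id = rng.integers(0, 20, size=n_images)  # group label per image
X = np.zeros((n_images, 1))                      # stand-in for image features
y = rng.integers(0, 2, size=n_images)            # stand-in for class labels

# Hold out ~25% of the *patients*, not of the images.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
train_idx, test_idx = next(splitter.split(X, y, groups=patient_id))

# No patient contributes images to both sides of the split.
assert set(patient_id[train_idx]).isdisjoint(patient_id[test_idx])
```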
Table 12. Summary of datasets that are not publicly available.
Year | Reference | Dataset Reference | Dataset Size
2008 | Yu and Ip [25] | Yu and Ip [25] | 200 Img
2008 | Ballaro et al. [39] | NA | 297 Img
2008 | Liu et al. [18] | NA | 480 Img
2008 | Caicedo et al. [72] | NA | 1502 Img
2008 | Daskalakis et al. [109] | NA | 115 Img
2009 | Marugame et al. [48] | NA | 217 WSI
2009 | Mete and Topaloglu [94] | NA | 2 WSI
2009 | Kong et al. [86] | NA | 389 Img
2009 | Tosun et al. [13] | NA | 16 Pat
2009 | Hafiane et al. [19] | [197] | 8 Img
2010 | Orlov et al. [78] | NA | 30 WSI
2010 | He et al. [20] | NA | NA
2010 | Fatakdawala et al. [5] | NA | 100 Img, 9 Pat
2011 | Huang et al. [62] | NA | 9 Slides, 36,000 Img, 40×
2011 | Madabhushi et al. [41] | [198,199,200] | 58 Pat, 100 Img, 20 Pat, 40 Img, 6 Pat
2011 | Cruz-Roa et al. [77] | NA | 1502 Img basal, 2828 Img tissues
2011 | Caicedo et al. [53] | [201] | 6000
2011 | Petushi et al. [40] | NA | 30 WSI
2011 | Osborne et al. [49] | NA | 34 cases, 126 Img
2011 | Roullier et al. [6] | NA | NA
2011 | He et al. [9] | NA | NA
2011 | Peng et al. [8] | NA | 8 Pat, 62 Img
2011 | Rahmadwati et al. [7] | NA | 475 Img
2011 | DiFranco et al. [119] | NA | 14 Pat, 15 Img
2012 | Loeffler et al. [47] | NA | 125 Pat
2012 | Sidiropoulos et al. [95] | NA | 140 cases
2013 | Atupelage et al. [61] | NA | 109 Pat, WSI 369 Img
2013 | Song et al. [42] | NA | 11 Slides, 7 Pat
2013 | Basavanhally et al. [75] | [202,203] | 126 Pat, 29 Pat
2013 | De et al. [79] | NA | 62 Img
2013 | Homeyer et al. [106] | NA | 71 Img
2013 | Cosatto et al. [102] | NA | 12,726 Pat, 12,745 WSI, 26,879 Img
2013 | Janssens et al. [27] | NA | 111 Img
2013 | Onder et al. [21] | NA | 230 Img
2013 | Wang and Yu [111] | NA | 369 Img
2013 | Gorelick et al. [43] | NA | 50 WSI
2013 | Filipczuk et al. [44] | NA | 675 Img, 75 Pat
2013 | Vink et al. [112] | NA | 51 Img
2014 | Vanderbeck et al. [80] | NA | 59 Pat
2014 | Kandemir et al. [81] | NA | 97 Pat, 214 Tissue
2014 | Saraswat and Arya [28] | [204] | 30 Img
2014 | Olgun et al. [51] | NA | 3236 Img, 258 Pat
2014 | Qu et al. [29] | [205] | 125 Pat, 1180 Img
2014 | Fatima et al. [10] | NA | 5 Pat, 80 Img
2014 | Xu et al. [103] | [206] | 10 Img, 103 Img
2014 | Salman et al. [30] | NA | 20 Pat, 200 Img
2014 | Michail et al. [84] | NA | 300 Img
2014 | Ozolek et al. [45] | NA | 94 Pat
2014 | Nativ et al. [14] | NA | 54 Img, 9 Pat
2014 | Yang et al. [22] | [207] | 96 WSI
2014 | Phoulady et al. [113] | NA | 28,698 Img
2015 | Fernández-Carrobles et al. [54] | NA | 40 WSI
2015 | Harai and Tanaka [89] | NA | 123 Pat, 400 Img
2015 | Chen et al. [31] | NA | 230 Pat, 1150 Img
2015 | Korkmaz and Poyraz [93] | NA | 160 Img
2015 | Santamaria-Pang et al. [34] | NA | 350 Img
2015 | Tashk et al. [76] | [184] | 50 Img
2015 | Kandemir and Hamprecht [101] | [208] | 110 Img
2015 | Zarella et al. [33] | NA | 101 Pat
2015 | Gertych et al. [59] | NA | 210 Img
2015 | Di Franco et al. [114] | [119,209] | 15 Img, 14 Pat, 9 Pat
2015 | Albashish et al. [115] | NA | 40 Pat, 149 Img
2016 | Leo et al. [70] | NA | 146 WSI
2016 | Noroozi and Zakerolhosseini [63] | NA | 33 Img
2016 | Fukuma et al. [46] | NA | 20 WSI
2016 | Bruno et al. [56] | [210] | 58 Img
2016 | Wang et al. [35] | NA | 68 Img
2016 | Niazi et al. [66] | NA | 15 WSI, 34 Img
2016 | Balazsi et al. [60] | NA | 66 Img
2016 | Wright et al. [120] | [211] | 157 Pat
2016 | Jothi and Rajam [98] | NA | 12 Pat, 219 Img, 155 Img, 64 Img
2016 | Phoulady et al. [57] | NA | 39 Pat, 390 Img
2016 | Shi et al. [15] | NA | 47 WSI, 423 Img
2016 | Mazo et al. [11] | [192] | 200 Img
2016 | Brieu et al. [16] | NA | 90 Img
2016 | Fernández-Carrobles et al. [117] | NA | 170 WSI
2017 | Pang et al. [73] | NA | 96 WSI
2017 | Peikari et al. [90] | NA | 121 WSI, 64 Pat
2017 | BenTaieb et al. [91] | NA | 133 WSI
2017 | Zhang et al. [92] | [212] | 285 Img, 917 Img
2017 | Shi et al. [17] | NA | 200 Img
2017 | Shi et al. [15] | [15] | 47 WSI, 423 Img
2017 | Kwak and Hewitt [50] | NA | 771 Img
2017 | Romo-Bucheli et al. [118] | NA | 907 Img, 9 WSI
2017 | Brieu and Schmidt [37] | NA | 30 WSI
2018 | Peyret et al. [55] | [213,214,215] | 10 Img
2019 | Li et al. [160] | ICPR2014 MITOSIS, AMIDA13, TUPAC16 | NA
2019 | Gecer et al. [158] | NIH-sponsored projects | NA
2019 | Khan et al. [107] | NA | NA
2019 | Brancati et al. [143] | D-IDC | NA
2019 | Talo [144] | Kimia Path24 | NA
NA: Not available, Img: Images, Pat: Patients, WSI: Whole Slide Image.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
