**1. Introduction**

Biomedical image processing is an interdisciplinary field [1] that draws its foundations from a variety of disciplines, including electronic engineering, computer science, physics, mathematics, physiology, and medicine. Several imaging techniques have been developed [2], providing many approaches to the study of the body, including X-rays for computed tomography, ultrasound, magnetic resonance imaging, radioactive pharmaceuticals used in nuclear medicine (for positron emission tomography and single-photon emission computed tomography), elastography, functional near-infrared spectroscopy, endoscopy, photoacoustic imaging, and thermography. Even bioelectric sensors, when high-density systems sample a two-dimensional surface (e.g., in electroencephalography or electromyography [3]), can provide data amenable to image processing methods. Biomedical image processing is finding an increasing number of important applications, for example, segmenting an organ to study its internal structure and supporting the diagnosis of a disease or the selection of a treatment [4].

Classification theory is another well-developed research field [5] connected to machine learning, which is an important branch of artificial intelligence. Different problems have been addressed, from the supervised identification of a map relating input features to a desired output, to the exploration of data by unsupervised learning (cluster analysis, data mining) or online training through experience. The estimation of informative features and their further processing (by feature generation) and selection (either by filtering or with wrapper approaches tied to the classifier) are important steps, both to improve classification performance (avoiding overfitting) and to investigate the information that candidate features provide about the output of interest. Excellent results have also recently been documented by deep learning approaches [6], in which optimal features are automatically extracted in deep layers on the basis of training examples and then used for classification.

When classification methods are associated with image processing, computer-aided diagnosis (CAD) systems can be developed, e.g., for the identification of diseased tissues [7] or a specific lesion or malformation [4]. These results indicate interesting future prospects in supporting the diagnosis of diseases [8].

**2. This Special Issue**

The present issue consists of six papers on a few topics in the wide range of research fields covered by biomedical image processing and classification.

In [9], the authors have proposed a CAD system for the identification and assessment of glomeruli in kidney tissue slides. Their approach is based on deep learning, exploiting convolutional neural network (CNN) architectures tailored for the semantic segmentation task. The obtained results are promising, as confirmed by expert pathologists. Moreover, the proposed system can easily be integrated into the existing pathologists' workflow thanks to an XML interface with Aperio ImageScope [10].

With recent advances in digital scanning techniques, tissue histopathology slides can be stored in the form of digital images [11]. In recent years, many efforts have been devoted to developing automated classification and segmentation techniques with the aim of improving accuracy and efficiency in digital pathology [12]. In kidney transplantations, pathologists evaluate the architecture of renal structures to assess the nephron status. An accurate evaluation of vascular and stromal injury is crucial for determining kidney acceptance, which is currently based on the pathologists' histological evaluations of renal biopsies in addition to clinical data. In this context, automated algorithms may offer crucial support to histopathological image analysis. An example is given in this Special Issue [13].

Although the performance of a machine learning algorithm depends on the amount of available data, few studies have explored the minimal amount of data required to train a CNN in medical deep learning or the possibility of working with scarce annotations [14]. An innovative contribution is given in this Special Issue [15]. The paper explores the minimum number of patients required to train a U-Net that accurately segments the prostate on T2-weighted MRI images. A U-Net was trained on patient numbers ranging from 8 to 320 and its performance was measured. The Dice score significantly increased from training sizes of 8 to 120 patients and then plateaued, with minimal improvement after 160 cases. This study suggests that modest dataset sizes could be sufficient to segment other organs effectively as well.
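The Dice score used in [15] to measure segmentation performance is a standard overlap metric between a predicted and a reference binary mask. A minimal numpy sketch (the 4×4 toy masks are illustrative, not data from the paper):

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # eps avoids division by zero when both masks are empty
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy example: a 4-pixel mask and an overlapping 6-pixel mask.
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True
print(round(dice_score(a, b), 3))  # 2*4 / (4+6) = 0.8
```

A Dice score of 1 indicates perfect overlap, 0 no overlap; it is the metric whose plateau beyond roughly 160 training cases is reported in the paper.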

The correlation between conjunctival pallor (on physical examinations) and anemia paved the way for new non-invasive methods for monitoring and identifying the potential risks of this important pathology. A critical research challenge for this task is designing a reliable automated segmentation procedure for the eyelid conjunctiva. A graph partitioning segmentation approach is proposed in [16], exploiting normalized cuts for perceptual grouping and introducing a bias towards the spectrophotometric features of hemoglobin. The segmentation task has been further investigated in a subsequent work, proposing a deep-learning-based approach involving a deconvolutional neural network [17]. The overall pipeline for building a reliable estimator is composed of several smaller tasks posing multiple research challenges [18,19]. For instance, starting from the digital image capturing phase, the process is affected by heterogeneous ambient lighting conditions and the intrinsic color-balancing techniques of the device [20].

An efficient framework for enhancing and segmenting brain MRIs to identify a tumor is discussed in [21]. The hybridized fuzzy clustering and distance regularized level set (DRLS) technique effectively extracted the region of interest (ROI) in the brain slices. For identifying the ROI, fuzzy clustering was employed, with the number of clusters *k* selected and validated using the silhouette metric. In post-processing, the ROI mining techniques of marker-controlled watershed segmentation, seed region growing, and DRLS were adopted to extract the anomalous section from the segmented objects [22,23]. Tumor volume computation and 3D modeling of the abnormalities in the clinical dataset were performed using the physical spacing metadata available in the headers of the DICOM images considered. This can help physicians locate the tumor and determine other information (e.g., size and shape) during the initial diagnosis, thereby enhancing the process of treating the tumor.
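The silhouette metric used in [21] to validate the choice of *k* compares, for each point, its mean distance to its own cluster (*a*) with its mean distance to the nearest other cluster (*b*); the coefficient (*b* − *a*)/max(*a*, *b*) approaches 1 for well-separated clusters. A numpy-only sketch on 1-D toy data (the data and partitions are illustrative, not from the paper; the published method used fuzzy clustering, whereas this example scores hard partitions):

```python
import numpy as np

def mean_silhouette(points: np.ndarray, labels: np.ndarray) -> float:
    """Mean silhouette coefficient of a hard partition of 1-D points."""
    dists = np.abs(points[:, None] - points[None, :])  # pairwise distances
    scores = []
    for i, li in enumerate(labels):
        same = labels == li
        same[i] = False  # exclude the point itself
        a = dists[i, same].mean() if same.any() else 0.0   # intra-cluster
        b = min(dists[i, labels == lj].mean()              # nearest other cluster
                for lj in np.unique(labels) if lj != li)
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))

# Two tight groups around 0 and 5; k = 2 matches the structure, k = 3 splits it.
pts = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])
k2 = np.array([0, 0, 0, 1, 1, 1])  # natural partition
k3 = np.array([0, 0, 1, 1, 2, 2])  # middle cluster straddles both groups
print(mean_silhouette(pts, k2) > mean_silhouette(pts, k3))  # True
```

In practice, *k* is chosen as the candidate value maximizing the mean silhouette over a range of cluster counts.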

Finally, one paper in this Special Issue has addressed the problem of identifying the volume status of patients [24]. The method was developed within a long-standing research activity on the automated investigation of the pulsatility of the inferior vena cava (IVC) from ultrasound measurements. The clinical approach is based on the subjective choice of a fixed direction along which to investigate IVC pulsations. However, the vein may have a complicated shape and show respirophasic movements, which introduce uncertainties into the clinical evaluation. Two automated methods have been introduced to delineate the IVC edges along sections either transverse or longitudinal to the blood vessel [25–27]. Preliminary results have shown the importance of using these automated methods to obtain more repeatable, reliable, and accurate information on IVC pulsatility than when using subjective clinical methods [28–31]. In this Special Issue, the two views are used to extract features that, integrated by a classification algorithm, can result in improved performance in diagnosing the volemic status of patients [24].
