*Article* **Bayesian Learning of Shifted-Scaled Dirichlet Mixture Models and Its Application to Early COVID-19 Detection in Chest X-ray Images**

**Sami Bourouis <sup>1,\*</sup>, Abdullah Alharbi <sup>1</sup> and Nizar Bouguila <sup>2</sup>**


**Abstract:** Early diagnosis and assessment of fatal diseases and acute infections on chest X-ray (CXR) imaging may have important therapeutic implications and reduce mortality. In fact, many respiratory diseases have a serious impact on the health and lives of people. However, certain types of infections may exhibit high variations in terms of contrast, size and shape, which imposes a real challenge on the classification process. This paper introduces a new statistical framework to discriminate patients who are either negative or positive for certain kinds of virus and pneumonia. We tackle the current problem via a fully Bayesian approach based on a flexible statistical model named the shifted-scaled Dirichlet mixture model (SSDMM). This mixture model is motivated by its effectiveness and robustness recently demonstrated in various image processing applications. Unlike frequentist learning methods, our developed Bayesian framework has the advantage of taking uncertainty into account to accurately estimate the model parameters, as well as the ability to solve the problem of overfitting. We investigate here a Markov Chain Monte Carlo (MCMC) estimator, which is a computer-driven sampling method, for learning the developed model. The current work shows excellent results when dealing with the challenging problem of biomedical image classification. Indeed, extensive experiments have been carried out on real datasets and the results prove the merits of our Bayesian framework.

**Keywords:** infection detection; COVID-19; X-ray images; image classification; Bayesian inference; shifted-scaled Dirichlet distribution; MCMC; Gibbs sampling

#### **1. Introduction and Related Works**

Pneumonia is a severe disease resulting in inflammation of the lungs, and a large number of people lose their lives to it every day. The causes of this infectious disease can be attributed to viruses or bacteria. Today, the disease caused by the SARS-CoV-2 virus, named COVID-19, is causing a significant outbreak around the world, having a serious impact on the health and lives of many people. In particular, it causes pneumonia in humans and spreads severe infections between people. Patients with COVID-19 can have acute symptoms and some may die of major organ failure. One of the critical steps in the fight against this disease is the ability to quickly detect and track contaminated persons and place them under particular care. Early inspection of confirmed cases is of great urgency because of its infectious nature. One of the many ways of detecting the disease is via a chest radiograph of the patient. Recently, some studies have shown that studying COVID-19 from chest X-ray images may be considered the quickest solution to diagnose patients [1]. It is noteworthy that chest X-ray radiography is one of the most useful imaging modalities to diagnose several related chest diseases such as pneumonia, lung cancer, emphysema and pulmonary edema [2,3]. However, this medical imaging can sometimes be subject to error for inexperienced radiologists, while being tedious for experienced ones. Visual examination of these radiographs is generally limited due to low infectious disease specificity. In addition, the presence of noise, the often insufficient contrast between soft tissues and the overlap in appearance properties are frequent sources of error for an accurate diagnosis [1,4]. These inconsistencies can result in significantly biased decisions for clinicians.

**Citation:** Bourouis, S.; Alharbi, A.; Bouguila, N. Bayesian Learning of Shifted-Scaled Dirichlet Mixture Models and Its Application to Early COVID-19 Detection in Chest X-ray Images. *J. Imaging* **2021**, *7*, 7. https://doi.org/10.3390/jimaging7010007

Received: 11 November 2020; Accepted: 7 January 2021; Published: 10 January 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

To deal with these drawbacks and to detect infected patients, it is necessary to develop effective and automated computerized support tools able to offer radiologists reliable measures of disease severity. These tools should also allow rapid detection and prediction of any possible infection, in particular COVID-19. Nevertheless, performing a precise analysis of big biomedical data is difficult and time consuming because these images contain various patterns and symptoms at different stages (early, middle, advanced) [4,5]. For instance, at the early stage, it is not easy at all to discover COVID-19 symptoms involving acute respiratory distress syndrome in chest X-ray (CXR) scans because these symptoms can look similar to other viral infections like RSV pneumonia. Consequently, it is important to consider such assumptions and to rely on robust feature extraction techniques when implementing new systems.

Several promising algorithms have been implemented in the past decades to deal specifically with infection detection. Some traditional machine learning-based methods have been applied to support pneumonia diagnosis in children by classifying chest radiographs into normal or pneumonia cases [6]. The Haar wavelet transform has also been investigated as an effective feature extraction technique. Classifiers such as FCM, DWT and WFT [2] and K-nearest neighbor (k-NN) [3] were exploited in this context to detect pneumonia infection. Nevertheless, these conventional methods fail to properly identify lungs with lesions. It is true that traditional methods helped specialists in their diagnosis, but the resulting accuracy was poor. Thus, other image processing-based systems have been proposed to address the problems of infection localization and malicious lesion detection using, for example, SVM, Neural Networks (NN) and Deep NN (DNN) [5,7–9].

The fully convolutional network (FCN) method has also been applied for segmenting the lung in CXR [10]. Another work conducted using a deep learning method is proposed in [11] to classify CT scans and chest X-rays into three classes: influenza-A viral pneumonia, COVID-19, and normal. The obtained accuracy is 89.3%, and the training process takes a long time. After studying the related work, it is obvious that the success of supervised CNN and deep learning methods to classify CXR images and detect COVID-19 relies mainly on the size of the training data. For smaller datasets, these techniques are not suitable since a small size is responsible for poor performance and, in many cases, it becomes too difficult to generate more training data. Thus, it is important to look for other alternatives. Feature extraction methods have also been exploited in conjunction with classifiers in order to extract and select relevant visual features. For instance, the ResNet50 feature extractor has been used with SVM and CNN for detecting and classifying lung nodule disease in chest CT images [12]. Other approaches such as registration and active shape models [13,14] have been exploited with pixel-based statistical classification methods in order to find the boundary/region targets. For example, the lung region is determined through a non-rigid registration step between the chest radiograph of the patient and a reference model [13].

The good results obtained from applying artificial intelligence and machine learning models to some previous epidemics are motivating researchers to provide new perspectives for addressing this novel coronavirus outbreak. In particular, classifying non-Gaussian data in an unsupervised way can be of great interest for automated medical applications. Among the main existing methods to tackle this problem, statistical mixture models have recently gained considerable interest from both the theoretical and practical points of view [15–20]. This approach has led to the design of new, more efficient tools. Our work is mainly based on recent research findings showing that modeling visual data (such as images) effectively is very important for further applications such as image classification. In particular, the Dirichlet distribution is very interesting for non-Gaussian data modelling [21]. Other derived models such as the scaled Dirichlet mixture (a generalization of the Dirichlet) [16] have also been shown to be effective for data grouping and classification. Further works have shown that it is possible to improve these last two models by introducing an additional parameter, which leads to a more flexible model. The resulting statistical mixture is called the shifted-scaled Dirichlet mixture (SSDMM) and can be seen as a generalization of the scaled model (here, the term "shifted" means a perturbation in the simplex). This new model has been applied successfully to a variety of applications [22].

#### **2. Motivations**

The work developed in [22] is based on a shifted-scaled Dirichlet mixture model (SSDMM) and evaluated for data clustering and writer identification. Two important issues arise when deploying mixture models: calculating the parameters of the mixture and determining the exact number of components that best describes the data set. These issues have been tackled recently by learning the SSDMM via a deterministic Maximum Likelihood Estimator (MLE) [22]. Nevertheless, it is known that MLE has major shortcomings linked to its sensitivity to initialization. Therefore, a better solution, especially in our case (i.e., when dealing with complex, noisy medical data including COVID-19 infection), is to develop a more robust alternative based on a fully Bayesian inference approach. We recall that Bayesian estimation has attracted a lot of attention for many applications [23–33]. It is also known that the Bayesian approach may be more practical due to the existence of powerful simulation techniques like MCMC [29]. Moreover, the model complexity can be easily determined using, for example, marginal likelihood-based techniques. Thus, our focus in this paper is to implement an effective Bayesian learning method for the SSDMM in order to take into account the complexity of medical data and to overcome the drawbacks of frequentist (deterministic) approaches [34,35]. To the best of our knowledge, such an approach has never been tackled before, especially for the problem of chest X-ray image classification.

The rest of this paper is organized as follows. In Section 3, the finite shifted-scaled Dirichlet mixture model and the Bayesian approach are presented. Experimental results and the merits of our approach are introduced in Section 4. Finally, we conclude this work and discuss some possible extensions to be treated in the future.

#### **3. Bayesian Framework for the Shifted-Scaled Dirichlet Mixture Model**

We start this section by reviewing both the Dirichlet and scaled Dirichlet distributions, and then introduce a generalization of these distributions named the shifted-scaled Dirichlet distribution (SSDD). The finite shifted-scaled Dirichlet mixture model is also presented. Then, we develop a fully Bayesian framework for learning the parameters of this finite mixture model.

#### *3.1. Dirichlet and Scaled-Dirichlet Distributions*

**Definition 1** (Dirichlet distribution)**.** *Let us consider a random vector $\vec{Y} = (y_1, \ldots, y_D) \in S^D$ (the sample space), where $\sum_{d=1}^{D} y_d = 1$. We say that $\vec{Y}$ has a D-variate Dirichlet distribution with parameter $\vec{\alpha} = (\alpha_1, \ldots, \alpha_D) \in \mathbb{R}_+^D$ if its density function is:*

$$\vec{Y} \sim Dir^D(\vec{\alpha})$$

$$f(\vec{Y}) = p(\vec{Y}|\theta) = \frac{\Gamma(\alpha_+)}{\prod_{i=1}^{D} \Gamma(\alpha_i)} \prod_{i=1}^{D} y_i^{\alpha_i - 1} \tag{1}$$

*where $\vec{\alpha}$ denotes a shape parameter, $\alpha_+ = \sum_{i=1}^{D} \alpha_i$ and $\Gamma$ indicates the Euler gamma function.*

It is noted that the Dirichlet distribution with $D$ parameters ($\vec{Y} \sim Dir^D(\vec{\alpha})$) is still popular, especially when it comes to analyzing compositional data, and this popularity is due to its conjugacy with the multinomial likelihood.
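This conjugacy can be made concrete with a short sketch of our own (the pseudo-counts below are illustrative, not from the paper): a Dirichlet prior combined with multinomial counts yields another Dirichlet whose parameters are simply the prior pseudo-counts plus the observed counts.

```python
import numpy as np

def dirichlet_posterior(alpha, counts):
    """Conjugate update: a Dir(alpha) prior with multinomial counts n
    gives the posterior Dir(alpha + n)."""
    return np.asarray(alpha, dtype=float) + np.asarray(counts, dtype=float)

# Illustrative values: a flat prior and ten observed draws over three categories.
alpha = np.array([1.0, 1.0, 1.0])
counts = np.array([5, 2, 3])
post = dirichlet_posterior(alpha, counts)
print(post)                # [6. 3. 4.]
print(post / post.sum())   # posterior mean of the category proportions
```

This closed-form update is exactly what makes the Dirichlet family attractive as a building block for the priors used later in this section.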

**Definition 2** (Scaled Dirichlet distribution)**.** *If $\vec{Y}$ follows a scaled Dirichlet distribution, then its density function is given as:*

$$\begin{aligned} \vec{Y} &\sim SDir^D(\vec{\alpha}, \vec{\beta})\\ f(\vec{Y}) &= p(\vec{Y}|\theta) = \frac{\Gamma(\alpha_+)}{\prod_{i=1}^{D} \Gamma(\alpha_i)} \frac{\prod_{i=1}^{D} \beta_i^{\alpha_i} y_i^{\alpha_i - 1}}{(\sum_{i=1}^{D} \beta_i y_i)^{\alpha_+}} \end{aligned} \tag{2}$$

*where $\vec{\alpha} = (\alpha_1, \ldots, \alpha_D) \in \mathbb{R}_+^D$ and $\vec{\beta} = (\beta_1, \ldots, \beta_D) \in \mathbb{R}_+^D$ are the parameters of this distribution; $\vec{\beta}$ is a scale parameter.*

The scaled Dirichlet distribution has $2D$ parameters and in this case we have $\vec{Y} \sim SDir^D(\vec{\alpha}, \vec{\beta})$. If all the components of $\vec{\beta}$ are equal, the scale terms cancel and we recover the Dirichlet model.

#### *3.2. Finite Shifted-Scaled Dirichlet Mixture Model*

**Definition 3** (Shifted-scaled Dirichlet distribution)**.** *Suppose that $\vec{Y}$ follows a shifted-scaled Dirichlet distribution with parameters $\vec{\alpha} = (\alpha_1, \ldots, \alpha_D) \in \mathbb{R}_+^D$, $\vec{\lambda} = (\lambda_1, \ldots, \lambda_D) \in S^D$ and $a \in \mathbb{R}_+$. Then, the probability density of this distribution is given as:*

$$\begin{aligned} \vec{Y} &\sim pSDir^D(\vec{\alpha}, \vec{\lambda}, a) \\ f(\vec{Y}) &= p(\vec{Y}|\theta) = \frac{\Gamma(\alpha_+)}{\prod_{i=1}^{D} \Gamma(\alpha_i)} \frac{1}{a^{D-1}} \frac{\prod_{i=1}^{D} \lambda_i^{-(\alpha_i/a)} y_i^{(\alpha_i/a)-1}}{(\sum_{i=1}^{D} (y_i/\lambda_i)^{1/a})^{\alpha_+}} \end{aligned} \tag{3}$$

*where $\vec{\lambda}$ denotes a location parameter.*

The shifted-scaled Dirichlet distribution has $2D$ parameters and in this case we have $\vec{Y} \sim pSDir^D(\vec{\alpha}, \vec{\lambda}, a)$. If $a = 1$, then we obtain a scaled Dirichlet model.

Now, suppose that we have a set of vectors $\mathcal{Y} = \{\vec{Y}_1, \vec{Y}_2, \ldots, \vec{Y}_N\}$, where each vector $\vec{Y}_n = (y_{n1}, \ldots, y_{nD})$ follows a mixture of SSDDs; then the corresponding likelihood is defined as:

$$p(\mathcal{Y}|\Theta) = \prod_{n=1}^{N} \sum_{k=1}^{K} \pi_k p(\vec{Y}_n|\theta_k) \tag{4}$$

where the model parameters are denoted by $\Theta = (\vec{\pi}, \theta)$ and the $\pi_k$ are positive mixing weights ($\sum_k \pi_k = 1$). Each vector is assumed to come from one component, i.e., $\vec{Y}_n \sim pSDir^D(\vec{\alpha}_k, \vec{\lambda}_k, a_k)$. The shape parameter describes the form of the shifted-scaled Dirichlet components, the scale parameter $a$ controls how the density spreads, and $\vec{\lambda}$ tracks the location of the data densities. In the next section, we will develop our Bayesian approach based on the presented mixture of SSDDs.
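For concreteness, Equations (3) and (4) can be evaluated numerically in log space to avoid underflow. The following is a minimal NumPy sketch written for this exposition (the function names are ours, not the paper's):

```python
import numpy as np
from math import lgamma

def ssd_logpdf(y, alpha, lam, a):
    """Log-density of the shifted-scaled Dirichlet distribution, Eq. (3).
    y and lam lie on the simplex; alpha > 0 componentwise; a > 0."""
    y, alpha, lam = (np.asarray(v, dtype=float) for v in (y, alpha, lam))
    D = y.size
    alpha_plus = alpha.sum()
    # Normalising constant: Gamma(alpha_+) / prod Gamma(alpha_i) / a^(D-1)
    log_norm = lgamma(alpha_plus) - sum(lgamma(ai) for ai in alpha) - (D - 1) * np.log(a)
    # Kernel: prod lambda_i^(-alpha_i/a) y_i^(alpha_i/a - 1)
    log_kernel = np.sum(-(alpha / a) * np.log(lam) + (alpha / a - 1.0) * np.log(y))
    # Denominator: (sum (y_i/lambda_i)^(1/a))^alpha_+
    log_denom = alpha_plus * np.log(np.sum((y / lam) ** (1.0 / a)))
    return log_norm + log_kernel - log_denom

def mixture_loglik(Y, pis, components):
    """Log-likelihood of Eq. (4): sum_n log sum_k pi_k p(y_n | theta_k),
    where `components` is a list of (alpha, lam, a) tuples."""
    total = 0.0
    for y in Y:
        terms = [np.log(pi) + ssd_logpdf(y, *theta) for pi, theta in zip(pis, components)]
        total += np.logaddexp.reduce(terms)
    return total
```

A useful sanity check on this sketch: with $a = 1$ and all $\lambda_i$ equal, the scale and location terms cancel and the density reduces to the standard Dirichlet of Equation (1).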

#### *3.3. Fully Bayesian Learning Algorithm*

In many cases, the deterministic approach (also called the maximum likelihood technique) via the well-known EM algorithm [36] is used to estimate the parameters of finite mixture models due to its simplicity. The deterministic approach assumes that $\mathcal{Z} = (\vec{Z}_1, \ldots, \vec{Z}_N)$ are missing data: $Z_{nj} = 1$ if $\vec{Y}_n$ belongs to component $j$, and $Z_{nj} = 0$ otherwise. Because the likelihood technique depends on initial values and is sensitive to local optima, we propose here to overcome these limitations by developing an efficient approach based on Bayesian inference to better learn the shifted-scaled Dirichlet mixture model. More precisely, we propose to investigate one of the effective simulation techniques called Markov Chain Monte Carlo (MCMC) via the Gibbs sampler [37,38]. Thus, the complete likelihood is defined as:

$$p(\mathcal{Y}, \mathcal{Z}|\Theta) = \prod_{n=1}^{N} \prod_{k=1}^{K} (\pi_k p(\vec{Y}_n|\theta_k))^{Z_{nk}} \tag{5}$$

Using Bayes' formula, the likelihood and the priors are combined to define the posterior distribution as follows:

$$p(\Theta|\mathcal{Y}, \mathcal{Z}) \propto p(\mathcal{Y}, \mathcal{Z}|\Theta)p(\Theta) \tag{6}$$

The proposed Bayesian algorithm for learning the SSDMM parameters is based on the following steps:

- (a) Generate $Z_i^{(t)} \sim \mathcal{M}(1; \hat{Z}_{i1}^{(t-1)}, \ldots, \hat{Z}_{iK}^{(t-1)})$
- (b) Generate $\vec{\pi}^{(t)}$ from $p(\vec{\pi}|\mathcal{Z}^{(t)})$
- (c) Generate $\theta^{(t)}$ from $p(\theta|\mathcal{Z}^{(t)}, \mathcal{Y})$

where $\mathcal{M}(1; \hat{Z}_{i1}^{(t-1)}, \ldots, \hat{Z}_{iK}^{(t-1)})$ is a multinomial distribution of order one with parameters $(p(1|\vec{Y}_i)^{(t-1)}, \ldots, p(K|\vec{Y}_i)^{(t-1)})$. Based on this algorithm, we have to evaluate $p(\vec{\pi}|\mathcal{Z})$ and $p(\theta|\mathcal{Z}, \mathcal{Y})$.

#### 3.3.1. Priors and Posteriors

The choice of priors is one of the most crucial steps in Bayesian modeling. These priors reflect our beliefs about the model parameters and are updated according to the observed data (see, for example, details in [39]). In the following, the choice of the priors is addressed, as well as the determination of the resulting posteriors for our fully Bayesian approach.

Estimating the posterior yields our parameters $\Theta \sim p(\Theta|\mathcal{Y}, \mathcal{Z})$. In order to perform this step, we proceed with an elegant sampling technique called the Gibbs sampler. This method uses the conditional posterior distributions to update each parameter in turn.

Since no convenient conjugate priors exist for $\vec{\alpha}_k$ and $a_k$, we adopt a common choice for them, namely the Gamma distribution $\mathcal{G}(\cdot)$:

$$p(\alpha_{kd}) = \mathcal{G}(\alpha_{kd}|u_{kd}, v_{kd}) \tag{7}$$

$$p(a_k) = \mathcal{G}(a_k|g_k, h_k) \tag{8}$$

Then, we determine the posterior distributions according to these priors as follows:

$$p(\vec{\alpha}_k|\mathcal{Z}, \mathcal{Y}) \propto p(\vec{\alpha}_k) \prod_{Z_{ik}=1} p(\vec{Y}_i|\theta_k) \propto \prod_{d=1}^{D} p(\alpha_{kd}) \prod_{Z_{ik}=1} p(\vec{Y}_i|\theta_k) \tag{9}$$

$$p(a_k|\mathcal{Z}, \mathcal{Y}) \propto p(a_k) \prod_{Z_{ik}=1} p(\vec{Y}_i|\theta_k) \tag{10}$$
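Because these conditional posteriors are known only up to a normalizing constant, they are typically sampled with a Metropolis-Hastings step inside the Gibbs sweep. Below is a hedged sketch of one such update for a single $\alpha_{kd}$, written for this exposition; `loglik_fn` is a hypothetical helper that returns the log-likelihood of the data assigned to component $k$ as a function of $\alpha_{kd}$:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_gamma_prior(x, shape, rate):
    """Log of the Gamma(shape, rate) density, up to an additive constant."""
    return (shape - 1.0) * np.log(x) - rate * x

def mh_step_alpha(alpha_kd, loglik_fn, shape=1.0, rate=0.5, step=0.1):
    """One random-walk Metropolis update for alpha_kd > 0 targeting Eq. (9).
    A log-normal proposal keeps the draw positive; the +/- log terms at the
    end are the Jacobian correction for the multiplicative proposal."""
    prop = alpha_kd * np.exp(step * rng.standard_normal())
    log_accept = (loglik_fn(prop) + log_gamma_prior(prop, shape, rate)
                  - loglik_fn(alpha_kd) - log_gamma_prior(alpha_kd, shape, rate)
                  + np.log(prop) - np.log(alpha_kd))
    return prop if np.log(rng.uniform()) < log_accept else alpha_kd
```

Under the same assumptions, an analogous step applies to the $a_k$ update of Equation (10), using the prior of Equation (8).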

Regarding the parameter $\vec{\lambda}_k$, since it is defined on a simplex, it is a common and classic choice in Bayesian inference to use the Dirichlet distribution as a prior, with parameters $\vec{\eta}_k = (\eta_{k1}, \ldots, \eta_{kD})$. It is expressed as:

$$p(\vec{\lambda}_k|\vec{\eta}_k) = \frac{\Gamma(\sum_{j=1}^{D} \eta_{kj})}{\prod_{j=1}^{D} \Gamma(\eta_{kj})} \prod_{j=1}^{D} \lambda_{kj}^{\eta_{kj} - 1} \tag{11}$$

Knowing this prior, we can estimate the posterior distribution using the following equation:

$$p(\vec{\lambda}_k|\mathcal{Z}, \mathcal{Y}) \propto p(\vec{\lambda}_k|\vec{\eta}_k) \prod_{Z_{ik}=1} p(\vec{Y}_i|\theta_k) \tag{12}$$

For the prior of the mixing weights $\vec{\pi}$, the common choice is the Dirichlet distribution since $\sum_{j=1}^{K} \pi_j = 1$. So, the mixing weight prior is expressed as:

$$p(\vec{\pi}|K, \vec{\delta}) = \frac{\Gamma(\sum_{j=1}^{K} \delta_j)}{\prod_{j=1}^{K} \Gamma(\delta_j)} \prod_{j=1}^{K} \pi_j^{\delta_j - 1} \tag{13}$$

The selected prior of $\mathcal{Z}$ (the membership variable) is defined as:

$$p(\mathcal{Z}|\vec{\pi}, K) = \prod_{j=1}^{K} \pi_j^{n_j} \tag{14}$$

where $n_j$ is the total number of vectors in cluster $j$. Given Equations (13) and (14), we have

$$p(\vec{\pi}|\ldots) \propto p(\mathcal{Z}|\vec{\pi}, K) p(\vec{\pi}|K, \vec{\delta}) \propto \prod_{j=1}^{K} \pi_j^{n_j} \frac{\Gamma(\sum_{j=1}^{K} \delta_j)}{\prod_{j=1}^{K} \Gamma(\delta_j)} \prod_{j=1}^{K} \pi_j^{\delta_j - 1} \propto \frac{\Gamma(\sum_{j=1}^{K} \delta_j)}{\prod_{j=1}^{K} \Gamma(\delta_j)} \prod_{j=1}^{K} \pi_j^{n_j + \delta_j - 1} \tag{15}$$

This posterior is proportional to a Dirichlet distribution with parameters $(\delta_1 + n_1, \ldots, \delta_K + n_K)$. In addition, the posterior of the membership $\mathcal{Z}$ may be deduced as:

$$p(Z_i = j|\ldots) \propto \pi_j p(\vec{Y}_i|\theta_j) \tag{16}$$
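Unlike the Metropolis updates, the posteriors in Equations (15) and (16) are standard distributions and can be sampled directly. A minimal sketch of the mixing-weight draw (our own illustration, with toy cluster assignments):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_mixing_weights(Z, K, delta=1.0):
    """Draw pi from its posterior Dir(delta_1 + n_1, ..., delta_K + n_K),
    Eq. (15), where n_j counts the vectors currently assigned to cluster j."""
    n = np.bincount(Z, minlength=K)
    return rng.dirichlet(delta + n)

Z = np.array([0, 0, 1, 2, 2, 2])     # toy assignments for K = 3 clusters
pi = sample_mixing_weights(Z, K=3)   # one posterior draw; sums to 1
```

Here a symmetric hyperparameter `delta` is assumed for brevity; component-specific $\delta_j$ values plug in the same way.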

Finally, we choose the uniform distribution as an appropriate prior for $K$. This value can vary between 1 and $K_{max}$ ($K_{max}$ is a predefined value). We summarize the proposed model in the graphical representation of Figure 1.

**Figure 1.** Graphical representation of our developed Bayesian finite shifted-scaled Dirichlet mixture model. Fixed hyperparameters are indicated by rounded boxes and random variables by circles. *Y* is the observed variable, *Z* represents the latent variable, the large box indicates repeated process, and the arcs show the dependencies between variables.

#### 3.3.2. Complete Bayesian Estimation Algorithm

The Gibbs sampling technique is mainly based on alternately sampling from the conditional distributions for several iterations. Indeed, at each iteration $t$, the estimate $\Theta^{(t)}$ is sampled conditionally on its previous approximation $\Theta^{(t-1)}$. Having all these posterior probabilities in hand, the complete MCMC-based Bayesian algorithm to learn the parameters of our finite mixture model, and especially the steps of our Gibbs sampler, are as follows:


(a) Generate $Z_i^{(t)} \sim \mathcal{M}(1; \hat{Z}_{i1}^{(t-1)}, \ldots, \hat{Z}_{iK}^{(t-1)})$, where $\mathcal{M}(1; \hat{Z}_{i1}^{(t-1)}, \ldots, \hat{Z}_{iK}^{(t-1)})$ is a multinomial distribution of order one with parameters $(p(1|\vec{Y}_i)^{(t-1)}, \ldots, p(K|\vec{Y}_i)^{(t-1)})$.

(b) Generate $\vec{\pi}^{(t)}$ from $p(\vec{\pi}|\mathcal{Z}^{(t)})$, i.e., from the Dirichlet posterior of Equation (15).

(c) Generate $\theta^{(t)}$ from $p(\theta|\mathcal{Z}^{(t)}, \mathcal{Y})$ using the conditional posteriors of Equations (9), (10) and (12).
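Putting the pieces together, one sweep of the sampler can be sketched as follows. This is a schematic NumPy skeleton written for this exposition, not the authors' code: `logpdf_fns` holds the per-component log-densities and `sample_theta` stands in for the component-parameter updates of step (c):

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_assignments(Y, pis, logpdf_fns):
    """Step (a): draw each Z_i from a one-trial multinomial with weights
    proportional to pi_j * p(y_i | theta_j), Eq. (16)."""
    Z = np.empty(len(Y), dtype=int)
    for i, y in enumerate(Y):
        logw = np.log(pis) + np.array([f(y) for f in logpdf_fns])
        w = np.exp(logw - logw.max())        # stabilise before normalising
        Z[i] = rng.choice(len(pis), p=w / w.sum())
    return Z

def gibbs_sweep(Y, state, logpdf_fns, sample_theta, delta=1.0):
    """One iteration of steps (a)-(c) of the Gibbs sampler."""
    pis, thetas = state
    Z = sample_assignments(Y, pis, logpdf_fns)   # step (a)
    n = np.bincount(Z, minlength=len(pis))
    pis = rng.dirichlet(delta + n)               # step (b), Eq. (15)
    thetas = sample_theta(Y, Z, thetas)          # step (c), Eqs. (9)-(12)
    return (pis, thetas), Z
```

Iterating this sweep, after a burn-in period, yields draws from the joint posterior of Equation (6).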

#### **4. Experimental Results**

The goal of this section is to evaluate and validate the developed statistical model with the different inference techniques. We have considered several real data sets of images including COVID-19 and different pneumonia types.

#### *4.1. Data Sets*

The first main COVID-19 dataset (https://github.com/ieee8023/covid-chestxray-dataset) used in our experiments is the one developed by Cohen et al. [42]. It contains 542 chest X-ray (CXR) images, of which 434 represent patients positive for COVID-19 and the rest are COVID-19 negative. The image dimension is 4248 × 3480 pixels. Main statistics of this dataset are given in Table 1. An illustrative sample of a confirmed Coronavirus Disease 2019 (COVID-19) case is given in Figure 2; this image is from a 53-year-old female who had a fever and cough for 5 days. Multifocal patchy opacities can be seen in both lungs (arrows) [43].

**Figure 2.** Illustrative sample of chest X-ray images: COVID-19 (left) and healthy (right) [43].

**Table 1.** Data description.


We also ran our implemented framework on another available dataset named the Augmented COVID-19 Dataset (https://data.mendeley.com/datasets/2fxz4px6d8/4). It is collected from the previous dataset and the Kaggle one (https://www.kaggle.com/paultimothymooney/chest-xray-pneumonia). It is made up of augmented radiographs with and without COVID-19. Here, the number of images is larger than in the previous dataset; our aim is to study the performance of our model when the size of the data increases. This dataset contains 912 COVID-19 images and 912 non-COVID-19 images. The augmentation process applies geometric and other transformations such as translation, rotation, scaling, flipping, noising and blurring. Some illustrative augmented images are given in Figure 3.

**Figure 3.** Illustrative examples of augmented Chest X-Rays with COVID-19 from the dataset [44].

Finally, we use the chest-xray-pneumonia dataset from Kaggle (https://www.kaggle.com/paultimothymooney/chest-xray-pneumonia) to evaluate performance on a big dataset covering viral and bacterial infections as well as normal cases. It contains 5856 CXR images, of which 1583 are normal and 4273 are infected with pneumonia. The image dimension is 1024 × 1024 pixels. This dataset is structured into three folders: train, test and val. Some samples are given in Figure 4. Statistics about this dataset are shown in Table 1.

**Figure 4.** Illustrative samples of chest X-ray pneumonia images from the dataset in [45]: normal (left), bacterial pneumonia (middle) and viral pneumonia (right).

#### *4.2. Methodology*

The developed model is applied to classify several images from different datasets as normal or COVID-19-affected patients using CXR images. To address this objective, we proceed with some preprocessing steps. After a pre-segmentation step of the lung region, we extract relevant features based on texture analysis. Indeed, several recently published works have shown that the lung is the main organ affected by the COVID-19 coronavirus. The classification is performed into two classes: normal and abnormal. Each image is modelled with a mixture of SSDDs; then we apply the MCMC algorithm to estimate the parameters of each component. Here, the classification problem amounts to assigning each image to the appropriate class using Bayes' rule. In other words, each image is assigned to the class that has the greatest posterior probability. The pipeline of the proposed method is given in Figure 5.
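The Bayes-rule assignment just described reduces to an argmax over per-class scores. A minimal sketch (our own illustration; `class_loglik_fns` is a hypothetical list of fitted per-class log-likelihood functions):

```python
import numpy as np

def classify(feature_vec, class_priors, class_loglik_fns):
    """Assign a feature vector to the class with the largest posterior:
    argmax_c log p(c) + log p(x | c); the shared evidence term is dropped
    since it does not affect the argmax."""
    scores = np.log(class_priors) + np.array([f(feature_vec) for f in class_loglik_fns])
    return int(np.argmax(scores))
```

With two classes (normal, abnormal) the winning index directly gives the decision for each image.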

It is noted that, in many cases, medical images such as chest X-rays are not easy to interpret; thus, it is important to identify meaningful patterns in order to interpret them better and improve the decision. Feature extraction is the process of acquiring relevant information such as texture. The feature extraction step improves the performance and accelerates the processing time. In particular, texture structures (e.g., fine, smooth, coarse or grained) effectively characterize visual patterns in the image. In the state of the art, many texture extraction methods have been proposed, such as statistical ones based on different statistical orders of the gray-level values. For complex images like medical ones, the use of a single feature value cannot lead to satisfactory results; thus, it is important to consider more features to increase the expected performance [46]. In this work, we focus on investigating the so-called Gray Level Co-occurrence Matrix (GLCM)-based features, which have been shown to be efficient and to offer interesting results in terms of classification accuracy. The GLCM provides a co-occurrence matrix of the joint probability density of the gray levels of pairs of pixels. In this work, second-order statistics are used to compute features that discriminate lung abnormalities well. In particular, the following features [47] are calculated for each image: contrast (large differences between neighboring pixels), correlation, energy, entropy, difference variance, difference entropy, inverse difference normalized, and information measures of correlation. In our analysis, we focus on extracting the lung areas using image thresholding and segmentation, which identifies the left and right lungs in CXR images. In order to remove noise, we apply a Gaussian filter. In Figure 6, we illustrate the segmented lungs obtained using the above method. After isolating the lungs, we proceed with the feature extraction step and then with classification using the proposed statistical model. The required time for feature extraction is a few seconds per image, and the model fitting took between 20 and 30 min for the different data sets.
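As an illustration of the GLCM features just described, the sketch below computes a minimal co-occurrence table for one pixel offset and three of the listed statistics. This is our own simplified NumPy version, written for this exposition; a production pipeline would rather rely on a full implementation such as the one in scikit-image:

```python
import numpy as np

def glcm(img, levels, offset=(0, 1)):
    """Gray-Level Co-occurrence Matrix for a single non-negative offset:
    a `levels x levels` table of joint probabilities of gray-level pairs."""
    dr, dc = offset
    a = img[:img.shape[0] - dr, :img.shape[1] - dc]   # reference pixels
    b = img[dr:, dc:]                                 # neighbours at the offset
    M = np.zeros((levels, levels))
    np.add.at(M, (a.ravel(), b.ravel()), 1.0)         # count co-occurring pairs
    return M / M.sum()                                # normalise to probabilities

def glcm_features(P):
    """Three of the second-order statistics used as texture features."""
    i, j = np.indices(P.shape)
    contrast = float(np.sum(P * (i - j) ** 2))   # large differences between neighbours
    energy = float(np.sum(P ** 2))
    entropy = float(-np.sum(P[P > 0] * np.log(P[P > 0])))
    return contrast, energy, entropy
```

The remaining features of the list above (correlation, difference variance, etc.) are computed from the same matrix `P` in an analogous way.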

**Figure 5.** The pipeline of the proposed method. First, the lungs are segmented, then robust visual features are extracted. Features are modelled using the proposed mixture model (SSDMM) and a Bayesian framework is applied to estimate the parameters of the model. Finally, images are classified on the basis of Bayes' rule.

**Figure 6.** Process of lungs regions extraction applied on image sample from [42].

#### *4.3. Results Analysis*

In this section, we investigate our approach for COVID-19 detection. The first goal is to prove the potential of our Bayesian learning algorithm compared to another learning method, namely maximum likelihood (ML) estimation. The second goal is to compare the performance of the proposed shifted-scaled Dirichlet mixture model with other methods, namely Gaussian mixture-based, Gamma mixture-based, Dirichlet mixture-based and scaled Dirichlet mixture-based methods. For performance investigation, we evaluate our Bayesian learning method and the other methods in terms of overall accuracy (ACC), detection rate (DR), and false-positive rate (FPR).

Tables 2–4 show the classification accuracies for the test sets of each dataset when applying different generative approaches, namely: Gaussian mixture model with maximum likelihood (**GMM-ML**) and with Bayesian inference (**GMM-B**), Gamma mixture model with maximum likelihood (Γ**MM-ML**), Dirichlet mixture with maximum likelihood (**DMM-ML**) and with Bayesian inference (**DMM-B**), scaled Dirichlet mixture with maximum likelihood (**SDMM-ML**) and with Bayesian inference (**SDMM-B**), shifted-scaled Dirichlet mixture with maximum likelihood (**SSDMM-ML**), and our proposed method, the shifted-scaled Dirichlet mixture with Bayesian inference (**SSDMM-B**).

According to these tables, we can clearly see that, in general, all mixture models provide encouraging results given the difficulty of the unsupervised learning problem. Our proposed Bayesian method for the shifted-scaled Dirichlet mixture outperforms, according to the metrics used, the remaining methods. Indeed, our approach achieves better accuracy as well as a lower false-positive rate than both the Dirichlet and Gaussian mixtures. We can also see that Bayesian learning provides better results than the ML approach for all models. For the CXR-COVID dataset, SSDMM-B outperforms the other models with an accuracy of 89.57% compared to 88.08% for SDMM-B, 88.04% for DMM and 82.44% for GMM. Our Bayesian model is also slightly better than SSDMM-ML [22]. Likewise, we reached the same conclusion for the other datasets, and we obtained the highest accuracy of 93.03% with our model **SSDMM-B** on the CXR-Pneumonia dataset. According to this last result, it is clear that the accuracy increases (and the false-positive rate decreases) as the dataset size increases. This can be seen for the CXR-Augmented-COVID and CXR-Pneumonia datasets, which contain more images than CXR-COVID. On the basis of the overall accuracy (ACC) for the three datasets (CXR-COVID, CXR-Pneumonia, and CXR-Augmented-COVID), the difference between the highest and lowest accuracy lies between 5.2% and 7.46% for each dataset. The difference between some methods is about 2.26%, which is also considered significant according to Student's t-test. The obtained results confirm the merits of the fully Bayesian formalism for the shifted-scaled Dirichlet mixture, which is more flexible (since it has more degrees of freedom) than the Dirichlet and scaled Dirichlet mixtures. Its flexibility also makes it possible to easily integrate more knowledge, and especially a feature selection mechanism, into the proposed framework.
On the other hand, even a small improvement is worthwhile given the difficulty of the problem, especially with the availability of powerful machines to do the processing and simulations. Concerning the quantification of modeling uncertainty, this is something that distinguishes our approach from deep learning models (black boxes). We are currently working with clinicians to quantify the uncertainty and extract interpretations, as well as explanations, from our models, which is possible thanks to the generative nature of the deployed model.


**Table 2.** Overall accuracy for the chest X-ray (CXR)-COVID Dataset.


**Table 3.** Overall accuracy for CXR-Pneumonia Dataset.

**Table 4.** Overall accuracy for CXR-Augmented COVID-19 Dataset.


It should also be noted that the lung segmentation step is difficult, particularly when the images include acute respiratory distress syndrome. This difficulty is due to the low contrast at the boundary of the lung. Moreover, when the number of images in a dataset is too small, the obtained results are lower than for larger datasets. We can conclude that the obtained results are very encouraging given that we approach the classification problem in an unsupervised manner. In fact, the flexibility of the shifted-scaled mixture model and the robustness of the texture-based features lead to more stable results. For COVID-19 identification through CXR images, the proposed fully Bayesian learning approach for SSDMM has confirmed that it is capable of discriminating images according to their texture properties. To further improve these results, other descriptors may be needed, in particular a robust feature selection mechanism to filter out unreliable features and keep only the most relevant ones. Please note that various studies in the state of the art [53] show that textures are very promising for many medical applications [54]. A comparison between different feature-based techniques is beyond the scope of this article; instead, we investigated one robust texture-based descriptor that yields interesting results for the classification of chest X-ray (CXR) images and COVID-19 detection.
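As an illustration of the kind of texture descriptor discussed above, the sketch below computes a basic 8-neighbour local binary pattern (LBP) histogram over a grayscale image given as a list of lists. This is a generic stand-in for texture-based features, not necessarily the exact descriptor used in this work.

```python
def lbp_histogram(img):
    """Normalised histogram of 8-neighbour local binary pattern codes.

    Each interior pixel is compared with its 8 neighbours; neighbours
    greater than or equal to the centre set one bit of an 8-bit code.
    """
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    hist = [0] * 256
    rows, cols = len(img), len(img[0])
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            centre = img[r][c]
            code = 0
            for bit, (dr, dc) in enumerate(offsets):
                if img[r + dr][c + dc] >= centre:
                    code |= 1 << bit
            hist[code] += 1
    total = sum(hist)
    return [v / total for v in hist]
```

A flat region maps every interior pixel to the all-ones code, while edges and blobs spread mass across other bins, so the histogram summarises local texture independently of absolute intensity.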

#### **5. Conclusions**

In this paper, we have addressed the problems of modeling and classification of multidimensional non-Gaussian data via a fully Bayesian learning approach based on a shifted-scaled Dirichlet mixture model. We have especially tackled the problems of chest X-ray (CXR) image classification and COVID-19 detection. The flexibility and capability of the proposed statistical framework are evaluated on three public datasets related to COVID-19 and pneumonia. Unlike other statistical methods, which make the heavy assumption that input data are Gaussian, which is not always true, especially for real medical applications, the data in our work are modelled via a non-Gaussian model using finite mixtures of shifted-scaled Dirichlet distributions that offer reasonable explanations. Our framework has provided promising results and outperforms other methods. In particular, the Bayesian inference results are more interesting thanks to the consideration of the joint posterior distribution. In this work we have investigated an effective MCMC-based approximation technique, given that exact inference in fully Bayesian methods is not easy to compute. Our implemented approach also has the advantage of being general and extensible enough to be applied to large-scale data presenting various infection types. Future work could be devoted to extending the proposed framework via nonparametric approaches. Other promising directions include the integration of a feature selection mechanism into the statistical model to improve its generalization capabilities. We also hope that many other real-world problems, including medical ones, will be addressed within the proposed framework.
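The MCMC scheme referred to above alternates between sampling the component allocations, the mixing weights and the component parameters from their conditional distributions. As a minimal illustration of this alternating Gibbs structure (not the paper's SSDMM sampler, whose non-conjugate conditionals require Metropolis-within-Gibbs steps), the sketch below runs a Gibbs sweep for a two-component, unit-variance Gaussian mixture on synthetic one-dimensional data; all values are illustrative.

```python
import math
import random

random.seed(0)

# Synthetic data from two well-separated clusters (illustrative only).
data = [random.gauss(0.0, 1.0) for _ in range(100)] + \
       [random.gauss(6.0, 1.0) for _ in range(100)]

K = 2
mu = [min(data), max(data)]   # component means, initialised at the extremes
pi = [0.5, 0.5]               # mixing weights

def gauss_pdf(x, m):
    return math.exp(-0.5 * (x - m) ** 2) / math.sqrt(2 * math.pi)

for sweep in range(50):
    # 1) Sample each allocation z_i from its conditional (the responsibilities).
    z = []
    for x in data:
        w = [pi[k] * gauss_pdf(x, mu[k]) for k in range(K)]
        z.append(0 if random.random() < w[0] / sum(w) else 1)
    # 2) Sample the weights from their Dirichlet posterior (symmetric prior,
    #    alpha = 1), via normalised Gamma draws.
    counts = [z.count(k) for k in range(K)]
    g = [random.gammavariate(1 + c, 1.0) for c in counts]
    pi = [gi / sum(g) for gi in g]
    # 3) Sample each mean from its Gaussian conditional (unit-variance
    #    likelihood, flat prior): centred on the cluster average.
    for k in range(K):
        pts = [x for x, zk in zip(data, z) if zk == k]
        if pts:
            avg = sum(pts) / len(pts)
            mu[k] = random.gauss(avg, 1.0 / math.sqrt(len(pts)))

mu.sort()
```

After the burn-in sweeps, the retained samples of `mu` and `pi` approximate the joint posterior, which is what enables the uncertainty quantification discussed above.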

**Author Contributions:** Conceptualization, S.B. and A.A.; methodology, N.B.; software, N.B.; validation, S.B. and A.A. and N.B.; formal analysis, S.B.; investigation, S.B. and A.A.; resources, N.B.; data curation, N.B.; writing—original draft preparation, S.B. and A.A.; writing—review and editing, N.B.; visualization, A.A.; supervision, N.B.; project administration, S.B.; funding acquisition, S.B. All authors have read and agreed to the published version of the manuscript.

**Funding:** This work was supported by the Research Groups Program funded by Deanship of Scientific Research, Taif University, Ministry of Education, Saudi Arabia, under grant number 1-441-50.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** https://github.com/ieee8023/covid-chestxray-dataset; https://www.kaggle.com/paultimothymooney/chest-xray-pneumonia; https://data.mendeley.com/datasets/2fxz4px6d8/4

**Acknowledgments:** The authors would like to thank the Deanship of Scientific Research, Taif University, Kingdom of Saudi Arabia, for their funding support under grant number 1-441-50.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**

