*Article* **Entropy Based Data Expansion Method for Blind Image Quality Assessment**

#### **Xiaodi Guan 1,2, Lijun He 1, Mengyue Li 1 and Fan Li 1,\***


Received: 20 November 2019; Accepted: 30 December 2019; Published: 31 December 2019

**Abstract:** Image quality assessment (IQA) is a fundamental technology for image applications that can help correct low-quality images during the capture process. The ability to expand distorted images and create human visual system (HVS)-aware labels for training is the key to performing IQA tasks using deep neural networks (DNNs), and image quality is highly sensitive to changes in entropy. Therefore, a new data expansion method based on entropy and guided by saliency and distortion is proposed in this paper. We introduce saliency into a large-scale expansion strategy for the first time. We regionally add distortion to a set of original images to obtain a distorted image database and label the distorted images using entropy. The careful design of the distorted images and the entropy-based labels fully reflects the influences of both saliency and distortion on quality. The expanded database plays an important role in the application of a DNN for IQA. Experimental results on IQA databases demonstrate the effectiveness of the expansion method, and the network's prediction performance on the IQA databases is improved compared with that of its predecessor algorithm. Therefore, we conclude that a data expansion approach that fully reflects HVS-aware quality factors is beneficial for IQA. This study presents a novel method for incorporating saliency into IQA, namely, representing it as regional distortion.

**Keywords:** deep neural network; entropy; data expansion; blind image quality assessment; saliency and distortion; human visual system; declining quality

#### **1. Introduction**

With the current state of development of multimedia technology, a large number of videos and images are being generated and processed every day, which are often subject to quality degradation. As a fundamental technology for various image applications, image quality assessment (IQA) has always been an important issue. The aim of IQA is to automatically estimate image quality to assist in the handling of low-quality images during the capture process. IQA methods can be divided into three major classes, namely, full-reference IQA (FR-IQA) [1,2], reduced-reference IQA (RR-IQA) [3], and no-reference IQA (NR-IQA), based on whether reference images are available. In most cases, no reference version of a distorted image is available; consequently, it is both more realistic and increasingly important to develop an NR-IQA model that can be widely applied [4]. NR-IQA models are also called blind IQA (BIQA) models. Notably, deep neural networks (DNNs) have performed well in many computer vision tasks [5–8], which encouraged researchers to use the formidable feature representation power of DNNs to perform end-to-end optimized BIQA, an approach called DNN-based BIQA. These methods use some prior knowledge from the IQA domain, such as the relationship among entropy, distortion and image quality, to attempt to solve IQA tasks using the powerful learning ability of neural networks. Accordingly, there is a strong need for DNN-based BIQA models in various cases where image quality is crucial.

However, attempts to use DNNs for the BIQA task were limited due to the conflicting characteristics of DNNs and IQA [9]. DNNs require massive amounts of training data to comprehensively learn the relationships between image data and score labels; however, classical IQA databases are much smaller than the computer vision datasets available for deep learning. An IQA database is composed of a series of distorted images and corresponding subjective score labels. Because obtaining a large number of reliable human-subjective labels is a time-consuming process, the construction of IQA databases requires many volunteers and complex, long-term experiments. Therefore, expanding the available number of distorted image samples and labels that fully reflect human visual system (HVS)-aware quality factors for training is a key problem for DNN-based BIQA.

Based on the baseline datasets considered for expansion, the current DNN-based BIQA methods can be divided into two general approaches. The first approach is to use the images in an existing IQA dataset as the parent samples; we call this approach small-scale expansion. In this case, the goal of expansion is achieved by dividing the distorted images from the IQA dataset into small patches and assigning to each patch a separate quality label that conforms to human visual perception. The second strategy is to expand the number of distorted images by using another, non-IQA dataset as the parent dataset; we call this approach large-scale expansion. In this approach, nondistorted images from outside the IQA dataset are first selected; then, distortion is added to these images based on the types of distortion present in the IQA dataset to construct new distorted images on a large scale. Then, the newly generated distorted images are simply labeled with different values that reflect their ranking in terms of human visual perception quality to achieve the goal of expansion.
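As a concrete illustration, the small-scale strategy described above amounts to cropping a distorted image into patches and giving each patch a label derived from the parent image. The following is a minimal toy sketch, not the authors' implementation; the image representation (a grayscale 2-D list), the patch size, and all function names are our own assumptions:

```python
def expand_small_scale(image, score, patch_size):
    """Divide one distorted image into non-overlapping patches and assign
    the parent image's subjective score to every patch (the initial
    small-scale strategy; later work refines the per-patch labels)."""
    h, w = len(image), len(image[0])
    samples = []
    for top in range(0, h - patch_size + 1, patch_size):
        for left in range(0, w - patch_size + 1, patch_size):
            patch = [row[left:left + patch_size]
                     for row in image[top:top + patch_size]]
            samples.append((patch, score))  # every patch inherits the global label
    return samples

# A 64x64 image cut into 32x32 patches yields 4 labeled training samples.
image = [[0] * 64 for _ in range(64)]
patches = expand_small_scale(image, score=42.0, patch_size=32)
print(len(patches))  # → 4
```

Note that every patch inherits the same score, which is exactly the HVS inconsistency discussed below: local perceptual quality need not match the global label.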

The small-scale expansion strategy relies on division. The initial algorithm [10] assigns the score label of each parent image to the corresponding small patches and then uses a shallow CNN to perform end-to-end optimization. The small patches and their labels are input directly to the network during training, and the predicted scores for all the small patches are averaged to obtain the overall image score during prediction. However, this type of expansion is not strictly consistent with the principles of the HVS. Previous studies have shown that saliency exerts a crucial influence on human-perceived quality; thus, saliency should be considered in IQA together with distortion and content [11–13]. These studies have shown that the human eye tends to focus on certain regions when assessing an image's visual quality and that different regions have different influences on the perceived quality of a distorted image. Therefore, it is not appropriate for all patches from a single image to be assigned identical quality labels because local perceptual quality is not always consistent with global perceptual quality [14,15]: an uneven spatial distortion distribution will result in varying local scores for different image patches. Thus, many subsequent works have attempted to address this problem by incorporating the saliency factor into DNN-based BIQA algorithms. The authors of [16,17] still assigned identical initial quality labels to the small patches, but the predicted scores for all small patches were eventually multiplied by different weights based on their saliency to obtain the overall image scores, thereby weakening the influence of patches with inaccurate labels in nonsalient regions on the overall image quality. In [18,19], strategies based on proxy quality scores [18] and an objective error map [19] were used to further improve the accuracy of the labels for different patches.
All these strategies further increased the accuracy of this type of expansion and led to better predictions, confirming that the joint consideration of the influence of saliency and distortion on image quality more comprehensively reflects HVS-related perceptual factors. However, division strategies have obvious inherent drawbacks. First, because expansion is applied only to the existing distorted images in the IQA database (the expansion parent), the diversity of the training sample contents is not increased. The different levels of quality influenced by saliency and distortion must already be present in the training dataset, but it is difficult to claim that a typical small IQA database can comprehensively represent the influence of HVS factors on quality; hence, such methods are easily susceptible to overfitting. Second, there is a tradeoff between the extent of expansion achieved and the patch size. When the patch size is too small, each individual patch will no longer contain sufficient distorted semantic information for IQA, thus inevitably destroying the correlations between image patches. In contrast, a large patch size results in smaller-scale expansion, meaning that only a shallow network can be used for training. Moreover, the generated saliency-based patch weights will show large deviations from the real salient regions.
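The saliency-weighted pooling used in [16,17] amounts to a weighted average of per-patch predictions, so that salient patches dominate the image-level score. The sketch below is a generic illustration under our own assumptions (names and weights are hypothetical), not the exact formulation of those papers:

```python
def pooled_quality(patch_scores, saliency_weights):
    """Saliency-weighted pooling of per-patch quality predictions, in the
    spirit of [16,17]: patches in salient regions contribute more to the
    overall image score than patches in nonsalient regions."""
    total_w = sum(saliency_weights)
    return sum(s * w for s, w in zip(patch_scores, saliency_weights)) / total_w

# Two patches: a high-quality salient patch and a low-quality nonsalient one.
# The pooled score stays close to the salient patch's prediction:
print(pooled_quality([80.0, 20.0], [0.9, 0.1]))  # → 74.0
```

This weighting mitigates, but does not remove, the label noise introduced by assigning one global score to every patch.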

To avoid dividing the images in the IQA database while still not requiring human labeling, the large-scale expansion strategy instead involves creating new distorted images by adding distortion to a large number of high-definition images obtained from outside the IQA database. Separate values that reflect the overall quality level are assigned to each distorted image obtained from each original parent image. Because the labels of the newly generated images are not direct quality scores, the expanded database is used only to pretrain the DNN, which is then fine-tuned on the IQA database. This approach alleviates the training pressure placed on the small IQA dataset and successfully avoids the drawbacks of division encountered in the small-scale strategy because the number of labeled training images is expanded by a large amount, increasing the diversity of the training sample content. Such unrestricted, large-scale expansion also makes it possible to use deeper networks; in fact, a deep model pretrained on an image recognition task could also be used to further enhance the effect. This large-scale expansion approach was developed over the past two years, and it showed a much better effect than small-scale expansion algorithms. However, large-scale expansion also has some significant shortcomings. Although the newly generated images with quality-level labels are consistent with human perception, these labels reflect only the overall distortion level; saliency and the joint effects of saliency and distortion on quality are not considered. Moreover, large-scale expanded datasets are typically prepared to assist in specific IQA tasks. The more similar the extended pretraining dataset is to the original IQA dataset for the target task, the more effectively it can support the IQA task. In this case, a "similar" dataset is an expanded dataset that fully reflects the influences of the HVS-related perceptual factors (saliency and distortion) as embodied in the IQA task of interest.
The current algorithms [15,20] that use this approach mainly follow the lead of RankIQA [20]: they generate a series of distorted image versions by adding different levels of distortion to each original parent image (with uniform distortion for each image region) and assign different numerical-valued labels to them to reflect the overall quality level. Consequently, the quality degradation of each distorted image depends only on the level of the distortion added to the whole image. As a result, HVS-aware quality factors are not well embedded into the expanded database. Using this type of extended dataset to pretrain the network will simply cause it to learn that a greater level of distortion leads to greater quality degradation; the network will be unable to discern that salient regions are more important than nonsalient regions and that different regions contribute differently to the overall image quality. Obviously, this type of expansion does not result in an ideal pretraining dataset for IQA.

In this paper, we introduce saliency into the large-scale expansion method, with the aim of constructing DNN-based BIQA models that will be effective in various cases where image quality is crucial. The objective is to be able to automatically estimate image quality to assist in handling low-quality images during the capture process. Moreover, by virtue of the introduction of saliency, our proposed model can achieve better prediction accuracy for large-aperture images (with clear foregrounds and blurred backgrounds), which are currently popular. We propose a new approach for incorporating saliency into BIQA that is perfectly compatible with the large-scale data expansion approach to ensure the full consideration of HVS-related factors in the mapping process. Specifically, we introduce saliency factors through regional distortion, thereby conveniently combining saliency and distortion factors during the expansion of each image to generate a series of distorted image versions. Then, we use the information entropy to rank these images based on their quality to complete the labeling process. By constructing a more efficient pretraining tool for DNN-based BIQA, we improve the prediction performance of the final model. We use our generated large-scale dataset to pretrain a DNN (VGG-16) and then use the original small IQA dataset to fine-tune the pretrained model. Extensive experimental results obtained by applying the final model to four IQA databases demonstrate that compared with existing BIQA models, our proposed BIQA method achieves state-of-the-art performance, and it is effective on both synthetic and authentic distorted images. Therefore, we conclude that a data expansion approach that fully reflects HVS-aware quality factors is beneficial for IQA. This study presents a novel method for incorporating saliency into IQA tasks, namely, representing it as regional distortion.

Our contributions can be summarized as follows: (1) We introduce saliency into the large-scale expansion method in a manner that fully reflects the influence of HVS-aware factors on image quality, representing a new means of considering saliency in IQA. With the incorporation of the saliency factor, the proposed data expansion method overcomes the main drawback of its predecessor algorithm, RankIQA [20], which enables the learning of only the quality decline caused by the overall distortion level. Our approach enables the construction of an efficient pretraining dataset for DNN-based BIQA tasks and results in improved prediction accuracy compared to previous BIQA methods. (2) We propose a new data expansion method that fully reflects HVS-aware factors by generating distorted images based on both distortion and saliency and assigning labels based on entropy. This method successfully embeds the joint influence of saliency and distortion into a large-scale expanded distorted image dataset.

The remainder of this paper is organized as follows. Section 2 describes the important factors that affect image quality and explores how those factors affect human judgments of image quality. Section 3 introduces the proposed expansion method and describes its use in IQA in detail. Section 4 reports the experimental results and presents corresponding discussions. Finally, Section 5 offers conclusions.

#### **2. Exploration of Functional HVS Aspects for Image Quality**

As stated above, the main requirement for the expanded dataset is that it should be as similar as possible to the original IQA dataset. Therefore, to identify desirable features for BIQA model design, we analyzed the influence of three functional aspects of the HVS on human visual IQA. To improve the reliability of the results, all images considered below were taken from public IQA datasets, which consist of distorted images paired with subjective quality score labels and are widely used as benchmarks of the human visual perception mechanism.

#### *2.1. The Influence of Saliency on Image Quality*

As previously discussed, saliency is an important factor that influences image quality because when people observe an image, they tend to focus on the regions that contain the most relevant information in the visual scene. Previous HVS evaluation experiments with eye trackers [11–13,21] showed that the visual importance of different local regions varies when humans are estimating the visual quality of a whole image.

To conduct a detailed analysis of the substantial impact of saliency on quality, we analyzed several images with different visual quality scores. The images shown in panels (a) and (b) of Figure 1 are derived from the LIVE Challenge dataset [22], an authentic distortion database in which the labels represent the mean opinion score (MOS) and take values in the range of [0, 100], with higher values indicating better quality; in this dataset, multiple nonuniform distortions typically appear in each image. Images (a) and (b) contain identical levels of blurring in the salient and nonsalient regions, respectively. However, image (b) has a much better visual quality label in the database than image (a) does. These examples show that the level of distortion in the salient regions of an image is more likely to determine the final quality rank than is the level of distortion in nonsalient regions. Humans can more easily perceive distortions in the salient regions and thus assign lower quality scores to images with such distortions. When the foreground area of an image is distorted, the visual quality score of the whole image immediately decreases, regardless of whether the background region is distorted. Thus, the quality in the salient regions is closely related to the final quality score of the whole image.

**Figure 1.** Distorted images from the LIVEC dataset: (**a**) MOS = 50.1882; (**b**) MOS = 72.4574.

By contrast, the effect in nonsalient regions is the opposite. As shown in Figure 1, the low level of distortion in the nonsalient regions of image (a) does not prevent the quality degradation caused by distortions in the salient region. This phenomenon is widespread, especially in synthetic distortion databases. The images shown in panels (a)–(d) of Figure 2 are from the LIVE [23] dataset, which is a synthetic distortion database that contains 29 reference images and 779 distorted images derived from them. The corresponding difference mean opinion score (DMOS) labels for these images, representing subjective quality scores, lie in the range of [0, 100], with a lower value indicating better visual quality. Images (a)–(d) contain no distortion in the salient regions and exhibit varying distortion intensities in the nonsalient regions. However, all of these images have the best possible DMOS value of 0. This indicates that distortion in nonsalient regions attracts little attention and has little effect on the quality of the entire image.

**Figure 2.** Reference images (DMOS = 0) from the LIVE database: (**a**) painthouse; (**b**) caps; (**c**) monarch; (**d**) stream.

#### *2.2. The Influence of Content on Image Quality*

We will now discuss the crucial impact of content on IQA, with detailed figures to illustrate this point. Among the existing IQA databases, LIVE is the most commonly used. Its 29 reference images were distorted using five types of distortion: JPEG2000 (JP2K), JPEG, white noise in the RGB components (WN), Gaussian blur (GB), and transmission errors in the JPEG2000 bit stream using a fast-fading Rayleigh channel model (FF). Moreover, different levels of distortion were added to each reference image using the same distortion type to ensure that the quality of the distorted images of the same distortion type covers the entire quality range. To draw our conclusions, we selected 4 of the 29 reference images ("painthouse", "caps", "monarch" and "stream", as shown in Figure 2) as well as the distorted images derived from these 4 reference images, as shown in Figure 3. Only 4 of the distortion types (JP2K, JPEG, WN, and GB), all of which are commonly used in IQA databases, are considered here. For each distortion type, we observed the distortion parameter and the DMOS label for each distorted image derived from the 4 reference images. We first generated a scatter plot showing the distortion parameters and the corresponding DMOS quality labels. Then, for each reference image, we manually fit a smooth curve to these scatter points to observe the trend of variation relating the image quality and the perceived level of distortion.

**Figure 3.** Relationship between the distortion parameter (x-axis) and the DMOS label (y-axis) for different distortion types. Each *x*-axis represents the distortion parameter for the corresponding distortion type. Each scatter point represents one sample in the LIVE dataset [23]. The scatter points representing the distorted images derived from the same reference image were separately fitted to a smooth curve. Different colors indicate different images: (**a**) GB; (**b**) JP2K; (**c**) JPEG; (**d**) WN.

First, we observe that there are no rating biases associated with the reference image contents; all of the reference images, each with different contents, are assigned the same quality score (DMOS = 0) in the public IQA database. This phenomenon is clearly reflected in Figure 3: the starting points of all the curves in each panel coincide (because the x-axis of the JPEG panel represents the achieved bitrate, this characteristic is not visible in that panel). Second, we find that as the level of distortion added to the image increases, images with different contents have different quality degradation curves. In other words, different image contents have different capacities for hiding distortion. For example, image (d) in Figure 2 ("stream") has dark content, and even slight distortion in it is unacceptable to the human eye. By contrast, a further increase in the distortion level does not strongly affect an observer's understanding of the content of "monarch" (image (c)); therefore, its rate of quality degradation is slow.

#### *2.3. The Influence of Distortion on Image Quality*

The degree of distortion seriously affects the image quality, as is evident for all four distortion types displayed in Figure 3. For the same type of distortion, images with different contents exhibit the same behavior: when distortion is added uniformly to the whole image, the higher the (humanly distinguishable) level of distortion is, the lower the image quality is. That is, a negative correlation exists between the level of distortion and the image quality.

#### *2.4. Conclusion: The Joint Influence of Saliency and Distortion on Image Quality Given the Same Type of Distortion for Each Image*

The above analysis provides the following inspiration. When a DNN learns a mapping between distorted images and quality scores, it is actually learning different curves for different image contents. This suggests that it should be possible to construct an expanded training database to improve the DNN-based prediction performance on BIQA tasks by simply adding synthetic distortions to a baseline database containing a large number of original images, exactly as was done to construct the LIVE database. As discussed above, the original external content is not subject to any rating biases; therefore, we should find a parent database that consists of multiple types of images with no distortion and then distort them using several distortion types. For each type of distortion, we could generate a series of distorted images of different qualities for each original image such that the generated distorted images would reflect the joint effects of saliency and distortion on image quality. Then, we could apply a reasonable two-stage training method. First, pairs of images of different qualities from the expanded dataset would be sent to the DNN to pretrain the network to learn the quality ranking of distorted images with the same content. Then, the smaller original IQA database could be used to fine-tune the DNN, which would already be trained to perform quality ranking, to refine the mapping of distorted images to quality scores for each type of content. Then, the DNN should be able to output high-precision score prediction results. The authors of the RankIQA algorithm [20] accomplished this task using four distortion types (JP2K, JPEG, WN, and GB) because these four types can be implemented by means of MATLAB functions and frequently appear in IQA datasets; they generated a series of distorted images from the contents of each original image separately. 
However, in their expanded dataset, the degradation of image quality with a given distortion type for each parent image depends only on the overall distortion level; the joint influence of distortion and saliency on image quality is not reflected. Thus, if we could fully capture the influence of both saliency and distortion in the expanded dataset, the performance should improve.
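The pretraining step described above learns from pairs of images whose relative quality is known by construction. A common way to express this objective is a pairwise hinge (margin) ranking loss; the sketch below is a generic illustration under our own assumptions (scalar scores, hypothetical names), not the exact loss formulation of RankIQA [20]:

```python
def margin_ranking_loss(score_hi, score_lo, margin=1.0):
    """Pairwise hinge ranking loss for quality-ranking pretraining:
    the network is penalized unless the image known (by construction)
    to have higher quality is scored at least `margin` above the
    lower-quality one."""
    return max(0.0, margin - (score_hi - score_lo))

# A correctly ordered pair with a sufficient gap incurs no loss:
print(margin_ranking_loss(3.0, 1.0))  # → 0.0
# A mis-ordered pair is penalized in proportion to the violation:
print(margin_ranking_loss(1.0, 3.0))  # → 3.0
```

Because the loss depends only on the ordering of the two scores, no absolute quality labels are needed at this stage; absolute scores are learned later during fine-tuning on the IQA database.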

#### **3. Proposed Method**

As mentioned above, our main goal is to construct a newly expanded dataset to support DNN-based BIQA tasks. We introduce saliency into the large-scale expansion strategy for the first time by creating distorted images based on the joint consideration of both saliency and distortion. Finally, we label the images based on the information entropy. The degradation of image quality in our new expanded dataset not only is related to the distortion level (as in RankIQA [20]) but also fully reflects the joint influence of distortion and saliency on image quality. We use this large-scale expanded dataset to pretrain a DNN and then use the original small IQA dataset to fine-tune the pretrained DNN. After fine-tuning, we obtain the final BIQA model. The flow chart of our proposed method is shown in Figure 4.

In this section, we present a detailed description of our method, which is divided into two main stages: dataset expansion and the use of the expanded dataset. First, we introduce our novel method of incorporating saliency into the large-scale dataset expansion process for IQA. Then, we describe the dataset generation process: image expansion based on saliency and distortion and image labeling guided by the information entropy. Finally, we describe how the expanded dataset is used in the IQA task, which involves a two-step training process to ensure that the DNN fully learns how HVS-aware factors influence image quality.

**Figure 4.** Pipeline of the proposed data expansion method for IQA. Based on the Waterloo database, we generate a large-scale expanded dataset, and this expanded dataset is then used to pretrain a double-branch network. Then, the original IQA dataset is used to fine-tune a single branch of the network to output quality scores.

#### *3.1. The Usage of Saliency in IQA*

The incorporation of saliency into the expansion procedure is a key step because we want to consciously capture the influences of both saliency and distortion when generating distorted images. Previous algorithms [16–19] introduced saliency into the IQA task by assigning different weights to different regions of a distorted image when predicting the final score. Such saliency usage is suitable for small-scale expansion but cannot be applied in the case of large-scale expansion. Moreover, there is no opportunity to add saliency factors to the existing distorted image versions generated for RankIQA (large-scale expansion), for which several images with different distortion intensities were created and labeled by quality rank. Because each label is a simple number that represents the overall quality level, using regional saliency weights is insufficient. Moreover, the salient regions in any given image may shift under different distortion levels; examples of this attentional shift based on distortion are shown in Figure 5. As the level of distortion increases, the salient areas also shift. Thus, we can see that differently distorted images with the same content should have different local saliency weight values. This saliency shift further increases the difficulty of adding saliency into the existing distorted images generated for RankIQA. Therefore, finding a new way to introduce saliency into the large-scale expansion process for IQA is crucial.

On the one hand, the characteristics of the large-scale expansion strategy are as follows: the time-consuming psychometric approach is not employed to obtain subjective score labels, and each distorted image derived from a given image by applying a given type of distortion has only a simple numerical label that represents its level of quality. On the other hand, Section 2.1 shows that the influences of salient and nonsalient regions on quality are quite different. Based on the two considerations above, we are inspired to introduce saliency into an expanded dataset in the form of regional distortion. We can generate multiple distorted images by adding distortion to high-resolution reference images. Among these distorted images, some will be subjected to global distortion of the original images, some will be distorted only in the salient regions of the reference images, and others will be distorted only in the nonsalient regions. Because the locality of the distortion (both regional and global) in the extended set of distorted images will be different, these images will have different perceptual qualities. Next, instead of asking volunteers to provide subjective scores, we can sort the distorted images based on their information entropy and assign simple numerical labels that represent their quality ranking. In this way, the combined effects of both saliency and distortion on quality will be reflected in the expanded dataset.
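The entropy-based labeling step can be illustrated with the Shannon entropy of the pixel histogram. In this sketch, all names are our own, and the ranking direction (higher entropy listed first) is an illustrative assumption only; the actual ordering rule used in our procedure depends on the distortion type:

```python
import math
from collections import Counter

def shannon_entropy(pixels):
    """Shannon entropy (in bits) of a flat list of 8-bit pixel values."""
    counts = Counter(pixels)
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def rank_by_entropy(distorted_versions):
    """Sort the distorted versions of one parent image by entropy and
    return simple numerical rank labels (0 = highest entropy).
    The direction of the ranking is an illustrative assumption."""
    order = sorted(range(len(distorted_versions)),
                   key=lambda i: shannon_entropy(distorted_versions[i]),
                   reverse=True)
    return {idx: rank for rank, idx in enumerate(order)}

# A half-black, half-white patch carries exactly one bit of entropy,
# while a constant patch carries none:
print(shannon_entropy([0] * 8 + [255] * 8))  # → 1.0
```

Because such rank labels are cheap to compute, no volunteers or subjective scoring experiments are needed for the expanded dataset.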


**Figure 5.** Saliency shift caused by different levels of blur. Two images with the same content but different distortion levels are shown on the left. The corresponding saliency maps are shown on the right. (**a**) "painthouse" with a low level of distortion; (**b**) the saliency map of (**a**); (**c**) "painthouse" with a high level of distortion; (**d**) the saliency map of (**c**).

To implement the approach proposed above, we performed two preparatory steps. First, we needed to choose a saliency model. From among the many possible saliency models, we selected [24] because it emphasizes the identification of the entire salient area. Second, we needed to establish how each quality-impact factor affects image quality (as discussed in Section 2); together with the information entropy, this understanding guides the image generation and labeling processes during our expansion procedure. Based on these two preparatory steps, we introduce the details of our expansion method below.

#### *3.2. Generating Images for the Expansion Dataset*

We selected the Waterloo database [25], which includes a large number of high-resolution images (4744), as the parent database to be used in the expansion process. Using MATLAB, we added distortion to these images to construct a large-scale expanded dataset containing a total of 4744 × 4 × 9 distorted images. Here, the factor of 4 arises from the 4 types of distortion (JP2K, JPEG, WN, and GB) applied to each parent image; we adopted these four distortion types because they are found in most available IQA databases. The factor of 9 arises from the fact that for each distortion type, nine distorted images of different qualities were generated using five distortion levels. We summarize this information in Table 1. Please note that because we used MATLAB to simulate the types of distortion present in the LIVE dataset, the distortion functions and parameters used may differ from those used in LIVE; therefore, the parameters in Table 1 are slightly different from those in Figure 3. Next, with the help of Figure 6, we will explain how we used the five distortion levels and the saliency-based regional scheme to generate nine distorted versions of each parent image.


**Table 1.** Important indicators and parameters involved in the expansion process. Distortion levels 1–5 are defined based on the distortion parameters used in the expansion procedure. The five distortion parameters presented for each distortion type are listed in order from level 1 to level 5.

As an example, we chose one original parent image ("shrimp" from the Waterloo database), shown in Figure 6a; the image shown in Figure 6b is its saliency map, generated as described in [24]. Due to space constraints, only the nine distorted images generated using GB distortion are shown in Figure 6. Please note that nine corresponding distorted image versions were also generated for each of the other three distortion types from each original parent image. As Figure 6 shows, during the expansion procedure, we used the method introduced in [24] to extract the saliency map of each original parent image. Then, according to the saliency map, we defined the region with pixel values greater than 30 as the salient region and the remaining area as the nonsalient region, thus dividing each image into two parts. We then independently added different levels of distortion to these two regions of the original image and spliced the results to obtain a distorted image. The distortion levels applied to the salient and nonsalient regions to generate the nine distorted images are shown in the GB distortion level column (e.g., "level 0 + level 1" for image (c) means that this image was generated by adding GB distortion of level 0 to the salient region and GB distortion of level 1 to the nonsalient region of image (a)). The definitions of distortion levels 1–5 for each distortion type can be found in Table 1, and a level of 0 means no distortion.
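The regional distortion procedure described above can be sketched as follows. This is a minimal illustration, not the authors' exact implementation: the saliency map is assumed to be an 8-bit grayscale array, GB distortion is approximated with `scipy.ndimage.gaussian_filter`, and the per-level sigma values below are placeholders standing in for the actual parameters listed in Table 1.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Hypothetical Gaussian blur strengths for levels 0-5; level 0 means no
# distortion. The real per-level parameters are those given in Table 1.
GB_SIGMA = [0.0, 0.5, 1.0, 2.0, 4.0, 8.0]

def distort_by_region(image, saliency_map, salient_level, nonsalient_level,
                      threshold=30):
    """Blur the salient and nonsalient regions at different levels and splice.

    image        : float array of shape (H, W), grayscale for simplicity
    saliency_map : uint8 array of shape (H, W); pixels > threshold are salient
    """
    salient_mask = saliency_map > threshold

    def blur(level):
        sigma = GB_SIGMA[level]
        return gaussian_filter(image, sigma) if sigma > 0 else image

    # Splice: salient pixels from one distorted copy, nonsalient from the other.
    return np.where(salient_mask, blur(salient_level), blur(nonsalient_level))
```

Generating one "level 0 + level 1" image, as for image (c) in Figure 6, would then be `distort_by_region(img, sal_map, 0, 1)`.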

Our expanded set of distorted images fully reflects the influence of HVS-aware quality factors. The nine distorted image versions generated from each parent image contain different levels of distortion across the entire image region, thus representing the influence of the overall distortion level on quality. In addition, some distorted images have different levels of distortion in the salient and nonsalient regions, thus representing the joint influence of saliency and distortion on quality. We ranked the nine distorted images of the same distortion type generated from each original image separately. The corresponding distorted image versions of decreasing quality can fully reflect the quality degradation caused by various HVS-aware factors.

**Figure 6.** Image examples from our generated database. The first column contains an original image and its saliency map. The nine distorted images generated using GB distortion are displayed; each distortion was generated by adding particular levels of distortion to the salient and nonsalient regions of the original image. These distortion levels are displayed alongside the corresponding distorted images, and the corresponding image quality labels are given in the last column. Please note that the definitions of the salient regions and the distortion levels can be found in Section 3.2. Nine distorted images were generated in this way for all four distortion types, although only the results of GB expansion are shown in this figure. (**a**) the original version of "shrimp"; (**b**) the saliency map of (**a**); (**c**–**k**) the nine distorted versions of (**a**) under GB distortion type.

#### *3.3. Entropy-Based Image Quality Ranking of the Expanded Dataset*

After generating the distorted images, we next assigned quality labels to them. Each image in an IQA database has a quality score obtained through a time-consuming psychometric experiment, an option that is unavailable to us and is, in fact, unnecessary: labels that simply reflect the quality ranking are sufficient for our purposes (as discussed in detail in Section 3.4). We refer to the nine distorted images of the same distortion type generated from the same parent image as a group; thus, there are a total of 4744 × 4 groups in our expanded dataset. We sorted the nine distorted images in each group separately by quality using the Shannon information entropy, because the information entropy is a measure that reflects the richness of the information contained in an image: the larger the information entropy of an image is, the richer its information and the better its quality. Moreover, the information entropy value is sensitive to image distortion and quality; distortion in the salient region, in particular, leads to a significant reduction in the entropy value. Therefore, the information entropy is a suitable basis for our labeling procedure. The formula is as follows:

$$H = -\sum\_{i=0}^{255} p\_i \log p\_i \tag{1}$$

where *H* represents the information entropy of the image and *pi* represents the proportion of pixels with a grayscale value of *i* in the grayscale version of the image. The ordering of the information entropy values reflects the quality ranking of a group of images. We used this formula to calculate the information entropy of each of the nine distorted images in one group and ranked these nine images in order of their information entropy values. Accordingly, labels 1–9 were assigned to represent the image quality ranking. As mentioned above, there are a total of 4744 × 4 groups in our expanded dataset. We use letters *c* to *k* to denote the distorted image versions generated to compose each group (where *c* represents the distorted image generated by adding no distortion to the salient region and level 1 distortion to the nonsalient region of the original image). For each of these nine distorted image versions, we calculated the average entropy for the corresponding 4744 × 4 images, as shown in Table 2. The information entropy ranking results for most groups are consistent with the average order listed in Table 2. For each group, the labels for distorted images *c* to *k* range from 1 to 9, representing their sequentially decreasing quality. For example, for the nine images in Figure 6, their entropy sequentially decreases in the order in which they are displayed; the labels range from 1 to 9. Some groups also exist in which the information entropy order is different from the average order displayed in Table 2; in most such cases, the entropy values of images *d* (in which only the nonsalient region is distorted at level 2) and *e* (in which only the salient region is distorted at level 1) are reversed. However, we still sort image *e* below image *d* in quality to emphasize the importance of the salient region.
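Equation (1) and the group labeling can be sketched as follows. This is a simplified illustration; the logarithm base is unspecified in the text and is taken as 2 here, which does not affect the ranking.

```python
import numpy as np

def information_entropy(gray_image):
    """Shannon entropy of an 8-bit grayscale image, Equation (1).

    p_i is the fraction of pixels with gray level i; H = -sum(p_i * log p_i).
    """
    hist = np.bincount(gray_image.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]                         # 0 * log 0 is taken as 0
    return float(-np.sum(p * np.log2(p)))

def rank_group_by_entropy(images):
    """Assign labels 1..n to a group: label 1 = highest entropy (best quality)."""
    entropies = [information_entropy(im) for im in images]
    order = np.argsort(entropies)[::-1]  # indices sorted by descending entropy
    labels = np.empty(len(images), dtype=int)
    labels[order] = np.arange(1, len(images) + 1)
    return labels.tolist()
```

Note that the paper additionally overrides the raw entropy order in the rare *d*/*e* reversal cases described above, which this sketch does not implement.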

**Table 2.** The average information entropy values for all images corresponding to the same distorted image version across all groups. Each value is calculated as the average information entropy of the corresponding image version in each of the 4744 × 4 groups.


These information entropy results are consistent with the previous conclusions regarding how HVS factors affect image quality. Images with only background distortion have higher quality indices than those with foreground distortion or whole-region distortion because distortion confined to the nonsalient regions, which have smaller entropy, leads to only weak quality degradation. Correspondingly, images with only foreground distortion and images with whole-region distortion at the same level are of similar quality because, as we discussed in Section 2, the quality of the salient region is highly consistent with that of the whole image. Please note that for a few landscape images in the Waterloo database, which have no obvious salient regions, we treated the entire image as the salient region to avoid negative effects. Although no convincing quality score labels could be extracted for these images, we were still able to use the expanded database for our BIQA task by adopting a Siamese network and a corresponding training method, as discussed in the next section.

#### *3.4. Using the Expansion Dataset for the IQA Task*

Now, we will introduce the use of our new expanded dataset. Our training process consists of two steps: pretraining on the expanded dataset and fine-tuning on the IQA database. We trained a model based on VGG-16 [26], with the number of neurons in the output layer modified to 1. In our expanded database, for each original image, there are nine distorted images with corresponding labels from 1–9 that represent their quality ranking for each distortion type. We followed the training setup used by the authors of RankIQA [20]. During pretraining, to train the network on the quality ranking task, we used a double-branch version of VGG-16 (called a Siamese network) with shared parameters and a hinge loss. We show a schematic diagram of the pretraining process in Figure 7 and explain the training process in conjunction with the figure. Each input to the network consists of two images and two labels: a pair of images of different quality that are randomly selected from among the nine distorted images in one group. The image with the lower label (indicating higher quality) is always sent to the *x*<sup>1</sup> branch, and the other image is sent to the *x*<sup>2</sup> branch. When the outputs of the two branches are consistent with the order of the two labels, meaning that the network correctly ranks the two images by quality, the loss is 0. Otherwise, the loss is not 0, and the parameters will be adjusted (by decreasing the gradient of the higher branch and increasing the gradient of the lower branch) as follows:

$$\frac{\partial L}{\partial \theta} = \begin{cases} 0 & \text{if } (f(\mathbf{x}\_2; \theta) - f(\mathbf{x}\_1; \theta)) \le 0, \\ \frac{\partial f(\mathbf{x}\_2; \theta)}{\partial \theta} - \frac{\partial f(\mathbf{x}\_1; \theta)}{\partial \theta} & \text{otherwise,} \end{cases} \tag{2}$$

where *θ* represents the network parameters. Thus, the loss function is continuously optimized by comparing the outputs of the two branches, and eventually, the training of the quality ranking model is complete. Because any two of the nine distorted images in a group may be paired to form the input, the network is efficiently forced to learn the joint influence of saliency and distortion on image quality. After pretraining, either network branch can produce a value for an input image (because the two branches share parameters), and the quality ranking of different input images will be reflected by the order of their corresponding output values.
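The pairwise hinge loss behind Equation (2) can be illustrated at the level of the two branch outputs. This is a sketch only: the margin parameter is our assumption, and the actual model backpropagates these signs through the shared VGG-16 weights rather than stopping at the scalar outputs.

```python
def pairwise_hinge_loss(y1, y2, margin=0.0):
    """Ranking hinge loss for one image pair.

    y1 : branch output for the higher-quality image (x1)
    y2 : branch output for the lower-quality image (x2)
    The loss is zero when the pair is already ranked correctly
    (y1 >= y2 + margin); the margin value here is an assumption.
    """
    return max(0.0, y2 - y1 + margin)

def loss_gradients(y1, y2, margin=0.0):
    """dL/dy1 and dL/dy2; the chain rule through f then gives Equation (2)."""
    if y2 - y1 + margin <= 0:
        return 0.0, 0.0        # correctly ranked: no parameter update
    return -1.0, 1.0           # push y1 (higher quality) up and y2 down
```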

**Figure 7.** Pretraining process. The left side presents a series of distorted images of decreasing quality in the same group, and the right side presents a two-branch VGG-16 network where the two branches share parameters and a loss function.

We have found that this pretrained model is nearly identical to the IQA model and can effectively judge the effects of saliency and distortion on quality. However, the output of this network is not a direct image quality score; only when multiple different images are input to obtain different output values does the order of these values reflect the order of the images in terms of quality. Therefore, to facilitate the comparison of our model with other BIQA models and transform the network output into a direct quality score, our method includes an IQA-database-based fine-tuning step. From the pretrained model, we extract one branch to obtain a single VGG-16 network and perform training on the original IQA dataset to complete the fine-tuning process. In each round of training, the input to the network is one image, and the corresponding quality score is the label in the IQA database; thus, the network learns an accurate mapping from distorted images to scores. Again following the approach of RankIQA, we use the sum of the squared errors as the loss function during fine-tuning.
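The fine-tuning objective is then simply the sum of squared errors over the training images; a one-line sketch (the function name is ours):

```python
def finetune_loss(predicted_scores, mos_labels):
    """Sum of squared errors between predicted scores and subjective quality
    labels, the fine-tuning objective used after extracting a single branch."""
    return sum((p - y) ** 2 for p, y in zip(predicted_scores, mos_labels))
```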

#### **4. Experiments and Results**

#### *4.1. Datasets and Evaluation Protocols*

We used two types of datasets in our experiments: a non-IQA dataset used for the generation of the large-scale expanded pretraining dataset and several IQA datasets for performing fine-tuning. As the non-IQA dataset that was used to generate new distorted images, we adopted the Waterloo Exploration Database [25], which includes 4744 high-resolution images. The diversity of the image scenes and the clarity of the images make this database suitable for our purposes. As the IQA datasets, we used three synthetic IQA databases (i.e., databases containing synthetic distortions), namely, LIVE [23], CSIQ [27], and LIVE MD [28], and one authentic IQA database, namely, LIVE Challenge (LIVEC) [22], in which the distortion present in each image may be a complex combination of multiple types (such as camera shaking and overexposure) to test our model's generalization capability and its scope of application.

As the evaluation measures, we selected two metrics that are commonly used in the BIQA domain, namely, the Spearman rank order correlation coefficient (SROCC) and the Pearson linear correlation coefficient (PLCC). Given N input images, the SROCC is calculated as follows:

$$SROCC = 1 - \frac{6\sum\_{i=1}^{N} (p\_i - q\_i)^2}{N(N^2 - 1)}\tag{3}$$

where the *N* ground-truth scores and the *N* predicted scores are first ranked separately; *pi* then denotes the rank of the *i*-th predicted score, and *qi* denotes the rank of the *i*-th ground-truth score. Therefore, the SROCC measures the monotonicity of the predictions. The PLCC is calculated as follows:

$$PLCC = \frac{\sum\_{i=1}^{N} (u\_i - \overline{u})(v\_i - \overline{v})}{\sqrt{\sum\_{i=1}^{N} (u\_i - \overline{u})^2} \sqrt{\sum\_{i=1}^{N} (v\_i - \overline{v})^2}} \tag{4}$$

where *ui* and *vi* are the predicted score and ground-truth score, respectively, for the *i*-th image, and *u* and *v* are the averages of the *N* predicted scores and the *N* ground-truth scores, respectively. Therefore, the PLCC measures the accuracy of the predictions. The SROCC and PLCC both lie in the range of [−1, 1], and a value closer to 1 indicates a stronger positive correlation between the two sets of variables.
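Equations (3) and (4) translate directly into code. The following sketch assumes no tied scores; ties would require fractional ranks, as in standard SROCC implementations.

```python
import numpy as np

def srocc(pred, gt):
    """Spearman rank-order correlation, Equation (3) (assumes no ties)."""
    pred, gt = np.asarray(pred), np.asarray(gt)
    n = len(pred)
    # Rank each list separately: rank 1 for the smallest score, etc.
    p = np.empty(n); p[np.argsort(pred)] = np.arange(1, n + 1)
    q = np.empty(n); q[np.argsort(gt)] = np.arange(1, n + 1)
    return 1.0 - 6.0 * np.sum((p - q) ** 2) / (n * (n ** 2 - 1))

def plcc(pred, gt):
    """Pearson linear correlation, Equation (4)."""
    u, v = np.asarray(pred, float), np.asarray(gt, float)
    du, dv = u - u.mean(), v - v.mean()
    return float(np.sum(du * dv)
                 / (np.sqrt(np.sum(du ** 2)) * np.sqrt(np.sum(dv ** 2))))
```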

#### *4.2. Experimental Setup*

In Section 3.4, we introduced some information on the training process. Here, we provide more details and explain the reasons for the selected experimental settings. To evaluate the performance improvement achieved by our algorithm in comparison with its predecessor algorithm RankIQA [20], we adopted the same network used in RankIQA—the VGG-16 architecture [26]—and changed the number of neurons in the output layer to 1 because our objective is not classification but rather the regression of a single quality score. During both pretraining and fine-tuning, we randomly cropped a single subimage with dimensions of 224 × 224 from each training image to be used as the input in each epoch. During testing, we randomly sampled 30 subimages of 224 × 224 pixels from each image and adopted the average of the corresponding 30 predicted outputs as the final score for that image. The quality ranking of the nine distorted images in a group was determined on the basis of an overall comparison of the full image region. Although a size of 224 × 224 is not sufficient to cover the entire image, it does cover more than 1/3 of the full image area; thus, cropping does not destroy the quality ranking of the input images. Also for consistency with RankIQA [20], we adopted the Caffe [29] framework for training. The entire pretraining process consisted of 50,000 iterations, while the fine-tuning process consisted of 20,000 iterations. Additionally, L2 weight decay was used throughout the entire training process.
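The crop-and-average test protocol can be sketched as follows. This is an illustration, not the authors' Caffe code; `model` stands in for the trained single-branch network, taken here as any callable that maps a patch to a scalar score.

```python
import numpy as np

def predict_score(image, model, n_crops=30, crop=224, rng=None):
    """Average the model outputs over random crops, as in the test protocol.

    image : array of shape (H, W, C) with H, W >= crop
    model : callable mapping a (crop, crop, C) patch to a scalar score
    """
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    scores = []
    for _ in range(n_crops):
        top = rng.integers(0, h - crop + 1)     # random top-left corner
        left = rng.integers(0, w - crop + 1)
        patch = image[top:top + crop, left:left + crop]
        scores.append(model(patch))
    return float(np.mean(scores))
```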

#### *4.3. Performance Comparison*

We compared the performance of our method on several IQA databases with that of various state-of-the-art FR-IQA and NR-IQA methods, including the FR-IQA methods PSNR, SSIM [1], and FSIMc [2]; the traditional NR-IQA methods BRISQUE [30], CORNIA [31], IL-NIQE [32], and FRIQUEE [33]; the DNN-based NR-IQA methods CNN [10], RankIQA [20], BIECON [18], and DIQA [19]; and a DNN-based NR-IQA method that incorporates saliency, DIQaM [16]. We also compared our method with other well-known DNN models: three networks (AlexNet [34], ResNet50 [35], and VGG-16, initialized from ImageNet) were directly fine-tuned on each IQA database and treated as baselines. We used the final version of our DNN model, which was pretrained on the expanded dataset and then fine-tuned on the IQA dataset, to obtain image quality scores. The SROCC and PLCC were then calculated between the predicted quality scores (the output of our fine-tuned model) and the quality labels of the distorted images in the IQA database. The results are shown in Table 3, where the top three performance results are highlighted. We divided the distorted images and their corresponding score labels into two groups, using 80% for training and 20% for testing. For all databases, the contents of the training and test sets did not overlap. This division process was repeated ten times, and to avoid the influence of randomness on the evaluation of the prediction effect, the SROCC and PLCC scores averaged over all ten runs are reported in Table 3.


**Table 3.** Comparison of the SROCC and PLCC scores on the four IQA datasets.

Red: the highest. Blue: the second. Green: the third.

First, it can be clearly seen that our proposed model achieves the highest PLCC and SROCC scores on almost all tested databases, indicating that the proposed data expansion method for DNN-based BIQA has the best overall effect for both synthetic and authentic distortion databases and is largely consistent with the subjective judgments made by humans. Moreover, compared with its predecessor algorithm, RankIQA, our method achieves better results on all of the datasets listed, especially on LIVEC, because the introduction of the saliency factor reduces the model's dependence on the consistency of the distortion types between the expanded dataset and the original IQA database. The performance improvement on CSIQ is also considerable, possibly because the reference images in this dataset include many examples with clear foregrounds but blurred backgrounds.

Table 3 also concisely presents a comparison of the different types of methods that can be applied to IQA tasks. We can see that our method is superior to any of the classical NR methods due to the strong autonomous learning capability of CNNs. Among the deep learning methods, many models performed poorly on the LIVEC dataset because their training process requires reference images, which do not exist in the LIVEC dataset. By contrast, our fine-tuning process does not require reference images. Moreover, from the results of the directly fine-tuned baselines listed above, we can see not only that a good algorithm can perform well but also that the convolutional computing ability of a relatively deep and large network such as ResNet50, which has a total of 50 layers, is advantageous. However, our approach of introducing an expanded dataset makes it easy to use a smaller network, which incurs lower computational costs, to achieve results similar to those of a larger network.

Because the SROCC and PLCC results were averaged over ten runs in our experiment, we also present the standard deviations of these results to illustrate the stability of our model's predictive performance. Table 4 shows the standard deviations of the PLCC and SROCC scores over ten runs for RankIQA and our method. Because our method uses the same training procedure as RankIQA but differs in the use of the expanded dataset, RankIQA is a suitable choice for comparison. The other BIQA methods (whose specific experimental data are unavailable and which are also less suitable as methods for comparison) are not shown in Table 4. As Table 4 shows, the standard deviations of the results of our algorithm are smaller than those of RankIQA. This finding indicates that our algorithm achieves not only better prediction performance but also higher stability. Moreover, it is interesting to find that the performance on the LIVE MD dataset sometimes fluctuates across different divisions of the training and test datasets. These fluctuations may occur because some of the images contained in this dataset have unclear foregrounds, and when these images appear in the training set, they may induce a reduction in performance. Nevertheless, the average result is high. Therefore, the standard deviations of the prediction results further reflect the effectiveness of our proposed data expansion method.


**Table 4.** Standard deviations of the SROCC and PLCC scores for RankIQA and our method.

#### *4.4. Scatter Plots*

To further visualize the consistency between our method's final predicted scores and the subjective human perception scores in the IQA databases, we show scatter plots of the scores predicted by our model (pretrained on the expanded dataset and fine-tuned on the corresponding IQA database) versus the ground-truth labels (DMOSs/MOSs) in Figure 8. This is another way of expressing the information in Table 3 and clearly shows the agreement between the predicted scores and the ground-truth values. Scatter plots of the results obtained on each of the four IQA databases (LIVE, CSIQ, LIVE MD and LIVEC) are shown in Figure 8. In these scatter plots, each point represents an image sample, the *x*-axis represents the DMOS/MOS scores associated with the samples in the dataset, and the *y*-axis represents the predicted quality scores obtained with our method. Because the four databases use different subjective score labels (i.e., LIVE and LIVE MD use DMOS scores in the range of [0, 100], LIVEC uses MOS scores in the range of [0, 100], and CSIQ uses DMOS scores in the range of [0, 1]), there are two different *x*-axis ranges in Figure 8. For the CSIQ database, the *x*- and *y*-axis scales range from 0 to 1. For the other three databases, unified scales from 0 to 100 are used.

**Figure 8.** Scatter plots of the quality scores predicted by our method versus the ground-truth subjective DMOSs/MOSs for four datasets: (**a**) LIVE; (**b**) CSIQ; (**c**) LIVE MD; (**d**) LIVEC.

Figure 8 shows that the predicted quality scores output by our method have a monotonic relationship with the ground-truth labels, especially on the LIVE MD and CSIQ datasets; this also explains the high correlation coefficients achieved on these two datasets. For the LIVEC dataset, the sample points are not tightly clustered around the identity line, and the correlation is more obvious when the MOS value is small, which is unsurprising given the multiple distortion types and the diversity of scenes in this dataset. Nevertheless, the sample points for LIVEC are roughly evenly distributed on both sides of the identity line, which represents considerable progress compared with the other algorithms. Thus, we can conclude that our expanded dataset provides effective support for IQA and gives the final model the capability to precisely predict human-perceived image quality over a wide range of datasets.

#### *4.5. Ablation Studies*

The output of our pretrained model is not a direct image quality score; only when multiple different images are input and their output values are obtained does the order of these values reflect the quality ranking of the images. To further evaluate the contribution of our expanded dataset and, more specifically, the contribution of incorporating saliency during the pretraining stage, we applied our pretrained model to various images and compared its predicted outputs to evaluate whether it could precisely rank images by quality. We compared our model with the pretrained RankIQA model, for which only images with whole-region distortion are considered during pretraining. Five image examples from the CSIQ database are shown in Figure 9, and in Table 5, we show the ground-truth label ranking, the order of the output of the pretrained RankIQA model, and the order of the output of our pretrained model for these images. We can see that RankIQA can accurately sort images (d) and (e), which contain whole-region distortion, but fails on images (a) and (c), which have clear foregrounds but blurred backgrounds accounting for nearly half of the entire image. By contrast, after only pretraining on the expanded dataset, our model fully reflects the joint influence of saliency and distortion and thus can perform well on images with both whole-region distortion and only local-region distortion, as is particularly evident from its performance on (a) and (c). The distortion level in the foreground of (c) is larger than that in (a), but the overall distortion of (c) is less than that of (a) when the entire image is considered. RankIQA [20] tends to output a better quality score for (c) because the RankIQA model has only "distortion-level" awareness during training; it considers all regions equally in the final prediction. However, because our pretrained model has saliency awareness, it can sort the images correctly, as expected. Therefore, our expanded database, which is based on both saliency and distortion and guided by the information entropy, is more "similar" to the IQA database and can thus provide more effective assistance for the IQA task.

**Figure 9.** Distorted images from the CSIQ dataset. Their DMOS values increase from (**a**–**e**), representing a decrease in image quality.

**Table 5.** The ranking orders for several images as obtained with the pretrained models of RankIQA and our method. A larger value represents a worse image quality.


#### *4.6. Discussion*

4.6.1. Studies on the Generation of Expanded Datasets from Different Parent Databases

In this section, we study the effects of using different parent databases for expansion. To confirm the effectiveness of our selected parent database, we performed tests using different databases as parents for data expansion, including the Waterloo database, which consists of images with rich scene contents, and MSRA-B [36], another classical database that contains 5000 original high-quality images and their saliency maps. The results can be seen in Table 6, where the better performance results are highlighted in bold type. When we used MSRA-B as the parent database to generate a series of distorted images for pretraining, the performance was reduced to a certain degree. This result was unexpected; however, it can be attributed to the insufficient richness of the MSRA-B dataset, which contains only images that are somewhat monotonous and have clear salient regions. By contrast, the complexity of the image content in the IQA datasets varies widely. Therefore, the more content-abundant Waterloo dataset was better suited to our requirements and resulted in higher performance.


**Table 6.** Comparison of the SROCC and PLCC scores of fine-tuned models pretrained on different expanded datasets.

#### 4.6.2. Studies on Generating Different Numbers of Distorted Images for Each Distortion Type

As mentioned in Section 3, we refer to the nine distorted images of the same distortion type that are generated from the same original image as a group. Here, we study the influence of generating different numbers of distorted images per group. We tested three different designs for the distorted images generated in the expansion process. The number "7" refers to a design in which we removed the second (no distortion added to the salient region and level 2 distortion added to the nonsalient region of the original image) and fifth (level 2 distortion added to the salient region and no distortion added to the nonsalient region of the original image) distorted images in each group, resulting in only seven distorted images per group and thus a total of 4744 × 4 × 7 distorted images. The number "9" refers to the group design represented in Figure 6, which results in a total of 4744 × 4 × 9 distorted images. The number "11" refers to a design in which we added two further distorted images to each group: one obtained by adding no distortion to the salient region and level 3 distortion to the nonsalient region of the original image, and another obtained by adding level 3 distortion to the salient region and no distortion to the nonsalient region. These additional images were inserted after the second image and after the sixth image, respectively, of the previously described 9-image group. The results are shown in Table 7, where we highlight the best performance results in bold type. We can see that when these different expanded databases are used for pretraining, as the number of distorted images per group increases, the performance first increases and then decreases, reaching its highest value in case "9". The decline in case "11" may be caused by overfitting induced by the larger database; moreover, when the number of distorted images per group increases past a certain threshold, the saliency effect becomes diluted and may lead to incorrect sorting. These findings indicate that the training process saturates beyond the addition of two pairs of local-region distortions. Therefore, we elected to use nine distorted images per group, as shown in Figure 6.


**Table 7.** Performance differences caused by generating different numbers of distorted images per parent image for each distortion type.

#### **5. Conclusions**

In this paper, we have proposed a new approach for considering saliency in IQA. In this approach, we expand a large-scale distorted image dataset with HVS-aware labels to assist in training a DNN model to more effectively address IQA tasks. The novel feature of the proposed method is that, for the first time, a saliency factor is incorporated into the large-scale expansion strategy by representing saliency in the form of regional distortion. Then, by using the information entropy to rank the generated images by quality, we ensure that the labels in the newly expanded dataset are highly consistent with human perception. The ability to fully consider the various factors affecting image quality also alleviates the overfitting problem. Specifically, the introduction of saliency not only improves the applicability and versatility of the overall model but also overcomes the heavy reliance of our algorithm's predecessor on the degree of similarity between the distortion types in the expanded dataset and the original IQA database. The final experimental results demonstrate the effectiveness of the proposed method, which outperforms other advanced BIQA methods on several IQA databases.

**Author Contributions:** All authors have contributed to this work significantly. X.G., and M.L. provided ideas, performed experiments and wrote the manuscript. L.H., and F.L. revised the manuscript. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by National Science Foundation of China Project (No. 61701389 and No. 61671365), Joint Foundation of Ministry of Education of China (No. 6141A02022344), and Foshan Science and Technology Bureau Project (No. 2017AG100443).

**Conflicts of Interest:** The authors declare no conflicts of interest.

#### **References**


© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

### *Article* **BOOST: Medical Image Steganography Using Nuclear Spin Generator**

#### **Bozhidar Stoyanov and Borislav Stoyanov \***

Konstantin Preslavsky University of Shumen, 9712 Shumen, Bulgaria; b.stoyanov@shu.bg

**\*** Correspondence: borislav.stoyanov@shu.bg

Received: 12 March 2020; Accepted: 21 April 2020; Published: 26 April 2020

**Abstract:** In this study, we present a medical image stego hiding scheme based on a nuclear spin generator system. A detailed theoretical and experimental analysis of the proposed algorithm is provided using histogram analysis, peak signal-to-noise ratio, key space calculation, and statistical package analysis. The results show the good performance of the new medical image steganographic scheme.

**Keywords:** steganography; nuclear spin generator; medical image; peak signal-to-noise ratio; key space calculation

#### **1. Introduction**

In this century, with the rapid evolution of data processing and information technologies, web security instruments have become more and more relevant. Various health systems are steadily relocating into the cloud and mobile device space. A body of US national rules for the protection of certain medical information must be taken into account for secure communication [1,2]. Many technologies have been introduced in recent years for the secure storage and transmission of medical records and information regarding patient identity, such as digital watermarking [3,4], image encryption [5–9], and steganography [10,11].

Nevertheless, most of those schemes depend on some form of cryptography. The aim of cryptography is to create and analyze protocols that prevent individuals or the public from reading private data. In cryptography, encryption is the process of encoding data: it converts the original representation of the data, known as the input text, into an alternative form known as the encrypted text. Only authorized parties can decrypt the encrypted data back to the input text and access the original data [12]. Unlike cryptography, steganography is the art and science of hiding secret data in plain sight, undetected, inside innocent objects called containers, so that the data can be safely transmitted over a public communication channel [13,14]. Containers may take the form of video streams, audio records, or digital images.

Image steganography refers to the hiding of user data in an image file [15]. Medical image steganographic schemes play a significant role in contemporary therapeutic procedures. The digital security of medical records and patient data must be ensured both during communication and at the storage location [16]. For medical images, sensitive patient information is embedded in the image files as header details defined by the Digital Imaging and Communications in Medicine (DICOM) standard [17] and should be removed before network transmission.

The efficiency of steganography methods can be evaluated by three key criteria: security, capacity, and visual undetectability [18,19].

Numerous strategies are employed to conceal a variety of input data in medical images. Because of their resistance to increasingly sophisticated statistical attacks, chaotic functions are becoming more popular in steganography algorithms. Satish et al. [20] introduced a Logistic-map-based spread spectrum image steganography. Jain and Lenka [19] used an asymmetric cryptographic system for hiding secret information in brain images. Jain and Kumar [21] presented a medical record steganography method based on the Rivest–Shamir–Adleman cryptosystem and a decision tree for data inclusion. Jain et al. [22] described an improved medical image steganographic methodology using a public key cryptosystem, a linear feedback shift register (LFSR), and dynamically picked diagonal blocks. Ambika and Biradar [23] proposed a novel technique to hide data in medical images; the scheme uses a two-level discrete wavelet transform with pixel selection by an Elephant Herding–Monarch Butterfly algorithm. A medical image stego algorithm using a 1D chaotic function is presented in [24].

Steganography techniques provide the necessary security and privacy in data transmission. The main contributions of our work can be summarized as follows:


In Section 2, we present a novel pseudorandom byte output method based on two nuclear spin generators. In Section 3, we introduce the novel medical image steganography algorithm BOOST and give a complete steganalysis. Finally, the article is concluded in Section 4.

#### **2. Pseudorandom Byte Output Algorithm Using Nuclear Spin Generator**

Pseudorandom generators are basic primitives in cryptographic algorithms; in our case, we apply the random properties of a pseudorandom byte generator to a steganography algorithm. Pseudorandom generators are software-realized methods for producing sequences of random values.

#### *2.1. Proposed Pseudorandom Byte Output Algorithm*

The nuclear spin generator (NSG) is a high-frequency oscillator that generates and controls the oscillations of the motion of a nuclear magnetization vector in a magnetic field. This system exhibits a large variety of regular and dynamic motions [25–29]. The nuclear spin generator was first described by Sherman [30]. The typical NSG is a nonlinear three-dimensional dynamical system given by

$$\begin{aligned} \dot{x}(t) &= -\beta x + y \\ \dot{y}(t) &= -x - \beta y(1 - kz) \\ \dot{z}(t) &= \beta\left(\alpha(1 - z) - ky^2\right), \end{aligned} \tag{1}$$

where *x*, *y*, and *z* are the components of the nuclear magnetization vector in the *X*, *Y*, and *Z* directions, respectively, and *α*, *β*, and *k* are positive parameters. The nuclear spin generator with initial values (*x*, *y*, *z*)=(0.12, 0.25, 0.0032) and parameters equal to (*α*, *β*, *k*)=(0.15, 0.75, 21.5) is plotted in Figures 1 and 2.

**Figure 1.** Nuclear spin generator in 3D phase space.

**Figure 2.** Nuclear spin generator time series.
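As a concrete illustration, the trajectory plotted in Figures 1 and 2 can be reproduced by integrating Equation (1) numerically. The sketch below uses a fourth-order Runge–Kutta scheme; the step size `h = 0.01` and the choice of integrator are our assumptions, since the paper does not specify them.

```python
import numpy as np

# Parameters (alpha, beta, k) and initial values from the paper's example.
ALPHA, BETA, K = 0.15, 0.75, 21.5

def nsg(state):
    """Right-hand side of the nuclear spin generator, Eq. (1)."""
    x, y, z = state
    return np.array([
        -BETA * x + y,                          # dx/dt
        -x - BETA * y * (1.0 - K * z),          # dy/dt
        BETA * (ALPHA * (1.0 - z) - K * y**2),  # dz/dt
    ])

def rk4_step(state, h):
    """One classical fourth-order Runge-Kutta step of size h."""
    k1 = nsg(state)
    k2 = nsg(state + 0.5 * h * k1)
    k3 = nsg(state + 0.5 * h * k2)
    k4 = nsg(state + h * k3)
    return state + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def trajectory(x0, y0, z0, n, h=0.01):
    """Integrate the NSG from the given initial values for n steps."""
    s = np.array([x0, y0, z0], dtype=float)
    out = np.empty((n, 3))
    for i in range(n):
        s = rk4_step(s, h)
        out[i] = s
    return out

traj = trajectory(0.12, 0.25, 0.0032, 1000)
```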

The novel pseudorandom byte output algorithm is based on the next few steps:


*xm*(*i*) = *mod*(*abs*(*int*(*x*(*i*) × 10<sup>13</sup>)), 256), *ym*(*i*) = *mod*(*abs*(*int*(*y*(*i*) × 10<sup>13</sup>)), 256), and *zm*(*i*) = *mod*(*abs*(*int*(*z*(*i*) × 10<sup>13</sup>)), 256), where *abs*(*a*) returns the modulus of *a*, *int*(*a*) returns the integer part of *a*, truncating the value behind the decimal sign, and *mod*(*a*, *b*) returns the remainder after division.
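The coordinate-to-byte mapping above can be sketched in a few lines of Python. Combining the three coordinate bytes with XOR into a single output byte is an illustrative assumption, not necessarily the authors' exact combination step.

```python
def to_byte(v):
    # mod(abs(int(v * 10**13)), 256): scale the coordinate, truncate toward
    # zero, take the magnitude, and keep the low 8 bits.
    return abs(int(v * 10**13)) % 256

def output_byte(x, y, z):
    # Illustrative combination of the three per-coordinate bytes (assumption).
    return to_byte(x) ^ to_byte(y) ^ to_byte(z)
```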


#### *2.2. Key Size Analysis*

The set of all initial values composes the key space. The proposed pseudorandom generator has three secret values, *x*(0), *y*(0), and *z*(0). As reported in the IEEE floating-point standard [31], the computational precision of a 64-bit double-precision number is about 10<sup>−14</sup>. The key space of the proposed scheme is therefore (10<sup>14</sup>)<sup>3</sup> = 10<sup>42</sup> ≈ 2<sup>139</sup>. This is large enough to resist exhaustive-search attacks [32].
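The key-space arithmetic can be checked directly; this is a minimal sketch of the computation, not part of the generator itself.

```python
import math

# Each double-precision initial value is distinguishable to about 10**-14,
# giving roughly 10**14 usable values per secret seed.
values_per_seed = 10**14
key_space = values_per_seed ** 3  # three secret initial values x(0), y(0), z(0)

key_bits = math.log2(key_space)   # approximately 139.5, i.e. key space = 2**139
```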

#### *2.3. Statistical Tests*

To estimate the unpredictability of the novel nuclear-spin-equation-based pseudorandom byte generator, we used the National Institute of Standards and Technology (NIST) statistical software [33] and the ENT [34] statistical application. Using the novel pseudorandom byte generator, 3000 sequences of 125,000 bytes each were produced.

The NIST package contains 15 statistical tests: frequency, block frequency, cumulative sums (forward and reverse), runs, longest run of ones, rank, spectral, non-overlapping templates, overlapping templates, universal, approximate entropy, serial (first and second), linear complexity, random excursion, and random excursion variant. The application calculates the proportion of streams that pass each test. The range of acceptable proportions is determined using the confidence interval, defined as

$$
\hat{p} \pm 3\sqrt{\frac{\hat{p}(1-\hat{p})}{m}},
$$

where *p̂* = 1 − *α* and *m* is the number of binary sequences tested. NIST recommends that, for these tests, at least 1000 sequences of 1,000,000 bits each be used. In our setup, *m* = 3000. Thus, the confidence interval is

$$0.99 \pm 3\sqrt{\frac{0.99 \times 0.01}{3000}} = 0.99 \pm 0.0054498.$$

The proportion should lie above 0.9845502, with the exception of the random excursion and random excursion variant tests. These two tests apply only when the number of cycles in a sequence exceeds 500; thus, their sample size and minimum pass rate are dynamically reduced to account for the sequences actually tested.
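The confidence interval above can be reproduced with a few lines of Python:

```python
import math

def nist_proportion_interval(alpha=0.01, m=3000):
    """Acceptable pass-rate interval p_hat +/- 3*sqrt(p_hat*(1-p_hat)/m),
    with p_hat = 1 - alpha, as defined in the NIST test suite."""
    p_hat = 1.0 - alpha
    half_width = 3.0 * math.sqrt(p_hat * (1.0 - p_hat) / m)
    return p_hat - half_width, p_hat + half_width

# For m = 3000 sequences, the lower bound is the 0.9845502 quoted above.
lo, hi = nist_proportion_interval()
```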

The distribution of *p*-values is examined to ensure uniformity. The interval between 0 and 1 is divided into 10 subintervals, and the *p*-values that lie within each subinterval are counted. Uniformity may also be assessed by applying a *χ*<sup>2</sup> test and determining a *p*-value corresponding to a goodness-of-fit test on the *p*-values obtained for an arbitrary statistical test (a *p*-value of the *p*-values). This is implemented by calculating

$$\chi^2 = \sum_{i=1}^{10} \frac{(F_i - s/10)^2}{s/10},$$

where *Fi* is the number of *p*-values in subinterval *i* and *s* is the sample size. A *p*-value is computed such that *p*-*valueT* = *IGAMC*(9/2, *χ*<sup>2</sup>/2), where *IGAMC* is the complemented incomplete gamma function. If *p*-*valueT* ≥ 0.0001, then the *p*-values can be considered uniformly distributed.
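This uniformity check can be sketched as follows, assuming SciPy's `gammaincc` (the regularized upper incomplete gamma function) as the IGAMC routine:

```python
import numpy as np
from scipy.special import gammaincc

def p_value_uniformity(p_values):
    """NIST uniformity check: chi-square over 10 equal subintervals of (0, 1),
    then p-value_T = IGAMC(9/2, chi2/2)."""
    s = len(p_values)
    counts, _ = np.histogram(p_values, bins=10, range=(0.0, 1.0))
    chi2 = np.sum((counts - s / 10.0) ** 2 / (s / 10.0))
    return gammaincc(9.0 / 2.0, chi2 / 2.0)
```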

The output values of the first 13 tests are given in Table 1. The minimum pass rate for each statistical test, with the exception of the random excursion variant test, is approximately 2953 for a sample size of 3000 binary sequences. The random excursion test outputs eight *p*-values, tabulated in Table 2. The random excursion variant test outputs 18 *p*-values, as shown in Table 3. The minimum pass rate for the random excursion variant test is approximately 1788 for a sample size of 1819 binary sequences.

The output results in Tables 1–3 indicate that all *p*-values are uniformly distributed over the (0, 1) interval. The total numbers of acceptable streams are within the expected confidence levels for all performed tests. Based on these results, the novel pseudorandom byte generator passed the NIST suite without error.

ENT consists of six statistical tests (entropy, optimum compression, *χ*<sup>2</sup>, arithmetic mean value, Monte Carlo value for *π*, and serial correlation), which focus on the pseudorandomness of byte sequences. We tested a stream of 375,000,000 bytes from the proposed generator. The entropy is 8.0 bits per byte; optimum compression would reduce the byte file by 0%; the *χ*<sup>2</sup> statistic is 238.18 (a random sequence would exceed this value 76.79% of the time; the sequence is random); the arithmetic mean value is 127.5040 (very close to 127.5); the Monte Carlo value for *π* is 3.141616448 (error 0.00%); and the serial correlation coefficient is 0.000003 (less than 0.005 for true random generators). The novel pseudorandom byte generator passed the ENT tests successfully.

Based on these excellent test outputs, we can infer that the proposed pseudorandom byte generator has satisfactory statistical properties and provides a reasonable level of security.


**Table 1.** National Institute of Standards and Technology (NIST) test suite results.

**Table 2.** NIST Random excursion test results.



**Table 3.** NIST Random excursion variant test results.

#### **3. Medical Image Steganography Using Nuclear Spin Generator**

#### *3.1. Embedding Scheme*

In this subsection, using the pseudorandom byte generation algorithm based on the nuclear spin generator from Section 2, we present a medical image steganography algorithm named BOOST.

We consider 16-bit DICOM grayscale input images of size *n* × *n*. As the input message, we specify the patient information (text-based patient medical records with patient identification data), including the patient name, patient ID/UID, and the doctor's remarks. The stego image is the input image with the encrypted patient information embedded. The DICOM header data are transferred directly into the stego image, based on [35].

The proposed medical image steganography algorithm BOOST consists of the following steps:


#### *3.2. Extraction Scheme*

1. Retrieve the number *L* of embedded bytes, the input levels interval [*a*, *b*], and the secret key of the pseudorandom generator based on the nuclear spin generator in Section 2.


The proposed medical image steganography algorithm was implemented in the C++ programming language. Fifteen 16-bit monochrome DICOM images were used for the experimental analysis. The test images were selected from the National Electrical Manufacturers Association (NEMA) medical image database: ftp://medical.nema.org/medical/dicom/DataSets/WG16/Philips/ClassicSingleFrame/. The folder consists of classic 16-bit DICOM grayscale single-frame medical images of brains, knees, and livers. An example illustrating BOOST is presented in Figure 3.

**Figure 3.** Illustration of embedding a message using the BOOST method and input levels interval [20, 48]: (**a**) the original input image Brain IM\_0001; and (**b**,**c**) the location of embedded message.

#### *3.3. Steganographic Analysis*

An image histogram is an accurate illustration of the tonal value distribution in a digital image. This check compares the input and stego image histograms. Histograms for three input images and their stego images, produced using ImageJ2x 2.1.5.0 (http://www.rawak.de/rs2012/), are shown in Figure 4.

The histograms of the stego images are virtually identical to those of the input images, with no evidence of hidden messages in the stego images.

The Peak Signal-to-Noise Ratio (PSNR) is the ratio between the highest possible value of a signal and the value of the distorting noise that affects the accuracy of its representation. It is defined as:

$$PSNR = 10\log_{10}\frac{(2^d - 1)^2}{MSE}\,(\mathrm{dB}),\tag{2}$$

where *d* is the bit depth of the pixel and MSE is the Mean-Square Error between the input and stego images. MSE is defined as:

$$MSE = \frac{1}{mn} \sum_{i=1}^{m} \sum_{j=1}^{n} \left( P[i, j] - S[i, j] \right)^2,\tag{3}$$

where *P*[*i*, *j*] and *S*[*i*, *j*] are the pixels in the *i*th row and *j*th column of the input and stego images, respectively.
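Equations (2) and (3) can be sketched directly in NumPy; the default `depth=16` matches the 16-bit DICOM images used here.

```python
import numpy as np

def mse(p, s):
    """Mean-square error between input image p and stego image s, Eq. (3)."""
    p = p.astype(np.float64)
    s = s.astype(np.float64)
    return np.mean((p - s) ** 2)

def psnr(p, s, depth=16):
    """Peak signal-to-noise ratio in dB for bit depth d, Eq. (2)."""
    e = mse(p, s)
    if e == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((2**depth - 1) ** 2 / e)
```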

**Figure 4.** (**a**,**e**,**i**) Input images Brain IM\_0001, Knee IM\_0001, and Liver IM\_0001; (**b**,**f**,**j**) their histograms; (**c**,**g**,**k**) stego images; and (**d**,**h**,**l**) their histograms.

In Table 4, we provide the computed MSE and PSNR values for the BOOST algorithm. MSE and PSNR are calculated for images with 1050 bytes (8400 bits), 1042 bytes (8336 bits), and 1119 bytes (8952 bits) embedded. The maximum payload is calculated as the number of non-black pixels.

From the results shown in Table 4, the PSNR values are extremely high, above 113 dB, which suggests an excellent level of security for the proposed BOOST algorithm.

The Bit Error Rate (BER) is computed as the actual number of bit positions that change in the stego image compared with the input image; a BER value close to 0.0 indicates high efficiency of the steganography algorithm. The Normalized Cross-Correlation (NCC) calculates the cross-correlation in the frequency domain, depending on the size of the images; it then computes local sums by pre-computing running sums and uses these local sums to normalize the cross-correlation into correlation coefficients. The output matrix holds the correlation coefficients, which range between −1.0 and 1.0. NCC is defined as:

$$\text{NCC} = \frac{\sum_{i=1}^{m} \sum_{j=1}^{n} \left(P[i, j] \times S[i, j]\right)}{\sum_{i=1}^{m} \sum_{j=1}^{n} \left(P[i, j]\right)^2}. \tag{4}$$

A value of NCC close to 1.0 represents perfect quality of the stego image.
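A minimal NumPy sketch of BER and the NCC of Equation (4) follows; counting changed bit positions over a 16-bit depth is our reading of the BER definition above.

```python
import numpy as np

def ber(p, s, depth=16):
    """Fraction of bit positions that differ between input and stego images."""
    diff = np.bitwise_xor(p.astype(np.uint16), s.astype(np.uint16))
    changed = sum(int(np.count_nonzero((diff >> b) & 1)) for b in range(depth))
    return changed / (p.size * depth)

def ncc(p, s):
    """Normalized cross-correlation of Eq. (4)."""
    p = p.astype(np.float64)
    s = s.astype(np.float64)
    return np.sum(p * s) / np.sum(p * p)
```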

The Structural SIMilarity (SSIM) index is an algorithm for measuring the similarity between the input and stego images [36]. The output SSIM index is a decimal number between −1 and 1, where a value of 1 indicates excellent structural similarity.
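As an illustration, a single-window (global) SSIM can be computed as below. The reference SSIM implementation [36] uses a sliding window, so this simplified global version is only an approximation; the standard constants C1 = (0.01·L)² and C2 = (0.03·L)² are assumed.

```python
import numpy as np

def global_ssim(p, s, data_range=65535):
    """Single-window (global) SSIM between images p and s.
    Simplified illustration; the reference method uses a sliding window."""
    p = p.astype(np.float64)
    s = s.astype(np.float64)
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_p, mu_s = p.mean(), s.mean()
    var_p, var_s = p.var(), s.var()
    cov = ((p - mu_p) * (s - mu_s)).mean()
    return ((2 * mu_p * mu_s + c1) * (2 * cov + c2)) / \
           ((mu_p**2 + mu_s**2 + c1) * (var_p + var_s + c2))
```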


**Table 4.** Mean-Square Error (MSE) and Peak Signal-to-Noise Ratio (PSNR) results.

In Table 5, we provide the calculated BER, NCC, and SSIM values for the presented BOOST scheme. From the results shown in Table 5, it is clear that the BER values are very close to 0.0 and the NCC and SSIM values are almost equal to 1.0. The data indicate that the BOOST scheme provides good quality and excellent structural similarity.

**Table 5.** Bit Error Rate (BER), Normalized Cross-Correlation (NCC), and Structural SIMilarity (SSIM) results.


The resistance of the BOOST algorithm to cropping attacks [37,38] was tested. Cropping is the mechanism by which the outer parts of an image are cut away. Three stego images (Brain IM\_0001, Knee IM\_0001, and Liver IM\_0001) generated by the BOOST algorithm were subjected to cropping attacks.

The normalized correlation (NC) values were calculated between each stego image and the corresponding cropped image [38]. The NC results varied between 0.8944 and 1, as shown in Table 6. These results show that the proposed BOOST algorithm reasonably resists cropping attacks.

**Table 6.** Normalized correlation (NC) results against cropping attack.


The steganographic analysis clearly demonstrates the good performance of the proposed algorithm. Table 7 compares some of the computed values of our proposed scheme with those of other algorithms.



From the given test results, we can conclude that the presented BOOST algorithm, based on the nuclear spin generator, has satisfactory statistical properties and provides an appropriate level of security.

#### **4. Conclusions**

We introduced a novel medical image steganographic scheme named BOOST. The presented algorithm uses a novel pseudorandom byte output technique based on the nuclear spin generator. Our security investigation (mean-square error, peak signal-to-noise ratio, normalized cross-correlation, and structural similarity) shows that the proposed hiding scheme can be successfully used for secure medical record communication.

**Author Contributions:** B.S. (Bozhidar Stoyanov) and B.S. (Borislav Stoyanov) wrote and edited the manuscript. Both authors have read and agreed to the published version of the manuscript.

**Funding:** The paper was partially supported by the National Scientific Program "Information and Communication Technologies for a Single Digital Market in Science, Education and Security (ICTinSES)", financed by the Ministry of Education and Science, Bulgaria for Bozhidar Stoyanov and Borislav Stoyanov.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**




© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
