Article

Resolving Contrast and Detail Trade-Offs in Image Processing with Multi-Objective Optimization

by Daniel Molina-Pérez 1,* and Alam Gabriel Rojas-López 2,*

1 Escuela Superior de Cómputo, Instituto Politécnico Nacional, Ciudad de México 07700, Mexico
2 Centro de Innovación y Desarrollo Tecnológico en Cómputo, Instituto Politécnico Nacional, Ciudad de México 07700, Mexico
* Authors to whom correspondence should be addressed.
Math. Comput. Appl. 2024, 29(6), 104; https://doi.org/10.3390/mca29060104
Submission received: 21 August 2024 / Revised: 29 October 2024 / Accepted: 9 November 2024 / Published: 11 November 2024
(This article belongs to the Special Issue Numerical and Evolutionary Optimization 2024)

Abstract

This article addresses the complex challenge of simultaneously enhancing contrast and detail in an image, where improving one property often compromises the other. This trade-off is tackled using a multi-objective optimization approach. Specifically, the proposed model integrates the sigmoid transformation function and unsharp masking highboost filtering with the NSGA-II algorithm. Additionally, a posterior preference articulation is introduced to select three key solutions from the Pareto front: the maximum contrast solution, the maximum detail solution, and the knee point solution. The proposed technique is evaluated on a range of image types, including medical and natural scenes. The final solutions demonstrate significant superiority in terms of contrast and detail compared to the original images. The three selected solutions, although all optimal, capture distinct characteristics within the images, offering alternatives according to field preferences. This highlights the method's effectiveness across different image types and enhancement requirements and emphasizes the importance of the proposed preferences in different contexts.

1. Introduction

Image enhancement is the process of applying specific techniques to boost an image’s visual quality. These techniques address diverse criteria, such as increasing contrast, reducing noise, highlighting relevant details, or adjusting brightness and color saturation. The main goal of image enhancement is to make the information within the image more easily interpretable and perceptible to human viewers, as well as to support automated applications such as pattern recognition, medical analysis, and computer vision, among others [1,2].
Traditional image enhancement methods are divided into two well-defined categories [3,4]. The first is spatial domain enhancement, which directly modifies pixel values to adjust aspects such as contrast and image detail. This category includes techniques such as gamma correction [5], histogram equalization [2], and sigmoid correction [6], as well as more advanced techniques such as multi-dimensional adaptive local enhancement [7] and recursive filters for color balance and shadow correction [8]. The second category is frequency domain enhancement, which transforms the image into a mathematical domain to adjust its frequency components, allowing fine detail enhancement and the reduction of undesirable patterns through techniques such as high-pass or low-pass filtering. This family includes methods such as the wavelet transform [9], the discrete cosine transform [10], and the Fourier transform [11]. Both categories have specific applications and are chosen according to the type of improvement desired.
Remarkably, not all images require the same enhancement process, since the enhancement strategy relies upon the image’s specific characteristics. For example, a low-contrast image can benefit significantly from spatial domain techniques, such as contrast adjustment or histogram equalization, which enhance the image’s sharpness [12,13]. On the other hand, in medical images such as magnetic resonance scans, highlighting fine details can be critical; therefore, frequency domain techniques are more suitable for adjusting the high frequencies and spotlighting the image’s internal structures [14]. In deep learning architectures that handle visual data such as point clouds [15] or drone datasets [16], image enhancement should aim to improve feature extraction by making relevant details more prominent, ensuring that critical features such as edges, shapes, or textures are effectively captured during the training process. Hence, there is no single ideal operator for all images, nor a single quantitative metric that automatically evaluates image quality. Automatic image enhancement, the production of enhanced images without human intervention, remains an extremely complicated task in image processing [17].
Image enhancement methods are typically parametric, which means their effectiveness relies significantly on the fine-tuning of various parameters, leading to increased complexity in achieving optimal results. Recently, multi-objective optimization has gained prominence in image processing, as it addresses the challenge of simultaneously improving multiple conflicting quality criteria. Therefore, the current work focuses on image enhancement considering two essential properties of image processing: contrast and details. Improving these properties through transformation functions generally involves a compromise, meaning that increasing contrast can lead to a significant loss of detail in the image. Hence, a multi-objective optimization problem with two objective functions is established: one related to the image contrast and the other to the image details. Contrast is measured using the entropy and the standard deviation of the image, while detail is measured by the number and intensity of pixels in high-frequency regions. Contrast enhancement is performed by the sigmoid transformation function, and detail enhancement is performed by unsharp masking and highboost filtering. To address the problem, NSGA-II [18] is used with a posterior preference articulation, meaning that specific solutions are selected once the Pareto front is finally computed. The current work offers the following contributions:
  • The trade-off between the image contrast and details is set as a multi-objective problem. Unlike the traditional mono-objective approach, which only provides an optimal solution with a predefined priority, the current proposal offers the best solutions regarding the compromises between both criteria along the Pareto front.
  • A posterior preference operator is articulated, providing three key images from the Pareto front: the image with maximum contrast, the image with maximum detail, and the image at the knee of the front, representing the image closest to the utopia point. This operator allows end-users to select the most suitable solutions for their needs.
  • An experiment is conducted with images of two categories: medical and natural scene images. Both categories represent research fields where image processing is an essential endeavor. The results of this experiment demonstrate that the NSGA-II achieves images of superior quality compared to the original instances. Furthermore, a thorough analysis is conducted regarding the suitability of the obtained images according to the established preferences. For medical images, the evaluation focuses on how the selected solutions enhance the clarity and detail of relevant structures, which is crucial for diagnostics and analysis. For natural scene images, the analysis shows how the solutions improve contrast and detail, making the images more visually appealing and impactful.
The remainder of this paper is organized as follows: Section 2 reviews related work on optimization-based image enhancement. Section 3 outlines the sigmoid correction and unsharp masking with highboost filtering methods used in this study. In Section 4, we present the proposed multi-objective optimization model and posterior preference articulation method. Section 5 provides the benchmark results, including the experimental design, graphical analysis, and quantitative evaluations. Finally, Section 6 offers conclusions and suggestions for future research directions.

2. Related Work

A significant challenge for image enhancement methods is tuning their parameters to achieve optimal results. This task can be complex, as each parameter may affect the image characteristics in different ways. Evolutionary algorithms (EAs) stand out as highly effective tools in this context. Pioneering work in this area is that of Bhandarkar et al. [19], where a genetic algorithm (GA) was employed for an image segmentation problem. The outcomes showed that the GA outperforms traditional image segmentation methods regarding accuracy and robustness against noise. In subsequent years, numerous studies proposed diverse approaches to address image enhancement problems through EAs. An example of this is presented in [20], where an optimization-based process employs the particle swarm optimization (PSO) algorithm to enhance the contrast and details of images by adjusting the parameters of the transformation function, taking into account the relationship between local and global information. Another instance of optimization-based image enhancement is presented in [21], where an accelerated variant of PSO is used to optimize the above-mentioned transformation function, achieving a more efficient algorithm in terms of convergence.
Using EAs to solve image enhancement problems remains a common trend in recent years. This is observed in works such as [22], where the differential evolution (DE) algorithm was employed to maximize image contrast through a modified sigmoid transformation function. This function adjusts parameters that control the contrast magnitude and the balance between bright and dark areas, with optimal values determined through the evolutionary process. Similarly, Bhandari and Maurya [23] developed a novel optimized histogram equalization method, preserving average brightness while improving the contrast through a cuckoo search algorithm. The proposed method uses plateau boundaries to modify the image histogram, avoiding the extreme brightness changes often caused by traditional histogram equalization. Another interesting application integrating EAs and image histograms is presented in [24], where a GA was applied to optimize histogram equalization through optimal subdivisions considering different delimited light exposition regions. Notably, these optimization-based strategies have been taken to fields beyond engineering, as in [25], where directed searching optimization was applied in a medical image enhancement process, improving the contrast while preserving the texture in specified zones through a threshold parameter. The popularity of EAs has led to more sophisticated approaches, as in [26], where a hybridization between the whale optimization and chameleon swarm algorithms was proposed specifically to find the optimal parameters of the incomplete beta function and gamma dual correction. Several other EAs have been applied to image enhancement problems, such as monarch butterfly optimization [27], the chimp optimization algorithm [28], sunflower optimization [29], and the slime mold algorithm [30], among others. However, despite their contributions to notable improvements in image processing, these approaches primarily focus on maximizing or minimizing a single criterion. As a result, single-objective methods may inadequately address the complex relationships among various image characteristics, thereby limiting their effectiveness in real-world applications, where multiple criteria must be optimized simultaneously.
In the last decade, multi-objective optimization in image processing has also been the subject of several investigations. The relevance of the multi-objective approach lies in the need to balance multiple quality criteria simultaneously. In many cases, improving one image characteristic may worsen another. This inherent conflict between particular objectives requires an approach that obtains a set of optimal solutions, known as the Pareto front. For example, in [31], a PSO variant was proposed to address a multi-objective problem aimed at simultaneously maximizing the available information quantity (through the entropy evaluation) and minimizing the resulting image distortion (measured by the structural similarity index). Similarly, in [32], a multi-objective optimization using PSO is implemented to simultaneously optimize brightness, contrast, and colorfulness in a Retinex-based method. Another instance of multi-objective image enhancement problems is presented in [33], where a GA was employed to maximize the Poisson log-likelihood function (used to measure the quantitative accuracy) and the generalized scan-statistic model (which measures the detection performance). In [34], a multi-objective cuckoo search algorithm was used to enhance contrast by maximizing entropy and minimizing noise variance in adaptive histogram equalization. Another trade-off problem of image enhancement was solved in [35], where through the Non-dominated Sorting Genetic Algorithm based on Reference Points (NSGA-III), the optimal parameters for anisotropic diffusion were found, aiming to produce effective filtering results while balancing the image noise and its contrast.
While the Pareto front approach provides a diverse range of solutions in image enhancement tasks, it often lacks mechanisms for preference articulation, which is essential for decision makers to select the most appropriate solution based on specific application needs [36,37,38]. Despite its importance, the explicit application of preference articulation in the literature on multi-objective image enhancement remains limited. This work proposes a multi-objective approach to simultaneously enhance contrast and details, as these two parameters have primarily been addressed through single-objective methods. Consequently, there is a lack of models that consider their interdependence. Additionally, this approach incorporates preference articulation to facilitate the selection of a limited set of images that effectively capture the diverse characteristics within the dataset.

3. Materials and Methods

3.1. Sigmoid Correction

Sigmoid correction is a technique used in image processing to adjust the contrast of an image in a nonlinear manner. This method is particularly useful for enhancing the contrasts in an image’s dark and bright regions. The correction is achieved using a sigmoid function, which maps the input intensity values to a new range according to a sigmoid curve [22,39,40]. The sigmoid function used for image correction is defined as:
g(x) = \frac{1}{1 + e^{-\alpha (x - \delta)}},
where g ( x ) represents the transformed pixel value; x is the original pixel intensity, scaled in the range of 0 to 1; α is the steepness of the sigmoid curve, which controls the degree of contrast adjustment; and δ is a value that determines the midpoint of the sigmoid curve relative to the normalized pixel intensity, allowing control over the balance between the bright and dark regions of the image.
The parameter α affects how rapidly the transition occurs between dark and light regions. A higher value of α results in a steeper curve, increasing contrast by making transitions more abrupt, while lower values of α produce a gentler curve with smoother transitions, as shown in Figure 1a. The parameter δ enables fine-tuning the balance between bright and dark regions. Adjusting this value controls where the midpoint of the contrast adjustment occurs, thus affecting how the dark and light areas of the image are processed. A higher δ results in a larger range of intensities being considered dark, leading to a darker overall image, whereas a lower δ results in a larger range of intensities being considered bright, making the image brighter, as depicted in Figure 1b.
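To make the transformation concrete, the following is a minimal sketch of sigmoid correction in Python with NumPy, assuming a grayscale image already scaled to [0, 1]; the function name and the example parameter values are illustrative rather than taken from the paper.

```python
import numpy as np

def sigmoid_correction(image, alpha, delta):
    """Nonlinear contrast correction of an image scaled to [0, 1].

    alpha sets the steepness of the sigmoid (contrast strength);
    delta sets the curve's midpoint (balance of bright and dark regions).
    """
    return 1.0 / (1.0 + np.exp(-alpha * (image - delta)))

# Example usage (illustrative values):
# strong = sigmoid_correction(img, alpha=8.0, delta=0.5)   # steep curve, higher contrast
# gentle = sigmoid_correction(img, alpha=2.0, delta=0.5)   # gentler, smoother transitions
```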

3.2. Unsharp Masking and Highboost Filtering

Unsharp masking and highboost filtering (UMH) are techniques for enhancing image sharpness and detail by manipulating high-frequency components. The process begins with creating a blurred version of the original image using a smoothing filter, such as an average filter. For a filter size of S, the average filter A_f ∈ R^{S×S} is defined as an S × S square matrix in which every element a_{ij}, i, j ∈ {1, …, S}, equals 1/S². With S = 5, the average filter is represented explicitly as follows:
A_f = \frac{1}{5^2}
\begin{bmatrix}
1 & 1 & 1 & 1 & 1 \\
1 & 1 & 1 & 1 & 1 \\
1 & 1 & 1 & 1 & 1 \\
1 & 1 & 1 & 1 & 1 \\
1 & 1 & 1 & 1 & 1
\end{bmatrix}.
Applying this low-pass filter to the input image yields a blurred version of the original image, denoted as f b ( x ) . In the unsharp masking process, a mask m ( x ) is created by subtracting the blurred image from the original image:
m(x) = x - f_b(x).
This mask highlights the high-frequency details that are suppressed by the blur. However, not all high-frequency components contribute meaningfully to the image’s detail. To exclude insignificant details, a threshold d t h is applied, where any values in the mask below this threshold are set to zero:
m(x) = \begin{cases} 0, & \text{if } |m(x)| \leq d_{th} \\ m(x), & \text{otherwise.} \end{cases}
This step ensures that only significant details are enhanced. To enhance the image, this mask is then added back to the original image with a scaling factor k to control the extent of the enhancement. The resulting enhanced image g ( x ) is calculated as:
g(x) = x + k \cdot m(x).
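A compact sketch of the UMH pipeline described above is given below, assuming a grayscale image in [0, 1]; SciPy's uniform_filter stands in for the S × S average filter, and the default values of size, d_th, and k are illustrative choices rather than values prescribed by the method.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def unsharp_highboost(image, size=5, d_th=0.1, k=1.5):
    """Unsharp masking with highboost filtering.

    Returns the sharpened image g(x) and the thresholded detail mask m(x).
    """
    blurred = uniform_filter(image, size=size)         # f_b(x): low-pass (average) filtering
    mask = image - blurred                             # m(x): high-frequency detail
    mask = np.where(np.abs(mask) <= d_th, 0.0, mask)   # discard insignificant detail
    sharpened = np.clip(image + k * mask, 0.0, 1.0)    # g(x) = x + k * m(x)
    return sharpened, mask
```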

4. The Proposed Algorithm

This section presents the novel multi-objective optimization model and a posterior preference articulation method developed specifically for this study. This method is designed to find optimal trade-offs between the objective functions while providing preferential solutions capable of capturing different characteristics of the images. Furthermore, an analysis of the complexity of the proposed algorithm is conducted to evaluate its efficiency in handling the optimization process.

4.1. Multi-Objective Optimization Problem

This work focuses on enhancing images by considering two fundamental properties of image processing: contrast and details. The enhancement of these properties through transformation functions is often compromised, meaning that an increase in contrast can result in a significant loss of details in the image. Therefore, a multi-objective problem is defined with two objective functions:
  • Contrast Function: Defined as the product of the entropy H(I) and the normalized standard deviation of the pixel intensities σ_norm(I) of the image I. The contrast function is expressed as:
    f_1(\alpha, \delta) = H(I) \times \sigma_{norm}(I).
  • Details Function: Defined as the product of the number of pixels in high-frequency regions N_HF(I) and the intensity of these high-frequency pixels I_NHF(I) of the image I. The details function is denoted as:
    f_2(\alpha, \delta) = N_{HF}(I) \times \log_{10}(\log_{10}(I_{NHF}(I))).
Formally, the multi-objective optimization problem can be expressed as follows, where F = { f_1, f_2 } represents the set of objective functions to be minimized. The decision vector ϕ = [α, δ]^T controls the behavior of the sigmoid correction function, with ϕ_min as the lower and ϕ_max as the upper bound.
\min_{\phi \in \mathbb{R}^2} F(\phi) \quad \text{subject to: } \phi_{min} \leq \phi \leq \phi_{max}.
Before evaluating these objective functions, a series of image transformations is performed. Contrast enhancement is achieved using the sigmoid transformation function, where α and δ are the decision variables that control the degree of contrast adjustment. Additionally, UMH is applied for detail enhancement. These transformations impact the objective functions by modifying the entropy, standard deviation, number of high-frequency pixels, and their intensities. Figure 2 illustrates the overall procedure for solving the multi-objective optimization problem using NSGA-II.
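The sketch below illustrates how the two objective values can be computed for a candidate ϕ = [α, δ], chaining the correction steps from Section 3. The histogram binning, the normalization of the standard deviation by 0.5 (the maximum possible value for an image in [0, 1]), the +10 safeguard inside the double logarithm, and the sign flips (so that a minimizer maximizes both properties) are assumptions of this sketch rather than details stated in the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def evaluate_phi(image, alpha, delta, d_th=0.1, k=1.5):
    """Return [f1, f2] for a candidate phi = [alpha, delta] on an image in [0, 1]."""
    # Sigmoid contrast correction followed by unsharp masking / highboost filtering.
    enhanced = 1.0 / (1.0 + np.exp(-alpha * (image - delta)))
    mask = enhanced - uniform_filter(enhanced, size=5)
    mask = np.where(np.abs(mask) <= d_th, 0.0, mask)
    sharpened = np.clip(enhanced + k * mask, 0.0, 1.0)

    # f1: entropy of the gray-level histogram times normalized standard deviation (negated).
    hist, _ = np.histogram(sharpened, bins=256, range=(0.0, 1.0))
    p = hist[hist > 0] / hist.sum()
    entropy = -np.sum(p * np.log2(p))
    sigma_norm = sharpened.std() / 0.5
    f1 = -(entropy * sigma_norm)

    # f2: number of high-frequency pixels times a double log of their summed intensity (negated).
    hf = np.abs(mask) > 0
    i_hf = np.abs(mask[hf]).sum()
    f2 = -(hf.sum() * np.log10(np.log10(i_hf + 10.0)))
    return [f1, f2]
```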
The selection of NSGA-II in this study is based on several algorithm strengths. Firstly, its elitist mechanism ensures the preservation of the best solutions across generations, preventing the loss of optimal solutions. Additionally, the crowding operator used by NSGA-II eliminates the need for sensitive parameters associated with niche techniques, simplifying the implementation process. Furthermore, NSGA-II exhibits low time complexity relative to other multi-objective evolutionary algorithms, which is advantageous for practical applications. It has also demonstrated efficient performance in multi-objective problems involving two objective functions, making it relevant for contrast and detail enhancement tasks. Although other more recent multi-objective evolutionary algorithms have succeeded in generic benchmarks, NSGA-II remains highly competitive in solving real-world problems [41,42], making it suitable for this work.

4.2. A Posterior Preference Articulation

The main objective of preference articulation is to select a limited set of images that effectively capture diverse characteristics within the data, which may be relevant across various contexts. This approach is particularly important given that the Pareto front can encompass as many solutions as there are members in the population or even more when using an external archive. This work focuses on choosing the extreme solutions of the Pareto front and its knee point. The knee point is identified as the solution on the Pareto front that is closest to the utopia point, which typically represents the ideal but unattainable objective values. Within the optimal trade-offs of the Pareto front, these three solutions represent opposing balances of contrast and detail, helping to capture essential image characteristics that may be relevant according to the specific requirements of each application.
To determine the knee point, the Pareto front is first normalized, ensuring that the values of each objective function fall within the range [0, 1]. The utopia point in this normalized space is a vector of zeros, representing the best possible values for each objective. The Euclidean distance of each solution on the normalized Pareto front to the utopia point is calculated as follows:
d_i = \sqrt{\sum_{j=1}^{N_f} (f_{ij} - p_j)^2},
where N f is the number of objective functions; f i j is the normalized value of the j-th objective for the i-th solution; and p j is the value of the j-th objective at the utopia point (typically 0 after normalization). The knee point is identified as the solution with the minimum distance to the utopia point.
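A minimal sketch of this posterior selection step is shown below. It assumes the Pareto front is available as an array with one row per solution, that both objectives are minimized, and that the extreme solutions are taken as the best value of each individual objective.

```python
import numpy as np

def select_preferences(front):
    """Return indices of the two extreme solutions and the knee point.

    `front` is an (n_solutions, n_objectives) array of minimized objective values.
    The knee point is the solution closest (Euclidean distance) to the utopia point,
    i.e., the origin of the min-max normalized objective space.
    """
    f = np.asarray(front, dtype=float)
    span = f.max(axis=0) - f.min(axis=0)
    f_norm = (f - f.min(axis=0)) / np.where(span > 0, span, 1.0)
    distances = np.sqrt((f_norm ** 2).sum(axis=1))
    return {
        "max_contrast": int(np.argmin(f[:, 0])),  # extreme of the contrast objective
        "max_detail": int(np.argmin(f[:, 1])),    # extreme of the detail objective
        "knee": int(np.argmin(distances)),        # closest to the utopia point
    }
```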

4.3. Time Complexity

The most computationally expensive process in NSGA-II is the non-dominated sorting, which is used to categorize the individuals in the population [18]. This step has a time complexity of O ( M · N 2 ) , where M is the number of objectives and N is the population size. Additionally, evaluating the population’s objective functions has a time complexity of O ( N · F ) , where F represents the time required to evaluate an individual. Since F depends on the specific application and can vary, the total complexity of NSGA-II is expressed as O ( M · N 2 + N · F ) .
In this approach, the evaluation of each individual includes several image processes. The sigmoid function is applied to each pixel, resulting in an O ( P ) complexity, where P is the total number of pixels in the image. The smoothing filter has a complexity of O ( P · k 2 ) , where k × k is the filter size, implying that operations must be performed on k 2 neighbors; however, this term can be disregarded since k is a constant. The unsharp mask process, the application of detail thresholding, and the enhancement of details also have an O ( P ) complexity. Finally, calculating quality metrics, such as entropy and standard deviation, requires traversing each pixel, adding O ( P ) to the complexity. Thus, the total time complexity for evaluating the objective function is O ( P ) , meaning that the execution time grows linearly with the image size. Therefore, the overall complexity of the proposal is O ( M · N 2 + N · P ) .
To empirically validate this, we conducted an experiment using different image resolutions, as shown in Table 1. The results demonstrate the linear relationship between image size and execution time. This behavior is fundamental when handling high-resolution images, ensuring that processing remains feasible even as image size increases. The experiments are executed on a machine with an Apple M2 chip, an 8-core CPU, and 8 GB of unified memory.
Regarding implementation complexity, the proposed algorithm applies the correction methods before each objective function evaluation. At the end of the generations, preference articulation selects extremes and knee solutions from the Pareto front. These operations do not disrupt the conventional operators of the algorithm, enabling straightforward implementation in other evolutionary algorithms. To effectively support the preference articulation process, it is recommended that the algorithms be based on Pareto dominance.

5. Benchmark Results and Discussion

In this section, the fundamental aspects of the experiment are described. Subsequently, the results are presented and discussed in two parts: first, a visual analysis of the resulting images is conducted, followed by a discussion of the numerical results with respect to well-established indicators.

5.1. Experimental Design

A set of twenty images is selected to assess the effectiveness of the method developed in this work. This set is divided into two groups: the first includes 10 natural scene images extracted from the Kodak dataset [43], specifically from kodim01 to kodim10 (hereinafter referred to as Natural1 to Natural10, respectively). The second group consists of 10 medical images (referred to as Medical1 to Medical10, respectively) selected from various libraries, including brain images [44,45], blood composition images (white blood cells of the basophil and eosinophil types) [46,47], X-rays [48], ocular nodules [49], dental infections [50], microphotographs of pulmonary blood vessels [51], and traumatic forearm positioning [52].
The NSGA-II algorithm is executed for each image with a maximum of 30,000 function evaluations, aiming to produce a Pareto front containing the best solutions. From this front, solutions are extracted according to the defined articulation preference operator. The objective is to evaluate the quality of these solutions in terms of contrast and details, complemented by a visual analysis to determine the suitability of each image for specific purposes. Finally, the similarity of the enhanced images to the original ones is assessed using the structural similarity index (SSIM), which quantifies the degree of similarity between the processed and original images.
The parameter values used in this experiment are as follows: population size ( N p = 50 ), number of variables ( N v a r = 2 ), number of objective functions ( N f = 2 ), number of evaluations ( N e v a l = 30,000), mutation probability ( P m = 0.5 ), crossover probability ( P c = 0.7 ), simulated binary crossover parameter ( N c = 5 ), polynomial mutation parameter ( N m = 5 ), details threshold ( d t h = 0.1 ), lower bound ( ϕ m i n = [ 0 , 0 ] T ), and upper bound ( ϕ m a x = [ 10 , 10 ] T ).
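As an illustration of how this setup can be wired together, the following sketch uses the pymoo library with the parameter values listed above (an assumption of this sketch; the authors' own implementation is available in the linked repository). It reuses the evaluate_phi function from the sketch in Section 4.1.

```python
import numpy as np
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.core.problem import ElementwiseProblem
from pymoo.operators.crossover.sbx import SBX
from pymoo.operators.mutation.pm import PM
from pymoo.optimize import minimize
from pymoo.termination import get_termination

class EnhancementProblem(ElementwiseProblem):
    """phi = [alpha, delta] bounded by phi_min = [0, 0] and phi_max = [10, 10]."""

    def __init__(self, image):
        super().__init__(n_var=2, n_obj=2,
                         xl=np.array([0.0, 0.0]), xu=np.array([10.0, 10.0]))
        self.image = image

    def _evaluate(self, x, out, *args, **kwargs):
        out["F"] = evaluate_phi(self.image, x[0], x[1])  # sketch from Section 4.1

algorithm = NSGA2(pop_size=50,
                  crossover=SBX(prob=0.7, eta=5),  # crossover probability and SBX parameter
                  mutation=PM(prob=0.5, eta=5))    # mutation probability and PM parameter
termination = get_termination("n_eval", 30000)

# res = minimize(EnhancementProblem(img), algorithm, termination, seed=1, verbose=False)
# res.F holds the Pareto front; res.X the corresponding [alpha, delta] vectors.
```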

5.2. Graphical Results

Table 2, Table 3, Table 4 and Table 5 present the results obtained through the multi-objective optimization image enhancement approach. Specifically, Table 2 and Table 3 show the results for natural images, while Table 4 and Table 5 display medical images. The tables are organized as follows: the first and second columns list the image names and their corresponding original, unenhanced versions. The third to fifth columns showcase the selected points from the Pareto front, representing the maximum contrast, knee point, and maximum detail, in that order. The final column illustrates the obtained Pareto front through the optimization process, with red, green, and orange points indicating the images that achieved maximum contrast, knee point, and maximum detail, respectively.
As observed in the results, the images extracted from the Pareto front significantly maximize contrast and detail compared to the original images. In all study cases, the original image is dominated by the solutions extracted from the fronts, demonstrating the approach’s effectiveness in improving visual quality. However, the differences among the three enhanced images for each problem require a more detailed analysis.
In the natural images, the differences among the three preferred images are more subtle, given that these are high-quality images with inherently low contrast, specifically selected for contrast enhancement exercises. More pronounced differences are observed in Natural1, Natural6, Natural7, and Natural8 images. For the Natural7 image, there is a general improvement in overall details. However, specific regions, such as the highlighted flower in the yellow box, may lose details compared to the original image, which retains more information. This suggests that for future work, it may be advisable to apply local and/or adaptive image enhancement techniques to preserve details in specific regions while maintaining overall image quality.
For medical images, there are instances where differences are more perceptible. For example, in the Medical3 image, the maximum contrast solution makes it difficult to visualize the internal details of the basophil (a white blood cell highlighted in the box), which could result in a less accurate interpretation. In contrast, the knee and maximum detail solutions provide a clearer view of the interior of the white blood cell. Similarly, in the Medical5 image, the maximum contrast solution highlights the hand and arm bone structures. However, the maximum detail image offers a more precise view of the internal structures within the bones (see the highlighted region), which is crucial for a more detailed evaluation. Another notable example is the Medical8 image, where the maximum detail solution offers a more detailed view of the internal structure of the eosinophil (another type of white blood cell). However, the maximum contrast image improves the visibility of red blood cells. As shown in the yellow box, this solution reveals a red blood cell that is nearly imperceptible in the other solutions. An interesting case is the Medical6 image, where only a few non-dominated solutions are present on the Pareto front. Despite the similarities among the preferred solutions, the nodules are much more perceptible in the enhanced images than in the original image, as observed in the highlighted region.
The solutions extracted from the Pareto front represent optimal trade-offs between contrast and detail. For natural images, these three alternatives can be considered useful based on aesthetic criteria or in subsequent automatic processes that require prioritizing one property over another. In the case of medical images, these alternatives allow for a more precise evaluation suited to different diagnostic needs, providing a flexible approach to enhance the visualization of critical details according to the clinical context.

5.3. Quantitative Results

Table 6 and Table 7 present noteworthy information regarding several criteria, organized as follows. For each image, whose name is presented in the first column, a set of three rows displays the outcomes of the Pareto front’s maximum contrast, knee point, and maximum detail solutions. The results per individual regarding entropy, normalized standard deviation, number of high-frequency pixels, and pixel intensity are displayed from the third to the sixth column. The seventh and eighth columns display the objective function values of the multi-objective optimization problem. The last column displays the SSIM with respect to the original images, where all images that achieved values above 0.7 (SSIM > 0.7) are shown in boldface, indicating that these images accomplished the enhancement while keeping an acceptable similarity to the original image.
As can be seen, the maximum contrast solutions generally yield higher entropy and normalized standard deviation, indicating a broader range of pixel intensities and greater variability in the enhanced images. In contrast, maximum detail solutions focus on enhancing the finer details within the images. This often results in lower entropy and normalized standard deviation compared to maximum contrast solutions but may increase the number and the intensity of high-frequency pixels (indicating more detailed textures). These results highlight differences in contrast and detail between the solutions extracted from the Pareto front that may not be perceptible in the previous visual discussion. If we examine some cases, we can find images such as Natural2, Medical1, and Medical4, where the extreme points on the Pareto front do not show a visual difference. However, their associated values for contrast and detail exhibit numerical differences.
The reported values of the objective functions show that all exposed solutions are non-dominated, indicating that they represent an optimal trade-off between the two evaluated criteria. Regarding the SSIM index, 65% of the solutions exhibit values above 0.7, indicating a generally high level of structural similarity. Furthermore, considering only the medical images, 85% of the solutions reached SSIM outcomes beyond 0.7, indicating that the proposal is a trustworthy tool for dealing with this kind of information. Nonetheless, if only the natural scene images are evaluated, the number of solutions that achieved this SSIM outcome decreases to 50%. This may be influenced by the artificially imposed low contrast in this set of images. Consequently, future work should consider incorporating SSIM as an additional objective function, especially for problems where image fidelity is crucial.

5.4. Image Quality Metrics

Besides the analysis related to the optimization problem, an evaluation with well-recognized image enhancement metrics is conducted. Table 8 displays the metric evaluations for the natural image set (left side) and the medical image set (right side). The particular points of interest (Max. Contrast, Knee, and Max. Detail) are presented in separate rows for each set. The study encompasses the Contrast-to-Noise Ratio (CNR), the Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE), and the Natural Image Quality Evaluator (NIQE), which are tagged in that order in the column headers. Next, a brief explanation of how these metrics are addressed is offered.
  • The Contrast-to-Noise Ratio (CNR): This metric quantifies the contrast between a signal and the background. The higher the CNR outcome, the better the image enhancement. Values from zero to one indicate poor contrast, values from one to three indicate moderate contrast, and greater values indicate good contrast, implying that it is easy to identify features within the image [53]. This metric is reference-based, i.e., it compares the enhanced image with respect to the original one. This work uses Otsu’s method to compute the threshold that differentiates noise from signal [54] (a minimal computation sketch is given after this list).
  • The Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE): This is a no-reference-based metric computed directly from the enhanced image. It measures the degradation of the image quality after an image processing task. Based on statistical approaches, it compares the image against a Gaussian mixture model that scores the image, where the lower the score, the better the image quality. Values lower than twenty mean a high-quality image, values from twenty to forty reflect good image quality, values from forty to sixty indicate fair image quality, and greater values reflect poor-quality images [55].
  • The Natural Image Quality Evaluator (NIQE): This is also a no-reference-based metric that directly assesses the quality of images by comparing them to a statistical model built from natural images instead of a Gaussian mixture model. The lower the NIQE score, the better the image quality. Scores below five indicate high-quality images, a range between five and ten reflects good image quality, and greater values indicate poor quality [56].
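As an illustration of the first metric, the following is a minimal sketch of a common single-image CNR formulation that uses Otsu's threshold to split signal from background; the exact reference-based variant used in the paper is not reproduced here, so the formula below should be read as an assumption.

```python
import numpy as np
from skimage.filters import threshold_otsu

def contrast_to_noise_ratio(image):
    """CNR sketch: Otsu's threshold separates signal from background,
    and the contrast between the two regions is scaled by the background noise."""
    t = threshold_otsu(image)
    signal = image[image > t]
    background = image[image <= t]
    return np.abs(signal.mean() - background.mean()) / (background.std() + 1e-12)
```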
According to this information, on each column (evaluated metric) of Table 8, the results that reflect high-quality images are in boldface. The following highlights are offered to provide a straightforward understanding of the outcomes.
  • Regarding the CNR metric, 93.33% (28/30) of the enhanced natural images achieved good noise contrast, indicating clarity in the obtained images. Similarly, 90% of the enhanced medical images (27/30) reached good noise contrast.
  • Regarding the BRISQUE criterion, 23.33% (7/30) of the enhanced natural images can be considered high-quality images, while the rest accomplish good quality standards. Interestingly, the enhanced medical images achieved the same number of high-quality images.
  • In terms of the NIQE metric, 80% (24/30) of the enhanced natural images reached high-quality standards. On the other hand, only 73.33% (22/30) of the enhanced medical images reached this standard. Nonetheless, the remaining images of both sets possess good-quality features.
These results demonstrate that the current proposal not only improves image contrast and details while providing a range of trade-off solutions but also preserves image quality and naturalness without introducing noise. This is particularly significant, as these criteria were not explicitly chosen for the optimization problem.

5.5. Comparison with Baseline Methods

In this section, a comparison is made between the results obtained by the proposed multi-objective approach and two well-known contrast enhancement methods: contrast stretching (CS) and contrast-limited adaptive histogram equalization (CLAHE), as illustrated in Table 9. The first advantage of the multi-objective approach is that it produces a set of optimal solutions, each offering a different balance between contrast and detail, whereas traditional methods provide only a single solution. For comparison, we selected the knee points from the Pareto front generated by the proposed method.
The comparison is performed by assessing the dominance of the obtained solutions in terms of contrast ( f 1 ) and details ( f 2 ). If the solution from the multi-objective approach outperforms the traditional method in both criteria, it is considered dominant (denoted by status “+”). If it improves only one of the two criteria, the pair is considered non-dominated (denoted by status “ | | ”). If the traditional method outperforms it in both criteria, the multi-objective solution is dominated (denoted by status “-”).
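A minimal sketch of this dominance check, assuming both objectives are minimized, is as follows.

```python
def dominance_status(f_proposed, f_baseline):
    """Return '+' if the proposed solution dominates the baseline,
    '-' if it is dominated, and '||' if the two are mutually non-dominated."""
    better = any(a < b for a, b in zip(f_proposed, f_baseline))
    worse = any(a > b for a, b in zip(f_proposed, f_baseline))
    if better and not worse:
        return "+"
    if worse and not better:
        return "-"
    return "||"

# Example: dominance_status((-5.1, -3.2), (-4.8, -3.0)) returns '+'
```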
The results show that the multi-objective approach dominated CS in 19 images, being dominated only in the case of Medical5. Although CS is a classical method that adjusts the histogram to improve the overall contrast of an image, the multi-objective approach demonstrates superior performance as it focuses on enhancing the contrast–detail balance. Compared with CLAHE, 13 non-dominated solutions were obtained, 6 where the proposal dominates, and 1 where CLAHE outperforms our approach. Adaptive equalization performs better in detail enhancement ( f 2 ), which explains the lack of clear dominance in several cases compared to the multi-objective approach. Adaptive correction methods generally have advantages in highlighting contrast and details over general approaches. However, the optimization-based proposal can outperform the adaptive approach in several images while maintaining a competitive performance in most images, demonstrating the proposal’s robustness and flexibility in various scenarios.
To complement the analysis, a Wilcoxon rank-sum test was conducted to assess the statistical significance of the differences between the methods (a minimal testing sketch is given after this list). The p-values indicate the following:
  • CS vs. Multi-objective approach (Contrast): p = 0.01058 , showing a significant difference in contrast performance, favoring the multi-objective approach.
  • CS vs. Multi-objective approach (Details): p = 0.12643 , suggesting no significant difference in detail enhancement.
  • CLAHE vs. Multi-objective approach (Contrast): p = 0.00771 , demonstrating a statistically significant improvement in contrast with the multi-objective approach.
  • CLAHE vs. Multi-objective approach (Details): p = 0.59786 , indicating no significant difference in detail enhancement between CLAHE and the multi-objective approach.
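As referenced above, the following sketch shows how such a pairwise test can be run with SciPy; the array names are illustrative placeholders for per-image objective values collected from the experiment.

```python
from scipy.stats import ranksums

def compare_methods(scores_a, scores_b):
    """Two-sided Wilcoxon rank-sum test between two sets of per-image scores."""
    statistic, p_value = ranksums(scores_a, scores_b)
    return p_value

# Illustrative usage (placeholder arrays of per-image contrast values):
# p_contrast = compare_methods(knee_point_contrast, clahe_contrast)
```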
These statistical results reinforce the superiority of the multi-objective approach in improving contrast, particularly when compared with CS and CLAHE, while its performance in detail enhancement remains competitive.

6. Conclusions

The conflict between contrast and detail in image processing is presented as a multi-objective problem. This approach obtains a set of optimal solutions, forming a Pareto front in all cases, highlighting the trade-off between these two properties. Therefore, it is demonstrated that a single-objective approach to this problem will only lead to a particular solution among all the optimal solutions obtained through the multi-objective approach.
The proposed model integrates the sigmoid transformation function and UMH into the NSGA-II. Additionally, a posterior preference articulation is added, which selects three key solutions from the Pareto front: the maximum contrast solution, the maximum detail solution, and the knee point solution. These three solutions showed significant superiority in terms of contrast and detail compared to the original images. Furthermore, the outcomes visually and numerically demonstrated how these three image solutions, though all optimal solutions, differ in terms of entropy, standard deviation, number of detail pixels, and detail intensity. This variability allows fundamental characteristics to emerge across the images, underscoring the relevance of the proposed preferences across various contexts. Moreover, the proposed method demonstrated its effectiveness against traditional contrast enhancement techniques, such as contrast stretching and adaptive histogram equalization, by achieving a superior balance between contrast and detail, as evidenced by the dominance of its solutions in several cases.
A post hoc analysis regarding popular image quality metrics, such as CNR, BRISQUE, and NIQE, showed that part of the images created through the current proposal achieved high-quality image standards, while none of the generated enhanced images fell below good-quality standards. These results demonstrate that the method is a trustworthy tool for image enhancement, offering a range of solutions that not only meet diverse user needs but also consistently maintain high-quality outcomes.
Despite the overall improvement in image details, specific regions may lose details compared to the original image. This suggests that for future work, it may be advisable to apply local and/or adaptive image enhancement techniques to preserve details in specific regions while maintaining overall image quality. Future research should develop adaptive preference articulations that can identify all solutions from the Pareto front that reveal singular information within the image. Moreover, a broader investigation into the method’s versatility is planned, including experiments with alternative evolutionary algorithms. Furthermore, multi-objective micro-evolutionary algorithms are a promising and challenging research path in image enhancement tasks. Their potential lies in reducing computational overhead, though they also face challenges related to limited solution diversity and the risk of premature convergence.

Author Contributions

Conceptualization, D.M.-P.; methodology, D.M.-P. and A.G.R.-L.; validation, D.M.-P. and A.G.R.-L.; investigation, D.M.-P.; writing, D.M.-P. and A.G.R.-L.; visualization, A.G.R.-L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The source codes for the multi-objective optimization algorithm used in this study are available at https://github.com/dani90molinaperez/Multi-Objective-Optimization-for-Image-Processing.

Acknowledgments

The authors acknowledge the support of the Consejo Nacional de Humanidades, Ciencia y Tecnología (CONAHCYT) in Mexico, as well as of the institutions ESCOM-IPN and CIDETEC-IPN.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Russ, J.C.; Russ, J.C. Introduction to Image Processing and Analysis; CRC Press: Boca Raton, FL, USA, 2017. [Google Scholar]
  2. Gonzalez, R.C.; Woods, R.E. Digital Image Processing, 4th ed.; Global, Ed.; Pearson: New York, NY, USA, 2018. [Google Scholar]
  3. Aşuroğlu, T.; Sümer, E. Performance analysis of spatial and frequency domain filtering in high resolution images. In Proceedings of the 2015 23rd Signal Processing and Communications Applications Conference (SIU), Malatya, Turkey, 16–19 May 2015; pp. 935–938. [Google Scholar]
  4. Lepcha, D.C.; Goyal, B.; Dogra, A.; Sharma, K.P.; Gupta, D.N. A deep journey into image enhancement: A survey of current and emerging trends. Inf. Fusion 2023, 93, 36–76. [Google Scholar] [CrossRef]
  5. Kubinger, W.; Vincze, M.; Ayromlou, M. The role of gamma correction in colour image processing. In Proceedings of the 9th European Signal Processing Conference (EUSIPCO 1998), Island of Rhodes, Greece, 8–11 September 1998; pp. 1–4. [Google Scholar]
  6. Braun, G.J.; Fairchild, M.D. Image lightness rescaling using sigmoidal contrast enhancement functions. In Proceedings of SPIE—The International Society for Optical Engineering; SPIE: Bellingham, WA, USA, 1999; pp. 96–107. [Google Scholar]
  7. Stimper, V.; Bauer, S.; Ernstorfer, R.; Schölkopf, B.; Xian, R.P. Multidimensional contrast limited adaptive histogram equalization. IEEE Access 2019, 7, 165437–165447. [Google Scholar] [CrossRef]
  8. Albu, F.; Vertan, C.; Florea, C.; Drimbarean, A. One scan shadow compensation and visual enhancement of color images. In Proceedings of the 2009 16th IEEE International Conference on Image Processing (ICIP), Cairo, Egypt, 7–10 November 2009; pp. 3133–3136. [Google Scholar]
  9. Zhang, D.; Zhang, D. Wavelet transform. In Fundamentals of Image Data Mining: Analysis, Features, Classification and Retrieval; Springer: New York, NY, USA, 2019; pp. 35–44. [Google Scholar]
  10. Strang, G. The discrete cosine transform. SIAM Rev. 1999, 41, 135–147. [Google Scholar] [CrossRef]
  11. Chittora, N.; Babel, D. A brief study on Fourier transform and its applications. Int. Res. J. Eng. Technol. 2018, 5, 1127–1130. [Google Scholar]
  12. Guo, X.; Li, Y.; Ling, H. LIME: Low-light image enhancement via illumination map estimation. IEEE Trans. Image Process. 2016, 26, 982–993. [Google Scholar] [CrossRef]
  13. Wang, W.; Wu, X.; Yuan, X.; Gao, Z. An experiment-based review of low-light image enhancement methods. IEEE Access 2020, 8, 87884–87917. [Google Scholar] [CrossRef]
  14. Ullah, Z.; Farooq, M.U.; Lee, S.H.; An, D. A hybrid image enhancement based brain MRI images classification technique. Med. Hypotheses 2020, 143, 109922. [Google Scholar] [CrossRef]
  15. Wang, C.; Wu, M.; Lam, S.K.; Ning, X.; Yu, S.; Wang, R.; Li, W.; Srikanthan, T. GPSFormer: A Global Perception and Local Structure Fitting-based Transformer for Point Cloud Understanding. arXiv 2024, arXiv:2407.13519. [Google Scholar]
  16. Wang, R.; Lam, S.K.; Wu, M.; Hu, Z.; Wang, C.; Wang, J. Destination intention estimation-based convolutional encoder-decoder for pedestrian trajectory multimodality forecast. Measurement 2025, 239, 115470. [Google Scholar] [CrossRef]
  17. Dhal, K.G.; Ray, S.; Das, A.; Das, S. A survey on nature-inspired optimization algorithms and their application in image enhancement domain. Arch. Comput. Methods Eng. 2019, 26, 1607–1638. [Google Scholar] [CrossRef]
  18. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197. [Google Scholar] [CrossRef]
  19. Bhandarkar, S.M.; Zhang, Y.; Potter, W.D. An edge detection technique using genetic algorithm-based optimization. Pattern Recognit. 1994, 27, 1159–1180. [Google Scholar] [CrossRef]
  20. Braik, M.; Sheta, A.F.; Ayesh, A. Image Enhancement Using Particle Swarm Optimization. World Congr. Eng. 2007, 1, 978–988. [Google Scholar]
  21. Behera, S.K.; Mishra, S.; Rana, D. Image enhancement using accelerated particle swarm optimization. Int. J. Eng. Res. Technol. 2015, 4, 1049–1054. [Google Scholar]
  22. Nguyen-Thi, K.N.; Che-Ngoc, H.; Pham-Chau, A.T. An efficient image contrast enhancement method using sigmoid function and differential evolution. J. Adv. Eng. Comput. 2020, 4, 162–172. [Google Scholar] [CrossRef]
  23. Bhandari, A.K.; Maurya, S. Cuckoo search algorithm-based brightness preserving histogram scheme for low-contrast image enhancement. Soft Comput. 2020, 24, 1619–1645. [Google Scholar] [CrossRef]
  24. Acharya, U.K.; Kumar, S. Genetic algorithm based adaptive histogram equalization (GAAHE) technique for medical image enhancement. Optik 2021, 230, 166273. [Google Scholar] [CrossRef]
  25. Acharya, U.K.; Kumar, S. Directed searching optimized texture based adaptive gamma correction (DSOTAGC) technique for medical image enhancement. Multimed. Tools Appl. 2024, 83, 6943–6962. [Google Scholar] [CrossRef]
  26. Braik, M. Hybrid enhanced whale optimization algorithm for contrast and detail enhancement of color images. Clust. Comput. 2024, 27, 231–267. [Google Scholar] [CrossRef]
  27. Rani, S.S. Colour image enhancement using weighted histogram equalization with improved monarch butterfly optimization. Int. J. Image Data Fusion 2024, 15, 510–536. [Google Scholar] [CrossRef]
  28. Du, N.; Luo, Q.; Du, Y.; Zhou, Y. Color image enhancement: A metaheuristic chimp optimization algorithm. Neural Process. Lett. 2022, 54, 4769–4808. [Google Scholar] [CrossRef]
  29. Krishnan, S.N.; Yuvaraj, D.; Banerjee, K.; Josephson, P.J.; Kumar, T.C.A.; Ayoobkhan, M.U.A. Medical image enhancement in health care applications using modified sun flower optimization. Optik 2022, 271, 170051. [Google Scholar] [CrossRef]
  30. Ma, G.; Yue, X.; Zhu, J.; Liu, Z.; Zhang, Z.; Zhou, Y.; Li, C. A novel slime mold algorithm for grayscale and color image contrast enhancement. Comput. Vis. Image Underst. 2024, 240, 103933. [Google Scholar] [CrossRef]
  31. More, L.G.; Brizuela, M.A.; Ayala, H.L.; Pinto-Roa, D.P.; Noguera, J.L.V. Parameter tuning of CLAHE based on multi-objective optimization to achieve different contrast levels in medical images. In Proceedings of the 2015 IEEE International Conference on Image Processing (ICIP), Quebec City, QC, Canada, 27–30 September 2015; pp. 4644–4648. [Google Scholar]
  32. Matin, F.; Jeong, Y.; Park, H. Retinex-based image enhancement with particle swarm optimization and multi-objective function. IEICE Trans. Inf. Syst. 2020, 103, 2721–2724. [Google Scholar] [CrossRef]
  33. Abouhawwash, M.; Alessio, A.M. Multi-objective evolutionary algorithm for PET image reconstruction: Concept. IEEE Trans. Med. Imaging 2021, 40, 2142–2151. [Google Scholar] [CrossRef]
  34. Kuran, U.; Kuran, E.C. Parameter selection for CLAHE using multi-objective cuckoo search algorithm for image contrast enhancement. Intell. Syst. Appl. 2021, 12, 200051. [Google Scholar] [CrossRef]
  35. Cuevas, E.; Zaldívar, D.; Pérez-Cisneros, M. Multi-objective Optimization of Anisotropic Diffusion Parameters for Enhanced Image Denoising. In New Metaheuristic Schemes: Mechanisms and Applications; Springer: New York, NY, USA, 2023; pp. 241–268. [Google Scholar]
  36. Jaimes, A.L.; Martínez, S.Z.; Coello, C.A.C. An introduction to multiobjective optimization techniques. Optim. Polym. Process. 2009, 1, 29. [Google Scholar]
  37. Lee, D.H.; Kim, K.J.; Köksalan, M. A posterior preference articulation approach to multiresponse surface optimization. Eur. J. Oper. Res. 2011, 210, 301–309. [Google Scholar] [CrossRef]
  38. Wang, H.; Olhofer, M.; Jin, Y. A mini-review on preference modeling and articulation in multi-objective optimization: Current status and challenges. Complex Intell. Syst. 2017, 3, 233–245. [Google Scholar] [CrossRef]
  39. Imtiaz, M.S.; Wahid, K.A. Image enhancement and space-variant color reproduction method for endoscopic images using adaptive sigmoid function. In Proceedings of the 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Chicago, IL, USA, 26–30 August 2014; pp. 3905–3908. [Google Scholar]
  40. Srinivas, K.; Bhandari, A.K. Low light image enhancement with adaptive sigmoid transfer function. IET Image Process. 2020, 14, 668–678. [Google Scholar] [CrossRef]
  41. Folkersma, L. The Impact of Problem Features on NSGA-II and MOEA/D Performance. Master’s Thesis, Utrecht University, Utrecht, The Netherlands, 2020. [Google Scholar]
  42. Saǧlican, E.; Afacan, E. MOEA/D vs. NSGA-II: A Comprehensive Comparison for Multi/Many Objective Analog/RF Circuit Optimization through a Generic Benchmark. ACM Trans. Des. Autom. Electron. Syst. 2023, 29, 1–23. [Google Scholar] [CrossRef]
  43. Eastman Kodak Company. Kodak Lossless True Color Image Suite. 2013. Available online: https://r0k.us/graphics/kodak/ (accessed on 18 August 2024).
  44. Hartung, M. Visible Human Project—Brain Images. Case Study. 2024. Available online: https://radiopaedia.org/cases/visible-human-project-brain-images-1 (accessed on 10 August 2024).
  45. Al Kabbani, A. Human Brain—Lateral View. Case Study. 2024. Available online: https://radiopaedia.org/cases/human-brain-lateral-view (accessed on 10 August 2024).
  46. Nickparvar, M. White Blood Cells Dataset: A Large Dataset of Five White Blood Cells Types. 2022. Available online: https://www.kaggle.com/datasets/masoudnickparvar/white-blood-cells-dataset/data (accessed on 10 August 2024).
  47. Mooney, P. Blood Cell Image Dataset. 2018. Available online: https://www.kaggle.com/datasets/paultimothymooney/blood-cells/discussion/437393, (accessed on 10 August 2024).
  48. Radiological Society of North America. Pediatric Bone Age Machine Learning Challenge Dataset. 2017. Available online: https://www.kaggle.com/code/plarmuseau/image-contrast-enhancement-techniques/input (accessed on 10 August 2024).
  49. Thurston, M. Lisch Nodules (Photo). 2024. Available online: https://radiopaedia.org/cases/lisch-nodules-photo (accessed on 10 August 2024).
  50. Dhairya, S. Dental Condition Dataset. 2024. Available online: https://www.kaggle.com/datasets/sizlingdhairya1/oral-infection (accessed on 10 August 2024).
  51. Sinitca, A.M.; Lyanova, A.I.; Kaplun, D.I.; Hassan, H.; Krasichkov, A.S.; Sanarova, K.E.; Shilenko, L.A.; Sidorova, E.E.; Akhmetova, A.A.; Vaulina, D.D.; et al. Microscopy Image Dataset for Deep Learning-Based Quantitative Assessment of Pulmonary Vascular Changes. Sci. Data 2024, 11, 635. [Google Scholar] [CrossRef] [PubMed]
  52. Er, A. Trauma Forearm Positioning (Photo). 2020. Available online: https://radiopaedia.org/cases/trauma-forearm-positioning-photo (accessed on 10 August 2024).
  53. Baker, M.E.; Dong, F.; Primak, A.; Obuchowski, N.A.; Einstein, D.; Gandhi, N.; Herts, B.R.; Purysko, A.; Remer, E.; Vachani, N. Contrast-to-noise ratio and low-contrast object resolution on full-and low-dose MDCT: SAFIRE versus filtered back projection in a low-contrast object phantom and in the liver. Am. J. Roentgenol. 2012, 199, 8–18. [Google Scholar] [CrossRef] [PubMed]
  54. Otsu, N. A threshold selection method from gray-level histograms. Automatica 1975, 11, 23–27. [Google Scholar] [CrossRef]
  55. Saad, M.A.; Bovik, A.C.; Charrier, C. Blind image quality assessment: A natural scene statistics approach in the DCT domain. IEEE Trans. Image Process. 2012, 21, 3339–3352. [Google Scholar] [CrossRef]
  56. Zvezdakova, A.; Kulikov, D.; Kondranin, D.; Vatolin, D. Barriers towards no-reference metrics application to compressed video quality analysis: On the example of no-reference metric NIQE. arXiv 2019, arXiv:1907.03842. [Google Scholar]
Figure 1. Comparison of sigmoid correction curves demonstrating the impact of varying parameters on image enhancement. (a) Effect of different α values on the gradient of the sigmoid curve, influencing the contrast adjustment. (b) Effect of different values of the midpoint parameter, which shifts the center of the sigmoid curve and thereby the balance between light and dark regions.
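For readers who want to reproduce the behavior illustrated in Figure 1, the sketch below applies a sigmoid intensity mapping to an 8-bit grayscale image. The function name and the symbols alpha (gain) and delta (midpoint) are illustrative choices for this sketch; the exact formulation optimized in the article may differ.

```python
import numpy as np

def sigmoid_correction(image, alpha, delta):
    """Apply an S-shaped intensity mapping to a grayscale image in [0, 255].

    alpha controls the slope of the curve (contrast gain); delta places the
    midpoint of the curve, shifting the balance between dark and light regions.
    Parameter names are illustrative, not taken from the article.
    """
    r = image.astype(np.float64) / 255.0            # normalize intensities to [0, 1]
    s = 1.0 / (1.0 + np.exp(alpha * (delta - r)))   # sigmoid mapping
    return np.clip(255.0 * s, 0, 255).astype(np.uint8)

# Example: a larger alpha steepens the curve and increases global contrast.
# enhanced = sigmoid_correction(img, alpha=10.0, delta=0.5)
```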
Figure 2. The flowchart depicts the multi-objective optimization process for image enhancement, starting with the initialization of a random population of decision vectors ϕ (the sigmoid gain α and midpoint parameters) that control the sigmoid transformation for contrast adjustment. Balloon 1 denotes the evaluation procedure, which applies sigmoid correction and highboost filtering to the image and then scores each individual on the contrast and detail objectives. Non-dominated sorting ranks the individuals, tournament selection generates offspring, and the loop iterates until the stop criteria are met. Finally, the algorithm returns three solutions representing opposing balances between contrast and detail.
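The evaluation step marked as Balloon 1 pairs the sigmoid correction with unsharp-masking highboost filtering. A minimal sketch of such a filter is given below, assuming a Gaussian blur for the low-pass step and a boost factor k; the kernel, the boost factor, and the way the two objectives are computed from the filtered image are assumptions of this sketch and may differ from the article's implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def highboost(image, k=1.5, sigma=1.0):
    """Unsharp-masking highboost filter.

    The blurred copy removes high frequencies; the difference (the mask)
    contains the detail, which is amplified by k and added back.
    k > 1 corresponds to highboost rather than plain unsharp masking.
    Both k and the Gaussian sigma are illustrative defaults.
    """
    img = image.astype(np.float64)
    blurred = gaussian_filter(img, sigma=sigma)   # low-pass version of the image
    mask = img - blurred                          # high-frequency detail
    return np.clip(img + k * mask, 0, 255).astype(np.uint8)
```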
Table 1. Execution times of the algorithm for different image resolutions. In each case, 10,000 evaluations of the objective function were performed.

Resolution | Execution Time (s)
800 × 450 pixels (WVGA) | 66
1280 × 720 pixels (HD) | 180
1920 × 1080 pixels (Full HD) | 528
3840 × 2160 pixels (4K) | 2180
Table 2. Natural image results—1. Rows Natural1–Natural5; columns: Original, Max. Contrast, Knee Point, Max. Detail, and Pareto Front (image panels).
Table 3. Natural image results—2. Rows Natural6–Natural10; columns: Original, Max. Contrast, Knee Point, Max. Detail, and Pareto Front (image panels).
Table 4. Medical image results—1. Rows Medical1–Medical5; columns: Original, Max. Contrast, Knee Point, Max. Detail, and Pareto Front (image panels).
Table 5. Medical image results—2. Rows Medical6–Medical10; columns: Original, Max. Contrast, Knee Point, Max. Detail, and Pareto Front (image panels).
Table 6. Optimization results for natural images.

Image | Solution | H(I) | σ_norm(I) | N_HF(I) | Intensity_HF(I) | f1 | f2 | SSIM
Natural1 | Max. Contrast | 7.5531 | 0.5957 | 136,328 | 28,907.9607 | 4.4997 | 100,511.1934 | 0.6658
Natural1 | Knee | 7.5593 | 0.5919 | 140,554 | 29,866.6697 | 4.4741 | 103,785.0868 | 0.6434
Natural1 | Max. Detail | 7.4855 | 0.5771 | 142,301 | 30,043.5271 | 4.3198 | 105,104.0062 | 0.6153
Natural2 | Max. Contrast | 6.3958 | 0.2890 | 42,665 | 7256.3766 | 1.8482 | 29,298.3087 | 0.6388
Natural2 | Knee | 6.3867 | 0.2889 | 43,005 | 7323.3415 | 1.8449 | 29,547.1093 | 0.6484
Natural2 | Max. Detail | 6.3633 | 0.2888 | 43,039 | 7331.7722 | 1.8374 | 29,572.3889 | 0.6509
Natural3 | Max. Contrast | 7.5080 | 0.5562 | 27,263 | 4478.1528 | 4.1760 | 18,199.8145 | 0.8264
Natural3 | Knee | 7.4174 | 0.5437 | 31,497 | 5253.1676 | 4.0331 | 21,228.6115 | 0.7635
Natural3 | Max. Detail | 7.2310 | 0.5140 | 33,082 | 5554.9158 | 3.7168 | 22,370.5067 | 0.6750
Natural4 | Max. Contrast | 7.6028 | 0.5582 | 38,563 | 6506.9944 | 4.2437 | 26,317.5588 | 0.7603
Natural4 | Knee | 7.5705 | 0.5498 | 41,095 | 6840.4895 | 4.1620 | 28,125.8294 | 0.7430
Natural4 | Max. Detail | 7.4589 | 0.5305 | 42,358 | 6941.6262 | 3.9571 | 29,014.4705 | 0.7154
Natural5 | Max. Contrast | 7.6397 | 0.6084 | 123,121 | 25,158.0587 | 4.6481 | 90,179.9275 | 0.6053
Natural5 | Knee | 7.6277 | 0.6070 | 123,275 | 25,177.7133 | 4.6297 | 90,296.0866 | 0.5938
Natural5 | Max. Detail | 7.5816 | 0.6062 | 123,311 | 25,173.0702 | 4.5959 | 90,321.6616 | 0.5894
Natural6 | Max. Contrast | 7.2965 | 0.7151 | 92,537 | 18,795.8519 | 5.2174 | 66,825.3294 | 0.6586
Natural6 | Knee | 7.3302 | 0.7005 | 96,356 | 19,321.6818 | 5.1345 | 69,678.1741 | 0.6931
Natural6 | Max. Detail | 7.2851 | 0.6758 | 97,675 | 19,209.1750 | 4.9234 | 70,611.6297 | 0.7175
Natural7 | Max. Contrast | 7.4601 | 0.5546 | 45,798 | 8129.0078 | 4.1374 | 31,650.5187 | 0.7839
Natural7 | Knee | 7.3807 | 0.5442 | 48,599 | 8487.5329 | 4.0163 | 33,666.6691 | 0.7652
Natural7 | Max. Detail | 7.1648 | 0.5160 | 50,088 | 8529.1422 | 3.6970 | 34,707.5355 | 0.7388
Natural8 | Max. Contrast | 7.2975 | 0.7166 | 135,043 | 30,609.3702 | 5.2293 | 99,829.9165 | 0.6525
Natural8 | Knee | 7.3574 | 0.7065 | 142,322 | 31,648.0378 | 5.1979 | 105,373.9500 | 0.6738
Natural8 | Max. Detail | 7.3067 | 0.6903 | 144,811 | 31,899.5143 | 5.0441 | 107,256.0717 | 0.6770
Natural9 | Max. Contrast | 7.4927 | 0.5278 | 41,161 | 7399.7872 | 3.9550 | 28,296.7289 | 0.7953
Natural9 | Knee | 7.4455 | 0.5261 | 42,537 | 7682.5907 | 3.9174 | 29,304.3657 | 0.8037
Natural9 | Max. Detail | 7.3050 | 0.5196 | 42,823 | 7772.1429 | 3.7954 | 29,520.5423 | 0.8067
Natural10 | Max. Contrast | 7.5721 | 0.5280 | 32,834 | 5719.8726 | 3.9983 | 22,240.9423 | 0.7855
Natural10 | Knee | 7.5501 | 0.5198 | 35,058 | 6002.2180 | 3.9246 | 23,814.2353 | 0.7937
Natural10 | Max. Detail | 7.4591 | 0.5000 | 35,621 | 6062.6948 | 3.7295 | 24,210.7577 | 0.7868
Table 7. Optimization results for medical images.

Image | Solution | H(I) | σ_norm(I) | N_HF(I) | Intensity_HF(I) | f1 | f2 | SSIM
Medical1 | Max. Contrast | 7.8448 | 0.5823 | 28,921 | 4292.0292 | 4.5682 | 19,256.7633 | 0.7469
Medical1 | Knee | 7.8279 | 0.5820 | 29,564 | 4414.5851 | 4.5558 | 19,718.7333 | 0.7608
Medical1 | Max. Detail | 7.7870 | 0.5767 | 29,861 | 4481.5297 | 4.4908 | 19,935.0599 | 0.7732
Medical2 | Max. Contrast | 7.1626 | 0.7809 | 20,764 | 3818.1387 | 5.5935 | 13,726.0797 | 0.6879
Medical2 | Knee | 7.1339 | 0.7809 | 21,025 | 3876.9084 | 5.5710 | 13,911.8213 | 0.6584
Medical2 | Max. Detail | 7.1054 | 0.7784 | 21,162 | 3905.2624 | 5.5309 | 14,008.8062 | 0.6366
Medical3 | Max. Contrast | 7.4796 | 0.6403 | 501 | 68.4979 | 4.7890 | 227.2635 | 0.7746
Medical3 | Knee | 7.1434 | 0.5984 | 875 | 118.2316 | 4.2747 | 427.0364 | 0.8652
Medical3 | Max. Detail | 6.4043 | 0.4811 | 1058 | 145.9017 | 3.0808 | 529.6895 | 0.8658
Medical4 | Max. Contrast | 7.8136 | 0.5744 | 15,156 | 1923.4562 | 4.4882 | 9576.7837 | 0.7955
Medical4 | Knee | 7.7803 | 0.5708 | 16,008 | 2042.4497 | 4.4411 | 10,157.2992 | 0.8073
Medical4 | Max. Detail | 7.6851 | 0.5563 | 16,190 | 2081.7596 | 4.2751 | 10,286.2666 | 0.8092
Medical5 | Max. Contrast | 4.1156 | 0.4317 | 1017 | 117.5567 | 1.7767 | 495.9842 | 0.5690
Medical5 | Knee | 4.1156 | 0.3829 | 1486 | 182.3434 | 1.5757 | 763.2875 | 0.6173
Medical5 | Max. Detail | 4.1156 | 0.3346 | 1729 | 217.4919 | 1.3772 | 905.4307 | 0.6017
Medical6 | Max. Contrast | 7.6461 | 0.4482 | 16,721 | 2343.1399 | 3.4268 | 10,709.4627 | 0.7703
Medical6 | Knee | 7.6442 | 0.4482 | 16,732 | 2347.1401 | 3.4259 | 10,717.7392 | 0.7703
Medical6 | Max. Detail | 7.6363 | 0.4482 | 16,742 | 2346.8756 | 3.4224 | 10,724.0633 | 0.7702
Medical7 | Max. Contrast | 7.7687 | 0.6045 | 6378 | 970.3478 | 4.6965 | 3831.1820 | 0.7633
Medical7 | Knee | 7.7658 | 0.6047 | 6389 | 972.1292 | 4.6958 | 3838.3433 | 0.7612
Medical7 | Max. Detail | 7.7591 | 0.6047 | 6389 | 972.2033 | 4.6916 | 3838.3663 | 0.7607
Medical8 | Max. Contrast | 6.7448 | 0.4860 | 354 | 45.3329 | 3.2779 | 150.7481 | 0.7133
Medical8 | Knee | 6.1648 | 0.4642 | 496 | 64.1421 | 2.8618 | 222.8542 | 0.8958
Medical8 | Max. Detail | 5.3591 | 0.4163 | 585 | 74.7138 | 2.2309 | 268.6718 | 0.9375
Medical9 | Max. Contrast | 5.6986 | 0.4041 | 29,117 | 7292.7190 | 2.3027 | 20,000.4591 | 0.8336
Medical9 | Knee | 5.5245 | 0.4116 | 29,587 | 7479.4364 | 2.2737 | 20,352.2764 | 0.8403
Medical9 | Max. Detail | 5.3411 | 0.4130 | 29,762 | 7561.0648 | 2.2061 | 20,485.1491 | 0.8429
Medical10 | Max. Contrast | 7.7175 | 0.5087 | 16,293 | 2996.5663 | 3.9260 | 10,606.1673 | 0.8023
Medical10 | Knee | 7.5585 | 0.5002 | 16,562 | 3094.9172 | 3.7810 | 10,803.7683 | 0.8674
Medical10 | Max. Detail | 7.3872 | 0.4864 | 16,812 | 3100.2399 | 3.5931 | 10,968.0618 | 0.8806
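Tables 6 and 7 report, among other quantities, the entropy H(I) and the normalized standard deviation σ_norm(I) of each result. The sketch below shows how such quantities are commonly computed for an 8-bit grayscale image; the article's exact definitions of H(I), σ_norm(I), N_HF(I), and Intensity_HF(I), and how they enter f1 and f2, are given in its methodology and may differ from this illustration.

```python
import numpy as np

def entropy(image):
    """Shannon entropy (bits) of the gray-level histogram of an 8-bit image."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # ignore empty bins
    return float(-np.sum(p * np.log2(p)))

def normalized_std(image):
    """Standard deviation of intensities after scaling to [0, 1]."""
    return float(np.std(image.astype(np.float64) / 255.0))
```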
Table 8. Results for key points of interest (Max. Contrast, Knee, and Max. Detail) using three metrics: Contrast-to-Noise Ratio (CNR), Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE), and Natural Image Quality Evaluator (NIQE).
CNRBRISQUENIQE CNRBRISQUENIQE
Natural1Max. Contrast3.634440.51795.6368Medical1Max. Contrast3.680924.5074.2045
Knee3.754739.63825.6135Knee3.729824.25494.2751
Max. Detail3.828837.47435.4585Max. Detail3.776622.76634.0709
Natural2Max. Contrast6.988823.02185.2016Medical2Max. Contrast4.96919.14063.6733
Knee7.209423.97625.2328Knee4.956417.81473.6678
Max. Detail7.330824.21545.2739Max. Detail4.913917.12963.621
Natural3Max. Contrast3.34627.06873.9379Medical3Max. Contrast3.585626.09074.7967
Knee3.616220.48113.8339Knee3.38817.46074.4942
Max. Detail4.14269.30083.7158Max. Detail4.290424.14143.6347
Natural4Max. Contrast3.990620.29483.8824Medical4Max. Contrast3.548236.01474.798
Knee4.118318.72913.7746Knee3.700936.79164.4748
Max. Detail4.353916.03363.5537Max. Detail3.849536.38044.0389
Natural5Max. Contrast4.094327.38833.8463Medical5Max. Contrast4.513442.165.5734
Knee4.040127.22863.8365Knee3.154742.02325.5161
Max. Detail4.040526.75983.8073Max. Detail2.972938.09335.1749
Natural6Max. Contrast5.536932.03974.0916Medical6Max. Contrast4.020723.0524.6726
Knee5.240435.27614.0959Knee4.026122.79294.6967
Max. Detail5.04137.55244.1249Max. Detail3.996222.91694.7093
Natural7Max. Contrast3.459332.2614.1593Medical7Max. Contrast4.066317.53174.1831
Knee3.75727.30563.8866Knee4.017416.92414.2208
Max. Detail3.90132.46783.3908Max. Detail4.026217.14.2797
Natural8Max. Contrast5.366140.17194.4922Medical8Max. Contrast2.57838.02465.5089
Knee4.858136.96444.4318Knee4.554238.54495.0139
Max. Detail4.545529.48854.2977Max. Detail5.239846.19574.9405
Natural9Max. Contrast2.973823.13934.6043Medical9Max. Contrast4.341332.2775.731
Knee3.22821.13124.525Knee4.562325.06895.7618
Max. Detail3.531617.20834.4686Max. Detail4.65625.80065.7614
Natural10Max. Contrast3.388510.77184.4499Medical10Max. Contrast2.841123.32223.2685
Knee3.23939.98664.3771Knee3.833724.72173.1833
Max. Detail2.99767.0674.1238Max. Detail4.303124.5593.1317
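The CNR values in Table 8 quantify how well a region of interest separates from its background. A common formulation, sketched below, divides the absolute difference of the region means by the background standard deviation [53]; how the object and background regions are selected (for instance, via Otsu thresholding [54]) is an assumption of this sketch and is not taken from the article. BRISQUE and NIQE [55,56] are no-reference metrics that are normally obtained from existing library implementations rather than re-coded.

```python
import numpy as np

def cnr(image, object_mask, background_mask):
    """Contrast-to-Noise Ratio between an object region and the background.

    object_mask and background_mask are boolean arrays with the image shape.
    A small epsilon guards against a zero-variance background.
    """
    obj = image[object_mask].astype(np.float64)
    bg = image[background_mask].astype(np.float64)
    return float(abs(obj.mean() - bg.mean()) / (bg.std() + 1e-12))
```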
Table 9. Comparison of dominance between the proposed multi-objective approach, contrast stretching (CS), and contrast-limited adaptive histogram equalization (CLAHE). The symbols represent dominance status: “+” indicates the multi-objective solution dominates, “||” indicates non-dominance, and “-” indicates the multi-objective solution is dominated.
Problem | CS f1 | CS f2 | Status | CLAHE f1 | CLAHE f2 | Status | MO (Knee Point) f1 | MO (Knee Point) f2
Natural1 | 3.6024 | 80,366.9308 | + | 4.0675 | 119,318.1151 | || | 4.4741 | 103,785.0868
Natural2 | 1.5731 | 17,045.7257 | + | 2.0435 | 39,844.8025 | - | 1.8449 | 29,547.1093
Natural3 | 2.7972 | 9570.6512 | + | 2.8002 | 26,295.5886 | || | 4.0331 | 21,228.6115
Natural4 | 3.2772 | 13,866.4487 | + | 3.4698 | 41,296.5049 | || | 4.1620 | 28,125.8294
Natural5 | 2.9780 | 45,449.1717 | + | 4.1502 | 91,934.6577 | || | 4.6297 | 90,296.0866
Natural6 | 4.1140 | 37,133.3034 | + | 3.9282 | 98,788.5865 | || | 5.1345 | 69,678.1741
Natural7 | 3.1986 | 21,209.0719 | + | 3.3195 | 33,578.2905 | + | 4.0163 | 33,666.6691
Natural8 | 4.0220 | 70,655.2488 | + | 4.2470 | 108,329.5632 | || | 5.1979 | 105,373.9500
Natural9 | 3.0300 | 18,327.6153 | + | 2.8849 | 34,909.6415 | || | 3.9174 | 29,304.3657
Natural10 | 2.4836 | 10,523.4740 | + | 2.9333 | 32,775.9795 | || | 3.9246 | 23,814.2353
Medical1 | 3.7188 | 9277.9447 | + | 3.8531 | 23,295.9010 | || | 4.5558 | 19,718.7333
Medical2 | 4.7342 | 7518.9552 | + | 3.9691 | 12,665.8688 | + | 5.5710 | 13,911.8213
Medical3 | 4.2625 | 232.5334 | + | 2.6556 | 543.0372 | || | 4.2747 | 427.0364
Medical4 | 3.9715 | 3712.3943 | + | 3.8346 | 11,225.1916 | || | 4.4411 | 10,157.2992
Medical5 | 2.2046 | 5069.1086 | - | 0.5166 | 0.0000 | + | 1.5757 | 763.2875
Medical6 | 2.5440 | 3904.2629 | + | 3.1897 | 20,304.1639 | || | 3.4259 | 10,717.7392
Medical7 | 3.6151 | 1660.6467 | + | 3.5522 | 4639.7335 | || | 4.6958 | 3838.3433
Medical8 | 2.7290 | 73.8758 | + | 2.4340 | 117.5608 | + | 2.8618 | 222.8542
Medical9 | 2.1427 | 18,930.2122 | + | 1.8009 | 16,273.9174 | + | 2.2737 | 20,352.2764
Medical10 | 2.8411 | 6341.1905 | + | 2.8874 | 10,778.6971 | + | 3.7810 | 10,803.7683
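The dominance symbols in Table 9 follow the usual Pareto definition with both objectives treated as maximized, which is consistent with the reported numbers (for example, the Natural1 knee point improves on CS in both f1 and f2 and is therefore marked “+”, while it is better in f1 but worse in f2 than CLAHE and is marked “||”). A small check of that logic is sketched below; the function names are illustrative.

```python
def dominates(a, b):
    """True if solution a Pareto-dominates b, with both objectives maximized.

    a and b are (f1, f2) tuples: a must be no worse in every objective and
    strictly better in at least one.
    """
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def dominance_symbol(knee, other):
    """Return the Table 9 symbol comparing the knee-point solution to another method."""
    if dominates(knee, other):
        return "+"
    if dominates(other, knee):
        return "-"
    return "||"

# Natural1: knee point vs. CS and CLAHE (values taken from Tables 6 and 9).
print(dominance_symbol((4.4741, 103785.0868), (3.6024, 80366.9308)))   # '+'
print(dominance_symbol((4.4741, 103785.0868), (4.0675, 119318.1151)))  # '||'
```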
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
