Article

Quality Assessment of 3D Printed Surfaces Using Combined Metrics Based on Mutual Structural Similarity Approach Correlated with Subjective Aesthetic Evaluation

1  Department of Signal Processing and Multimedia Engineering, West Pomeranian University of Technology in Szczecin, 70-313 Szczecin, Poland
2  Department of Information and Communication Technologies, National Aerospace University, 61070 Kharkov, Ukraine
*  Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(18), 6248; https://doi.org/10.3390/app10186248
Submission received: 30 July 2020 / Revised: 4 September 2020 / Accepted: 7 September 2020 / Published: 9 September 2020
(This article belongs to the Special Issue Applications of Computer Vision in Automation and Robotics)

Abstract

Quality assessment of 3D printed surfaces is one of the crucial issues related to fast prototyping and the manufacturing of individual parts and objects using fused deposition modeling, especially in small series production. Since some minor defects may be corrected during the printing process or just after manufacturing, an automatic quality assessment of object surfaces is highly desirable, preferably one well correlated with subjective quality perception and considering aesthetic aspects. On the other hand, the presence of larger and denser distortions may indicate reduced mechanical strength; in such cases, the manufacturing process should be interrupted to save time, energy, and filament. This paper focuses on the possibility of using general-purpose full-reference image quality assessment methods for the quality assessment of 3D printed surfaces. As the direct application of an individual (elementary) metric does not provide a high correlation with the subjective perception of surface quality, some modifications of similarity-based methods are proposed that utilize the calculation of the average mutual similarity, making it possible to use full-reference metrics without perfect quality reference images. The combination of individual metrics leads to a significant increase of correlation with subjective scores calculated for a specially prepared dataset.

1. Introduction

Additive manufacturing, also referred to as 3D printing, is one of the key technologies revolutionizing small series production in the Industry 4.0 era. The use of 3D printers makes it possible not only to create original 3D objects for entertainment purposes but also to launch individual production of unique parts of machines and other devices used to replace damaged older elements. Other areas of application of additive manufacturing technology, utilizing plastic filaments usually based on polylactic acid (PLA) or acrylonitrile butadiene styrene (ABS), may be related to biomedical engineering (e.g., individual prostheses), aerospace and automotive solutions, civil engineering and architecture (e.g., concrete 3D printing), reverse engineering in industry, or even the protection of cultural heritage. Integration with 3D scanners enables making custom 3D CAD models and copies of various elements quite easily. Despite the growing popularity of 3D printing for home use, some important limitations should be considered, such as particle emissions [1,2], especially when using ABS filaments.
Nevertheless, the 3D printing process may be affected by various factors influencing the final quality of the manufactured objects, especially in the low-cost devices designed for home use. Such printers belong to the most popular group based on Fused Deposition Modeling (FDM), where the heated filament is deposited from bottom to top, layer by layer, by the moving extruder forming the 3D printed object. One source of surface distortions may be an improper melting temperature, dependent on the type of the filament, as well as changes of the surrounding temperature. Other issues may be related to the poor quality of elements used for the printer's construction and to low-quality filaments. Some distortions may also be caused by an improper configuration of the stepper motors or by changes of the filament's delivery speed. Both types of distortions may easily be forced during the preparation of test samples; such samples have been used to develop the database employed for the verification of the proposed methods, also in some earlier papers [3,4].
Increasing popularity and availability of the 3D printers, as well as relatively cheap high-resolution cameras, make it possible to integrate some computer vision algorithms which may be useful for real-time monitoring of the printing process (i.e., without noticeable delays according to the manufacturing speed) and the device state [5,6]. Nevertheless, a majority of known solutions are limited to observation of the device’s state and applied mainly for fault diagnosis purposes [7,8,9]. Some other attempts assume the knowledge of the reference data representing some features or descriptors, such as process signatures [10,11]. Some of the proposed systems utilize optical coherence tomography (OCT) [12], thermographic measurements [13], or terahertz technology [14]. Some other important aspects of detection of some quality issues may be related to cybersecurity of additive manufacturing systems [15,16], especially considering that in some cases some embedded defects in a 3D-printed specimen might remain undetectable, e.g., by ultrasonic inspection [17].
Since the use of sophisticated hardware solutions for monitoring of low-cost 3D printers is troublesome due to a large increase in the overall system's cost, the most reasonable solution, particularly for amateur use, seems to be the analysis of images acquired by affordable cameras. One such in situ system for monitoring the manufacturing process, based on the comparison of the printed geometry with the computer 3D model, has been proposed by Holzmond and Li [18]. Another approach to anomaly detection and classification for the laser powder bed fusion (LPBF) method, based on an unsupervised machine learning algorithm, has been proposed by Scime and Beuth [19]. Nevertheless, such automated analysis requires the training of the algorithm, limiting its practical applicability in low-cost devices. A similar problem is also typical for the use of neural networks, adopted, e.g., for the monitoring of the 3D inkjet printing process of electronic products [20].
An automated visual inspection of the 3D printing process may also be related to collision detection and the use of visual feedback [21]. On the other hand, some more advanced systems make it possible to optimize the tool paths during manufacturing [22], as well as to use multiple filaments for fabrication [23]. However, the latter solution, known as MultiFab, requires the use of a 3D scanner and closed-feedback loop for small corrections, hence its usefulness for low-cost devices may be limited, even considering the relatively small budget provided by its authors (less than $7000).
An interesting application of machine vision for the detection of defects in 3D prints has been proposed by Straub [24]. This solution, based on five cameras and Raspberry Pi units, has been designed to reduce or eliminate the need for testing of printed objects due to the possibility of automatic correction of minor defects noticed during the printing process, as well as the detection of “dry printing” issues caused by the lack of filament. Nevertheless, the proposed system has turned out to be very sensitive to environmental conditions as well as even minor camera movements. An application of visible sensing for the detection of microdefects in the 3D printed objects used for safety-critical applications has been presented in the paper [25].
Another example of the usefulness of computer vision methods for defect inspection in plastic materials may be the surface control system based on the low-cost visual sensors designed for polyvinyl chloride (PVC) pipes, proposed recently [26].
Regardless of the great progress in the use of machine vision for the monitoring and quality assurance of 3D printed objects, an automatic visual surface assessment well correlated with the subjective aesthetic perception of the 3D prints is still a challenging task. Hence, in this paper, the adaptation of some similarity-based general-purpose full-reference image quality assessment (FR IQA) methods is proposed, together with their combination and optimization towards a high correlation with subjective opinions. The experiments have been conducted using a specially prepared database containing 107 images of 3D printed flat surfaces affected by various amounts of distortions, together with subjective quality scores gathered from 92 human observers.
Although in practical applications the correlation between the proposed objective quality scores and some mechanical properties would be much more desired, the measurement of some of them, e.g., strength of the materials, would require additional destructive experiments. Therefore, in this paper, we focus mainly on the aesthetic quality assessment, similarly as in general-purpose image quality assessment, based on the accordance with subjective opinions. Nevertheless, the developed methods may be further extended, particularly using the data fusion with some other non-destructive measurements, to find some dependencies between them and some physical properties of the manufactured objects.
The rest of the paper is organized as follows: Section 2 contains an overview of similarity-based IQA methods, whereas the description of the developed dataset is provided in Section 3. The idea of the proposed modification of the FR IQA metrics for the quality assessment of the 3D printed surfaces and their combination (Section 4) is followed by the experimental results in Section 5. The paper ends with a discussion (Section 6) and conclusions (Section 7).

2. Overview of Similarity-Based Full-Reference Image Quality Assessment Methods

General-purpose image quality assessment methods may be divided into two major groups, namely subjective and objective methods. The main differences between these two generic approaches are related to the assumed application and required assessment time. The usefulness of subjective methods in many practical applications is limited by the long time necessary to gather the opinions of individual observers. Hence, in practical applications, objective metrics that may be calculated automatically are more desirable. Nevertheless, during the development of objective methods, subjective scores may be used to verify their correspondence to the human perception of various kinds of image distortions, hence they may be considered as serving mainly aesthetic purposes.
The development of subjective methods is a troublesome and time-consuming task, as it requires the engagement of many human observers who fill in questionnaires assessing the quality of the presented images containing various kinds and levels of distortions. After the statistical analysis of their responses, together with outlier rejection and normalization of scores, it is possible to develop a database containing a number of images together with the corresponding subjective quality evaluations expressed as Mean Opinion Scores (MOS) or Differential MOS values. During the last years, several such databases have been developed for natural images, as well as for stereoscopic images, video sequences, or even multiply distorted images, which are useful for the verification and performance evaluation of objective image quality metrics. A comprehensive overview of many IQA databases, as well as various types of quality metrics, may be found, e.g., in the recent survey paper [27].
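For illustration only, the basic MOS computation may be sketched as follows; this is a minimal Python example with synthetic ratings and a simple z-score-based observer screening, which is only a rough stand-in for the more elaborate screening procedures recommended, e.g., in ITU-R BT.500:

```python
# A toy illustration of MOS computation (synthetic data): per-image averaging
# of observer ratings after a simple z-score-based screening. The 92 x 107
# shape mirrors the dataset described later; the screening rule is assumed.
import numpy as np

rng = np.random.default_rng(0)
ratings = rng.integers(1, 6, size=(92, 107)).astype(float)  # observers x images

z = (ratings - ratings.mean(axis=0)) / (ratings.std(axis=0) + 1e-12)
screened = np.where(np.abs(z) < 3.0, ratings, np.nan)  # reject extreme ratings
mos = np.nanmean(screened, axis=0)                     # one MOS value per image
```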
As the practical applicability of subjective methods is strongly limited by the time necessary to acquire the scores, for many tasks, including the optimization of image filtering and lossy compression algorithms, the only possibility is the use of objective metrics which may be calculated automatically without the involvement of human observers. Such metrics may be further divided into three families: no-reference metrics (NR IQA), also referred to as "blind" metrics, which do not require the knowledge of the original image; reduced-reference metrics using partial reference information; and the most popular full-reference metrics (FR IQA), based on the comparison of the distorted images with "pristine" reference images without any distortions. Such perfect quality reference images are also included in the IQA databases, which is helpful mainly in the development of new FR metrics. Despite the potentially wide areas of application of "blind" metrics, their universality is still much lower in comparison to FR IQA solutions, which explains the higher popularity of the latter approaches.
Most of the general-purpose FR IQA methods are based on the assumption that both compared images represent the same scene but one of them is corrupted by one or more types of distortions, such as the presence of noise, blur, lossy compression, or transmission errors. Hence, during the comparison of images, no image registration or spatial adjustment operations, such as shifting or rotation, are necessary. Assuming the availability of the reference image, many FR metrics utilizing a similarity calculation between the original and distorted images have been proposed during recent years. A milestone in the development of such metrics has been the idea of the Universal Image Quality Index (UIQI) [28], the first region-based metric without the disadvantages of the conventional pixel-based metrics, such as the Mean Squared Error (MSE) or Peak Signal-to-Noise Ratio (PSNR), known to be poorly correlated with subjective quality scores. Due to its potential instability, particularly for black or "flat" (constant luminance) areas of images, the original version of the UIQI was soon replaced by the Structural Similarity (SSIM) [29], where the original formula has been extended as
$$\mathrm{SSIM} = l(x,y) \cdot c(x,y) \cdot s(x,y) = \frac{2\bar{x}\bar{y} + C_1}{\bar{x}^2 + \bar{y}^2 + C_1} \cdot \frac{2\sigma_x\sigma_y + C_2}{\sigma_x^2 + \sigma_y^2 + C_2} \cdot \frac{\sigma_{xy} + C_3}{\sigma_x\sigma_y + C_3}, \tag{1}$$
where the default values of the stabilizing constants (added in comparison to the UIQI) for 8-bit grayscale images are $C_1 = (0.01 \times 255)^2$, $C_2 = (0.03 \times 255)^2$, and $C_3 = C_2/2$. Their role is to prevent a potential division by zero without significantly affecting the overall quality metric's value, calculated as the average similarity value for all positions of the sliding window. The local structural similarity values are calculated as the product of three factors representing luminance distortions $l(x,y)$, contrast loss $c(x,y)$, and structural distortions $s(x,y)$ between the fragments x and y of images A and B, according to the current position of the sliding window. These similarities are based on the local mean values ($\bar{x}$ and $\bar{y}$) of the original and distorted images, the respective variances ($\sigma_x^2$ and $\sigma_y^2$), and the covariance $\sigma_{xy}$. The default window is an 11 × 11 pixel Gaussian window, considered appropriate for typical standard definition images. Since the SSIM metric has been widely accepted in the community, its implementation has appeared, for example, in OpenCV and MATLAB®, and many modifications based on similar formulas have been proposed by various researchers.
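For illustration, a minimal sketch of Formula (1) in Python is given below; it assumes 8-bit grayscale inputs, estimates the local statistics by Gaussian filtering, and uses the standard simplification in which the contrast and structure terms merge into a single factor when $C_3 = C_2/2$:

```python
# A minimal sketch of the SSIM of Equation (1) for 8-bit grayscale images.
# Local statistics are estimated by Gaussian filtering (sigma = 1.5, which
# approximates the default 11 x 11 Gaussian window of the original paper).
import numpy as np
from scipy.ndimage import gaussian_filter

def ssim(a: np.ndarray, b: np.ndarray, sigma: float = 1.5) -> float:
    a, b = a.astype(np.float64), b.astype(np.float64)
    C1, C2 = (0.01 * 255) ** 2, (0.03 * 255) ** 2
    mu_a, mu_b = gaussian_filter(a, sigma), gaussian_filter(b, sigma)
    var_a = gaussian_filter(a * a, sigma) - mu_a ** 2
    var_b = gaussian_filter(b * b, sigma) - mu_b ** 2
    cov_ab = gaussian_filter(a * b, sigma) - mu_a * mu_b
    # With C3 = C2 / 2 the contrast and structure factors merge into one term.
    ssim_map = ((2 * mu_a * mu_b + C1) * (2 * cov_ab + C2)) / \
               ((mu_a ** 2 + mu_b ** 2 + C1) * (var_a + var_b + C2))
    return float(ssim_map.mean())
```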
Considering the postulated independence of the results of the image size without the necessity of changing the sliding window's size, a multi-scale version of this method, known as MS-SSIM, has been developed by the same authors [30], where the image details at different resolutions are assessed by iteratively applying a low-pass filter and downsampling by a factor of 2. The contrast and structure comparisons are conducted for each scale, whereas the luminance comparison is computed only for the coarsest scale. The default normalized weighting exponents have been provided after cross-scale calibration experiments.
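A simplified multi-scale sketch in this spirit is shown below; it pools the full SSIM value at each of five dyadic scales using the published MS-SSIM exponents, whereas the original method applies the luminance term only at the coarsest scale, so this is an approximation rather than a faithful MS-SSIM implementation:

```python
# A simplified multi-scale sketch in the spirit of MS-SSIM, pooling the full
# SSIM at five dyadic scales with the published exponents. It assumes
# nonnegative per-scale similarity values and images large enough for 5 scales.
import numpy as np
from scipy.ndimage import gaussian_filter, zoom
from skimage.metrics import structural_similarity

WEIGHTS = (0.0448, 0.2856, 0.3001, 0.2363, 0.1333)  # MS-SSIM exponents

def ms_ssim_like(a: np.ndarray, b: np.ndarray) -> float:
    score = 1.0
    for w in WEIGHTS:
        score *= structural_similarity(a, b, data_range=255) ** w
        # Low-pass filter and downsample by 2 for the next (coarser) scale.
        a = zoom(gaussian_filter(a, 1.0), 0.5)
        b = zoom(gaussian_filter(b, 1.0), 0.5)
    return score
```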
During the next several years, many other SSIM-based metrics have been proposed, which have also been used in the experiments related to this paper. In 2006, the Quality Index based on Local Variance (QILV) was proposed [31], where a comparison of the local variance distributions is considered as an extension of the use of the mean and variance values in the original SSIM, leading to the following formula, assuming that A and B are the input images:
$$\mathrm{QILV} = \frac{2\mu_{V_A}\mu_{V_B}}{\mu_{V_A}^2 + \mu_{V_B}^2} \cdot \frac{2\sigma_{V_A}\sigma_{V_B}}{\sigma_{V_A}^2 + \sigma_{V_B}^2} \cdot \frac{\sigma_{V_A V_B}}{\sigma_{V_A}\sigma_{V_B}}, \tag{2}$$
where $\sigma_{V_A V_B}$ is the covariance between the variances of the two images ($V_A$ and $V_B$, respectively), $\sigma_{V_A}$ and $\sigma_{V_B}$ denote the global standard deviations of the local variance, and $\mu_{V_A}$ and $\mu_{V_B}$ are the mean values of the local variance.
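A compact sketch of Formula (2), assuming the local variance maps are estimated with a Gaussian window and omitting stability constants, may be written as:

```python
# A compact sketch of the QILV of Equation (2): local variance maps are
# estimated with a Gaussian window and compared through their global mean,
# standard deviation, and covariance (stability constants omitted).
import numpy as np
from scipy.ndimage import gaussian_filter

def qilv(a: np.ndarray, b: np.ndarray, sigma: float = 1.5) -> float:
    a, b = a.astype(np.float64), b.astype(np.float64)
    va = gaussian_filter(a * a, sigma) - gaussian_filter(a, sigma) ** 2
    vb = gaussian_filter(b * b, sigma) - gaussian_filter(b, sigma) ** 2
    mva, mvb = va.mean(), vb.mean()          # means of the local variance maps
    sva, svb = va.std(), vb.std()            # global std of the local variance
    cov = ((va - mva) * (vb - mvb)).mean()   # covariance of the variance maps
    return (2 * mva * mvb / (mva ** 2 + mvb ** 2)) \
         * (2 * sva * svb / (sva ** 2 + svb ** 2)) \
         * (cov / (sva * svb))
```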
Another modification proposed by Sampat et al. [32] is based on the use of the wavelet domain. The metric known as Complex Wavelet Structural Similarity (CW-SSIM) utilizes the assumption that the presence of some distortions causes consistent phase changes in the local wavelet coefficients. The metric is also more robust to small rotations and translations in comparison to the spatial domain SSIM. On the other hand, the application of information content weighting for the multi-scale SSIM (referred to as IW-SSIM), proposed in the paper [33], incorporates the Laplacian pyramid transform to calculate the appropriate weights based on the statistical Gaussian model of natural images. A similar information weighting may also be applied for MSE and PSNR metrics, significantly enhancing their correlation with subjective quality scores.
One of the most successful similarity-based metrics, known as Feature Similarity (FSIM), has been proposed in two versions (for grayscale and color images, the latter denoted as FSIMc) [34] and verified as one of the best metrics for most IQA databases. It utilizes a formula similar to the SSIM but applied to low-level features, namely phase congruency (PC, as a significance measure of a local structure) and gradient magnitude (GM), a complementary feature extracted using the Scharr filter. For simplicity, the equal importance of these two factors ($\alpha = \beta = 1$) has been assumed in the similarity formula between the images A and B:
$$S_L(x) = \left[\frac{2 \cdot PC_A(x) \cdot PC_B(x) + T_1}{PC_A^2(x) + PC_B^2(x) + T_1}\right]^{\alpha} \cdot \left[\frac{2 \cdot GM_A(x) \cdot GM_B(x) + T_2}{GM_A^2(x) + GM_B^2(x) + T_2}\right]^{\beta}, \tag{3}$$
where $T_1$ and $T_2$ are the stability constants preventing the division by zero and x is the sliding window position. The final formula, obtained as the averaged $S_L$ for all locations, is additionally weighted by the phase congruency, assuming that the Human Visual System (HVS) is highly sensitive to phase congruent structures, and may be expressed as
$$\mathrm{FSIM} = \frac{\sum_{x \in A} S_L(x) \cdot PC_m(x)}{\sum_{x \in A} PC_m(x)}, \tag{4}$$
where $PC_m(x) = \max(PC_A(x), PC_B(x))$ and x denotes each position of the local window on the image plane A (or B). Its extension for color IQA utilizes the YIQ color space with the two chrominance components (I and Q) used as two additional features, weighted using an additional exponent $\gamma$ for tuning the importance of the chromatic data.
A similar approach based on the comparison of gradient images, known as Gradient Similarity (GSM), has been proposed in the paper [35], where the gradient similarity term, calculated in the same way as the luminance or contrast factor in the SSIM, is combined with the luminance comparison (calculated as the $l(x,y)$ term in Formula (1)). As the authors assumed that the typical Prewitt, Sobel, or Scharr kernels are too small, the gradient extraction has been conducted using the following 5 × 5 pixel mask:
$$\mathrm{Mask} = \begin{bmatrix} 0 & 0 & 0 & 0 & 0 \\ 1 & 3 & 8 & 3 & 1 \\ 0 & 0 & 0 & 0 & 0 \\ -1 & -3 & -8 & -3 & -1 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}, \tag{5}$$
rotated by 90° to obtain four directional masks for gradient calculations.
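Since the pooling of the four directional responses is not detailed above, one plausible reading (an assumption made here purely for illustration) is to convolve the image with the four rotated masks and keep the strongest response at each pixel:

```python
# The four directional masks obtained by rotating the kernel of Equation (5);
# pooling the responses by a per-pixel maximum is an assumption made for
# illustration, as the exact pooling is not detailed in the text above.
import numpy as np
from scipy.ndimage import convolve

MASK = np.array([[ 0,  0,  0,  0,  0],
                 [ 1,  3,  8,  3,  1],
                 [ 0,  0,  0,  0,  0],
                 [-1, -3, -8, -3, -1],
                 [ 0,  0,  0,  0,  0]], dtype=np.float64)

def gradient_map(image: np.ndarray) -> np.ndarray:
    responses = [convolve(image.astype(np.float64), np.rot90(MASK, k))
                 for k in range(4)]
    return np.max(responses, axis=0)  # strongest directional response per pixel
```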
Further metrics inspired by the idea of the SSIM include the fast and effective spectral residual similarity (SR-SIM), based on a specific visual saliency model [36] correlated with perceived image quality, as well as the Edge Strength Similarity (ESSIM) [37], where edges are detected with the Scharr kernel and edge strength maps are determined from the directional derivatives calculated for each pixel. The obtained edge strength maps for the reference and distorted images are compared using an SSIM-like formula, similarly as for the other metrics. The similarity analysis in the frequency domain has led to the idea of the DCT subband similarity (DSS) proposed by Balanov et al. [38]. This approach considers the varying perceptual importance of structural distortions in different DCT subbands. Hence, the similarity values obtained for individual subbands, according to a formula similar to the other mentioned metrics based on the local variances (i.e., their product divided by the sum of squares with stability constants), are weighted to provide the final DSS result.
Alternatively, good performance may also be obtained using an SSIM-like formula for the measurement of contrast deviation, as proposed in the paper [39], where the multiscale contrast similarity deviation (MCSD) has been defined as another relatively fast IQA metric highly consistent with MOS values for the most relevant IQA databases. Interestingly, although contrast has been used in many other metrics, including the original SSIM, in this case all the calculations are based only on contrast values combined with a multi-scale image representation.
An interesting approach to pooling following the measurement of local distortions has been presented in the paper [40], where the analysis of distortion distribution has been used to improve the performance of the Structural Similarity and Gradient Similarity (the modified methods are referred to as ADD-SSIM and ADD-GSIM, respectively). The authors of this approach have utilized ranking-based weighting, frequency variation based on structural degradation measurement, and an additional entropy gain multiplier.
Recently, some other metrics extending the original SSIM have been proposed in the paper [41], utilizing the additional predictability of image blocks. The use of the fourth multiplicative component (block predictability) in the metric referred to as SSIM4 is based on the minimal value of the mean squared error between the current block and the neighboring ones. Another modification is related to the use of color components (metrics denoted as CSSIM and CSSIM4, respectively) and assumes simultaneous computation of the metrics for several image scales in the YCbCr color space. The partial metrics obtained in this way are then averaged with weights obtained by maximizing the rank-order correlations for three datasets.
Two other recent methods utilize contrast combined with visual saliency, and the Riesz transform with visual contrast. The first one, known as CVSSI (Contrast and Visual Saliency Similarity-Induced Index), has been proposed [42] by the authors of the above-mentioned MCSD metric as the result of applying the weighted standard deviation of two quality maps obtained for the local contrast and the global visual saliency. The latter approach [43], called Riesz transform and Visual contrast sensitivity-based feature similarity (RVSIM), utilizes a Log-Gabor filter for initial image decomposition, followed by the Riesz transform. Then, the similarity of local amplitude, phase, and direction characteristics is determined, forming similarity matrices that are weighted using the visual Contrast Sensitivity Function (CSF) into a single similarity matrix. It is further combined with the gradient magnitude similarity, calculated for the reference and distorted images in the same way as in the FSIM, and subjected to final pooling leading to the overall RVSIM index.
Many of these metrics, as well as some others, have recently been combined using neural networks for the assessment of remote sensing images [44], leading to promising results. Nevertheless, the direct application of such an approach for the quality assessment of the 3D prints would be troublesome due to the time necessary for the calculation of several metrics as well as the necessity of network training. Considering the variety of the proposed similarity-based approaches to image quality assessment, as well as their relatively short computation time (analyzed also in [44]), the above-described metrics have been examined in this paper to verify their usefulness for an automatic quality evaluation of the 3D printed surfaces, as well as the possibility of their combination, additionally converting them into no-reference methods due to potential applications in real-time systems.

3. The Database of the 3D Printed Surfaces

Verification of the usefulness of various objective IQA metrics for the assessment of the 3D printed surfaces requires the development of a special database containing images of variously distorted and high quality manufactured surfaces together with subjective MOS values. To face this challenge, flat samples have been produced from 9 types of thermoplastic ABS filaments of different colors using three available FDM devices, namely RepRap Ormerod 3, Prusa i3, and XYZprinting da Vinci 1.0 Pro 3-in-1. The choice of the ABS filaments has been motivated by their good mechanical properties and lightness, as well as their higher abrasion resistance in comparison with PLA polymers. Nevertheless, relatively high melting temperatures (over 220 °C), together with potentially toxic fumes [1,2], have slightly limited the development of the dataset.
Although the database contains 107 photographs (together with depth maps obtained by a 3D scanner, not used in this paper) of flat surfaces, the metrics investigated in this paper may be successfully applied regardless of the surface flatness or roundness; in this sense, the proposed approach may be considered universal and may be further verified for other surfaces as well. The images have been acquired in a controlled lighting environment (distributed illumination from three lamps to prevent strong reflections) using a Sony DSC-HX100V camera with an exposure time of 1/125 s, a 5 mm focal length, and automatic white balance without flash, ensuring a fixed distance to the surface. The size of the obtained images is 1600 × 1600 pixels, corresponding to a physical sample size of 35 mm × 35 mm. The thickness of the samples is about 4 mm, and the height of the layers varies from 0.3 to 0.35 mm, depending on the individual printer and the size of the nozzle [4]. The additional depth maps, although not utilized in this paper, have been acquired using the ATOS 3D scanner manufactured by the GOM company (more information may be found in the paper [3]).
Regardless of the influence of some independent factors, such as the quality of the filament and construction materials, the presence of distortions in some samples has been forced by changing the temperature, the configuration parameters of the stepper motors, and the filament's delivery speed, as well as by software model changes. Some of the samples contain cracks as well as the results of under- and over-filling. All the photographs have been independently assessed by 92 human observers using a five-level quality scale from 1 (very poor) to 5 (very good). The obtained scores have been averaged to provide the MOS values, additionally verified by comparisons with previously acquired experts' opinions (used for classification purposes in some earlier works). Some sample images together with MOS values are presented in Figure 1, where MOS values approaching 5 correspond to perfect quality.
As the primary goal of the paper and the developed database is to propose an objective method for the quality assessment of the 3D printed surfaces that is highly correlated with the subjective perception of surface aesthetics, expressed as MOS values, the mechanical properties of the samples have not been investigated. Nevertheless, the development of an extended database, useful for the analysis of the 3D structure of the manufactured object and supplemented with the results of terahertz or radiographic detection and identification of defects, is planned as a part of the future work, considering also some non-planar objects.

4. Proposed Approach

The idea of the combination of various features is partially utilized in many metrics discussed in Section 2, starting from the Structural Similarity, which is in fact a combination of three components representing the luminance, contrast, and structure. Nevertheless, such a combination may also be applied to individual (elementary) metrics instead of features, leading to a significant increase in the correlation with subjective quality evaluations. The most advantageous results may be obtained for combinations of metrics utilizing various kinds of features or different domains, which are somewhat complementary to each other.
The first such attempt for general-purpose IQA, based on the nonlinear combination of three metrics, constructed similarly to the SSIM or FSIM formulas, has been proposed in the paper [45], where the weighted product of MS-SSIM [30], Visual Information Fidelity (VIF) [46], and the R-SVD metric [47] has been used. A modified metric referred to as the Combined Image Similarity Index (CISI) [48] has been presented, assuming the replacement of the R-SVD metric with FSIMc. According to one of the recent papers [49], presenting a comprehensive overview and comparison of many IQA methods, this metric (actually based on two similarity-based formulas) still seems to be competitive with many recently proposed, more sophisticated FR IQA approaches, also for multiply distorted image databases, confirming the validity of the assumptions made in this work as well. Hence, the idea of combining similarity-based metrics is considered worth investigating also for the 3D printed surfaces.
Some other approaches to multi-method fusion have also been proposed, e.g., based on regression [50], machine learning [51], neural networks [52], also for remote sensing images [44], or genetic algorithms [53]. Nevertheless, considering the assumed application for the quality assessment of the 3D printed surfaces during the manufacturing process, the use of a limited number of metrics combined using their weighted product is assumed, according to the general formula:
$$Q_{\mathrm{combined}} = \prod_{n=1}^{N} \mathrm{Metric}_n^{\,\mathrm{weight}_n}, \tag{6}$$
where N is the number of the weighted elementary metrics.
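A direct sketch of Formula (6) is trivial; in the example below, the metric values and exponents are placeholders rather than the optimized weights reported in Section 5:

```python
# A direct sketch of Equation (6): a weighted product of elementary metrics.
# The metric values and exponents below are placeholders, not the optimized
# weights reported in the experimental results.
import numpy as np

def combined_quality(metric_values: np.ndarray, weights: np.ndarray) -> float:
    return float(np.prod(metric_values ** weights))

q = combined_quality(np.array([0.91, 0.83, 0.12]),   # e.g., three metric values
                     np.array([2.0, 1.0, -0.5]))     # hypothetical exponents
```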
As the original similarity-based metrics, starting from the UIQI and SSIM, belong to the group of full-reference IQA methods, their direct application to the considered task would require the knowledge of the reference image. In practical applications this is usually impossible, and even the use of a rendered model of the 3D printed object would require precise image registration and phase adjustment to compare two images in the same way as in general-purpose IQA. An additional problem might be related to the different colors and luminance of the acquired images and those rendered from the 3D models. Hence, the idea of calculating the average mutual similarity is proposed for the images of the 3D printed surfaces, assuming the side location of cameras, initially investigated in the papers [54,55] exclusively for classification purposes. Such side-view mounting of a camera makes it possible to acquire images with visible individual layers of the filament, as illustrated in Figure 1.
The calculation of the mutual similarity requires the division of the image representing the manufactured sample into regions and the application of the IQA formula to each pair of the obtained blocks. In the conducted experiments, divisions into 4, 9, and 16 square blocks have been considered, with the best results obtained for 16 blocks (a grid of 4 × 4 regions) requiring 120 mutual comparisons. Since the division of the surface into regions is fixed, the same or very similar orientations of the patterns may be expected in each region. Hence, due to the use of the mutual similarity based on the comparison of regions, a potential influence of the position and orientation of the cropping regions is significantly reduced. Due to the use of the sliding window approach in the SSIM-based quality metrics, the structural changes introduced by such relatively small rotations or translations influence the final results of the individual (elementary) metrics much less than the presence of physical distortions of the 3D printed surface. Additionally, a phase shift by a few pixels may be applied to adjust the phase of the compared patterns, as presented in one of our earlier papers [56]. The influence of the color of individual samples has been reduced using color to grayscale conversion according to ITU Recommendation ITU-R BT.601-7 [57] (using the rgb2gray function in MATLAB®). The idea of the mutual similarity is illustrated in Figure 2, where two examples of the division into 4 and 9 regions are presented, requiring 6 and 36 mutual similarity calculations, respectively. The number of similarities may be expressed as
$$k = \frac{M \cdot (M-1)}{2}, \tag{7}$$
where M is the number of regions.
Therefore, the average mutual similarity values obtained for the elementary metrics may be considered as no-reference equivalents of the FR IQA metrics applied for the quality assessment of the 3D printed surfaces. Since no "pristine" reference images are available, FR IQA methods cannot be applied directly. Therefore, the images are divided into regions, and these regions are compared using the FR IQA methods to work around the lack of reference images. Hence, formally, our approach may be classified as an NR ("blind") IQA approach, although all metrics used internally belong to the group of FR methods.
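A sketch of this procedure, assuming a square grayscale image and using the SSIM (via scikit-image) as the elementary metric, may look as follows:

```python
# A sketch of the average mutual similarity: the (grayscale) surface image is
# split into a grid x grid array of blocks and a full-reference metric (here
# the SSIM from scikit-image) is applied to every pair of blocks.
from itertools import combinations
import numpy as np
from skimage.metrics import structural_similarity

def mutual_similarity(gray: np.ndarray, grid: int = 4) -> float:
    h, w = gray.shape
    bh, bw = h // grid, w // grid
    blocks = [gray[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
              for r in range(grid) for c in range(grid)]
    # M * (M - 1) / 2 pairwise comparisons, per Equation (7); 120 for grid = 4.
    scores = [structural_similarity(x, y, data_range=255)
              for x, y in combinations(blocks, 2)]
    return float(np.mean(scores))
```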
The quality scores obtained in this way have been used as the input metrics for the combinations according to Formula (6). The optimized weights have been obtained by the maximization of the Pearson's Linear Correlation Coefficient (PLCC) between the objective and subjective scores (expressed as MOS values in the developed database), representing the prediction accuracy. Since it has been assumed that the nonlinear combination of elementary metrics should compensate for the potentially nonlinear perception of distortions, no fitting functions have been used, unlike in general-purpose IQA, where they are typically applied according to the recommendations of the Video Quality Experts Group (VQEG). Additionally, two rank-order correlations have been calculated, representing the prediction monotonicity, namely the Spearman Rank Order Correlation Coefficient (SROCC), denoted as ρ, and the Kendall Rank Order Correlation Coefficient (KROCC), denoted as τ. They are defined as
$$\rho = 1 - \frac{6 \cdot \sum_{i=1}^{n} d_i^2}{n \cdot (n^2 - 1)} \tag{8}$$
and
$$\tau = \frac{n_c - n_d}{0.5 \cdot n \cdot (n-1)}, \tag{9}$$
respectively, where n is the number of images, $n_c$ and $n_d$ are the numbers of concordant and discordant pairs, and $d_i$ is the difference between the positions of the i-th image in the two sequences ordered according to the subjective and objective scores. Both rank-order correlations consider only the positions of the images in these sorted vectors, ignoring the differences between the perceived and measured quality.
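All three criteria map directly onto standard library routines; the example below uses SciPy with synthetic data in place of the actual metric values and MOS vector:

```python
# The three reported correlation criteria computed with SciPy; the data are
# synthetic placeholders standing in for the metric values and the MOS vector.
import numpy as np
from scipy.stats import pearsonr, spearmanr, kendalltau

rng = np.random.default_rng(0)
mos = rng.uniform(1, 5, size=107)               # subjective scores (synthetic)
objective = mos + rng.normal(0, 0.5, size=107)  # objective metric (synthetic)

plcc, _ = pearsonr(objective, mos)    # prediction accuracy
srocc, _ = spearmanr(objective, mos)  # prediction monotonicity
krocc, _ = kendalltau(objective, mos)
```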

5. Experimental Results

The procedure of experiments consists of four main steps: production of test samples with various surface quality, subjective perceptual experiments and data analysis, calculation of individual IQA metrics using the assumed division into regions, and combination of metrics and optimization of weights, as illustrated in Figure 3.
The first experiments have been conducted using individual elementary metrics for all 107 images converted to grayscale and divided into 4, 9, and 16 regions. For each configuration, three correlation coefficients have been calculated to identify the metrics potentially most useful for further combinations. The obtained results are presented in Table 1. As may be noticed, in most cases similar values of PLCC and SROCC have been obtained for individual metrics; hence, in further experiments, only Pearson's correlation has been used as the optimization criterion. For many metrics, the choice of the number of regions does not change the correlation values significantly, so further experiments have been conducted assuming the division into 16 regions. The best PLCC results obtained for various combinations of two metrics are presented in Table 2 (to save space, only the values higher than 0.67 are presented). As may be seen, most combinations include the FSIM; hence it has been used as the "fixed" metric for the combinations of three metrics (only the two other metrics have been changed during the optimization of weights, although the FSIM exponent has been optimized as well). The best PLCC results achieved for such combinations of three metrics are presented in Table 3.
In view of the results presented in Table 3, the best results may be obtained using the combination of FSIM and CW-SSIM with a third metric (preferably MCSD), though good results may also be achieved using FSIM and MCSD as the two basic metrics in the combinations. Considering these results, the application of more metrics has led to an even slightly better correlation with MOS values, as presented in Table 4, where the obtained rank-order correlations are additionally included for comparison with Table 1. As may be seen, the best PLCC values are not always accompanied by the highest SROCC and KROCC, although the differences may be considered negligible. An additional illustration of the obtained increase of the correlation with subjective scores is provided by the scatter plots in Figure 4.
Although the verification of correlations for the combinations of two or three metrics calculated for the division into 4 or 9 blocks has confirmed the validity of the choice of 16 regions (leading to better performance), the combination of four and five metrics for a smaller number of blocks offers an opportunity to obtain better results, particularly in terms of the prediction monotonicity represented by both rank-order correlations, while significantly decreasing the computation time, as shown in Table 4.

6. Discussion

Analyzing the results obtained by the unconstrained nonlinear optimization of the exponent weights using the MATLAB® fminsearch function, which utilizes the simplex search method (additionally verified using gradient-based methods), it may be noticed that the application of an individual elementary metric leads to PLCC and SROCC values not exceeding 0.7 even for the best metric (FSIM), regardless of the chosen division into regions. Nevertheless, the combination of even two metrics increases the Pearson's correlation significantly, reaching values over 0.8 for the three "best" combinations. However, removing the FSIM metric from the combination decreases this value to only 0.7081; hence, Feature Similarity should be considered the basic metric used in all combinations.
The further increase of the PLCC for the combinations of three metrics makes it possible to identify the other metrics leading to even better performance (CW-SSIM and MCSD, used also in the further combinations of four and five metrics), reaching 0.8472, with a further increase to 0.8537 for five metrics, which also leads to a substantial improvement of the rank-order correlations. The increase of the linearity of the relationship between the combined metrics and the MOS values may also be observed in Figure 4, particularly for the combinations of 3 and 5 elementary metrics. As presented in Table 4, even better results may be achieved by limiting the number of blocks for the combinations of four and five individual metrics, obviously with an additional advantageous shortening of the overall computation time.
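The optimization step itself may be sketched as follows; SciPy's Nelder-Mead method is the analogue of MATLAB's fminsearch, and the metric matrix below is synthetic rather than the actual mutual-similarity scores:

```python
# A sketch of the weight optimization: Nelder-Mead (SciPy's analogue of the
# MATLAB fminsearch simplex search) maximizing PLCC by minimizing its negative.
# The metric matrix is synthetic, standing in for the mutual-similarity scores.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import pearsonr

def neg_plcc(weights, metrics, mos):
    combined = np.prod(metrics ** weights, axis=1)  # Equation (6) per image
    return -pearsonr(combined, mos)[0]

rng = np.random.default_rng(1)
mos = rng.uniform(1, 5, size=107)
metrics = rng.uniform(0.1, 1.0, size=(107, 3))      # 3 elementary metrics
res = minimize(neg_plcc, x0=np.ones(3), args=(metrics, mos),
               method="Nelder-Mead")
print("optimized weights:", res.x, "PLCC:", -res.fun)
```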
In comparison to previous studies, the obtained results are superior to those presented in the recent conference paper [58], where PLCC = 0.8353 has been obtained for a combination of various approaches previously applied only for sample classification purposes. However, the methods applied in the aforementioned paper, apart from the use of the FSIMc metric, are based on various additional calculations, including preprocessing with the Contrast Limited Adaptive Histogram Equalization (CLAHE) algorithm, entropy calculations [3,4], the use of the Histogram of Oriented Gradients (HOG) and, more importantly, the entropy of the depth maps obtained by additional 3D scanning.
Considering the practical applicability of the proposed approach, even a monochrome camera is sufficient to acquire the necessary data, without the time-consuming 3D scanning of the samples, which would make real-time applications almost impossible, or at least expensive and troublesome, especially in low-cost solutions.

7. Conclusions and Further Work

The presented results confirm the usefulness of the combined IQA metrics for the quality evaluation of the 3D printed surfaces, leading to a high correlation with subjective opinions on the surface aesthetics. The proposed approach to automatic objective quality evaluation of the 3D printed surfaces is based only on full-reference image quality metrics relying on the mutual similarity; hence, its application is relatively easy, since the values of the elementary metrics necessary for the combination may be calculated in parallel. The utilized individual metrics may be computed effectively [44,49], and therefore the presented approach may be successfully implemented for real-time quality monitoring during manufacturing, understood as not introducing noticeable delays in comparison to the relatively slow printing process. An important remark in view of the computation time is the possibility of increasing the correlation with MOS values by using more metrics while decreasing the number of regions.
Application of the proposed combined metrics makes it possible to increase the correlation with the subjective quality evaluation of the 3D printed surfaces by over 0.17 in comparison to the use of the best single metric, considering the prediction accuracy. A similar increase may be observed for SROCC with the increase of KROCC by more than 0.15, providing significantly higher prediction monotonicity.
As the proposed method is color-independent, the use of a monochrome camera is sufficient, although the application of some other color to grayscale conversion methods, as well as various color spaces, is planned in further research. Another direction of future work is the extension of the proposed method towards the use of some other image quality assessment methods and the integration with previously developed methods used so far for classification purposes. Nevertheless, the development of even better-correlated metrics requires additional experiments, as well as a possible extension of the developed dataset with additional samples with non-planar surfaces (with the time-consuming acquisition of subjective quality scores for the new samples). Such an extension of the proposed method to the evaluation of non-planar surfaces is possible, similarly as for the previously investigated entropy-based approaches [4]. Obviously, reasonable results may be expected only for the division into relatively small regions, additionally avoiding the mutual similarity calculations for distant image fragments due to the expected different orientations of the compared local patterns.
Considering some open challenges of the quality control in 3D printing manufacturing analyzed in the papers [59,60], another direction of further planned experiments is related to the integration of the proposed vision-based methods with electromagnetic and radiographic non-destructive testing (NDT) methods, e.g., using terahertz methods, towards the acquisition of the full 3D structure of the manufactured objects in view of their mechanical properties.

Author Contributions

Conceptualization, J.F. and K.O.; methodology, K.O. and V.L.; software, J.F. and K.O.; validation, P.L. and K.O.; formal analysis, P.L., K.O., and V.L.; investigation, J.F., K.O., and V.L.; resources, J.F. and V.L.; data curation, J.F. and P.L.; writing—original draft preparation, J.F. and K.O.; writing—review and editing, V.L.; visualization, J.F. and P.L.; J.F. worked under the supervision of K.O. All authors have read and agreed to the published version of the manuscript.

Funding

The research is partially co-financed by the Polish National Agency for Academic Exchange (NAWA) and the Ministry of Education and Science of Ukraine (agreement with KhAI no. M/45-2020) under the project no. PPN/BUA/2019/1/00074 entitled “Methods of intelligent image and video processing based on visual quality metrics for emerging applications”.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
ABS: Acrylonitrile butadiene styrene
ADD-GSIM: Analysis of distortion distribution-based Gradient SIMilarity
ADD-SSIM: Analysis of distortion distribution-based Structural SIMilarity
CAD: Computer-Aided Design
CISI: Combined Image Similarity Index
CSF: Contrast Sensitivity Function
CSSIM: Color Structural SIMilarity
CVSSI: Contrast and Visual Saliency Similarity-Induced Index
CW-SSIM: Complex Wavelet Structural SIMilarity
DCT: Discrete Cosine Transform
CLAHE: Contrast Limited Adaptive Histogram Equalization
DSS: DCT subband similarity
ESSIM: Edge Strength SIMilarity
FDM: Fused deposition modeling
FR IQA: Full-reference image quality assessment
FSIM: Feature Similarity
GM: Gradient magnitude
GSM: Gradient Similarity
HOG: Histogram of Oriented Gradients
HVS: Human Visual System
IQA: Image quality assessment
ITU: International Telecommunication Union
IW-SSIM: Information Content Weighted Structural SIMilarity
KROCC: Kendall Rank Order Correlation Coefficient
LPBF: Laser powder bed fusion
MCSD: Multiscale Contrast Similarity Deviation
MOS: Mean Opinion Score
MSE: Mean Squared Error
MS-SSIM: Multi-Scale Structural SIMilarity
NDT: Non-destructive testing
NR IQA: No-reference image quality assessment
OCT: Optical coherence tomography
PLCC: Pearson's Linear Correlation Coefficient
QILV: Quality Index based on Local Variance
SROCC: Spearman Rank Order Correlation Coefficient
SR-SIM: Spectral residual-based similarity
SSIM: Structural SIMilarity
PC: Phase congruency
PLA: Polylactic acid
PSNR: Peak Signal-to-Noise Ratio
PVC: Polyvinyl chloride
RVSIM: Riesz transform and Visual contrast sensitivity-based feature SIMilarity
UIQI: Universal Image Quality Index
VQEG: Video Quality Experts Group

References

1. Stephens, B.; Azimi, P.; Orch, Z.E.; Ramos, T. Ultrafine particle emissions from desktop 3D printers. Atmos. Environ. 2013, 79, 334–339.
2. Azimi, P.; Zhao, D.; Pouzet, C.; Crain, N.E.; Stephens, B. Emissions of Ultrafine Particles and Volatile Organic Compounds from Commercially Available Desktop Three-Dimensional Printers with Multiple Filaments. Environ. Sci. Technol. 2016, 50, 1260–1268.
3. Fastowicz, J.; Grudziński, M.; Tecław, M.; Okarma, K. Objective 3D Printed Surface Quality Assessment Based on Entropy of Depth Maps. Entropy 2019, 21, 97.
4. Okarma, K.; Fastowicz, J. Improved quality assessment of colour surfaces for additive manufacturing based on image entropy. Pattern Anal. Appl. 2020, 23, 1035–1047.
5. Cheng, Y.; Jafari, M.A. Vision-Based Online Process Control in Manufacturing Applications. IEEE Trans. Autom. Sci. Eng. 2008, 5, 140–153.
6. Delli, U.; Chang, S. Automated Process Monitoring in 3D Printing Using Supervised Machine Learning. Procedia Manuf. 2018, 26, 865–870.
7. Szkilnyk, G.; Hughes, K.; Surgenor, B. Vision Based Fault Detection of Automated Assembly Equipment. In Proceedings of the ASME/IEEE International Conference on Mechatronic and Embedded Systems and Applications, Parts A and B, Qingdao, China, 15–17 July 2010; ASMEDC: Washington, DC, USA, 2011; Volume 3, pp. 691–697.
8. Chauhan, V.; Surgenor, B. A Comparative Study of Machine Vision Based Methods for Fault Detection in an Automated Assembly Machine. Procedia Manuf. 2015, 1, 416–428.
9. Chauhan, V.; Surgenor, B. Fault detection and classification in automated assembly machines using machine vision. Int. J. Adv. Manuf. Technol. 2017, 90, 2491–2512.
10. Fang, T.; Jafari, M.A.; Bakhadyrov, I.; Safari, A.; Danforth, S.; Langrana, N. Online defect detection in layered manufacturing using process signature. In Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, San Diego, CA, USA, 14 October 1998; IEEE: Piscataway, NJ, USA, 1998; Volume 5, pp. 4373–4378.
11. Fang, T.; Jafari, M.A.; Danforth, S.C.; Safari, A. Signature analysis and defect detection in layered manufacturing of ceramic sensors and actuators. Mach. Vis. Appl. 2003, 15, 63–75.
12. Gardner, M.R.; Lewis, A.; Park, J.; McElroy, A.B.; Estrada, A.D.; Fish, S.; Beaman, J.J.; Milner, T.E. In situ process monitoring in selective laser sintering using optical coherence tomography. Opt. Eng. 2018, 57, 1.
13. Lane, B.; Moylan, S.; Whitenton, E.P.; Ma, L. Thermographic measurements of the commercial laser powder bed fusion process at NIST. Rapid Prototyp. J. 2016, 22, 778–787.
14. Busch, S.F.; Weidenbach, M.; Fey, M.; Schäfer, F.; Probst, T.; Koch, M. Optical Properties of 3D Printable Plastics in the THz Regime and their Application for 3D Printed THz Optics. J. Infrared Millim. Terahertz Waves 2014, 35, 993–997.
15. Straub, J. Physical security and cyber security issues and human error prevention for 3D printed objects: Detecting the use of an incorrect printing material. In Dimensional Optical Metrology and Inspection for Practical Applications VI, Proceedings of the SPIE Commercial + Scientific Sensing and Imaging, Anaheim, CA, USA, 9–13 April 2017; Harding, K.G., Zhang, S., Eds.; SPIE: Bellingham, WA, USA, 2017; Volume 10220, pp. 90–105.
16. Straub, J. Identifying positioning-based attacks against 3D printed objects and the 3D printing process. In Pattern Recognition and Tracking XXVIII, Proceedings of the SPIE Defense + Security, Anaheim, CA, USA, 16 June 2017; Alam, M.S., Ed.; SPIE: Bellingham, WA, USA, 2017; Volume 10203.
17. Zeltmann, S.E.; Gupta, N.; Tsoutsos, N.G.; Maniatakos, M.; Rajendran, J.; Karri, R. Manufacturing and Security Challenges in 3D Printing. JOM 2016, 68, 1872–1881.
18. Holzmond, O.; Li, X. In situ real time defect detection of 3D printed parts. Addit. Manuf. 2017, 17, 135–142.
19. Scime, L.; Beuth, J. Anomaly detection and classification in a laser powder bed additive manufacturing process using a trained computer vision algorithm. Addit. Manuf. 2018, 19, 114–126.
20. Tourloukis, G.; Stoyanov, S.; Tilford, T.; Bailey, C. Data driven approach to quality assessment of 3D printed electronic products. In Proceedings of the 38th International Spring Seminar on Electronics Technology (ISSE), Eger, Hungary, 6–10 May 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 300–305.
21. Makagonov, N.G.; Blinova, E.M.; Bezukladnikov, I.I. Development of visual inspection systems for 3D printing. In Proceedings of the 2017 IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering (EIConRus), St. Petersburg, Russia, 1–3 February 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1463–1465.
22. Fok, K.; Cheng, C.; Ganganath, N.; Iu, H.H.; Tse, C.K. An ACO-Based Tool-Path Optimizer for 3-D Printing Applications. IEEE Trans. Ind. Inform. 2019, 15, 2277–2287.
23. Sitthi-Amorn, P.; Ramos, J.E.; Wangy, Y.; Kwan, J.; Lan, J.; Wang, W.; Matusik, W. MultiFab: A Machine Vision Assisted Platform for Multi-material 3D Printing. ACM Trans. Graph. 2015, 34, 1–11.
24. Straub, J. Initial Work on the Characterization of Additive Manufacturing (3D Printing) Using Software Image Analysis. Machines 2015, 3, 55–71.
25. Straub, J. An approach to detecting deliberately introduced defects and micro-defects in 3D printed objects. In Pattern Recognition and Tracking XXVIII, Proceedings of the SPIE Defense + Security, Anaheim, CA, USA, 16 June 2017; Alam, M.S., Ed.; SPIE: Bellingham, WA, USA, 2017; Volume 10203.
26. Bi, Q.; Wang, M.; Lai, M.; Lin, J.; Zhang, J.; Liu, X. Automatic surface inspection for S-PVC using a composite vision-based method. Appl. Opt. 2020, 59, 1008.
27. Zhai, G.; Min, X. Perceptual image quality assessment: A survey. Sci. China Inf. Sci. 2020, 63.
28. Wang, Z.; Bovik, A. A universal image quality index. IEEE Signal Process. Lett. 2002, 9, 81–84.
29. Wang, Z.; Bovik, A.; Sheikh, H.; Simoncelli, E. Image Quality Assessment: From Error Visibility to Structural Similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
30. Wang, Z.; Simoncelli, E.; Bovik, A. Multiscale structural similarity for image quality assessment. In Proceedings of the 37th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 9–12 November 2003; IEEE: Piscataway, NJ, USA, 2003; pp. 1398–1402.
31. Aja-Fernandez, S.; Estepar, R.S.J.; Alberola-Lopez, C.; Westin, C.F. Image Quality Assessment based on Local Variance. In Proceedings of the 2006 International Conference of the IEEE Engineering in Medicine and Biology Society, New York, NY, USA, 30 August–3 September 2006; IEEE: Piscataway, NJ, USA, 2006.
32. Sampat, M.; Wang, Z.; Gupta, S.; Bovik, A.; Markey, M. Complex Wavelet Structural Similarity: A New Image Similarity Index. IEEE Trans. Image Process. 2009, 18, 2385–2401.
33. Wang, Z.; Li, Q. Information Content Weighting for Perceptual Image Quality Assessment. IEEE Trans. Image Process. 2011, 20, 1185–1198.
34. Zhang, L.; Zhang, L.; Mou, X.; Zhang, D. FSIM: A Feature Similarity Index for Image Quality Assessment. IEEE Trans. Image Process. 2011, 20, 2378–2386.
35. Liu, A.; Lin, W.; Narwaria, M. Image Quality Assessment Based on Gradient Similarity. IEEE Trans. Image Process. 2012, 21, 1500–1512.
36. Zhang, L.; Li, H. SR-SIM: A fast and high performance IQA index based on spectral residual. In Proceedings of the 2012 19th IEEE International Conference on Image Processing (ICIP), Orlando, FL, USA, 30 September–3 October 2012; pp. 1473–1476.
37. Zhang, X.; Feng, X.; Wang, W.; Xue, W. Edge Strength Similarity for Image Quality Assessment. IEEE Signal Process. Lett. 2013, 20, 319–322.
38. Balanov, A.; Schwartz, A.; Moshe, Y.; Peleg, N. Image quality assessment based on DCT subband similarity. In Proceedings of the 2015 IEEE International Conference on Image Processing (ICIP), Quebec City, QC, Canada, 27–30 September 2015; IEEE: Piscataway, NJ, USA, 2015.
39. Wang, T.; Zhang, L.; Jia, H.; Li, B.; Shu, H. Multiscale contrast similarity deviation: An effective and efficient index for perceptual image quality assessment. Signal Process. Image Commun. 2016, 45, 1–9.
40. Gu, K.; Wang, S.; Zhai, G.; Lin, W.; Yang, X.; Zhang, W. Analysis of Distortion Distribution for Pooling in Image Quality Prediction. IEEE Trans. Broadcast. 2016, 62, 446–456.
41. Ponomarenko, M.; Egiazarian, K.; Lukin, V.; Abramova, V. Structural Similarity Index with Predictability of Image Blocks. In Proceedings of the 2018 IEEE 17th International Conference on Mathematical Methods in Electromagnetic Theory (MMET), Kiev, Ukraine, 2–5 July 2018; IEEE: Piscataway, NJ, USA, 2018.
42. Jia, H.; Zhang, L.; Wang, T. Contrast and Visual Saliency Similarity-Induced Index for Assessing Image Quality. IEEE Access 2018, 6, 65885–65893.
43. Yang, G.; Li, D.; Lu, F.; Liao, Y.; Yang, W. RVSIM: A feature similarity method for full-reference image quality assessment. EURASIP J. Image Video Process. 2018, 2018, 6.
44. Ieremeiev, O.; Lukin, V.; Okarma, K.; Egiazarian, K. Full-Reference Quality Metric Based on Neural Network to Assess the Visual Quality of Remote Sensing Images. Remote Sens. 2020, 12, 2349.
45. Okarma, K. Combined Full-Reference Image Quality Metric Linearly Correlated with Subjective Assessment. In Artificial Intelligence and Soft Computing, Proceedings of the 10th International Conference Proceedings, ICAISC 2010, Zakopane, Poland, 13–17 June 2010; Rutkowski, L., Scherer, R., Tadeusiewicz, R., Zadeh, L.A., Zurada, J.M., Eds.; Springer: Berlin/Heidelberg, Germany, 2010; Volume 6113, pp. 539–546.
46. Sheikh, H.; Bovik, A. Image information and visual quality. IEEE Trans. Image Process. 2006, 15, 430–444.
47. Mansouri, A.; Aznaveh, A.M.; Torkamani-Azar, F.; Jahanshahi, J.A. Image quality assessment using the singular value decomposition theorem. Opt. Rev. 2009, 16, 49–53.
48. Okarma, K. Combined image similarity index. Opt. Rev. 2012, 19, 349–354.
49. Athar, S.; Wang, Z. A Comprehensive Performance Evaluation of Image Quality Assessment Algorithms. IEEE Access 2019, 7, 140030–140070.
50. Oszust, M. A Regression-Based Family of Measures for Full-Reference Image Quality Assessment. Meas. Sci. Rev. 2016, 16, 316–325.
51. Liu, T.J.; Lin, W.; Kuo, C.C.J. Image Quality Assessment Using Multi-Method Fusion. IEEE Trans. Image Process. 2013, 22, 1793–1807.
52. Lukin, V.V.; Ponomarenko, N.N.; Ieremeiev, O.I.; Egiazarian, K.O.; Astola, J. Combining full-reference image visual quality metrics by neural network. In Human Vision and Electronic Imaging XX, Proceedings of the SPIE/IS&T Electronic Imaging, San Francisco, CA, USA, 10–11 February 2015; Rogowitz, B.E., Pappas, T.N., de Ridder, H., Eds.; SPIE: Bellingham, WA, USA, 2015; Volume 9394.
53. Oszust, M. Full-Reference Image Quality Assessment with Linear Combination of Genetically Selected Quality Measures. PLoS ONE 2016, 11, e0158333.
54. Okarma, K.; Fastowicz, J.; Tecław, M. Application of Structural Similarity Based Metrics for Quality Assessment of 3D Prints. In Computer Vision and Graphics, Proceedings of the International Conference, ICCVG 2016, Warsaw, Poland, 19–21 September 2016; Chmielewski, L.J., Datta, A., Kozera, R., Wojciechowski, K., Eds.; Springer International Publishing: Cham, Switzerland, 2016; Volume 9972, pp. 244–252.
  55. Okarma, K.; Fastowicz, J. Adaptation of Full-Reference Image Quality Assessment Methods for Automatic Visual Evaluation of the Surface Quality of 3D Prints. Elektron. Elektrotechnika 2019, 25, 57–62. [Google Scholar] [CrossRef]
  56. Fastowicz, J.; Okarma, K. Fast quality assessment of 3D printed surfaces based on structural similarity of image regions. In Proceedings of the 2018 International Interdisciplinary PhD Workshop (IIPhDW), Świnoujście, Poland, 9–12 May 2018. [Google Scholar] [CrossRef]
  57. International Telecommunication Union. Recommendation ITU-R BT.601-7—Studio Encoding Parameters of Digital Television for Standard 4:3 and Wide-Screen 16:9 Aspect Ratios; International Telecommunication Union: Geneva, Switzerland, 2011. [Google Scholar]
  58. Fastowicz, J.; Lech, P.; Okarma, K. Combined Metrics for Quality Assessment of 3D Printed Surfaces for Aesthetic Purposes: Towards Higher Accordance with Subjective Evaluations. In Computational Science—ICCS 2020, Proceedings of the 20th International Conference Proceedings, Amsterdam, The Netherlands, 3–5 June 2020; Krzhizhanovskaya, V.V., Závodszky, G., Lees, M.H., Dongarra, J.J., Sloot, P.M.A., Brissos, S., Teixeira, J., Eds.; Springer International Publishing: Cham, Switzerland, 2020; Volume 12143, pp. 326–339. [Google Scholar] [CrossRef]
  59. Kim, H.; Lin, Y.; Tseng, T.L.B. A review on quality control in additive manufacturing. Rapid Prototyp. J. 2018, 24, 645–669. [Google Scholar] [CrossRef] [Green Version]
  60. Wu, H.C.; Chen, T.C.T. Quality control issues in 3D-printing manufacturing: A review. Rapid Prototyp. J. 2018, 24, 607–614. [Google Scholar] [CrossRef]
Figure 1. Sample images from the developed database with their average subjective quality scores, sorted from highest to lowest: (a) high-quality salmon pink sample (MOS = 4.7253), (b) high-quality pink sample (MOS = 4.6923), (c) high-quality brown sample (MOS = 4.1333), (d) moderately high-quality red sample (MOS = 2.4130), (e) moderately low-quality yellow sample (MOS = 1.4130), (f) low-quality dark green sample (MOS = 1.1868), (g) low-quality black sample (MOS = 1.0978), (h) low-quality fluorescent pink sample (MOS = 1.0110), (i) low-quality blue sample (MOS = 1.0000).
Figure 2. Illustration of the idea of mutual similarity calculation for the division into 4 and 9 regions, requiring 6 and 36 computations of the FR metric, respectively.
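To make the mutual similarity idea from Figure 2 concrete, the sketch below divides a surface image into a square grid of regions and averages an elementary full-reference metric over all pairs of regions, so that no perfect-quality reference image is required. SSIM from scikit-image stands in for the elementary FR metric here; the grid size and the library choice are illustrative assumptions, not the authors' exact implementation.

```python
# A minimal sketch of the average mutual similarity idea from Figure 2,
# assuming SSIM as the elementary full-reference (FR) metric.
from itertools import combinations

import numpy as np
from skimage.metrics import structural_similarity as ssim


def average_mutual_similarity(image: np.ndarray, grid: int = 3) -> float:
    """Split a grayscale image into a grid x grid set of regions and
    average the FR metric over all C(grid**2, 2) pairs of regions:
    6 pairs for 4 regions, 36 pairs for 9 regions (cf. Figure 2)."""
    h, w = image.shape
    rh, rw = h // grid, w // grid
    regions = [
        image[i * rh:(i + 1) * rh, j * rw:(j + 1) * rw]
        for i in range(grid)
        for j in range(grid)
    ]
    scores = [
        ssim(a, b, data_range=image.max() - image.min())
        for a, b in combinations(regions, 2)
    ]
    return float(np.mean(scores))
```

With grid = 2 and grid = 3 this reproduces the 6 and 36 pairwise computations shown in Figure 2, while grid = 4 corresponds to the 16-region division considered in Tables 1–4 (120 pairs).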
Figure 3. Illustration of the experimental procedure.
Figure 4. Scatter plots illustrating the linearity of the relations between the investigated objective metrics (computed for the division into 16 regions) and MOS values: (a) for FSIM, (b) for the "best" combination of two metrics, (c) for the "best" combination of three metrics, (d) for the "best" combination of five metrics.
Table 1. The correlation coefficients obtained for 107 images using the proposed mutual similarity-based approach for the elementary metrics.
| Metric | PLCC (16 regions) | SROCC (16) | KROCC (16) | PLCC (9 regions) | SROCC (9) | KROCC (9) | PLCC (4 regions) | SROCC (4) | KROCC (4) |
|---|---|---|---|---|---|---|---|---|---|
| FSIM [34] | 0.6756 | 0.6865 | 0.5195 | 0.6820 | 0.6845 | 0.5185 | 0.6780 | 0.6826 | 0.5114 |
| CW-SSIM [32] | 0.5929 | 0.5823 | 0.4028 | 0.6323 | 0.6098 | 0.4232 | 0.5807 | 0.5633 | 0.3981 |
| AD-GSIM [40] | 0.4081 | 0.3515 | 0.2453 | 0.4020 | 0.3470 | 0.2414 | 0.3873 | 0.3324 | 0.2354 |
| DSS [38] | 0.4066 | 0.3523 | 0.2411 | 0.3842 | 0.3220 | 0.2210 | 0.3921 | 0.3176 | 0.2142 |
| AD-SSIM [40] | 0.4017 | 0.3574 | 0.2562 | 0.3834 | 0.3209 | 0.2270 | 0.3492 | 0.2932 | 0.2065 |
| SSIM [29] | 0.3996 | 0.4012 | 0.2746 | 0.3905 | 0.4039 | 0.2661 | 0.3048 | 0.3017 | 0.1938 |
| CSSIM4 [41] | 0.3596 | 0.3296 | 0.2354 | 0.3329 | 0.2818 | 0.1977 | 0.3233 | 0.2851 | 0.1991 |
| IW-SSIM [33] | 0.3549 | 0.3669 | 0.2619 | 0.3230 | 0.2997 | 0.2044 | 0.3169 | 0.2473 | 0.1627 |
| SR-SIM [36] | 0.3173 | 0.2441 | 0.1588 | 0.3174 | 0.2497 | 0.1652 | 0.3878 | 0.3160 | 0.2150 |
| MCSD [39] | 0.3106 | 0.2958 | 0.2164 | 0.3008 | 0.2889 | 0.2090 | 0.2952 | 0.2825 | 0.2051 |
| QILV [31] | 0.3092 | 0.1330 | 0.0868 | 0.3478 | 0.2662 | 0.1832 | 0.4316 | 0.3555 | 0.2534 |
| CVSSI [42] | 0.2558 | 0.2083 | 0.1370 | 0.2097 | 0.1492 | 0.0935 | 0.1667 | 0.1068 | 0.0593 |
| ESSIM [37] | 0.1865 | 0.2340 | 0.1631 | 0.1754 | 0.2354 | 0.1648 | 0.3160 | 0.2868 | 0.2026 |
| CSSIM [41] | 0.1523 | 0.1078 | 0.0724 | 0.1251 | 0.0755 | 0.0519 | 0.1293 | 0.0862 | 0.0632 |
| SSIM4 [41] | 0.1283 | 0.0852 | 0.0565 | 0.1031 | 0.0447 | 0.0304 | 0.1085 | 0.0673 | 0.0462 |
| GSM [35] | 0.1103 | 0.1689 | 0.1182 | 0.0991 | 0.1631 | 0.1133 | 0.2102 | 0.2253 | 0.1585 |
| RVSIM [43] | 0.0267 | 0.0198 | 0.0219 | 0.0546 | 0.0433 | 0.0395 | 0.0114 | 0.0247 | 0.0304 |
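The PLCC, SROCC, and KROCC values reported in Tables 1–4 are the standard Pearson linear, Spearman rank-order, and Kendall rank-order correlation coefficients between the objective scores and the MOS values. A hedged illustration of their computation with SciPy is given below; the arrays are placeholders, not data from the paper's dataset.

```python
# Computing PLCC, SROCC, and KROCC between objective scores and MOS values.
import numpy as np
from scipy import stats

objective = np.array([0.91, 0.85, 0.64, 0.42, 0.33])  # hypothetical metric values
mos = np.array([4.73, 4.13, 2.41, 1.41, 1.00])        # hypothetical subjective scores

plcc, _ = stats.pearsonr(objective, mos)     # Pearson linear correlation
srocc, _ = stats.spearmanr(objective, mos)   # Spearman rank-order correlation
krocc, _ = stats.kendalltau(objective, mos)  # Kendall rank-order correlation
print(f"PLCC={plcc:.4f}, SROCC={srocc:.4f}, KROCC={krocc:.4f}")
```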
Table 2. The PLCC values obtained for 107 images using 25 “best” combinations of two elementary metrics assuming the division into 16 regions.
| Metrics | PLCC | Metrics | PLCC | Metrics | PLCC | Metrics | PLCC |
|---|---|---|---|---|---|---|---|
| FSIM + MCSD | 0.8192 | FSIM + SR-SIM | 0.7862 | FSIM + SSIM4 | 0.7581 | FSIM + RVSIM | 0.6875 |
| FSIM + DSS | 0.8029 | FSIM + AD-SSIM | 0.7861 | FSIM + ESSIM | 0.7377 | CSSIM + CSSIM4 | 0.6852 |
| FSIM + CVSSI | 0.8008 | FSIM + CW-SSIM | 0.7766 | FSIM + GSM | 0.7312 | FSIM + SSIM | 0.6809 |
| FSIM + IW-SSIM | 0.7981 | FSIM + CSSIM4 | 0.7719 | SSIM + AD-GSIM | 0.7081 | FSIM + RVSIM | 0.6762 |
| FSIM + AD-GSIM | 0.7921 | FSIM + CSSIM | 0.7593 | FSIM + QILV | 0.6952 | SSIM + CW-SSIM | 0.6731 |
Table 3. The PLCC values obtained for 107 images using 24 “best” combinations of three elementary metrics assuming the division into 16 regions.
| Metrics | PLCC | Metrics | PLCC | Metrics | PLCC |
|---|---|---|---|---|---|
| FSIM + CW-SSIM + MCSD | 0.8472 | FSIM + MCSD + GSM | 0.8270 | FSIM + MCSD + AD-SSIM | 0.8221 |
| FSIM + CW-SSIM + DSS | 0.8379 | FSIM + ESSIM + MCSD | 0.8256 | FSIM + MCSD + CSSIM | 0.8221 |
| FSIM + CW-SSIM + IW-SSIM | 0.8356 | FSIM + CW-SSIM + CSSIM4 | 0.8250 | FSIM + CVSSI + GSM | 0.8219 |
| FSIM + CW-SSIM + CVSSI | 0.8348 | FSIM + CW-SSIM + SR-SIM | 0.8246 | FSIM + MCSD + RVSIM | 0.8217 |
| FSIM + AD-SSIM + SR-SIM | 0.8341 | FSIM + CW-SSIM + AD-SSIM | 0.8239 | FSIM + DSS + AD-SSIM | 0.8213 |
| FSIM + CW-SSIM + AD-GSIM | 0.8301 | FSIM + MCSD + SR-SIM | 0.8238 | FSIM + ESSIM + AD-GSIM | 0.8213 |
| FSIM + DSS + ESSIM | 0.8284 | FSIM + MCSD + SSIM4 | 0.8225 | FSIM + MCSD + QILV | 0.8201 |
| FSIM + DSS + GSM | 0.8274 | FSIM + MCSD + SSIM | 0.8224 | FSIM + AD-GSIM + GSM | 0.8199 |
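Tables 2 and 3 report the best-performing combinations found by evaluating many pairs and triples of elementary metrics against the subjective scores. A minimal sketch of such an exhaustive search is shown below; the plain product used as the combination rule and the placeholder data are illustrative assumptions, as the paper's combined metrics may rely on optimized weights instead.

```python
# A minimal sketch of the exhaustive search implied by Tables 2 and 3:
# evaluate every k-element combination of elementary metrics and rank
# the combinations by PLCC against MOS.
from itertools import combinations

import numpy as np
from scipy import stats


def best_combinations(metric_scores, mos, k=2, top=5):
    """metric_scores: dict mapping metric names to per-image score arrays.
    Returns the `top` k-metric combinations with the highest PLCC vs. MOS."""
    results = []
    for names in combinations(metric_scores, k):
        # Combine metric values with a plain product (illustrative choice).
        combined = np.prod([metric_scores[n] for n in names], axis=0)
        plcc, _ = stats.pearsonr(combined, mos)
        results.append((" + ".join(names), plcc))
    return sorted(results, key=lambda r: r[1], reverse=True)[:top]


# Hypothetical usage with placeholder data for three metrics and ten images:
rng = np.random.default_rng(42)
metric_scores = {name: rng.uniform(0.2, 1.0, 10) for name in ("FSIM", "CW-SSIM", "MCSD")}
mos = rng.uniform(1.0, 5.0, 10)
print(best_combinations(metric_scores, mos, k=2))
```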
Table 4. The correlation coefficients obtained for 107 images using the “best” combinations of four and five elementary metrics assuming the division into 4, 9, and 16 regions.
| Number of Regions | Metrics | PLCC | SROCC | KROCC |
|---|---|---|---|---|
| 4 | FSIM + CW-SSIM + MCSD + ESSIM | 0.8405 | 0.8431 | 0.6639 |
| | FSIM + CW-SSIM + MCSD + GSM | 0.8474 | 0.8485 | 0.6710 |
| | FSIM + CW-SSIM + MCSD + ESSIM + GSM | 0.8565 | 0.8586 | 0.6826 |
| 9 | FSIM + CW-SSIM + MCSD + ESSIM | 0.8463 | 0.8357 | 0.6558 |
| | FSIM + CW-SSIM + MCSD + GSM | 0.8487 | 0.8385 | 0.6565 |
| | FSIM + CW-SSIM + MCSD + ESSIM + GSM | 0.8495 | 0.8397 | 0.6604 |
| 16 | FSIM + CW-SSIM + MCSD + ESSIM | 0.8502 | 0.8453 | 0.6660 |
| | FSIM + CW-SSIM + MCSD + GSM | 0.8514 | 0.8443 | 0.6653 |
| | FSIM + CW-SSIM + MCSD + ESSIM + GSM | 0.8537 | 0.8480 | 0.6748 |
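A common way to build such combined metrics is a weighted product, Q = ∏ᵢ Mᵢ^wᵢ, with the exponents tuned to maximize the correlation with MOS. The sketch below illustrates this tuning with a general-purpose optimizer on placeholder data; the weighted-product form, the Nelder–Mead optimizer, and the data are all assumptions for illustration, not the authors' published optimization procedure.

```python
# A sketch of tuning exponents in a weighted-product combined metric,
# Q = prod_i M_i**w_i, so that the PLCC with MOS is maximized.
import numpy as np
from scipy import stats
from scipy.optimize import minimize


def neg_plcc(weights, scores, mos):
    """Negative PLCC of the weighted-product combination (to be minimized).
    scores: array of shape (n_metrics, n_images) with positive values."""
    combined = np.prod(scores ** weights[:, None], axis=0)
    return -stats.pearsonr(combined, mos)[0]


# Placeholder data: five elementary mutual-similarity metrics (e.g., FSIM,
# CW-SSIM, MCSD, ESSIM, GSM) over forty hypothetical images.
rng = np.random.default_rng(0)
scores = rng.uniform(0.2, 1.0, size=(5, 40))
mos = rng.uniform(1.0, 5.0, size=40)

result = minimize(neg_plcc, x0=np.ones(5), args=(scores, mos), method="Nelder-Mead")
print("optimized exponents:", result.x, "PLCC:", -result.fun)
```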
