Article

Classification of Visual Smoothness Standards Using Multi-Scale Areal Texture Parameters and Low-Magnification Coherence Scanning Interferometry

Jesse Redford * and Brigid Mullany *
Department of Mechanical Engineering, UNC Charlotte, Charlotte, NC 28223, USA
* Authors to whom correspondence should be addressed.
Materials 2024, 17(7), 1653; https://doi.org/10.3390/ma17071653
Submission received: 19 November 2023 / Revised: 7 March 2024 / Accepted: 12 March 2024 / Published: 3 April 2024

Abstract

The ability to objectively specify surface finish to ensure consistent visual appearance addresses a vital need in surface coating engineering. This work demonstrates how a computational framework, called surface quality and inspection descriptors (SQuID™), can be leveraged to effectively rank different grades of surface finish appearances. ISO 25178-2 areal surface metrics extracted from bandpass-filtered measurements of a set of ten visual smoothness standards taken on a coherence scanning interferometer are used to quantify different grades of powder-coated surface finish. The ability to automatically classify the standard tiles using multi-scale areal texture parameters is compared to parameters obtained from a hand-held gloss meter. The results indicate that the ten different surface finishes can be automatically classified with accuracies as low as 65% and as high as 99%, depending on the filtering and parameters used to quantify the surfaces. The highest classification accuracy is achieved using only five multi-scale topography descriptions of the surface.

1. Introduction

Surface texture can impact a manufactured surface’s functionality and aesthetic quality. Some examples of where surface texture affects functional performance include wettability [1,2,3], reflectance [4,5], corrosion resistance [6,7], fatigue [7,8], and heat exchange [9,10]. While the surface aesthetic (visual and tactile) includes human perception and thus encroaches on the psychophysics domain [11,12,13,14], the topographical amplitudes and spatial relationships between different features on the surface, along with other traits such as color, thermal conductivity, and modulus, affect how the surface is perceived and valued [12,13,15,16,17,18]. Central to understanding how surface texture affects functionality and aesthetics is to isolate metrics capable of quantifying relevant surface topographies at the appropriate length scales. Similarly, quantitative metrics are needed to assist in fine-tuning processes used to generate the surfaces and for subsequent process control and part acceptance. That said, the identification of appropriate quantitative metrics is far from trivial [19,20]. While surfaces can be easily measured in digital environments, merely assigning a numerical assessment to a surface that does not correlate with its intended function provides incomplete and potentially misleading information. Additionally, while visual comparisons to defined standards are utilized in many instances, especially regarding aesthetics, human visual inspection (HVI) processes are inherently subjective, as discussed in [11]. There is a strong desire to replace HVI with automated systems and a longstanding goal of definitively linking functionality to surface topographies. However, there remains a need to make it easier to find suitable metrics that can reliably discriminate between surfaces grouped or ranked by functionality, subjective appearance, or, indeed, processing-induced differences [21,22,23].

1.1. Challenges in Quantifying Surface Quality

Information on surface quality can be obtained by multiple modes of acquisition and processing, giving different outputs. For example, functional gloss meter readings quantify a surface’s specular reflectance and/or the directional distribution of light reflected by surfaces illuminated under specified conditions [24]. The gloss unit (GU) reports the amount of luminous flux reflected from a specimen relative to that of a reference glass tile with a known refractive index (1.567) having a specular reflectance of 100 GU at a specified angle and wavelength (λ = 587.6 nm or 546.1 nm) [25]. Typical gloss meter systems are designed around standard measurement angles of 20° for high-gloss surfaces, 60° for semi-gloss surfaces, and 85° for matte surfaces [25,26]. Although multiple surface characteristics influence gloss values derived from these measurement angles, they may fail to characterize certain aspects that reduce appearance quality (haze, orange peel, etc.). As such, additional appearance terminology and parameters characterizing surface quality have been developed [27], such as the distinctness of image (DOI), the “aspect of gloss characterized by the sharpness of images of objects produced by reflection at a surface”, which is used to quantify the severity of orange peel. DOI values close to 100 indicate a very high sharpness, and values close to 0 indicate a very high induced waviness in the reflected image [28]. Other industry parameters have been developed by Rhopoint, two of which are Rspec for smoothness quantification and the reflected image quality (RIQ) parameter [29]. On the other hand, conventional imaging techniques provide intensity images of how light interacts with the different spatial regions of the surface. While fast and effective in providing insights into the nature of the surface texture, they are sensitive to lighting conditions and lack topographical height data to provide full three-dimensional representations. Conventional image-processing metrics used to quantify surface textures, such as grayscale histograms, local binary patterns (LBP), and gray-level co-occurrence matrices (GLCMs), are often difficult to interpret with respect to the physical nature of the surface and thus have limited value in understanding what drives a particular functionality. As such, they will not be the focus of this paper.
A surface’s composition and geometric topography ultimately dictate its functional performance. Tactile or optical-based instruments can provide either line or areal height information pertaining to the geometric nature of the surface topography [30]. The speed at which a measurement is taken will vary depending on the system, with coherence scanning interferometry (CSI) being among the fastest but still slower than high-speed intensity imaging systems. Datasets from these systems can provide a wealth of geometrical topographical data with the scale of resolvable features dependent on system-limiting factors, tip radii for tactile measurements, and numerical aperture considerations for optical systems [31]. While the wealth of the available data is advantageous, effectively processing the data to give useful metrics still poses challenges [11,21]. Statistical parameters used to describe surfaces within the manufacturing community include height, spatial, hybrid, functional, and other parameters detailed in the ISO 25178-2 standard [32]. Even though there is a shift away from relying on two of the most commonly utilized areal metrics [33], the arithmetic average roughness, Sa, and the root mean square roughness, Sq, they remain prevalent despite having potentially similar values for surfaces that drastically differ in their topographical arrangements. Using other standardized parameters is promising, but questions about how alternative parameters should be selected for a specific application still arise. Surface geometries can be complicated; using only one or a small finite number of statistical parameters may fail to provide a full description. Increasing the number of parameters used can result in a more accurate description [34], but excessive reporting of redundant and/or irrelevant parameters leads to what Whitehouse coined in 1982 as “parameter rash” [35]. Further complicating the selection process, surface texture and its relationship to different functions is inherently multi-scale [22]. Single or multiple parameters calculated only at one specific spatial bandwidth may not always provide an adequate description. Even for the same surface, texture parameter values can change by orders of magnitude depending on the definition area of the measurement, sampling interval, form-removal operations, and cutoff frequencies used for high-, low-, or band-pass filtering of the data. If the search space is left unbounded, the number of parameter and length scale combinations is seemingly infinite. While particular combinations of areal parameters may provide a strong foundation for correlating process variations, function, or subjective visual appearance with statistical representations of the surface’s topography, it is crucial to understand that a heuristic approach to parameter selection might not always yield the most useful surface descriptions for a particular application. Objective and data-driven methodologies should complement heuristic approaches to ensure more accurate and effective surface characterizations, especially when aiming to capture subjective aspects of surface quality. However, using black-box machine learning (ML) models to carry out data-driven methodologies for classifying surface quality can limit the ability of designers, manufacturers, and researchers alike to use texture parameters to unambiguously communicate surface quality expectations and specifications for inspection.
Therefore, achieving a balance between traditional methods and AI-driven approaches is essential. Integrating the strengths of both can lead to more robust and effective surface quality assessments while addressing the limitations of each approach.
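As a point of reference for the parameter discussion above, the short sketch below (an illustrative example only, not the authors' code) shows how the two most common areal amplitude parameters, Sa and Sq, can be computed from a height map, and why two surfaces with very different spatial arrangements can still return similar amplitude values.

```python
# Illustrative sketch: Sa and Sq from a height map z (heights in micrometres).
# A full ISO 25178-2 implementation would also remove a best-fit plane and apply
# the standard S-filter/L-filter nesting indices; those steps are omitted here.
import numpy as np

def sa_sq(z):
    zc = z - np.nanmean(z)                      # reference heights to the mean
    sa = np.nanmean(np.abs(zc))                 # Sa: mean absolute deviation
    sq = np.sqrt(np.nanmean(zc ** 2))           # Sq: root-mean-square deviation
    return sa, sq

# Two synthetic surfaces with similar Sa but very different spatial content:
x = np.linspace(0.0, 1.0, 512)
wavy = 0.5 * np.sin(2 * np.pi * 3 * x) * np.ones((512, 1))       # long-wavelength undulation
rough = np.random.default_rng(0).normal(0.0, 0.39, (512, 512))   # fine random texture
print(sa_sq(wavy), sa_sq(rough))                # comparable Sa, different topographies
```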

1.2. Paper Description and Organization

To address the challenges of selecting the relevant length scale and parameters to reliably differentiate between different surface texture classes, this paper outlines an approach demonstrated on a set of ten ranked visual smoothness panels available from the Powder Coating Institute (PCI). These panels are commonly used in industry for HVI of powder coatings. Currently, there is no clear pathway for automatically classifying a workpiece finish as one of the ten PCI textured samples. The ASTM D3451-06 standard [36] for testing powder coatings notes that subjective HVI comparison is the most common method to quantify differences in the surface profile, primarily the levels of reflectance and orange peel of cured powder coatings. A second method refers to a portable instrument that, when scanned across the surface, acts like the human eye and detects differences in reflectance (light → dark areas), then transforms them into a numerical value relating to orange peel. A third method, alluded to within the standard, mentions a combination of surface amplitude and spatial content but clearly defines neither.
To circumvent the need for subjective visual analysis, a previously published systematic framework, SQuID™ [21], will be built upon to derive quantitative descriptions using areal maps subdivided into multiple length scale measurements and a set of logical rules to objectively classify the aforementioned set of subjectively graded visual smoothness panels. Section 2.1 includes an overview of the standardized visual smoothness tiles investigated in this study. Section 2.2 includes acquisition and data-collection details of the tiles using a CSI and a gloss meter. Gloss meter readings are included in the analysis due to their simplicity. Areal parameters considered and spatial bandwidths used in generating the datasets for subsequent analysis are covered in Section 2.3. The parameter selection and classification methodology employed for down-selecting metrics and classifying the standard appearance tiles is covered in Section 3. The performance of a decision tree classifier and a summary of the down-selected parameters using this algorithmic methodology is covered in Section 4. A summary of the results and areas for future work is outlined in Section 5.

2. Samples and Measurements

2.1. Visual Smoothness Standards

The PCI visual smoothness standards shown in Figure 1 cost approximately 650 USD and consist of ten 100 mm × 150 mm black powder-coated tiles with varying degrees of smoothness. The samples are numbered such that tile 01 appears to be the “roughest” sample, containing the most orange peel, and tile 10 is the “smoothest”, visually exhibiting the least orange peel and the highest reflectance levels. The orange peel texture appears as its name suggests: visible local surface undulations or waviness. It is most visible in tiles 01 and 02 and within the region of the reflected overhead light in tiles 04 and 05 (see the indicated regions in Figure 1). Industrial equipment manufacturers prescribe different quality designations to surfaces with an orange peel rating of 05–09 versus those with a rating of 02–09 [37]. Typically, classification is carried out via HVI in illuminated cells by comparing the standard tile grades to the part surfaces. Under non-ideal lighting conditions, only PCI tiles 01, 02, and potentially tile 03 can be reliably differentiated. Resolving the differences between tiles 04 to 10 is more challenging, requiring specific lighting conditions, viewing angle, and visual acuity to resolve minor variations in spatial periods and specular reflection. To correctly classify all ten tiles over long periods of time would be considered a difficult inspection task and would undoubtedly lead to inconsistency between inspectors.
The PCI tiles in Figure 1 were kept in a protective case during shipping and storage. Direct handling was limited to the edges of each tile to avoid damage or contamination of the measurement area. Before each measurement described in the next section, the tiles were gently wiped with a clean microfiber cloth to remove any dust or contaminants on the surface.

2.2. Gloss Meter and CSI Measurements

The measurement strategies for the two measurement approaches are depicted in Figure 2. Thirty measurements of each PCI tile were collected using a Rhopoint IQ meter [29]. This was achieved by rotating about the z-axis normal to the surface and translating the device across the central region of the 100 mm × 150 mm tiles. The assessment areas were uniformly distributed over an approximately 50 mm × 50 mm central region of the tile, maintaining a buffer around the edges to prevent inaccurate readings. The PCI tiles would generally be considered “high-gloss” surfaces; as such, 20° angle gloss measurements are most appropriate for quantifying the tiles compared to 85° and 60° angles. By design, DOI, Rspec, and RIQ assessments are expected to correlate with the severity of orange peel, which is assumed to determine the ascending ordering of the tiles, i.e., from tile 01 to 10. Surface measurements of the PCI visual smoothness standards were obtained using a Zygo NexView CSI (Middlefield, CT, USA) configured with a Michelson 2.75× objective and 0.5× tube lens, providing a numerical aperture (NA) of 0.08 and a sampling interval of 5.91 µm. The objective chart for the instrument uses the term spatial sampling in units of µm per pixel to describe the latter, noting it is the pixel size on the sample and is derived from the camera pixel size divided by the system magnification. The instrument is housed in a laboratory environment temperature controlled to 20 ± 0.1 °C. Plastic toe clamps were used to secure and flatten each tile to the stage of the CSI. The CSI measurements were taken from the center region of the tile. The measurements comprise partially overlapping 6.05 mm × 6.05 mm scans stitched together to increase the nominal field of view to 30.25 mm × 30.25 mm. Stitching enables a larger field of view without compromising the measurement’s spatial resolution. The stitching process was carried out using Zygo MX™ (v8.0.0) software using 20% overlap between sequential scans in the adaptive adjust mode.

2.2.1. Measured Gloss Metric Values

Figure 3 depicts box and whisker plots highlighting the relationship between the values of selected gloss metrics (y-axis) and PCI tile number (x-axis). Gray boxes represent the interquartile range (IQR), with whiskers denoting the minimum and maximum values. Each box displays the median and mean of the data. The median is indicated by a horizontal line, while the mean is represented by “×” symbol(s). Outliers are shown as circles.
For Gloss20, there is a clear distinction between tile 01 and tile 02, with average values of approximately 50 and 83 gloss units, respectively. Metrics DOI, Rspec, and RIQ demonstrate a positive correlation from tile 01 through tile 04 (the mean value of the measurements increases as a function of the tile number). However, even using the data illustrated in Figure 3, it is clear that none of the functional parameters (Gloss20, DOI, Rspec, RIQ, etc.) would make it possible to distinguish between all ten tiles, nor would they enable a machine learning model to automate the classification of new tiles. This is because there is too much overlap between metric values computed on tile 04 through tile 10. For example, the ranges of Gloss20 values for T04, T05, T06, and T07 are almost identical.

2.2.2. Measured CSI Data and the Power Spectral Density Curves

The images at the top of Figure 4a show stitched 30.25 mm × 30.25 mm CSI measurements, with typical 6 mm × 6 mm single CSI measurements taken from each tile shown below (Figure 4b). The lower section, Figure 4c, depicts the Power Spectral Density (PSD) for each of the ten stitched measurements (30.25 mm × 30.25 mm). The PSD of a surface is the average of the square of the Fourier transform for each row (or column) of an areal measurement [38]. The graph provides insights into the spatial content of a surface, whereby the x-axis covers the range of spatial wavelengths possible to capture within a measurement. In Figure 4, a non-directional PSD was calculated using Zygo MX™ (v8.0.0) for each 30.25 mm × 30.25 mm stitched CSI measurement (see Figure 2c) using default settings. The non-directional PSD is calculated by summing the 2D PSD data for a single frequency in all directions for each frequency for which there is data. Readers are referred to the MX™ (v8.0.0) software manual for complete details. In this case, the highest frequency coincides with the sampling interval of 5.9 µm, limited by the pixel size of the CSI detector and system magnification. The lowest frequency is bounded by the 30.25 mm dimension(s) of the stitched measurement. However, the longest feasible wavelength discernible with any degree of confidence is approximately 7 mm, i.e., one-quarter of the overall length of the stitched measurement. This is slightly larger than the nominal field-of-view of the single 6 × 6 mm CSI measurements (see Figure 2c).
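To make the PSD description above concrete, the following sketch (a simplification under stated assumptions, not the MX™ non-directional PSD algorithm) computes a one-dimensional averaged PSD by averaging the squared FFT magnitude of each measurement row; windowing and one-sided normalization details are omitted.

```python
# Simplified sketch of a row-averaged PSD; dx is the sampling interval
# (e.g., 5.91e-3 mm for the CSI configuration described above).
import numpy as np

def row_averaged_psd(z, dx):
    n = z.shape[1]
    z = z - z.mean(axis=1, keepdims=True)       # remove each row's mean height
    spec = np.fft.rfft(z, axis=1)               # FFT along each row
    psd = np.mean(np.abs(spec) ** 2, axis=0)    # average squared magnitude over rows
    psd *= dx / n                                # rough scaling so the integral ~ Sq^2
    freqs = np.fft.rfftfreq(n, d=dx)            # spatial frequencies (cycles per unit length)
    return freqs, psd
```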
Different surface features (micro-scale surface roughness affecting reflectance or longer-scale surface undulations referred to as orange peel) occupy different length scales on the surface; both influence the classification of the ten different tiles. However, exactly which surface features and length scales correlate with the visual ranking of the tiles is unknown. Therefore, the surface is split into five different spatial bands for further analysis. The bands (cut-off frequencies) chosen are those commonly used to assess paint finish in the automotive industry [39,40,41]: WA (0.1 mm → 0.3 mm), WB (0.3 mm → 1 mm), WC (1 mm → 3 mm), WD (3 mm → 10 mm), and WE (10 mm → 30 mm). The additional band WR (0.018 mm → 0.1 mm) is included to account for the shortest spatial content on the surface, i.e., between three times the sampling interval and the lower 0.1 mm cutoff of the WA band. The additional P bands indicate the unprocessed surface data of either the larger stitched file PStitched or a single measurement from the stitch, Psingle.
Integrating the PSD curve(s) shown in Figure 4 across the entire range of spatial frequencies yields the total power of the primary surface(s), which corresponds to the square of the root-mean-square roughness (Sq) of the stitched measurement(s). Band-limited calculations of Sq can be obtained using the stated cut-offs of a given band as the bounds for integration. The WD band shows a more “ideal” (strictly negative) correlation between amplitude parameters and tile number relative to the other bands; T01 has the highest band-limited Sq value (0.37 µm), and Sq generally decreases with each subsequent tile number. However, the Sq value for T10 (0.124 µm) is approximately equal to that for T09 (0.123 µm), and the Sq value of T04 (0.276 µm) is greater than that of T03 (0.261 µm); the same holds for T06 (0.243 µm) and T05 (0.230 µm). WD therefore does not provide the separation needed for reliable classification. Using cut-offs for the WA or WR band would result in the Sq of T03 being greater than that of T02. Bands WB and WC would rank T09 and T10 above T04, T05, and T06. Band WE does not rank the samples correctly, either. For subsequent analysis, only the WA, WB, and WC bands are used on the 360 individual smaller measurements (6.05 mm × 6.05 mm). The additional band WR that captures shorter wavelengths (0.018 mm → 0.1 mm) and a P band (the primary surface) will also be used. Limiting the evaluation to these bandwidths eliminates the need to obtain stitched measurements to accommodate the field of view (FOV) required for bands WE and WD, which in turn reduces overhead in data storage (24 MB vs. 262 MB), measurement time (~4 s vs. 10 min), and subsequent processing time (~1 s vs. 10 s).
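A band-limited Sq of the kind described above can be sketched as follows, assuming a frequency/PSD pair such as that returned by the previous example; the cut-off wavelengths of a band (e.g., 3 mm to 10 mm for WD) are converted to frequency bounds for the integration.

```python
# Sketch: band-limited Sq from the area under the PSD between two cut-offs.
import numpy as np

def band_limited_sq(freqs, psd, wl_short, wl_long):
    f_lo, f_hi = 1.0 / wl_long, 1.0 / wl_short      # wavelengths -> frequency bounds
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    power = np.trapz(psd[mask], freqs[mask])        # in-band power
    return np.sqrt(power)                           # Sq in height units

# Example (units consistent with freqs), WD band spanning 3 mm to 10 mm wavelengths:
# sq_wd = band_limited_sq(freqs, psd, wl_short=3.0, wl_long=10.0)
```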

2.3. Dataset Generation

Table 1 summarizes the acquisition details, measurements, and parameters used in the subsequent analysis. The field of view (FOV) for Rhopoint IQ measurements varies depending on the calculated metric. For the 85° gloss metric, it corresponds to 4.4 mm × 44 mm; for the 60° gloss, a 6 mm × 12 mm area is required; and for the 20° gloss, DOI, RIQ, and Rspec, a 6 mm × 6.4 mm definition area is needed. The gloss dataset consists of only these eight parameters provided by the Rhopoint IQ. Twenty-two ISO 25178-2 parameters were computed for each CSI measurement after the pre-processing operations listed below in Table 2 were applied. The specified filtering operations and subsequent parameter calculations were performed using Zygo MX™ (v8.0.0) software [42]. Each unique dataset, P, WC, WB, WA, and WR, consists of 360 individual measurement sites (6.05 mm × 6.05 mm), 36 per sample, and 22 ISO parameters calculated for each of the 360 areal maps. The tabulated results were exported in .csv format for subsequent analysis. For brevity, only the shorthand abbreviations of the ISO 25178-2 areal parameters are included in Table 1; readers are referred to the standard itself [32] or to references [19,43] for detailed verbal descriptions and numerical definitions of each parameter.
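As a sketch of how the exported tables might be combined for analysis (the file and column names below are hypothetical, not those produced by MX™), the per-band .csv files can be merged into a single labeled dataset with one row per measurement site:

```python
# Hypothetical file/column names; each exported table is assumed to hold 360 rows
# (36 sites x 10 tiles) with one column per ISO parameter plus a 'tile' label (1-10).
import pandas as pd

bands = ["P", "WC", "WB", "WA", "WR"]
frames = []
for band in bands:
    df = pd.read_csv(f"{band}_parameters.csv")                        # hypothetical export name
    df = df.rename(columns={c: f"{c}{band}" for c in df.columns if c != "tile"})  # e.g. SdqWA
    frames.append(df)

# Measurement sites are assumed to be exported in the same order for every band,
# so the per-band parameter columns can be placed side by side.
dataset = pd.concat([f.drop(columns="tile") for f in frames], axis=1)
dataset["tile"] = frames[0]["tile"]
```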
Table 2 includes the processing information for each dataset, specifying the low-pass filter, bandpass filter, form removal, spike removal, and edge clipping applied to the data prior to computing ISO 25178-2 parameters. The cut-off wavelength of the initial lowpass Gaussian spline filter used in each case is set to three times the sampling interval to remove high-frequency noise from the data. Note that the sampling interval remains constant, irrespective of the filtering applied to the data. A few false height readings were observed in the measurements, likely caused by dust particles or contamination on the surface. Although these will have little effect on the values of texture parameters such as the average roughness (Sa) or root mean square gradient (Sdq), as the million other pixel values will average them out, a single outlier in the measurement can significantly affect the extreme value parameters such as the maximum peak height (Sp), maximum pit height (Sv), and maximum height (Sz); therefore, spike clipping is used to ensure these parameters characterize the actual topography and not single-pixel measurement artifacts. The spike filter is set at 6σ; the maximum number of data points removed from any measurement was less than 0.3%, with no visible impact on the topography image or the values of other ISO parameters. Edge cropping was employed to remove filtering artifacts. Examples of processed height maps after bandpass filtering are shown in Figure 5.
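One simple interpretation of the 6σ spike-removal and 24-pixel edge-clipping steps in Table 2 is sketched below; it assumes a levelled height map stored as a NumPy array and marks removed points as NaN, which is one possible convention rather than the exact MX™ behaviour.

```python
# Sketch: 6-sigma spike removal from the mean plane followed by edge clipping.
import numpy as np

def despike_and_clip(z, n_sigma=6.0, edge=24):
    zc = z - np.nanmean(z)                       # deviation from the mean plane
    limit = n_sigma * np.nanstd(zc)
    out = z.copy()
    out[np.abs(zc) > limit] = np.nan             # invalidate spike pixels (<0.3% of data here)
    return out[edge:-edge, edge:-edge]           # crop band-pass filter edge artifacts
```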
Optimized color representation for each height map shown in Figure 5 makes it much easier to see differences in the spatial aspects of the topography than if a fixed color bar (height scale) was used for the image or any row/column; as such, the colors depict different height ranges for each image. The primary surface (P) of tile 01 exhibits an isotropic topography (no apparent surface structure or directionality). However, traveling down the same column from T01 to T10, notice that as the degree of isotropic orange peel (longer undulations) decreases, there is an increase in anisotropic vertical striations. Still, identifying unique surface characteristics that distinguish between tiles 05, 07, and 10 is extremely difficult, regardless of the applied filtering.
Figure 6 depicts the areal roughness (Sa) and the root-mean-square gradient (Sdq) of the tile surfaces at the different spatial bands. Notice the lack of a strictly negative correlation between parameter values and the tile number. For the primary surface (the P dataset shown in Figure 5), the areal surface roughness Sa of the 10 tiles ranged from 0.25 µm to 2 µm, Sq from 0.15 µm to 2.02 µm, and Sdq from 0.07° to 0.77°. A fourth-order fit between Sdq and tile number yields coefficient of determination (R²) values of 0.98, 0.99, 0.96, and 0.96 for the P, WC, WB, and WA datasets, respectively. Similar R² values were observed for Sa. Although there is a high correlation, the fitted function will not enable explicit classification criteria; there is insufficient separation in the metrics’ values for the higher-numbered tiles. While not shown here, none of the other 20 ISO 25178-2 parameters considered delivered clear distinctions and/or strictly monotonic trends across all ten tiles.
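The fourth-order fit and R² values quoted above can be reproduced with a few lines of NumPy, sketched below with synthetic stand-in values (the actual per-tile Sdq means are read from Figure 6 and are not listed here).

```python
# Sketch: fourth-order polynomial fit of Sdq vs. tile number and its R^2.
import numpy as np

tiles = np.arange(1, 11)
sdq = 0.8 * np.exp(-0.25 * tiles)       # synthetic, decreasing stand-in values only

coeffs = np.polyfit(tiles, sdq, deg=4)
pred = np.polyval(coeffs, tiles)
ss_res = np.sum((sdq - pred) ** 2)
ss_tot = np.sum((sdq - sdq.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"R^2 = {r2:.3f}")
```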

3. Parameter Selection and Classification Methodology

Surface Quality and Inspection Descriptors (SQuID™)
Instead of relying on preconceived notions of parameter selection for accurately classifying the PCI tiles, an algorithmic method outlined in [21] called SQuID™ (Surface Quality and Inspection Descriptors) is used to down-select optimal surface parameters for classification. SQuID™ uses a selection method that breaks down a user-defined multi-class classification task into a series of binary classification tasks. For instance, in this case, the multi-class task involves using areal texture or gloss descriptions from Nexview CSI or Rhopoint-IQ to determine a measured tile designation (01, 02, 03 … 10). The procedure begins by decomposing the multiclass task into a finite series of binary classification tasks (T01 vs. T02, T01 vs. T03, and so on). For each binary task, a reference set of measurements quantified by a single areal or gloss description is sampled from the dataset. The difference in means between the two groups is divided by the root-mean-square of the standard deviations. Taking the absolute value of this resulting quantity yields a discriminability index referred to as d′, describing the separation between the two groups in terms of normalized standard deviations. The process is repeated for each parameter on each binary classification task derived from the multi-class dataset, resulting in a d′ matrix. In this matrix, parameters are indexed as rows, and binary classification tasks are represented as columns. The optimal set of parameters for the multi-class task is derived by selecting the parameter (row index) with the maximum d′ value for each task (column index). If, for every binary task, the corresponding selected parameter exhibits a d′ value of approximately seven or higher, a reasonably optimized decision tree classifier trained using these selected parameters is expected to achieve near-perfect classification accuracy on the multi-class dataset. Following the down-selection process, SQuID™ employs a decision tree as the default machine-learning model to determine a logical mapping between the selected parameters and the target surface classes. The fitted decision tree can intuitively communicate how the selected surface texture parameters and associated thresholds can be employed as rules for classifying a newly measured surface, providing an objective basis for assessing surface quality in accordance with user-defined grades of surface quality. Readers are referred to the original paper [21] for complete details, including examples of the selection procedure, construction of the d′ matrix, and graphs related to the expected classification outcomes versus d′ values of the selected parameters.
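The d′ computation and down-selection steps described above can be sketched as follows; this is a minimal re-implementation for illustration, assuming the labeled dataset assembled in Section 2.3, and is not the SQuID™ code itself.

```python
# Sketch: d' matrix (parameters x binary tasks) and per-task parameter selection.
import itertools
import numpy as np
import pandas as pd

def d_prime(a, b):
    # |difference in means| divided by the root-mean-square of the two standard deviations
    rms_sd = np.sqrt((np.std(a, ddof=1) ** 2 + np.std(b, ddof=1) ** 2) / 2.0)
    return np.abs(np.mean(a) - np.mean(b)) / rms_sd

def d_prime_matrix(dataset, label_col="tile"):
    params = [c for c in dataset.columns if c != label_col]
    classes = sorted(dataset[label_col].unique())
    tasks = list(itertools.combinations(classes, 2))          # 45 tasks for 10 tiles
    values = {p: [d_prime(dataset.loc[dataset[label_col] == i, p].to_numpy(),
                          dataset.loc[dataset[label_col] == j, p].to_numpy())
                  for i, j in tasks] for p in params}
    cols = [f"T{i:02d} vs T{j:02d}" for i, j in tasks]
    return pd.DataFrame(values, index=cols).T                 # rows: parameters, columns: tasks

# dmat = d_prime_matrix(dataset)
# selected = set(dmat.idxmax(axis=0))      # parameter with the maximum d' for each task
```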

4. Results

4.1. Summary of d′ Matrix

Figure 7 presents the combined d′ matrix generated for the P, WC, WB, WA, and WR datasets. Columns are indexed as tasks following the sequence T01 vs. T02, T01 vs. T03, etc., and rows are indexed as ISO parameters for each band (see Table 1 for ordering). For this discussion, reading the fine print associated with the rows and columns on the matrix is not important; what is important is the distribution of the gray values on the d′ matrix. The shading represents the magnitude of the d′ values computed for each row (parameter)–column (task) combination. Ideally, one row (metric) would have a d′ value greater than seven (black shading) across all columns (tasks), though this is not the case for any of the datasets considered. Notice that the d′ values for most ISO parameters along the first column (task T01 vs. T02) tend to be greater than those of the last column (task T09 vs. T10). This implies that differentiating between the topography of tiles 01 and 02 using ISO parameter values is relatively trivial compared to tiles 09 and 10, regardless of the pre-processing routine applied to the data. The d′ matrix also suggests that a classification model that uses metrics computed for the P and WC bands will perform worse than a model that uses ISO metrics computed for the WB and WA bands. This is because there is an increased coverage of higher d′ values across the entire task space in the WA and WB bands. A combination of metrics with dark values (d′ ≥ 7) across all tasks implies the data are likely separable, and near-perfect classification performance is expected. The expanse of white cells (d′ < 2) in the P and WC bands implies less-than-ideal performance (≈80% classification accuracy or less); see the experimental curve in [21] for the empirical relationship between d′ and classification accuracy. The relevant regions are highlighted by the dashed red and green boxes in Figure 7.

4.2. Identified Parameters and Classification Performance

Table 3 indicates the top metrics identified by SQuID™ for each analysis set and the classification performance of a fitted decision tree that used the selected metrics to predict the tile number (i.e., surface grade). The average and standard deviation of the test accuracy based on stratified five-fold shuffle split cross-validation [44] using train/test splits of 0.1/0.9, 0.5/0.5, and 0.9/0.1 for each dataset are shown in the rightmost columns. This evaluation provides a more comprehensive estimate of the expected classification performance compared to using a single dataset for training and testing. The first row of Table 3 indicates that gloss metrics fail to provide enough information to discern between the ten standard surface tiles. This comes as no surprise given the overlap of the gloss data illustrated in Figure 3; no reduction in the number of selected features is observed, and average accuracies as low as 58% are recorded for the 0.1/0.9 split, 74% for the 0.5/0.5 split, and 75% for the 0.9/0.1 split. For the primary dataset P, a total of five features are selected by SQuID™. Still, they fail to perform any better than the gloss metrics, averaging around 65% classification accuracy or less across all train/test splits. This is unsurprising given that only 2% of metrics had values of d′ > 3 for tasks T04 vs. T > 04 (see the dashed red box in the top right corner of Figure 7). Coincidentally, this is where most incorrect classifications occurred. Classification accuracy marginally improves, between 5% and 8% depending on the train/test split, for the WC dataset, where seven features are selected. This improvement is expected, as the d′ values in Figure 7 for this dataset tend to be greater than those of the ISO metrics of the P dataset. For the P dataset, only 31% of all entries had d′ values greater than three, compared to 50% for the WC dataset. For tasks T04 vs. T05 to T10, 29% of entries were greater than three. Most incorrect classifications occur between T08, T09, and T10. Seven metrics (with higher d′ values) were also selected for the WB dataset; the classification performance ranges from 78% to 82%, approximately a 10% increase over the WC dataset. However, again, most incorrect classifications occur for tasks T08 vs. T09 and T10, where none of the d′ values of the selected metrics were greater than four. A drastic increase in performance is observed for the WA and WR datasets. The former achieved an average accuracy of 97% using ten selected features, and the latter reached 98% based on eleven selected parameters for the 0.5/0.5 split. Both consist of selected metrics with d′ > 4 for tasks T08 vs. T09 and T10. Fewer than five incorrect classifications in total were recorded in both cases. For the combined dataset (i.e., P, WC, WB, WA), which includes 88 ISO features, near-perfect classification is achieved for the 0.5/0.5 and 0.9/0.1 splits using only five multi-scale metrics selected from the WB and WA datasets. Fewer than five incorrect predictions were made, each being off by a single tile designation and occurring only on tasks T07 vs. T08 to T10. The fact that no ISO metrics were selected from the P and WC datasets indicates there was always an ISO metric computed on either the WB or WA dataset that possessed a greater d′ value for a given classification task (column(s) on the d′ matrix).
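The cross-validated evaluation summarized in Table 3 can be approximated with scikit-learn as sketched below; the variable names follow the earlier sketches and are assumptions, and the exact SQuID™ training settings are not reproduced here.

```python
# Sketch: stratified 5-fold shuffle-split evaluation of a decision tree on the
# five selected multi-scale features (0.5/0.5 train/test split shown).
from sklearn.model_selection import StratifiedShuffleSplit, cross_val_score
from sklearn.tree import DecisionTreeClassifier

selected = ["SdqWB", "SalWB", "StrWA", "SdqWA", "SalWA"]   # selected features from Table 3
X = dataset[selected].to_numpy()                           # 'dataset' assembled in Section 2.3
y = dataset["tile"].to_numpy()                             # tile grades 01-10

cv = StratifiedShuffleSplit(n_splits=5, train_size=0.5, test_size=0.5, random_state=0)
scores = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=cv)
print(scores.mean(), scores.std())                         # mean and spread of test accuracy
```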

4.3. Interpretation of Classification Criteria

Figure 8 depicts the decision tree generated using down-selected parameters, including the auto-correlation length Sal and root-mean-square gradient Sdq computed for the WB and WA bands, in addition to the texture aspect ratio Str computed on the WA band. Figure 9 plots the values of the ISO metrics selected for use in the decision tree for all ten tiles. Examination of the three graphs illustrates that a combination of metrics can exist to differentiate the tiles from each other but that no one metric could differentiate between all ten; the decision tree is required to achieve the latter. Verification of the decision tree logic used for classifying the data is depicted in Figure 9. Following the logic of the tree (Figure 8), the first split for differentiating tiles 01–06 from 07–10 is by comparing the autocorrelation length Sal on the WB surface to a threshold of 133 µm (the root node at the top of the tree). Based on inspection of the data in Figure 9a, it is obvious that if the SalWB value of the surface in question is less than or equal to this value, a classification of grades 07–10 is most appropriate. The next split to the left of the root node (decision node a) compares the Sal of the surface to a value of 64.4 µm but, in this case, on the WA surface, differentiating a tile 07 or 08 from a 09 or 10 designation. Finally, a minor difference in the Sdq of the surface for spatial band WA is used to assign a final designation of 07, 08, 09, or 10 (see decision nodes b and c). Following the logic to the right of the root node (decision node d), the Sdq of the WA band is used to split the group of designations (04, 05, 06) from (01, 02, 03). In the case of the latter group, the presence of what appears to be rolling marks on the surface, exhibiting a dominant lay pattern, becomes more noticeable as the degree of orange peel is reduced, resulting in a departure from an isotropic texture (see Figure 5). This trend in increasing anisotropy is captured by the texture aspect ratio (Str) at the WA band shown at the bottom of Figure 9c, which can be used to resolve the individual differences. For the former group (04, 05, 06), it is important to note that the Sdq for the WB band is less for tile 05 than for 04 and 06. This initial split (decision node e), followed by comparing the Sdq value for the WB band, enables individual designations (see Figure 9b and decision node g in Figure 8).
This result highlights a unique approach for characterizing topography with ISO 25178-2 parameters, which does not limit the evaluation to a single pre-processing routine. Using a script configured to run the CSI and automate the data processing/evaluation steps would enable the classification of new surfaces in less than 10 s, bypassing visual inspection. The parameter values in Figure 9 and decision tree logic in Figure 8 are subject to change for different measurement conditions and data processing. For example, the calculated values of hybrid parameters Sdq and the interfacial area ratio Sdr depend on the sampling interval. This can be modified by adjusting the system magnification of the CSI or increased digitally after the measurement is acquired. The use of different filter types, instrument settings, and environmental disturbances such as temperature and vibrations can also affect parameter values. Further study is required to evaluate the impact of these conditions on classification performance in addition to different modes of acquisition, such as confocal and profilometry.
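If the fitted tree is to be deployed in an automated inspection script of the kind suggested above, its thresholds can be exported as human-readable rules; the sketch below assumes the feature matrix and names from the preceding example and uses scikit-learn's rule export rather than the SQuID™ visualization.

```python
# Sketch: printing the fitted decision rules (e.g., the SalWB <= 133 um root split)
# so they can be copied into an inspection specification or HVI work instruction.
from sklearn.tree import DecisionTreeClassifier, export_text

clf = DecisionTreeClassifier(random_state=0).fit(X, y)     # X, y, selected from the previous sketch
print(export_text(clf, feature_names=selected))
```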

5. Summary and Conclusions

This work demonstrates the ability of a multi-scale methodology called SQuID™ to down-select both ISO 25178-2 and gloss metrics to quantify ten different grades of powder-coated surfaces objectively. A set of only five multi-scale quantitative descriptions of the surface, coupled with interpretable decision logic, was provided to classify ten PCI visual smoothness standard tiles with near-perfect classification accuracy. This performance was achieved despite a limited sample size and a nonlinear correlation between the qualitative ranking of the tiles (grades 01–10) and the values of parameters obtained from CSI measurements of each sample. Moreover, in cases where incorrect classification occurred, the predictions were only off by a single sample grade. Central to achieving this is leveraging the fact that a tile’s texture comprises features of varying spatial wavelength and amplitude; this is apparent from the varied PSD trends for each tile illustrated in Figure 4c and the bandwidth-filtered measurements depicted in Figure 5 and Figure 6. While features of different wavelengths may not always be resolvable by the human eye (if the amplitude is too low or the wavelength is too short), they all affect light’s interaction with the surface and, hence, how the surface is perceived. Filtering at selected bandwidths enables their isolation, and subsequent quantification provides the additional metrics necessary for correct classification. For this work, filtering bandwidths used in the automotive industry for powder-coating and painting processes were utilized. No effort was expended on isolating optimal bandwidths; however, pursuing this avenue could yield even better outcomes. Incorporating additional segmentation methods and feature parameters defined within the ISO 25178-2 standard may also yield even more robust and accurate classification. Of equal importance is to note that the presented classification criteria provide the explicit logic to make predictions; they are not a black-box function. It is possible to use this decision rule diagram in an automated way with a machine vision or measurement system and/or as a reference to support visual inspection efforts when subjectivity arises. Overall, this approach demonstrates great potential in removing the need for visual inspection. In conclusion, the SQuID™ framework is shown to be a practical, systematic, interpretable, multi-scale approach for selecting standardized statistical parameters to transform subjective grades of surface quality into objective descriptions of surface topography.

Author Contributions

J.R.: Conceptualization, Methodology, Data Curation, Formal Analysis, Investigation, Software, Validation, Visualization, Writing—Original Draft. B.M.: Conceptualization, Methodology, Funding Acquisition, Project Administration, Writing—Review and Editing. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the Center for Precision Metrology (CPM) at the University of North Carolina Charlotte.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Acknowledgments

Special thanks to Cory Leland and Jeremy Hamilton for discussing the visual inspection of powder-coated surfaces and to Jimmie Miller for reviewing and commenting on this manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Shi, Y.; Jiang, Z.; Cao, J.; Ehmann, K.F. Texturing of metallic surfaces for superhydrophobicity by water jet guided laser micro-machining. Appl. Surf. Sci. 2020, 500, 144286. [Google Scholar] [CrossRef]
  2. Nakae, H.; Inui, R.; Hirata, Y.; Saito, H. Effects of surface roughness on wettability. Acta Mater. 1998, 46, 2313–2318. [Google Scholar] [CrossRef]
  3. Kubiak, K.J.; Wilson, M.C.; Mathia, T.G.; Carval, P. Wettability versus roughness of engineering surfaces. Wear 2011, 271, 523–528. [Google Scholar] [CrossRef]
  4. Arumugam, K.; Smith, S.T.; Her, T.-H. Limitations caused by rough surfaces when used as the mirror in displacement measurement interferometry using a microchip laser source. In Proceedings of the American Society for Precision Engineering 33rd Annual Meeting, Las Vegas, NV, USA, 4–9 November 2018. [Google Scholar]
  5. Vessot, K.; Messier, P.; Hyde, J.M.; Brown, C.A. Correlation between gloss reflectance and surface texture in photographic paper. Scanning 2015, 37, 204–217. [Google Scholar] [CrossRef] [PubMed]
  6. Hilbert, L.R.; Bagge-Ravn, D.; Kold, J.; Gram, L. Influence of surface roughness of stainless steel on microbial adhesion and corrosion resistance. Int. Biodeterior. Biodegrad. 2003, 52, 175–185. [Google Scholar] [CrossRef]
  7. Grzesik, W. Prediction of the functional performance of machined components based on surface topography: State of the art. J. Mater. Eng. Perform. 2016, 25, 4460–4468. [Google Scholar] [CrossRef]
  8. Gockel, J.; Sheridan, L.; Koerper, B.; Whip, B. The influence of additive manufacturing processing parameters on surface roughness and fatigue life. Int. J. Fatigue 2019, 124, 380–388. [Google Scholar] [CrossRef]
  9. Mandloi, K.; Allen, A.; Cherukuri, H.; Miller, J.; Duttrer, B.; Raquet, J. CFD and experimental investigation of AM surfaces with different build orientations. Surf. Topogr. Metrol. Prop. 2023, 11, 034001. [Google Scholar] [CrossRef]
  10. Fox, J.C.; Evans, C.; Mandloi, K. Characterization of laser powder bed fusion surfaces for heat transfer applications. CIRP Ann. 2021, 70, 467–470. [Google Scholar] [CrossRef]
  11. Mullany, B.; Savio, E.; Haitjema, H.; Leach, R. The implication and evaluation of geometrical imperfections on manufactured surfaces. CIRP Ann. 2022, 71, 717–739. [Google Scholar] [CrossRef]
  12. Hekkert, P. Design aesthetics: Principles of pleasure in design. Psychol. Sci. 2006, 48, 157. [Google Scholar]
  13. Yanagisawa, H.; Takatsuji, K. Effects of visual expectation on perceived tactile perception: An evaluation method of surface texture with expectation effect. Int. J. Des. 2015, 9, 39–51. [Google Scholar]
  14. Tymms, C.; Zorin, D.; Gardner, E.P. Tactile perception of the roughness of 3D-printed textures. J. Neurophysiol. 2018, 119, 862–876. [Google Scholar] [CrossRef] [PubMed]
  15. Stylidis, K.; Wickman, C.; Söderberg, R. Defining perceived quality in the automotive industry: An engineering approach. Procedia CIRP 2015, 36, 165–170. [Google Scholar] [CrossRef]
  16. Simonot, L.; Elias, M. Color change due to surface state modification. Color Res. Appl. 2003, 28, 45–49. [Google Scholar] [CrossRef]
  17. Dalal, E.N.; Natale–Hoffman, K.M. The effect of gloss on color. Color Res. Appl. 1999, 24, 369–376. [Google Scholar] [CrossRef]
  18. Briones, V.; Aguilera, J.M.; Brown, C. Effect of surface topography on color and gloss of chocolate samples. J. Food Eng. 2006, 77, 776–783. [Google Scholar] [CrossRef]
  19. Jiang, X.J.; Whitehouse, D.J. Technological shifts in surface metrology. CIRP Ann. 2012, 61, 815–836. [Google Scholar] [CrossRef]
  20. Whitehouse, D.J. Handbook of Surface and Nanometrology; Taylor & Francis: New York, NY, USA, 2002. [Google Scholar]
  21. Redford, J.; Mullany, B. Construction of a multi-class discrimination matrix and systematic selection of areal texture parameters for quantitative surface and defect classification. J. Manuf. Syst. 2023, 71, 131–143. [Google Scholar] [CrossRef]
  22. Brown, C.A.; Hansen, H.N.; Jiang, X.J.; Blateyron, F.; Berglund, J.; Senin, N.; Bartkowiak, T.; Dixon, B.; Le Goic, G.; Quinsat, Y.; et al. Multiscale analyses and characterizations of surface topographies. CIRP Ann. 2018, 67, 839–862. [Google Scholar] [CrossRef]
  23. Das, J.; Linke, B. Evaluation and systematic selection of significant multi-scale surface roughness parameters (SRPs) as process monitoring index. J. Mater. Process. Technol. 2017, 244, 157–165. [Google Scholar] [CrossRef]
  24. Hunter, R.S. Methods of determining gloss. NBS Res. Pap. RP 1937, 18, 958. [Google Scholar] [CrossRef]
  25. ISO 2813:2014; Paints and Varnishes—Determination of Gloss Value at 20°, 60° and 85°. International Organization for Standardization: Geneva, Switzerland, 2014. Available online: https://www.iso.org/standard/56807.html (accessed on 1 January 2024).
  26. ASTM D523; Standard Test Method for Specular Gloss. ASTM International: West Conshohocken, PA, USA, 2018. Available online: https://www.astm.org/d0523-14r18.html (accessed on 1 January 2024).
  27. ASTM E284-17; Standard Terminology of Appearance. ASTM International: West Conshohocken, PA, USA, 2017. Available online: https://cdn.standards.iteh.ai/samples/97708/7e9a7cafb4bb432f9458a1eeff471dcd/ASTM-E284-17.pdf (accessed on 1 January 2024).
  28. Weber, C.F.; Spiehl, D.; Dörsam, E. Comparing measurement principles of three gloss meters and using them for measuring gloss on metallic embellishments produced by the printing industry. Lux Jr. 2021, 15, 327–341. [Google Scholar]
  29. Rhopoint Instruments. Rhopoint IQ Goniophotometer Datasheet. 2023. Available online: https://www.rhopointamericas.com/download/rhopoint-iq-goniophotometer-datasheet-english/ (accessed on 1 January 2024).
  30. Leach, R. (Ed.) Optical Measurement of Surface Topography; Springer: Berlin/Heidelberg, Germany, 2011. [Google Scholar]
  31. Conroy, M.; Armstrong, J. A comparison of surface metrology techniques. J. Phys. Conf. Ser. 2005, 13, 458. [Google Scholar] [CrossRef]
  32. ISO 25178-2; Geometrical Product Specifications (GPS)—Surface Texture: Areal—Part 2: Terms, Definitions and Surface Texture Parameters. ISO: Geneva, Switzerland, 2021. Available online: https://www.iso.org/obp/ui/en/#!iso:std:74591:en (accessed on 1 January 2024).
  33. Todhunter, L.D.; Leach, R.K.; Lawes, S.D.; Blateyron, F. Industrial survey of ISO surface texture parameters. CIRP J. Manuf. Sci. Technol. 2017, 19, 84–92. [Google Scholar] [CrossRef]
  34. Gadelmawla, E.S.; Koura, M.M.; Maksoud, T.M.; Elewa, I.M.; Soliman, H.H. Roughness parameters. J. Mater. Process. Technol. 2002, 123, 133–145. [Google Scholar] [CrossRef]
  35. Whitehouse, D.J. The parameter rash—Is there a cure? Wear 1982, 83, 75–78. [Google Scholar] [CrossRef]
  36. ASTM D3451-06; Standard Guide for Testing Coating Powders and Powder Coatings. ASTM International: West Conshohocken, PA, USA, 2006. Available online: https://compass.astm.org/document/?contentCode=ASTM%7CD3451-06%7Cen-US&proxycl=https%3A%2F%2Fsecure.astm.org&fromLogin=true (accessed on 1 January 2024).
  37. Deere, J. Appearance Standard JDHZ610. Available online: https://studylib.net/doc/25779977/610 (accessed on 9 November 2023).
  38. Muralikrishnan, B.; Raja, J. Computational Surface and Roundness Metrology; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2008. [Google Scholar]
  39. Michigan Metrology, LLC. Multiscale Surface Texture Analysis in Action. Available online: https://michmet.com/multiscale-analysis-in-action/ (accessed on 9 November 2023).
  40. BYK Instruments. Appearance of a Class A Surface: Orange Peel and Brilliance. Available online: https://www.byk-instruments.com/en/t/knowledge/Orange-Peel-Paint (accessed on 9 November 2023).
  41. DigitalMetrology. Bandify Multi-Band Surface Texture Analysis. Available online: https://digitalmetrology.com/solution/bandify/ (accessed on 9 November 2023).
  42. Zygo. Mx™ Surface Texture Parameters. Available online: https://www.zygo.com/insights/blog-posts/-/media/ccf08d090bbe4c8eb98c1292e17d1d65.ashx?la=en&revision=662471a5-53c8-4839-beaa-8c00f88e6238 (accessed on 9 November 2023).
  43. Blateyron, F. The areal field parameters. In Characterisation of Areal Surface Texture; Springer: Berlin/Heidelberg, Germany, 2013; pp. 15–43. [Google Scholar]
  44. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. sklearn.model_selection.StratifiedShuffleSplit. Available online: http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.StratifiedShuffleSplit.html (accessed on 9 November 2023).
Figure 1. Images of PCI visual smoothness standards captured by an iPhone under non-ideal lighting conditions. Circular regions shown in T01, T04, and T05 illustrate the reflected image distortion caused by the severity of the orange peel.
Figure 2. (a) Illustration of sampling locations on the PCI tiles for gloss meter (b) and CSI measurements (c).
Figure 3. Box and whisker plots of select Rhopoint IQ gloss meter readings vs. PCI tile number.
Figure 4. (a) Examples of stitched 30.25 mm × 30.25 mm CSI measurements. (b) Examples of single 6.05 mm × 6.05 mm CSI measurements. (c) Power spectrum density curves of stitched 30.25 × 30.25 mm CSI measurements of PCI tiles 01–10 with spatial bands WE, WD, WC, WB, WA, and WR overlaid.
Figure 5. Examples of CSI measurements after pre-processing operations used to create datasets P, WC, WB, WA, and WR. Color ranges have been optimized for each image to improve viewing quality. Red and blue colors correspond to areas of higher or lower elevation, respectively.
Figure 6. (a) Sdq (P, WC, WB) vs. tile number. (b) Sdq (WA, WR) vs. tile number. (c) Sa (P, WC, WB) vs. tile number. (d) Sa (WA, WR) vs. tile number.
Figure 7. Example of the d′ matrix generated for the combined datasets P, WC, WB, WA, and WR, comprising 110 metrics (rows) and 45 tasks (columns).
Figure 8. Decision tree generated from SQuID™ based on five parameters selected from combined P + WC + WB + WA datasets. For visualization, parameter values are truncated to two decimal places. Callouts (a–h) represent the logic used to classify a surface as one of the ten tile grades.
Figure 9. Decision tree verification by graphical assessment of the parameters selected by SQuID™ for classification. (a) Measured auto-correlation length Sal and (b) rms-slope Sdq values for WA and WB bands plotted against tile designation T01–T10. (c) Texture aspect ratio Str values computed for the WA band of the ten different tiles.
Table 1. Instrument specifications and datasets collected for analysis.

| Instrument | Zygo NexView | Rhopoint IQ |
| Objective | 2.75× | n/a |
| Tube Lens | 0.5× | n/a |
| Field of View (FOV) | 6.05 mm × 6.05 mm | 6.04 mm × 6 mm; 6 mm × 12 mm; 4 mm × 44 mm |
| Sampling Interval | 5.9 µm | n/a |
| Sampling Area | 30.25 mm × 30.25 mm | 50.0 mm × 50.0 mm |
| Number of Measurements | 36/Sample | 30/Sample |
| Metrics | Sa, Sq, Ssk, Sku, Sp, Sv, Sz, Sdq, Sdr, Sal, Str, Std, Sk, Spk, Smr1, Smr2, Svk, Vmc, Vmp, Vvc, Vvv, Sxp | Gloss20, Gloss60, Gloss85, Haze, LogHaze, DOI, RIQ, Rspec |
| References | [32,42] | [27,29] |

Note: n/a refers to either not available or not applicable.
Table 2. Detailed information on datasets and filtering operations: low-pass, bandpass, form removal, spike removal, and edge clipping.

| Dataset | Low-Pass | Bandpass | Form Removal | Spike Removal | Edge Clipping |
| P | 18.0 µm | - | Best fit plane | 6σ from mean plane | No |
| WC | 18.0 µm | 1.0 mm–3.0 mm | Best fit plane | 6σ from mean plane | 24 pixels |
| WB | 18.0 µm | 0.3 mm–1.0 mm | Best fit plane | 6σ from mean plane | 24 pixels |
| WA | 18.0 µm | 0.1 mm–0.3 mm | Best fit plane | 6σ from mean plane | 24 pixels |
| WR | 18.0 µm | 0.018 mm–0.1 mm | Best fit plane | 6σ from mean plane | 24 pixels |
Table 3. Classification accuracy of decision tree based on selected features for seven different datasets.

| Dataset | # Input Features | Selected Features | Test Accuracy * (0.1/0.9 split) | Test Accuracy * (0.5/0.5 split) | Test Accuracy * (0.9/0.1 split) |
| Gloss | 8 | Gloss20, Gloss60, Gloss85, Haze, LogHaze, DOI, RIQ, Rspec | 0.58 ± 0.045 | 0.74 ± 0.013 | 0.75 ± 0.063 |
| P | 22 | Sv, Sp, Sdq, Std, Sz | 0.56 ± 0.04 | 0.65 ± 0.025 | 0.65 ± 0.075 |
| WC | 22 | Sz, Vmc, Sa, Vvc, Sv, Sq, Sdq | 0.61 ± 0.063 | 0.68 ± 0.013 | 0.73 ± 0.025 |
| WB | 22 | Sp, Vmc, Sdq, Sal, Sdr, Str, Sz, Sk | 0.78 ± 0.02 | 0.81 ± 0.02 | 0.82 ± 0.02 |
| WA | 22 | Sz, Vvv, Vmp, Sp, Sal, Sa, Vvc, Sv, Sdq, Sdr | 0.91 ± 0.04 | 0.97 ± 0.01 | 1.00 ± 0.00 |
| WR | 22 | Spk, Sdq, Sdr, Sal, Vvv, Sxp, Str, Svk, Vvc, Sk, Vmp | 0.91 ± 0.02 | 0.98 ± 0.02 | 0.99 ± 0.01 |
| P + WC + WB + WA + WR | 88 | SdqWB, SalWB, StrWA, SdqWA, SalWA | 0.91 ± 0.02 | 0.99 ± 0.01 | 0.99 ± 0.01 |

Test accuracies are reported as the mean ± standard deviation over a stratified 5-fold shuffle split. The 0.1/0.9, 0.5/0.5, and 0.9/0.1 training/testing splits correspond to ** 40/320, 180/180, and 320/40 samples (4/32, 18/18, and 32/4 samples per tile), respectively.
* Accuracy is computed as the number of correct predictions divided by the total number of predictions made on the test set. ** The number of samples and samples per class is different for the gloss dataset. # Refers to the number of input features to SQuID™.