Article

Quantitative Prediction and Analysis of Rattle Index Using DNN on Sound Quality of Synthetic Sources with Gaussian Noise

AI & Mechanical System Center, Institute for Advanced Engineering, Youngin-si 17180, Republic of Korea
*
Author to whom correspondence should be addressed.
Sensors 2024, 24(16), 5128; https://doi.org/10.3390/s24165128
Submission received: 3 July 2024 / Revised: 6 August 2024 / Accepted: 7 August 2024 / Published: 8 August 2024
(This article belongs to the Section Physical Sensors)

Abstract

This study investigated the prediction of Loudness N10, the quantitative index for BSR noise evaluation, for sound sources containing noise using statistics and machine learning. A total of 1170 data points were obtained from 130 automotive seats measured at nine microphone positions, and Gaussian noise was added to construct synthetic sound data. Ten physical quantities related to sound quality and sound pressure were considered, and sound pressure level (dB) and fluctuation strength were selected based on their statistical characteristics and relationship with Loudness N10. BSR quantitative index prediction was performed using regression analysis with K-fold cross-validation, a DNN with hold-out validation, and a DNN with K-fold cross-validation. The DNN with K-fold cross-validation demonstrated superior prediction accuracy, especially when the amount of data was relatively small. The results demonstrate that applying machine learning to BSR prediction allows quantitative indicators to be predicted without complex formulas and that specific physical quantities can be estimated easily even in the presence of noise.

1. Introduction

BSR (Buzz, Squeak, Rattle) noise is a common quality issue in automotive interior parts, with over 50% of these issues occurring in panels, seats, and doors [1,2,3]. Addressing consumer complaints resulting from BSR noise requires substantial improvement costs. In the structural domain, BSR noise is linked to performance degradation and durability issues in components. BSR noise is classified into buzz, squeak, and rattle problems. Buzz and squeak problems have clear structural mechanisms, allowing established theoretical and interpretive approaches to improvement. In particular, studies on friction-induced noise focus on the dynamic instability mechanisms that occur in systems based on linear theory, leading to mechanism-based solutions.
Kang et al. [4] developed a comprehensive mathematical mechanism for dynamic instability in brakes, providing a theoretical understanding of friction-induced noise. Nam et al. [5] analyzed the mechanism of the point contact friction model for friction-induced noise using a pin-on-disk friction system, which efficiently described the cause of friction-induced noise by describing the characteristics of the friction curve through experiments. Despite extensive research on friction-induced noise based on linearization theory through various applications, analyzing the mechanism of rattle noise remains challenging because of its nonlinear characteristics.
Rattle can be described mathematically through an impact oscillator that includes Hertz’s contact model. However, researching rattle is challenging owing to extreme nonlinearities such as chaos [6,7]. Shin et al. [8] introduced a dynamic stiffness analysis technique, a degradation BSR analysis technique, and a direct virtual method developed from the BSR perspective to ensure the robustness of the BIW (Body-in-White) body system—a load transfer medium—and the corresponding modules for each part. Lee et al. [9] improved the E-Line method, commonly used to predict BSR noise, by utilizing a statistical method to determine the tolerances between parts expressed as dispersion and dynamic deformations. To directly express the behavior of rattle noise in the seats of autonomous vehicles, Kim et al. [10] calibrated an analysis model based on sinusoidal wave experiments and described the location and characteristics of impact noise through explicit analysis. Despite advancements in hardware and software enabling quicker analytical approaches to simulation-based studies on the rattle, significant time and cost are still required for these analyses. BSR mainly occurs in automotive interior parts and is evaluated during the final phase of performance verification of automotive seats to evaluate quality. Choi et al. [11] analyzed BSR characteristics after performing the excitation and operating durability tests on automotive seats and tracked the major noise sources. In another study, they analyzed the BSR vibration characteristics of the seat cushion frame before and after durability tests to assess how changes in the stiffness of the frame affected the BSR characteristics [12]. Wan et al. [13] conducted a study on an efficient noise diagnostic method using the STRE-VK method, which calculated measurement criteria for identifying various types of BSR by separating signal components and demonstrated that BSR could be predicted based on signal processing.
Predicting BSR is challenging, and no clear solution exists. Furthermore, comprehensive system analysis requires significant cost and time. Solutions are mainly pursued using experimental measurement methods, which require complex systems, expensive equipment, and experienced engineers because the relevant regulations and calculation methods are complex.
Deep learning has surpassed human cognitive abilities in various fields through its rapid advancements. Algorithms built on nonlinearity achieve highly accurate predictions for unstructured data. Predictions using deep learning do not require equations of motion and rely solely on data, encompassing uncertainty and nonlinearity without the need for complex calculations. Wiercioch et al. [14] proposed a novel deep neural network (DNN)-based model to predict the characteristics of molecules and demonstrated accurate prediction for chemical characteristics. Additionally, Yu et al. [15] proposed a strategy to compromise the correlation between output variables through shared and separated parts by suggesting an MD-DNN (Multi-channel Decoupled DNN) model. DNNs are widely used in various fields to predict nonlinear systems.
In vibration analysis, deep learning is used to predict and analyze vibrations in numerous applications. Nam et al. [16] visualized the chaos phenomenon—the most complex phenomenon in dynamical systems—using various signal processing methods. They also described how recurrence plots can be used to classify chaos phenomena utilizing convolutional neural networks (CNNs). Recurrence plots require a reconstructed phase space to address self-crossing issues, but it is challenging to reconstruct the ideal geometric dimensions of complex trajectories with noise, such as real-world phenomena [17]. Thus, predictions and classifications based on experimental data are expected to provide the most straightforward and purposeful direction for BSR studies. Huang et al. [18] proposed a theoretical architecture for diagnosing acoustic faults based on time-frequency analysis and machine learning using Support Vector Machine (SVM) techniques. They presented research results on fault identification based on signals measured using smartphones and discussed the accuracy of their results.
BSR evaluation is based on Loudness N10, a quantitative metric. Since the calculation method for BSR evaluation is complex and requires expensive equipment and specialized software, predicting and verifying BSR characteristics in the design and development phases of components is challenging. Moreover, it is nearly impossible to obtain substantial BSR data from similar systems outside the relevant development companies, and even these companies are unlikely to acquire substantial data through measurement alone. Sound sources in the field inherently include variability, which can differ from ideally measured noise. Therefore, predictions based on machine learning face the following challenges: insufficient data, informal data issues such as noise, and data classification problems.
In this study, we simulated real-world noise to reconstruct seat noise and predict Loudness N10, a quantitative metric used to evaluate BSR noise. In particular, we aimed to estimate the BSR characteristics of the developed system by predicting the Loudness N10 that would be measured in an anechoic chamber from simple field tests using noise-containing sound sources. The predictions are based on statistics, and we describe a method for predicting the quantitative metric solely from the characteristics of physical quantities, without requiring special equipment or calculations. We analyzed the significant physical quantities from a statistical perspective, examined the characteristics of Loudness N10 through correlation analysis, and derived two significant physical quantities. Data augmentation, a common method for enhancing model performance by increasing the amount of data, was not used because it can distort the data. Instead, we employed K-fold cross-validation to address the limited amount of data. Loudness N10 predictions were made from the analyzed physical quantities using a DNN. Figure 1 illustrates the flow diagram of the prediction procedure and performance verification of the proposed method.

2. Method

2.1. Construction of BSR Dataset and Physical Quantity Information

BSR measurements and Loudness N10 calculations were performed based on GMW 14011, as illustrated in Figure 2 [19]. The BSR data were extracted from positions 150 mm away from each point on the car seat, as shown in Figure 2a, in accordance with GMW 14011. A multi-axis silent shaker was used, as depicted in Figure 2b. The background noise of the anechoic chamber was within 30 dB(A) under the operating conditions of the shaker, and the environmental chamber allowed for temperature control from −40 °C to 50 °C. The experimental conditions of the configured dataset were measured at low temperature (−20 ± 5 °C), room temperature (23 ± 5 °C), and high temperature (50 ± 5 °C).
Loudness N10 was calculated using software (ArtemiS Classic V12) based on Zwicker loudness. BSR data were measured using nine microphones on each of the 130 different seat models, resulting in a total of 1170 data points. Environmental conditions, such as temperature and seat position, varied during the measurement process. Since this study aims to estimate Loudness N10, which requires complex calculations based on various physical quantities related to sound quality and acoustics, environmental conditions were not considered; however, the same test method was used for all measurements. An exciter with operating background noise at or below 30 dB(A) and a 300 Hz high-pass filter were utilized in the experiment. Loudness N10 estimation was performed by analyzing the characteristics of ten physical quantities related to sound quality and acoustics. Each physical quantity was evaluated at its N10 value, the level exceeded during the top 10% of the measurement. The ten quantities were Loudness (M1), 3rd octave (M2), sound pressure level (M3), fluctuation strength (M4), Roughness (M5), Sharpness (M6), Tonality (M7), Harmonic distortion (M8), Speech intelligibility index (M9), and Articulation index (M10). Given the significance of magnitude in BSR, the quantities were selected from those related to sound pressure level and those that determine perceived sound quality.
The measured signals contain noise due to the external environment and structural issues. Data measurement involves considering the measurement process and analyzing the signals through a filtering process using specialized hardware and software for system characteristics. Noise can be implemented using various methods, but in numerical analysis methods, it is generally implemented using Gaussian noise. The probability density function of the noise applied to BSR sound sources is defined as follows:
$$ f(z) = \frac{1}{\sigma\sqrt{2\pi}} \exp\!\left( -\frac{(z-\mu)^2}{2\sigma^2} \right) $$
In this equation, σ and μ represent the standard deviation and mean of the noise signal, respectively, and z denotes the noise signal. Noise was generated from a Gaussian random distribution, with the standard deviation set to 2/3 of that of the raw data. The characteristics of the signals with noise are illustrated in Figure 3; Figure 3a,b show the results in the time domain and frequency domain, respectively. Adding Gaussian noise to the raw data introduced spectral content that did not exist previously. In the frequency domain in particular, the added noise appears across all frequencies except the fundamental frequency.
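As a minimal sketch of this noise-injection step, the snippet below adds zero-mean Gaussian noise whose standard deviation is 2/3 that of the base signal. The 440 Hz tone and the sampling rate are hypothetical stand-ins for the measured BSR sound sources, which are not public.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical stand-in for a measured BSR sound source.
fs = 8000                                  # sampling rate [Hz], assumed
t = np.arange(0, 1.0, 1 / fs)
signal = np.sin(2 * np.pi * 440 * t)       # synthetic fundamental component

# Zero-mean Gaussian noise with standard deviation at 2/3 the level
# of the raw data, following the construction described above.
sigma = (2 / 3) * np.std(signal)
noise = rng.normal(loc=0.0, scale=sigma, size=signal.shape)
noisy = signal + noise
```

Inspecting `np.fft.rfft(noisy)` would show the added broadband content outside the fundamental, consistent with the frequency-domain behavior described for Figure 3b.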
Each physical quantity was normalized because the absolute magnitude varies depending on the calculation method. Common normalization methods include min–max normalization and z-score normalization. The z-score method is suited to handling outliers but is sensitive to the mean and standard deviation of the data. The constructed data, however, were measured at equal intervals using microphones with similar specifications at nine positions, so the likelihood of outliers is minimal. Thus, min–max normalization was used. The normalization results are not a conclusion of this study; they simply allow the relationship between Loudness (M1) and each metric to be compared intuitively on a scale from the minimum value (0) to the maximum value (1). Although all 130 samples were analyzed, only the results for representative samples are described. Table 1 lists the normalized results for samples containing noise.
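The min–max normalization step can be sketched as follows; the nine values stand in for a metric's hypothetical N10 levels at the nine microphone positions.

```python
import numpy as np

def min_max_normalize(x):
    """Scale an array of metric values to the [0, 1] range."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

# Hypothetical N10 values of one metric across the nine microphone positions.
m3_spl = np.array([62.1, 58.4, 65.0, 60.2, 59.7, 63.3, 61.8, 57.9, 64.1])
normalized = min_max_normalize(m3_spl)
```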
As the normalized results show, Loudness exhibits extremely similar characteristics to the acoustic physical quantities M2, M3, M4, M5, and M6. Conversely, Loudness shows contrasting behavior with M7, M9, and M10. This suggests that Loudness is determined primarily by the magnitude of the noise and the frequency content of the sound. Although including more factors enables a more precise analysis of the system, it also complicates the resulting model. Hence, physical quantities with low impact should be excluded. Covariance and correlation analyses were therefore performed to define the relationships between the physical quantities and derive the significant factors.

2.2. Variables for the Physical Quantity Correlation Analysis and Determination of the Variables

Physical quantities calculated using different methods represent different characteristics of sound quality. Thus, to select the physical quantities, the correlations of the related variables were analyzed to derive the final quantities used in regression and deep learning. Correlation analysis examines the strength of the linear relationship between physical quantities and identifies the presence of linear relationships statistically. Covariance analysis also characterizes the relationship between variables based on their levels, regardless of units. A positive covariance exists between two variables when an increase in the value of one corresponds to an increase in the value of the other; a negative covariance occurs when an increase in one corresponds to a decrease in the other. A covariance of zero indicates that the two variables are linearly uncorrelated. The results of the covariance analysis are listed in Table 2.
Since the results of covariance analysis depend on the levels of the variables, they were computed from the normalized physical quantities. As the results show, the physical quantities are mutually correlated. Consistent with the normalized data analysis, the covariance results indicate values of approximately 0.07 between the magnitude-based quantities (M2, M3) and the quantities corresponding to frequency characteristics (M4, M5, M6). From a vocal standpoint, BSR noise is an unclear signal, suggesting that voice-related metrics may exhibit a strong negative relationship. Since covariance does not express the strength of a relationship independently of the scale of the two variables, the relationships were further analyzed through correlation analysis. Because the covariance analysis was performed on normalized data to minimize level-deviation errors, the correlation results can be expected to exhibit characteristics similar to the normalized covariance results. The results are listed in Table 3.
The correlation coefficients were computed using multiple correlation analyses for all ten physical quantities. The sample correlation coefficient indicates the linear correlation between variables. The results demonstrated that the quantities related to sound pressure level (M2, M3) exhibit the highest linear correlation, and the quantities corresponding to frequency characteristics (M4, M5, M6) also exhibit high linear correlation. The relatively high correlations of Roughness and Sharpness are likely attributable to the low- and high-frequency content introduced by the Gaussian noise rather than to correlation within the pure system. Figure 4 illustrates the correlation analysis results for Loudness N10 of the data with and without noise.
As illustrated in Figure 4, metrics related to the sound pressure level equally exhibited high correlations regardless of the presence or absence of noise. However, Roughness (M5) and Sharpness (M6), which correspond to the frequency characteristics, showed relatively low correlations in the absence of noise, while they exhibited high correlations when noise was present. This outcome can be attributed to the characteristics of Gaussian noise, which adds noise across the entire region. Hence, the correlation between Sharpness, which represents high-frequency characteristics, and Roughness, which represents low-frequency regions, increased. Therefore, sound pressure level (M2), which exhibits a high correlation with Loudness N10 regardless of the presence of noise, and fluctuation strength (M4), which can partially reflect the frequency characteristics, were selected as effective factors.
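The covariance/correlation screening above can be illustrated with NumPy. The metric columns below are synthetic and only mimic the qualitative relationships reported in Tables 2 and 3; the coefficients and noise levels are assumptions, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Synthetic normalized metrics: rows = samples, columns = selected metrics.
n = 200
m1 = rng.random(n)                          # Loudness N10
m2 = 0.9 * m1 + rng.normal(0, 0.05, n)      # sound pressure level: strong link
m4 = 0.6 * m1 + rng.normal(0, 0.20, n)      # fluctuation strength: weaker link
m7 = rng.random(n)                          # e.g. Tonality: unrelated here

data = np.column_stack([m1, m2, m4, m7])

# np.cov / np.corrcoef treat rows as variables, hence the transpose.
cov = np.cov(data.T)
corr = np.corrcoef(data.T)
```

In the resulting matrix, `corr[0]` gives the correlation of the Loudness stand-in with each metric, which is how the effective factors are screened.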

2.3. Method of K-fold Cross-Validation

K-fold cross-validation evaluates a model by randomly partitioning the dataset into k sub-groups, using one sub-group as the test data and the remaining k−1 sub-groups as the training data; this is repeated k times. The model is evaluated based on the average prediction error over the k iterations. Typically, k is set to five or ten to balance the bias–variance trade-off of the regression model [20]. General regression models may overfit, reflecting only biased characteristics. K-fold cross-validation mitigates this by randomly partitioning the dataset into training and test data and building and evaluating the model k times. Since BSR signals are collected during the final stage of the process, obtaining a large amount of data is impractical. K-fold cross-validation is a representative method that leverages all data for both training and testing, enabling a more generalized model and effective detection of overfitting and underfitting. Consequently, to address the issue of limited data, we employed K-fold cross-validation in this study, as illustrated in Figure 5.
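A minimal 5-fold cross-validation loop is sketched below with a hand-rolled split (scikit-learn's `KFold` would do the same) and a plain least-squares model as a stand-in for the regression models discussed next; the data are synthetic, sized like the paper's dataset.

```python
import numpy as np

rng = np.random.default_rng(seed=2)

# Synthetic dataset sized like the paper's (1170 samples, 2 predictors).
X = rng.random((1170, 2))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.1, 1170)

# Shuffle indices, then partition into k = 5 folds.
k = 5
idx = rng.permutation(len(X))
folds = np.array_split(idx, k)

fold_mae = []
for i in range(k):
    test_idx = folds[i]
    train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
    # Fit a least-squares model on the k-1 training folds.
    A = np.column_stack([np.ones(len(train_idx)), X[train_idx]])
    coef, *_ = np.linalg.lstsq(A, y[train_idx], rcond=None)
    # Evaluate on the held-out fold.
    A_test = np.column_stack([np.ones(len(test_idx)), X[test_idx]])
    fold_mae.append(float(np.mean(np.abs(A_test @ coef - y[test_idx]))))

mean_mae = float(np.mean(fold_mae))  # average prediction error over the folds
```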
Regression models can be divided into linear and nonlinear models, depending on the distribution of the data. No particular model is superior to the others. Rather, it is important to select the optimal model based on the type of data. In this study, the final model was selected by comparing the multiple linear regression model and a multiple nonlinear regression model.
Multiple linear regression is a regression analysis technique that models the linear relationship between a dependent variable and two or more independent variables. The model is expressed as the linear equation below, where y is the dependent variable, x_i are the independent variables, and β_i is the regression coefficient representing the influence of each independent variable.

$$ y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \cdots + \beta_n x_n $$
Linear regression uses the method of least squares, which minimizes the sum of the squares of the residuals, to estimate the regression coefficient. However, as the number of independent variables increases, multi-collinearity may occur due to the correlations between the variables. Hence, the variance of the least squares regression coefficient estimates increases, thus reducing the stability of the prediction accuracy of the regression equation [21].
In this study, a nonlinear regression model in the form of a power function was constructed through logarithmic transformation, as shown below, where y is the dependent variable, x_i are the independent variables, and β_i is the regression coefficient representing the influence of each independent variable.

$$ y = \beta_0\, x_1^{\beta_1} x_2^{\beta_2} \cdots x_n^{\beta_n} $$
When the relationship between the independent and dependent variables is nonlinear, logarithmic transformation can be used to linearize it, after which the regression coefficients of the linearized model can be derived using the least squares method. Taking natural logarithms transforms the model linearly, as shown in Equation (4).

$$ \ln y = \ln\beta_0 + \beta_1 \ln x_1 + \beta_2 \ln x_2 + \cdots + \beta_n \ln x_n $$
Here, the regression model can be expressed as Equation (5) for i datasets through matrix transformation.

$$ \begin{bmatrix} \ln y_1 \\ \ln y_2 \\ \vdots \\ \ln y_i \end{bmatrix} = \begin{bmatrix} 1 & \ln x_1^{(1)} & \ln x_2^{(1)} & \cdots & \ln x_n^{(1)} \\ 1 & \ln x_1^{(2)} & \ln x_2^{(2)} & \cdots & \ln x_n^{(2)} \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & \ln x_1^{(i)} & \ln x_2^{(i)} & \cdots & \ln x_n^{(i)} \end{bmatrix} \begin{bmatrix} \ln\beta_0 \\ \beta_1 \\ \beta_2 \\ \vdots \\ \beta_n \end{bmatrix} $$
Letting $Y = X\underline{\beta}$, the least squares estimate can be expressed as Equation (6) when $(X^T X)^{-1}$ exists [22]. The regression coefficients are determined through Equation (6); substituting them into Equation (4) and applying the inverse exponential transformation yields a multiple nonlinear regression equation of the form of Equation (3).

$$ \underline{\beta} = \begin{bmatrix} \ln\beta_0 \\ \beta_1 \\ \beta_2 \\ \vdots \\ \beta_n \end{bmatrix} = (X^T X)^{-1} X^T Y $$
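Equations (4)–(6) can be sketched directly in NumPy: take logarithms, solve the normal equations, and transform back. The power-law coefficients below are arbitrary choices for illustration, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(seed=3)

# Synthetic data following y = b0 * x1^b1 * x2^b2 with small multiplicative noise.
n = 200
x1 = rng.uniform(40.0, 80.0, n)     # e.g. sound pressure level
x2 = rng.uniform(0.1, 2.0, n)       # e.g. fluctuation strength
y = 0.5 * x1**1.2 * x2**0.3 * np.exp(rng.normal(0, 0.01, n))

# Linearize: ln y = ln b0 + b1 ln x1 + b2 ln x2 (Equation (4)),
# then solve the least-squares normal equations (Equation (6)).
X = np.column_stack([np.ones(n), np.log(x1), np.log(x2)])
Y = np.log(y)
beta = np.linalg.solve(X.T @ X, X.T @ Y)

b0 = float(np.exp(beta[0]))         # reverse exponential transformation
b1, b2 = float(beta[1]), float(beta[2])
```

With low noise, the recovered `b0`, `b1`, `b2` closely match the coefficients used to generate the data, illustrating why the log-linearized fit works for the power-function model.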

2.4. Machine Learning Model

Since deep learning is data-driven, a substantial amount of data is generally required to improve accuracy. Accuracy typically improves as network depth increases, and an optimized model can be constructed through careful hyperparameter tuning. A DNN, also known as a feedforward neural network or multi-layer perceptron, is a neural network with two or more hidden layers [23,24,25]. As illustrated in Figure 6, the example DNN has three input dimensions and five neurons in the hidden layers. The output of the first hidden layer is expressed as follows:

$$ z_i^1 = \sigma\!\left( \sum_{j=1}^{3} w_{i,j}^0 x_j + b_i^0 \right), \quad i = 1, 2, \ldots, 5 $$
In this equation, $z_i^1$ is the output of the $i$-th neuron of hidden layer 1, $\sigma$ denotes the activation function (typically ReLU [26,27]), $w_{i,j}^0$ is the connection weight between the $j$-th input and the $i$-th neuron of hidden layer 1, and $b_i^0$ is the bias of the $i$-th neuron in hidden layer 1. Assuming there are $n$ neurons in layer $k-1$, the output of the $i$-th neuron in layer $k$ is calculated as follows:

$$ z_i^k = \sigma\!\left( \sum_{j=1}^{n} w_{i,j}^{k-1} z_j^{k-1} + b_i^{k-1} \right), \quad k = 1, 2, \ldots, K $$
The feedforward neural network performs computations using the outputs of preceding layers, beginning with the input layer and going through to the results of the output layer. The neural network uses a loss function to measure the deviation between the predictions made by the model and the actual values and utilizes the gradient descent method to update the weights and biases of each layer to gradually bring the prediction values of the model closer to the actual values. The regression loss function for the K -th layer is calculated as follows:
$$ L(w, b, x, y) = \frac{1}{2}\left\| z^K - y \right\|_2^2 = \frac{1}{2}\left\| \sigma\!\left( w^{K-1} z^{K-1} + b^{K-1} \right) - y \right\|_2^2 $$
In this equation, z K represents the predicted value of the DNN model, and y denotes the actual value. The gradient of the loss function can be calculated as follows:
$$ \frac{\partial L(w,b,x,y)}{\partial w^{K-1}} = \frac{\partial L(w,b,x,y)}{\partial z^K}\,\frac{\partial z^K}{\partial w^{K-1}} = \left[ (z^K - y) \odot \sigma'\!\left( w^{K-1} z^{K-1} + b^{K-1} \right) \right] (z^{K-1})^T $$

$$ \frac{\partial L(w,b,x,y)}{\partial b^{K-1}} = \frac{\partial L(w,b,x,y)}{\partial z^K}\,\frac{\partial z^K}{\partial b^{K-1}} = (z^K - y) \odot \sigma'\!\left( w^{K-1} z^{K-1} + b^{K-1} \right) $$

In these equations, $\odot$ denotes the Hadamard product and $\sigma'$ the derivative of the activation function.
Defining the error term of the $K$-th layer as $\delta^K = \partial L(w,b,x,y)/\partial z^K = (z^K - y) \odot \sigma'(w^{K-1} z^{K-1} + b^{K-1})$, the error term of the $k$-th layer can be expanded via the chain rule as follows:

$$ \delta^k = \frac{\partial L(w,b,x,y)}{\partial z^k} = \frac{\partial L(w,b,x,y)}{\partial z^K}\,\frac{\partial z^K}{\partial z^{K-1}}\,\frac{\partial z^{K-1}}{\partial z^{K-2}} \cdots \frac{\partial z^{k+1}}{\partial z^k} $$
Subsequently, the gradient of the loss function can be transformed as follows:
$$ \frac{\partial L(w,b,x,y)}{\partial w^{k-1}} = \frac{\partial L(w,b,x,y)}{\partial z^k}\,\frac{\partial z^k}{\partial w^{k-1}} = \delta^k \left( z^{k-1} \right)^T $$

$$ \frac{\partial L(w,b,x,y)}{\partial b^{k-1}} = \frac{\partial L(w,b,x,y)}{\partial z^k}\,\frac{\partial z^k}{\partial b^{k-1}} = \delta^k $$
Through mathematical derivation, the relationship between δ k and δ k + 1 can be obtained as follows:
$$ \delta^k = \frac{\partial z^{k+1}}{\partial z^k}\,\delta^{k+1} = \left( w^k \right)^T \delta^{k+1} \odot \sigma'(z^k) $$
The changes in the weights and biases of the k -th layer due to gradient descent are as follows:
$$ w^k \leftarrow w^k - \alpha \sum_{j=1}^{n} \delta_j^k \left( z_j^{k-1} \right)^T $$

$$ b^k \leftarrow b^k - \alpha \sum_{j=1}^{n} \delta_j^k $$
In this equation, α denotes the step size. Optimization functions that use gradient descent include SGD (Stochastic Gradient Descent), RMSprop (Root Mean Square Propagation), and Adam (Adaptive Moment Estimation) [28,29,30]. Since this study did not focus on parameter optimization, the architecture was kept simple, and Adam was used as the optimizer. The architecture for predicting Loudness is listed in Table 4.
The activation function was ReLU, and the optimizer was Adam (learning rate: 0.001). Various initialization methods, such as Xavier and He initialization, have been developed to minimize convergence problems and local minima [31,32]. However, since the aim of this study was not to optimize machine learning models, initialization was not addressed and hyperparameters were not tuned. The dataset was divided into three parts: training, validation, and test sets. The split ratio and the number of data points used for training are listed in Table 5. Since the order of the data can significantly affect training accuracy, the data were shuffled to prevent overfitting to sequential data. The input data were standardized to eliminate errors due to varying data scales. Training was performed for up to 1000 epochs, with early stopping applied if the error did not improve for 20 consecutive epochs.
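The forward and backward passes of Equations (8)–(17) can be sketched with a tiny NumPy network. This toy example uses one hidden layer of five ReLU neurons and plain full-batch gradient descent for brevity (the study itself uses Adam and the architecture of Table 4); the data are synthetic stand-ins for the two selected predictors.

```python
import numpy as np

rng = np.random.default_rng(seed=4)

# Synthetic regression data standing in for the two selected predictors.
X = rng.random((256, 2))
y = (2.0 * X[:, 0] + 0.5 * X[:, 1]).reshape(-1, 1)

def relu(a):
    return np.maximum(a, 0.0)

# One hidden layer with five neurons, as in the illustrative network of Figure 6.
W1 = rng.normal(0, 0.5, (2, 5)); b1 = np.zeros(5)
W2 = rng.normal(0, 0.5, (5, 1)); b2 = np.zeros(1)

def predict(X):
    return relu(X @ W1 + b1) @ W2 + b2

mae_before = float(np.mean(np.abs(predict(X) - y)))

lr = 0.05
for epoch in range(2000):
    # Forward pass (Equations (8)-(9)).
    h_pre = X @ W1 + b1
    h = relu(h_pre)
    pred = h @ W2 + b2
    # Backward pass: squared-error loss gradients via the chain rule
    # (Equations (10)-(15)), averaged over the batch.
    d_pred = (pred - y) / len(X)
    dW2 = h.T @ d_pred; db2 = d_pred.sum(axis=0)
    d_h = (d_pred @ W2.T) * (h_pre > 0)      # ReLU derivative
    dW1 = X.T @ d_h; db1 = d_h.sum(axis=0)
    # Gradient-descent updates (Equations (16)-(17)); the paper uses Adam.
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

mae_after = float(np.mean(np.abs(predict(X) - y)))
```

The training error decreases steadily from its initial value, mirroring the convergence behavior described for Figure 12.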

3. Results

3.1. Multiple Linear and Nonlinear Regression Results

As explained in Section 2.3, 5-fold cross-validation was performed, and the model with the lowest mean absolute error (MAE) was selected as the regression model. The independent variables were determined using correlation analysis, and sound pressure level and fluctuation strength were selected as the independent variables. The result of the final selected regression equation is as follows:
$$ y = 36.62 + 0.75\, x_1 - 15.54\, x_2, \quad y = \text{Loudness},\ x_1 = \text{Sound Pressure Level},\ x_2 = \text{Fluctuation Strength} $$
In this equation, the coefficients of the final selected linear regression equation are shown in Equation (18), where $x_1$ denotes sound pressure level and $x_2$ denotes fluctuation strength. The significance level of the F-statistic was ≤0.05, and the coefficient of determination ($R^2$) was 0.81, confirming the significance of the regression equation. The p-values for the two variables were ≤0.05, verifying the significance of the regression coefficients. However, the mean relative error of regression Equation (18) was 38.59%. The linear regression model is therefore incapable of accurately predicting results for sound sources containing noise.
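The mean relative error used to judge the regression models can be computed as below; the Loudness values are hypothetical illustrations, not the paper's measurements.

```python
import numpy as np

def mean_relative_error(actual, predicted):
    """Mean relative error in percent."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.mean(np.abs(predicted - actual) / np.abs(actual)) * 100)

# Hypothetical Loudness N10 values [sone] and model predictions.
actual = np.array([4.2, 6.8, 3.1, 9.5, 5.0])
predicted = np.array([4.0, 7.1, 3.5, 9.0, 5.2])
err = mean_relative_error(actual, predicted)
```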
As illustrated in Figure 7, the multiple linear model showed a relatively large error and is thus not suitable for Loudness estimation. A regression equation was therefore derived using a nonlinear model; Equation (19) shows the regression coefficients of the nonlinear model, where $x_1$ denotes sound pressure level and $x_2$ denotes fluctuation strength.
$$ y = e^{-18.31}\, x_1^{4.82}\, x_2^{0.12}, \quad y = \text{Loudness},\ x_1 = \text{Sound Pressure Level},\ x_2 = \text{Fluctuation Strength} $$
The significance level of the F-statistic was ≤0.05. Additionally, R 2 was 0.97, confirming the significance of the regression equation. The significance levels of the p-value test for the two variables were both ≤0.05, confirming the significance of the regression coefficients. The mean relative error of Equation (19) on the test dataset was 8.9%. As shown in Figure 8, the results based on the nonlinear regression equation showed relatively higher prediction accuracy compared to the predictions estimated using the linear regression equation. However, since the quantitative BSR evaluation method determines Fail or Pass within a one sone range, high accuracy is required. Therefore, the prediction model needs to generate more precise results. Consequently, a prediction model using a DNN was constructed.

3.2. Loudness Prediction Results Based on Machine Learning

Loudness prediction was performed using machine learning on the same data used in the K-fold cross-validation. The traditional data partitioning method is hold-out, in which data are randomly split into training and test datasets, with the training data used to train the model. With sufficient data, the hold-out method can produce a sophisticated model; otherwise, the resulting model may not fully represent the entire dataset. Figure 9 illustrates the test results of the DNN model using hold-out. Relatively large errors occur for results with high Loudness levels. BSR from automotive seats primarily occurs at or below 10 sones; hence, results at or above 25 sones can be considered outliers from a statistical perspective. The data distribution shows that the prediction errors tend to be closer to the actual values than the regression-based prediction errors.
In machine learning, the amount of data has a significant impact on model performance, provided the model avoids overfitting and underfitting. BSR experiments are conducted at the final phase of development, and not all products are tested; consequently, obtaining substantial data through experiments is challenging. Prediction models are therefore needed that achieve robust accuracy with a small amount of data. Data augmentation based on data transformations is a common method for acquiring additional data. However, distortions can occur during augmentation, so it must be applied appropriately to the purpose and context. Therefore, in this study, K-fold cross-validation was utilized, which can create stable models from a small amount of data without distorting it. The results are shown in Figure 10.
Compared with the linear and nonlinear regression equations and the hold-out method, the DNN using K-fold cross-validation predicted results closer to the actual values. The absolute errors of all results are shown in Figure 11.
The mean absolute error was lowest for the DNN with K-fold cross-validation, at 0.54, and highest for linear regression, at 2.08. In this study, the hold-out DNN set aside a fixed validation dataset, meaning those data could not be used for training. With limited data, overfitting and underfitting are common problems, necessitating data augmentation. Although the absolute errors of the hold-out method and K-fold cross-validation are similar, the deviation is likely to be large when the dataset is small. The proposed model therefore applied K-fold cross-validation to the DNN to address the limited number of data points and to predict Loudness by incorporating all data trends. The value of K was set to 5. Figure 12 illustrates the training process.
As illustrated in Figure 12a,b, the training progress was similar to that of most machine learning models: the loss is initially large but converges quickly as training proceeds. This indicates that although BSR is a phenomenon whose characteristics are difficult to investigate and elucidate physically, it can be predicted relatively easily using machine learning.

4. Conclusions

In this study, we proposed a deep learning method for predicting Loudness N10, a quantitative metric for BSR that requires demanding measurement conditions and is complex to calculate from the acoustic and sound-quality physical quantities of automotive seats when the sound sources contain noise. Among the physical quantities considered, sound pressure level and fluctuation strength were identified as significant factors based on the analysis of variance. In addition, K-fold cross-validation was used to derive linear and nonlinear regression equations. However, the prediction results showed relatively large mean absolute errors of 2.08 and 0.69, respectively, indicating that BSR cannot be predicted accurately using regression equations.
Conversely, the DNN with hold-out estimated Loudness accurately, with a mean absolute error of 0.55. Although numerous datasets are available in other studies, it is nearly impossible to acquire a large amount and variety of data from experiments. K-fold cross-validation can achieve maximum efficiency within a limited dataset, both for development purposes and from a methodological perspective. Therefore, we proposed applying K-fold cross-validation to a DNN for predicting Loudness and attained the best-performing prediction model, with an error of 0.54. Since extensive BSR noise datasets could not be acquired in our limited environment, we used the proposed DNN method to verify that the model has relatively superior performance. The results indicate that the quantitative BSR test index can be estimated from a few sound-quality physical quantities, even when noise is included. Therefore, this study suggests that it is feasible to estimate the results of complex noise and vibration experiments, including BSR experiments, with limited datasets, demonstrating the value of applying machine learning-based prediction methods to engineering experiments that involve nonlinearity.
In future research, we aim to establish a methodology that applies machine learning to several physical quantities so that the BSR characteristics of a seat can be estimated at all positions in actual tests.

Author Contributions

Conceptualization, S.K. and J.N.; methodology, J.N.; software, S.K.; validation, J.N. and D.K.; formal analysis, J.N.; investigation, S.K. and J.N.; resources, S.K. and J.N.; data curation, S.K. and J.N.; writing—original draft preparation, S.K. and J.N.; writing—review and editing, J.N. and D.K.; visualization, S.K. and J.N.; supervision, D.K.; project administration, J.N. and D.K.; funding acquisition, D.K. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Technology Innovation Program (20018493; Development of autonomous vehicle seats for advanced crash safety and convenience) funded by the Ministry of Trade, Industry, and Energy (MOTIE, Republic of Korea).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Gosavi, S. Automotive buzz, squeak and rattle (BSR) detection and prevention. SAE Tech. Pap. 2005, 26, 56. [Google Scholar]
  2. Sohmshetty, R.; Kappagantu, R.; Naganarayana, B.; Shankar, S. Automotive body structure enhancement for Buzz, Squeak and Rattle. SAE Tech. Pap. 2004, 1, 388. [Google Scholar]
  3. Shin, S.; Cheong, C. Experimental characterization of instrument panel Buzz, Squeak, and Rattle (BSR) in a vehicle. Appl. Acoust. 2010, 71, 1162–1168. [Google Scholar] [CrossRef]
  4. Kang, J.; Krousgrill, C.M.; Sadeghi, F. Comprehensive stability analysis of disc brake vibrations including gyroscopic, negative friction slope and mode-coupling mechanisms. J. Sound Vib. 2009, 324, 387–407. [Google Scholar] [CrossRef]
  5. Nam, J.; Kang, J. Friction-induced vibration by dust. KSNVE 2020, 30, 201–210. [Google Scholar] [CrossRef]
  6. Serweta, W.; Okolewski, A.; Blazejczyk-Okolewska, B.; Czolczynski, K.; Kapitaniak, T. Lyapunov exponents of impact oscillators with Hertz’s and Newton’s contact models. Int. J. Mech. Sci. 2014, 89, 194–206. [Google Scholar] [CrossRef]
  7. Serweta, W.; Okolewski, A.; Blazejczyk-Okolewska, B.; Czolczynski, K.; Kapitaniak, T. Mirror hysteresis and lyapunov exponents of impact oscillator with symmetrical soft stops. Int. J. Mech. Sci. 2015, 101, 89–98. [Google Scholar] [CrossRef]
  8. Shin, Y.; Lee, H.; Kim, J.; Choi, S. A study on CAE Technology for Establishing the Vehicle BSR Virtual Development system. In Proceedings of the 2022 Annual Spring conference on Division of CAE and Applied Mechanics, Busan, Republic of Korea, 18–21 May 2022; pp. 149–150. [Google Scholar]
  9. Lee, H.; Kong, E.; Hong, C.; Kim, Y. Study on Rattle Noise Analysis of In-vehicle in Infotainment System Considering Tolerance. In Proceedings of the 2020 KSAE Annual Spring Conference, Samcheok, Republic of Korea, 1–4 July 2020; p. 670. [Google Scholar]
  10. Kim, S.; Ko, D.; Lee, H.; Nam, J.; Lee, J. A study on design method for BSR noise reduction based on simulation of vehicle seat. Trans. Korean Soc. Automot. Eng. 2023, 31, 199–208. [Google Scholar]
  11. Choi, H.; Kang, J.; Park, J. BSR test method for vehicle seat using excitation and operating durability test. J. KAIS 2015, 16, 2436–2441. [Google Scholar] [CrossRef]
  12. Choi, H.; Nam, J.; Kang, J.; Park, J. Investigation of the BSR noise characteristics in seat cushion-frame with respect to vibration durability test using multi-simulator. J. Korea Acad. Cooperation Soc. 2014, 15, 4776–4783. [Google Scholar]
  13. Wan, B.; Zheng, X.; Zhou, Q.; Yang, M.; Qiu, Y. A study on BSR diagnosis of power seat frame based on STRE-VK method. Appl. Acoust. 2024, 216, 109794. [Google Scholar] [CrossRef]
  14. Wiercioch, M.; Kirchmair, J. DNN-PP: A novel deep neural network approach and its applicability in drug-related property prediction. Expert Syst. Appl. 2023, 213, 119055. [Google Scholar] [CrossRef]
  15. Yu, H.; Qin, C.; Tao, J.; Liu, C.; Liu, Q. A Multi-channel decoupled deep neural network for tunnel boring machine torque and thrust prediction. Tunn. Undergr. Space Technol. 2023, 133, 104949. [Google Scholar] [CrossRef]
  16. Nam, J.; Kang, J. Classification of chaotic squeak and rattle vibrations by CNN using recurrence pattern. Sensors 2021, 21, 8054. [Google Scholar] [CrossRef] [PubMed]
  17. Nam, J.; Kang, J. Comparison of vibration visualization methods for classification of chaos based on CNN. J. Mech. Sci. Technol. 2022, 36, 2749–2757. [Google Scholar] [CrossRef]
  18. Huang, X.; Teng, Z.; Tang, Q.; Yu, Z.; Hua, J.; Wang, X. Fault diagnosis of automobile power seat with acoustic analysis and retrained SVM based on smartphone. Measurement 2022, 202, 111699. [Google Scholar] [CrossRef]
  19. GMW 14011; Objective Subsystem/Component Squeak and Rattle Test. General Motors Worldwide: Detroit, MI, USA, 2019.
  20. Hastie, T.; Tibshirani, R.; Friedman, J.; Friedman, J. The Elements of Statistical Learning: Data Mining, Inference, and Prediction, 2nd ed.; Springer: New York, NY, USA, 2009; pp. 241–249. [Google Scholar]
  21. Kim, D.; Seo, K. Comparison of linear and nonlinear regressions and elements analysis for wind speed prediction. J. Korean Inst. Intell. Syst. 2015, 25, 477–482. [Google Scholar]
  22. Sen, A.; Srivastava, M. Regression Analysis: Theory, Method and Application, 1st ed.; Springer: New York, NY, USA, 1990; pp. 28–34. [Google Scholar]
  23. Zhang, K.; Lyu, H.; Shen, S.; Zhou, A.; Yin, Z. Evolutionary hybrid neural network approach to predict shield tunneling-induced ground settlements. Tunn. Undergr. Space Technol. 2020, 106, 103594. [Google Scholar] [CrossRef]
  24. Lü, Q.; Chan, C.; Low, B. Probabilistic evaluation of ground-support interaction for deep rock excavation using artificial neural network and uniform design. Tunn. Undergr. Space Technol. 2012, 32, 1–18. [Google Scholar] [CrossRef]
  25. Koopialipoor, M.; Fahimifar, A.; Ghaleini, E.; Momenzadeh, M.; Armaghani, D. Development of a new hybrid ANN for solving a geotechnical problem related to tunnel boring machine performance. Eng. Comput. 2020, 36, 345–357. [Google Scholar] [CrossRef]
  26. Eckle, K.; Schmidt-Hieber, J. A comparison of deep networks with ReLU activation function and linear spline-type methods. Neural Netw. 2019, 110, 232–242. [Google Scholar] [CrossRef] [PubMed]
  27. Schmidt-Hieber, J. Nonparametric regression using deep neural networks with ReLU activation function. Ann. Statist. 2020, 48, 1875–1897. [Google Scholar]
  28. Yazan, E.; Talu, M. Comparison of the Stochastic Gradient Descent based Optimization Techniques. In Proceedings of the 2017 International Artificial Intelligence and Data Processing Symposium (IDAP), Malatya, Turkey, 16–17 September 2017; pp. 1–5. [Google Scholar]
  29. Kingma, D.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
  30. Zaheer, R.; Shaziya, H. A Study of the Optimization Algorithms in Deep learning. In Proceedings of the 2019 Third International Conference on Inventive Systems and Control (ICISC), Coimbatore, India, 10–11 January 2019; pp. 536–539. [Google Scholar]
  31. Glorot, X.; Bengio, Y. Understanding the Difficulty of Training Deep Feedforward Neural Networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, Chia Laguna Resort, Sardinia, Italy, 13–15 May 2010; pp. 249–256. [Google Scholar]
  32. Koturwar, S.; Merchant, S. Weight initialization of Deep Neural Networks (DNNs) using data statistics. arXiv 2017, arXiv:1710.10570. [Google Scholar]
Figure 1. Flow diagram of prediction method for Loudness N10.
Figure 2. Experimental setup: (a) sensor position; (b) test equipment.
Figure 3. Characteristics of added noise in raw data: (a) time domain; (b) frequency domain.
Figure 4. Comparison of correlation coefficient.
Figure 5. Schematic of K-fold cross-validation.
Figure 6. Structure of traditional DNN model.
Figure 7. Prediction error by linear regression.
Figure 8. Prediction error by nonlinear regression.
Figure 9. Prediction error by DNN using hold-out.
Figure 10. Prediction error by DNN using K-fold cross-validation.
Figure 11. Comparison of the performance of each model.
Figure 12. Results of the experiment: (a) loss; (b) root mean square error.
Table 1. Normalized value of #D1.

Metrics   P1     P2     P3     P4     P5     P6     P7     P8     P9
M1        0.28   1.00   0.42   0.37   0.66   0.24   0.54   0.00   0.22
M2        0.48   1.00   0.26   0.47   0.77   0.35   0.60   0.00   0.23
M3        0.47   1.00   0.44   0.51   0.76   0.38   0.68   0.00   0.21
M4        0.66   1.00   0.49   0.66   0.88   0.54   0.82   0.00   0.00
M5        0.45   1.00   0.44   0.60   0.93   0.42   0.59   0.00   0.15
M6        0.36   1.00   0.11   0.11   0.21   0.01   0.19   0.00   0.38
M7        0.14   0.00   0.39   0.22   0.46   0.17   0.04   1.00   0.38
M8        0.57   1.00   0.27   0.35   0.38   0.18   0.00   0.25   0.65
M9        0.65   0.00   0.77   0.74   0.50   0.85   0.54   1.00   0.84
M10       0.77   0.00   0.88   0.89   0.78   0.95   0.73   1.00   0.81
Table 2. Analysis of variance (ANOVA) of #D1.

       M1      M2      M3      M4      M5      M6      M7      M8      M9      M10
M1     0.08
M2     0.07    0.08
M3     0.07    0.08    0.08
M4     0.08    0.09    0.09    0.11
M5     0.08    0.08    0.08    0.10    0.09
M6     0.06    0.06    0.05    0.04    0.05    0.08
M7    −0.05   −0.05   −0.06   −0.07   −0.05   −0.04    0.08
M8     0.04    0.04    0.03    0.01    0.03    0.07   −0.02    0.08
M9    −0.07   −0.07   −0.07   −0.07   −0.07   −0.07    0.05   −0.05    0.08
M10   −0.07   −0.06   −0.06   −0.06   −0.06   −0.08    0.04   −0.06    0.07    0.08
Table 3. Correlation analysis of #S1.

       M1      M2      M3      M4      M5      M6      M7      M8      M9      M10
M1     1.00
M2     0.95    1.00
M3     0.95    0.98    1.00
M4     0.83    0.91    0.94    1.00
M5     0.90    0.96    0.97    0.94    1.00
M6     0.83    0.71    0.65    0.43    0.55    1.00
M7    −0.66   −0.68   −0.71   −0.70   −0.61   −0.49    1.00
M8     0.56    0.45    0.36    0.14    0.33    0.88   −0.24    1.00
M9    −0.99   −0.95   −0.93   −0.80   −0.87   −0.86    0.61   −0.60    1.00
M10   −0.93   −0.80   −0.77   −0.59   −0.68   −0.96    0.53   −0.75    0.94    1.00
Table 4. Architecture of DNN.

Layer         Output Shape   Param #
Input Layer   (None, 2)      0
Dense 1       (None, 512)    1536
Dense 2       (None, 256)    131,328
Dense 3       (None, 128)    32,896
Dense 4       (None, 64)     8256
Dense 5       (None, 32)     2080
Output        (None, 1)      33

Total params: 176,129
Trainable params: 176,129
Non-trainable params: 0
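The parameter counts in Table 4 follow directly from the layer widths: each fully connected layer has inputs × units weights plus one bias per unit. A quick check of the totals:

```python
def dense_params(n_in: int, n_out: int) -> int:
    """Weights plus biases for a fully connected layer."""
    return n_in * n_out + n_out

# Layer widths from Table 4: 2 inputs -> 512 -> 256 -> 128 -> 64 -> 32 -> 1 output.
widths = [2, 512, 256, 128, 64, 32, 1]
per_layer = [dense_params(a, b) for a, b in zip(widths, widths[1:])]
total = sum(per_layer)
# per_layer == [1536, 131328, 32896, 8256, 2080, 33]; total == 176129
```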
Table 5. Dataset split ratio.

Dataset      Percentage   Number of Samples
Training     64%          783
Validation   16%          196
Testing      20%          245
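The 64/16/20 split in Table 5 is consistent with two successive 80/20 partitions (0.8 × 0.8 = 0.64 of the full set ends up in training). A sketch, assuming a hypothetical total of 1224 samples (the row sum of Table 5):

```python
import numpy as np

n_total = 1224  # hypothetical sample count consistent with Table 5's row sum

rng = np.random.default_rng(0)
idx = rng.permutation(n_total)

# First hold out 20% for testing, then 20% of the remainder for validation.
n_test = round(0.20 * n_total)
n_val = round(0.20 * (n_total - n_test))
test_idx = idx[:n_test]
val_idx = idx[n_test:n_test + n_val]
train_idx = idx[n_test + n_val:]
```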
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Nam, J.; Kim, S.; Ko, D. Quantitative Prediction and Analysis of Rattle Index Using DNN on Sound Quality of Synthetic Sources with Gaussian Noise. Sensors 2024, 24, 5128. https://doi.org/10.3390/s24165128
