Article

Research on Sound Imagery of Electric Shavers Based on Kansei Engineering and Multiple Artificial Neural Networks

1
School of Design, Straits Institute of Technology, Fujian University of Technology, Fuzhou 350011, China
2
Design Innovation Research Center of Humanities and Social Sciences, Research Base of Colleges and Universities in Fujian Province, Fuzhou 350118, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(20), 10329; https://doi.org/10.3390/app122010329
Submission received: 7 September 2022 / Revised: 5 October 2022 / Accepted: 10 October 2022 / Published: 13 October 2022
(This article belongs to the Special Issue User Experience for Advanced Human-Computer Interaction II)

Abstract:
The electric shaver market in China reached 26.3 billion RMB by 2021. Nowadays, in addition to functional satisfaction, consumers are increasingly focused on the emotional imagery conveyed by products through multiple senses; electric shavers are not only shaped to attract consumers, but their product sound also conveys a unique emotional imagery. Based on Kansei engineering and artificial neural networks, this research explored the emotional imagery conveyed by the sound of electric shavers. First, we collected a wide sample of electric shavers on the market (230 types) and obtained consumers' perceptual vocabulary (85,710 items) through a web crawler. The multidimensional scaling method and cluster analysis were used to condense the sample into 34 representative samples and 3 groups of representative Kansei words; then, the semantic differential method was used to assess the users' emotional evaluation values. The sound design elements (items and categories) of the samples were collected and classified using HEADREC devices and ArtemiS 13.6 software, and, finally, four multiple linear and non-linear correlation prediction models between the sound design elements of the electric shaver and the users' emotional evaluation values were established by quantification theory type I, a general regression neural network, a back propagation neural network, and a genetic algorithm-based BPNN. The models were validated by paired-sample t-test; all of them had good reliability, and the genetic algorithm-based BPNN had the best accuracy. The aim was to apply the higher-accuracy prediction models to the prediction of electric shaver sound imagery while giving specific and accurate sound design metrics and references.

1. Introduction

With the upgrading of consumer demand, consumers are no longer concerned with products that tend to be homogeneous in terms of function, but with products that convey emotional imagery to meet user needs [1]. Emotional imagery is a cognitive experience proposed by cognitive psychology; it represents the emotional experience that the information conveyed by objects brings to the user [2,3]. As research on emotion deepens, the means of conveying product emotional imagery are no longer limited to innovation in product modelling. Sound imagery, the introspective persistence of the auditory experience, is an important channel for the transmission of perceptual information, and it can elicit different emotional experiences [4]. Auditory perception, as an intuitive way of experiencing a product, has become increasingly important in shaping consumers' emotional experience, and the accuracy with which a product's sound conveys imagery matters accordingly [5].
As an important approach to emotional design, Kansei engineering aims to quantify design elements and users’ evaluation values to build predictive models for correlations, and it has been widely used in product modelling innovation [6,7,8]. However, there is little research in the area of sound, which is equally important for conveying emotional imagery. As sound becomes increasingly important in product design, the application of Kansei engineering to the research of product sound can objectively and accurately guide the optimization of sound design and the ability of products to convey emotional imagery to a greater extent.
Traditional Kansei engineering mainly uses the KJ method, literature research, and user interviews to collect the perceptual vocabulary required for product modelling, and it has accumulated a large number of imagery adjectives for product modelling [9,10]. However, little research in Kansei engineering focuses on sound imagery, so studies may lack objectivity because the traditional methods do not collect sound imagery. With the development of computer technology and the Internet, consumers now give feedback on product imagery when they purchase products [11].
This research used a Python crawler and natural language processing (NLP) to mine and analyse users' reviews and collect users' sound imagery in a more specific way, so as to solve the problem of time-consuming and inefficient preliminary data collection in Kansei engineering and to provide a complete and realistic picture of users' emotional evaluation of the product's sound.
For the construction of posterior prediction models, traditional Kansei engineering uses the quantification theory type I (QTTI) and a back propagation neural network (BPNN) to construct linear and non-linear correlation models [8,12].
Although such methods can reveal the difference in prediction effectiveness between linear and non-linear approaches and give better design metrics and predictions of users' emotional evaluations, the BPNN construction algorithm is based on gradient descent and seeks a locally optimal solution; this can cause the model to converge to local extreme points, limiting improvements in prediction accuracy [13].
Therefore, this research introduces the general regression neural network (GRNN), which is based on the mathematical-statistical theory of multiple regression analysis, and the genetic algorithm-based BPNN (GA-BPNN) to build two further kinds of prediction models. In this way, it is possible to explore the application of different classes of neural networks to the research of sound in the field of Kansei engineering and, ultimately, to compare the prediction results of the four models and to apply the best model to guide the optimal design of product sound.

2. Literature Review

2.1. Web Crawler and Natural Language Processing

Web crawler and natural language processing are data acquisition and processing techniques applied to crawl online shopping platforms for real consumer reviews [14]. By using Python and Word2vec, this research can obtain first-hand, real users' emotional evaluations after using the product, avoiding the shortcoming of insufficient objectivity in preliminary data collection and improving the accuracy of the predictive models built between design elements and users' emotional evaluation values. Lai used the natural language processing model Word2vec [15,16] to process the crawled user reviews of new energy vehicles, enabling automotive companies to obtain a clearer picture of what users really think of the car's styling, thus providing objective guidance and suggestions for subsequent product optimization design [17]. Liu used a Python crawler to collect detailed reviews of smartwatches on e-commerce websites and mined users' emotional needs based on the word frequency of emotional words, thereby building a library of users' emotional needs [11].
This research is based on a Python web crawler's ability to automatically traverse and download user comments on Chinese e-commerce websites, providing a faster method of gathering users' emotional requirements and more complete prerequisite information for Kansei engineering.

2.2. Kansei Engineering and Quantification Theory Type I

Kansei engineering is a qualitative and quantitative product emotional research method proposed by Mitsuo Nagamachi in Japan that aims to guide the design process by quantitatively modelling the correlation between design elements and users' emotional evaluation values [1]. QTTI is a common method used for the multiple linear regression analysis of Kansei engineering to establish correlation models, often using design elements (product modelling) as multivariate independent variables mapped onto users' perceptual imagery (Kansei words) as dependent variables to form a multiple linear regression equation, and it has been widely used in the field of form design [6,7,8,9]. Its advantage is that the predictive models constructed have excellent reliability and accuracy and provide clear design metrics, avoiding black-box operations in the design process. Based on Kansei engineering, Mele used QTTI to quantify the design elements that affect the user's emotional imagery of a kettle and combined it with computer technology to build a design aid system that helps designers balance the emotional impact of each design element [7]. Myung Hwan Yun used Kansei engineering and the semantic differential (SD) method to construct a correlation model between vehicle instrument panel design and users' emotional evaluation values, which can provide objective advice and guidance on vehicle instrument panel design [8]. Therefore, this research constructed a prediction model between product sound and users' emotional evaluation values through QTTI and explored its reliability and prediction accuracy.

2.3. General Regression Neural Network

GRNN is a modified model of the radial basis function network based on mathematical statistics, proposed by Professor Specht in the United States [18]; it takes multiple linear regression analysis as its theoretical basis and approximates the function by activating neurons in order to build the prediction model. GRNN predicts the outcome from the independent variables of the sample data and finally calculates the regression values between the independent and dependent variables. The advantage of GRNN is that it has good predictive effects with small sample data, and it has been widely used in design studies in different fields. Tomasz applied GRNN and BPNN to the optimization of yacht design parameters and confirmed that GRNN has excellent mapping capabilities with a high degree of fault tolerance, making it suitable for building correlation prediction models for products [19]. Salman used GRNN and Kansei engineering to build a predictive model to assist companies in making decisions about refrigerator design solutions, thereby increasing consumer satisfaction with the optimized design [20].

2.4. Back Propagation Neural Network and Genetic Algorithm

BPNN is a multilayer feedforward artificial neural network trained with the error back propagation algorithm proposed by Rumelhart and McClelland [21]; it is trained in a model structure of input, hidden, and output layers so as to simulate the human neural learning process and establish non-linear mapping relationships. BPNN is often used to construct predictive models in a non-linear manner to assist in generating design decisions [12,22,23]. The genetic algorithm (GA) is based on the natural evolutionary principle of "survival of the fittest" and builds an "artificial genetic system" through gene crossover, mutation, and replication, which eventually converges to a near-optimal solution of the problem according to specific convergence criteria. It can be used to optimize BPNN to solve the problem of non-convergence or falling into local extremes, thereby improving prediction accuracy. By comparing the prediction accuracy of BPNN and GA-BPNN on agricultural tractor morphology design, Yu-En Yeh demonstrated that the GA can effectively improve the prediction accuracy of BPNN on modelling morphology [24]. Runliang used GA-BPNN, with watch design elements and users' emotional evaluation values as training samples, to develop a product customization system that reflects user satisfaction, reducing the difficulty for companies of predicting user satisfaction [22].
In summary, the QTTI multiple linear regression model and the GRNN, BPNN, and GA-BPNN non-linear regression models have been well studied in perceptual design, but which is the best method for constructing perceptual prediction models for sound? In this research, these four prediction models were constructed, and the reliability and accuracy of their predictions were verified by t-test and mean-error comparison in order to apply the optimal prediction solution to the imagery prediction and design guidance of sound Kansei engineering.
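The paired-sample t-test used here to check model reliability compares each model's predictions with the observed evaluation values pairwise. A minimal stdlib-only sketch (the input values are illustrative, not the study's data):

```python
import math

def paired_t_test(predicted, observed):
    """Paired-sample t statistic: t = mean(d) / (s_d / sqrt(n)),
    where d are the pairwise differences prediction - observation."""
    assert len(predicted) == len(observed)
    d = [p - o for p, o in zip(predicted, observed)]
    n = len(d)
    mean_d = sum(d) / n
    # Sample standard deviation of the differences (ddof = 1)
    var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)
    return mean_d / math.sqrt(var_d / n)

# If predictions track the evaluations closely, |t| stays below the
# critical value and the model is judged reliable
t = paired_t_test([4.1, 3.8, 5.2, 2.9], [4.0, 3.9, 5.0, 3.1])
```

The resulting t is then compared against the critical value for n − 1 degrees of freedom at the chosen significance level.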

3. Representative Samples and Kansei Words Selection

In this research, the quantitative and qualitative analysis of electric shaver sound design was carried out by combining QTTI, GRNN, BPNN, and GA-BPNN through Kansei engineering. The specific research can be divided into the following aspects (as shown in Figure 1): Ⅰ. Representative samples were selected through multidimensional scaling and cluster analysis. Ⅱ. Representative Kansei words were chosen by web crawler and the shortest Euclidean distance method. Ⅲ. Items and categories were deconstructed based on product sound design elements. Ⅳ. The prediction models were built based on QTTI, GRNN, BPNN, and GA-BPNN. Ⅴ. The accuracy of the four prediction models was compared and analyzed to select the optimal prediction model.

3.1. Representative Sample Screening

Different brands of electric shavers differ in sound imagery; in order for this research to cover all brands of electric shavers, a total of 230 models were obtained after collecting an extensive sample and eliminating images with overly complex backgrounds, as shown in Figure 2.
After two rounds of screening (eliminating those with high similarity), 80 samples were obtained. After grey-scale processing of the samples, 25 design postgraduates and industrial design experts with a relevant product design background were invited to classify the 80 samples according to their perceived similarity into 12 to 19 groups, which were coded into an 80 × 80 dissimilarity matrix. The six-dimensional coordinates of the samples were analyzed in SPSS 23.0 software, with a stress coefficient of 0.04334 and RSQ = 0.97915; finally, cluster analysis with Ward's method yielded a clustering tree of 17 clusters, as shown in Figure 3.
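The multidimensional scaling step can be illustrated with classical (Torgerson) MDS, which embeds a dissimilarity matrix into low-dimensional coordinates via double-centering. This is a sketch of the idea only (SPSS's ALSCAL procedure minimizes stress iteratively rather than using this closed form, and the toy 3 × 3 matrix stands in for the study's 80 × 80 matrix):

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical MDS: embed an n x n dissimilarity matrix D into
    k-dimensional coordinates via double-centering and eigendecomposition."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centered Gram matrix
    eigvals, eigvecs = np.linalg.eigh(B)
    order = np.argsort(eigvals)[::-1][:k]        # keep the k largest eigenvalues
    L = np.sqrt(np.clip(eigvals[order], 0, None))
    return eigvecs[:, order] * L                 # n x k sample coordinates

# Toy dissimilarity matrix for three samples (the study uses 80 samples)
D = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 1.0],
              [2.0, 1.0, 0.0]])
coords = classical_mds(D, k=2)
```

The resulting coordinates would then feed Ward's hierarchical clustering to form the 17 clusters.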
In order to balance the representativeness of the selected samples and the accuracy of the prediction models, five experts with a relevant product design background were invited to vote for two samples in each of the 17 clusters, and the two samples with the highest votes in each cluster were selected [25], making a total of 34 representative samples; one sample from each of four clusters was then set aside as a validation sample, as shown in Figure 4.

3.2. Web Crawler Collect Representative Kansei Words

The basic process of obtaining web data through a Python web crawler is as follows: ⅰ. send request: initiate an HTTP request to the server through the Urllib library or the requests library in Python; ⅱ. obtain web page: after the server responds normally, the returned content may include HTML, a Json string, or binary data; ⅲ. parse the page: HTML can be parsed with a web page parser, while Json data can be converted into a Json object for parsing; ⅳ. extract and store the content: after parsing, the data are saved, and the crawl result is stored in text format.
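The four steps above can be sketched with the standard library alone; the review-API URL, header, and JSON field names below are hypothetical placeholders, since the actual e-commerce endpoints are not given in the text:

```python
import json
import urllib.request

# Step i: send the request (URL and User-Agent are illustrative placeholders)
def fetch(url):
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")       # step ii: obtain the page

# Step iii: parse the returned content (a Json string in this sketch;
# the "comments"/"content" keys are assumptions, not a real site's schema)
def parse_reviews(payload):
    data = json.loads(payload)
    return [item["content"] for item in data.get("comments", [])]

# Step iv: extract and store the crawl result in text format
def save_reviews(reviews, path):
    with open(path, "w", encoding="utf-8") as f:
        f.write("\n".join(reviews))

sample = '{"comments": [{"content": "sound is quiet"}, {"content": "a bit sharp"}]}'
reviews = parse_reviews(sample)
```

Looping `fetch`/`parse_reviews` over 50 review pages per product, as the study describes, accumulates the raw comment corpus.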
The keyword "electric shavers" was used to crawl a total of 18 brands on the e-commerce platform; for each brand, a sample purchase page of 10 electric shavers was selected, and 50 pages of user reviews were collected in descending order. The function was called to loop through the crawl process with the page count set to 50, resulting in a total of 85,710 valid comments.

3.3. Representative Kansei Words Screening

The collected comment texts were extracted through the Word2Vec neural network to find the Kansei words regarding the evaluation of sound imagery. Then, the Kansei words were objectively filtered again through cluster analysis and the shortest Euclidean distance method, and, finally, three representative Kansei words were identified; the details are as follows.
The text of the comments was cleaned using Jieba word segmentation and data cleaning, and a semantic network was generated using the co-occurrence frequency matrix of high-frequency vocabulary to analyze the distribution of users' comments on sound imagery. The Word2Vec model was then used with "sound evaluation" as the search term, the output part of speech was restricted to adjectives using the skip-gram algorithm, and, finally, a perceptual vocabulary of 82 sound imagery phrases was extracted.
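The co-occurrence matrix behind the semantic network can be sketched with the standard library; the toy comments below are pre-tokenized English stand-ins (the real pipeline segments Chinese text with Jieba and then trains Word2Vec, which this sketch does not attempt):

```python
from collections import Counter
from itertools import combinations

def cooccurrence(tokenized_comments, top_n=50):
    """Build a co-occurrence frequency matrix of high-frequency words:
    two words co-occur when they appear in the same comment."""
    freq = Counter(w for doc in tokenized_comments for w in doc)
    vocab = [w for w, _ in freq.most_common(top_n)]
    pairs = Counter()
    for doc in tokenized_comments:
        present = sorted(set(doc) & set(vocab))
        for a, b in combinations(present, 2):    # unordered word pairs
            pairs[(a, b)] += 1
    return vocab, pairs

# Hypothetical pre-tokenized comments about shaver sound
docs = [["sound", "quiet", "comfortable"],
        ["sound", "sharp"],
        ["quiet", "comfortable"]]
vocab, pairs = cooccurrence(docs)
```

Edges with high pair counts become the links of the semantic network used to inspect how users talk about sound.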
In order to reduce the cognitive load of the participants in the later SD rating of perceptual imagery, and to eliminate semantically similar and ambiguous phrases, 15 Kansei words were first obtained through two rounds of screening by the focus group method and then combined with the samples to form a Likert-7-scale questionnaire, and 50 participants with long experience of using the products were invited to evaluate them; the 50 male participants involved in this research had at least 3 years of experience with electric shavers. The age range was 20–55 years, with a mean age of 28.5 years. The statistics were analyzed by principal component analysis with maximum variance (varimax) rotation, extracting components with eigenvalues above 4. Observation of the scree plot supported the extraction of 3 factors and yielded a rotated component matrix, which, after cluster analysis, allowed the classification of the 15 Kansei words into three clusters, as shown in Figure 5.
Within each cluster, the word with the shortest Euclidean distance was selected as the representative Kansei word and paired with its antonym. The final three sets of representative Kansei words were: "Weak-Powerful", "Inexpensive-Premium", and "Annoyance-Comfortable", as shown in Table 1.
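The shortest-Euclidean-distance selection can be sketched as picking, within a cluster, the word whose factor loadings lie closest to the cluster centroid; the words and 2-factor loadings below are hypothetical illustrations, not the study's data:

```python
import numpy as np

def representative_word(words, loadings):
    """Select the Kansei word whose factor loadings have the shortest
    Euclidean distance to the cluster centroid."""
    X = np.asarray(loadings, dtype=float)
    centroid = X.mean(axis=0)
    dists = np.linalg.norm(X - centroid, axis=1)
    return words[int(np.argmin(dists))]

# Hypothetical 2-factor loadings for one cluster of candidate words
words = ["Comfortable", "Pleasant", "Soothing"]
loadings = [[0.82, 0.10], [0.80, 0.12], [0.60, 0.35]]
rep = representative_word(words, loadings)
```

Repeating this per cluster yields one representative word per group, which is then paired with its antonym for the SD scale.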

3.4. Classification of Sound Design Elements

By dividing sound design elements into five items (including A-weighted sound pressure level, loudness, sharpness, roughness, and tonality), this research constructed linear and non-linear correlation models between sound elements and Kansei words through QTTI and artificial neural networks.
A-weighted sound pressure level [26], denoted LA and measured in dB, mainly reflects the effect of frequency on the perceived loudness of sound. The sound pressure level can be derived from the effective value of the sound pressure (ρe). The A-weighted sound pressure level is a more accurate representation of the impact of loudness at different frequencies [27], as shown in Equation (1), and this research adopts the IEC 61672-1 [28] standard as the calculation model, as shown in Equation (2).
In the formulas, n is the total number of octave bands; Lpi is the sound pressure level in the i-th octave band, and ΔAi is the A-weighting correction value for that band. A1000 is a normalization constant, in decibels, representing the electrical gain needed to provide a frequency weighting of zero decibels at 1 kHz [28]; A1000 = −2.000 dB.
$$L_A = 10\lg\sum_{i=1}^{n} 10^{\,0.1\,(L_{pi} + \Delta A_i)} \quad (1)$$
$$A(f) = 20\lg\frac{f_4^{2}\,f^{4}}{(f^{2}+f_1^{2})\,(f^{2}+f_2^{2})^{1/2}\,(f^{2}+f_3^{2})^{1/2}\,(f^{2}+f_4^{2})} - A_{1000} \quad (2)$$
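Equations (1) and (2) can be sketched numerically as follows. The pole frequencies f1–f4 are the IEC 61672-1 values, which are an assumption supplied here (the text cites the standard but does not list them):

```python
import math

# Pole frequencies from IEC 61672-1 (Hz); A1000 normalizes A(1000 Hz) to ~0 dB
F1, F2, F3, F4 = 20.598997, 107.65265, 737.86223, 12194.217
A1000 = -2.000  # dB, as stated in the text

def a_weighting(f):
    """A(f), Equation (2): frequency weighting in dB."""
    num = (F4 ** 2) * (f ** 4)
    den = ((f**2 + F1**2) * math.sqrt(f**2 + F2**2)
           * math.sqrt(f**2 + F3**2) * (f**2 + F4**2))
    return 20 * math.log10(num / den) - A1000

def a_weighted_spl(band_levels, corrections):
    """L_A, Equation (1): energetic sum of the corrected octave-band levels."""
    return 10 * math.log10(sum(10 ** (0.1 * (lp + da))
                               for lp, da in zip(band_levels, corrections)))
```

By construction, the weighting is approximately 0 dB at 1 kHz, and a single band passes through unchanged when its correction is zero.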
Loudness [29], for which the symbol is N, in sone is used to reflect the user’s perception of volume level, and, in this research, electric shaver loudness affects the user’s perception of power strength. It is usually calculated using the method of deriving loudness (N) based on the specific loudness (N’) [30], where the specific loudness reflects the regional distribution of loudness in the frequency band in soneG/Bark. The specific loudness can be calculated from the excitation (E), as shown in Equation (3) [27].
In the formula, N′ stands for the critical-band specific loudness; z stands for the corresponding critical band (a noise band of a certain width with a pure-tone frequency as its central frequency); ETQ stands for the excitation at the absolute hearing threshold; and E0 stands for the excitation at the reference sound intensity. Integrating the specific loudness over the Bark domain yields the total loudness (N).
$$N' = 0.08\left(\frac{E_{TQ}}{E_0}\right)^{0.23}\left[\left(0.5 + 0.5\,\frac{E}{E_{TQ}}\right)^{0.23} - 1\right]\ \mathrm{sone_G/Bark}, \qquad N = \int_{0}^{24\,\mathrm{Bark}} N'(z)\,dz\ \mathrm{sone} \quad (3)$$
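Equation (3) can be sketched as a specific-loudness function plus a numerical integration over the Bark axis; the excitation values passed in are illustrative placeholders (a full Zwicker implementation derives E(z) from the third-octave spectrum, which is outside this sketch):

```python
def specific_loudness(E, E_TQ, E_0):
    """N'(z), Equation (3), in sone_G/Bark (Zwicker's model)."""
    return 0.08 * (E_TQ / E_0) ** 0.23 * ((0.5 + 0.5 * E / E_TQ) ** 0.23 - 1)

def total_loudness(n_prime, dz=0.1):
    """N, Equation (3): integrate N'(z) over 0-24 Bark (rectangle rule)."""
    return sum(n_prime) * dz

# At the hearing threshold (E == E_TQ) the specific loudness vanishes
n0 = specific_loudness(E=1.0, E_TQ=1.0, E_0=1.0)
```

This threshold behaviour (N′ = 0 when the excitation equals the threshold excitation) is a quick sanity check on the formula.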
Sharpness [31], for which the symbol is S, in acum is used to describe the timbral characteristics of a sound, with the user perceiving more sharpness for higher frequency sounds and less vice versa. In this research, the Aures algorithm, based on the optimized Von Bismarck model [32], was used, and its calculation model in HEAD Acoustics’ Artemis software [33,34] takes into account the effect of loudness, making the results more accurate, as shown in Equation (4).
In the formula, K1 is a weighting factor equal to 0.11, N is the total loudness, N′ is the critical-band specific loudness, and z is the critical-band Bark value.
$$S = K_1 \times \frac{\displaystyle\int_{0}^{24\,\mathrm{Bark}} N'\,z\,g(z)\,dz}{\ln\!\left(\dfrac{N}{20} + 1\right)}\ \mathrm{acum}, \qquad g(z) = \begin{cases} 1, & z \le 16 \\ 0.0625 \times e^{0.1733\,z}, & z > 16 \end{cases} \quad (4)$$
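Equation (4) can be sketched directly from a sampled specific-loudness curve; the uniform N′ values in the example are illustrative, and the constants follow the formula as given in the text:

```python
import math

K1 = 0.11  # weighting factor from Equation (4)

def g(z):
    """High-frequency weighting g(z) from Equation (4)."""
    return 1.0 if z <= 16 else 0.0625 * math.exp(0.1733 * z)

def sharpness(n_prime, dz=0.1):
    """S, Equation (4), in acum: weighted first moment of the specific
    loudness over Bark, normalized by ln(N/20 + 1)."""
    zs = [i * dz for i in range(len(n_prime))]
    N = sum(n_prime) * dz                        # total loudness
    num = sum(nv * z * g(z) for nv, z in zip(n_prime, zs)) * dz
    return K1 * num / math.log(N / 20 + 1)
```

The weighting g(z) is continuous at z = 16 only approximately; above 16 Bark it grows exponentially, which is what makes high-frequency energy read as "sharp".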
Roughness [35], denoted R and measured in asper, describes the fluctuation characteristics of a sound. Generally, when the modulation frequency is below 20 Hz, fluctuation strength dominates, and above 20 Hz, roughness dominates. A larger roughness conveys fluctuating and complex feelings, while a smaller one conveys stable feelings. As electric shaver sounds are modulated above 20 Hz, this research did not consider fluctuation strength. The traditional roughness calculation method was proposed by Aures and optimized by Zwicker and Fastl [36], as shown in Equation (5).
In the formula, fmod denotes the modulation frequency; ΔLE(z) denotes the excitation-level difference in the characteristic frequency band; and N′max and N′min denote the maximum and minimum values of the specific loudness, respectively.
$$R = 0.3\,f_{mod}\int_{0}^{24\,\mathrm{Bark}} \Delta L_E(z)\,dz\ \mathrm{asper}, \qquad \Delta L_E(z) = 20\log_{10}\frac{N'_{max}(z)}{N'_{min}(z)} \quad (5)$$
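Equation (5) can be sketched as follows; the per-band specific-loudness extrema passed in are illustrative (in practice they come from the time-varying specific loudness computed by the analysis software, and the constant 0.3 carries the unit scaling of fmod):

```python
import math

def roughness(f_mod, n_max, n_min, dz=0.1):
    """R, Equation (5), in asper. n_max/n_min are the per-Bark-band maximum
    and minimum specific loudness; Delta L_E = 20*log10(N'max/N'min)."""
    delta_L = [20 * math.log10(a / b) for a, b in zip(n_max, n_min)]
    return 0.3 * f_mod * sum(delta_L) * dz

# An unmodulated sound (no loudness fluctuation) has zero roughness
r0 = roughness(f_mod=70, n_max=[1.0] * 240, n_min=[1.0] * 240)
```

When the specific loudness does not fluctuate (N′max = N′min in every band), the masking depth is zero and so is the roughness.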
In this research, the roughness procedure of the software ArtemiS 13.6 (version 13.6.22143.02, HEAD Acoustics Company, Herzogenrath, Germany) was used, as shown in Figure 6 (data from HEAD acoustics company), and the results were calculated based on loudness rather than sound pressure alone and, therefore, were more appropriate to human ear perception [37].
Tonality [36], for which the symbol is T, in tu is a sound quality metric aimed at identifying and quantifying the strength of tones in a given noise spectrum. Ideally, tonality metrics should align well with the human perception of tones and help the user to differentiate between tones that may be objectionable and those that may not be apparent to the listener (information from SIEMENS company). The current method of calculation was proposed by Terhardt and Aures, as shown in Equation (6).
In the formula, W1(Δzi) is the weighting for the critical-band difference of the i-th single-frequency component; W2(fi) is the weighting for the frequency of the i-th component; and W3(ΔLi) is the weighting for the sound-level surplus of the i-th component.
$$T = \sum_{i=1}^{N}\left[W_1(\Delta z_i)\,W_2(f_i)\,W_3(\Delta L_i)\right]^{2} \quad (6)$$

3.5. Sound Sample Collection and Analysis

In this research, the sounds of the 34 electric shavers were recorded in a silent environment without contact with the skin. The samples were placed at a distance of 15 cm from the HEADREC headset, sampled by the HEADREC device, and analyzed with ArtemiS 13.6 software for sound metrics. The five items included in this research were A-weighted sound pressure level (LA), loudness (N), sharpness (S), tonality (T), and roughness (R).
As shown in Figure 7, “Applsci 12 10329 i001 Ch1” represents the data collected in the left ear of the device, and “Applsci 12 10329 i002 Ch2” represents the data collected in the right ear of the device. The average of the data recorded in the left and right ears was taken after obtaining the single values [38].
The 34 electric shaver operating sounds were collected as reference material and then collated and recorded for five items, resulting in data corresponding to each item in the sample, as shown in Table 2.
Five master's students with a music background were invited to form a focus group, and the given samples were divided into categories according to the different imagery conveyed by different parameter intervals. The final table of design elements is shown in Table 3.

3.6. Users’ Emotional Evaluation Values Questionnaires

In order to obtain accurate user emotional evaluation values of sound, this research used the SD method to pair and combine each of the 34 audios with three pairs of Kansei words and form the Likert-7 scale questionnaire.
Participants for this survey were recruited offline, and the survey was conducted from May to July 2022. The playback headphones were SHP9500 headphones, sampled by a binaural microphone, with a headphone equalizer (hardware) and ArtemiS (software) providing general equalization at the playback level [39]. A total of 142 men (age range 20–56 years, average age 27.8 years) with experience in using electric shavers completed the test, with an average questionnaire length of 15.3 min. Then, 126 valid questionnaires were obtained after eliminating 16 questionnaires that were incomplete or took less than 10 min to complete. Users were asked to rate the perception of sound on a Likert-7 scale to indicate different levels of consumer psychological perception. The final sound elements and questionnaire rating values were used in subsequent linear and non-linear regression analyses to construct predictive models.

4. Predictive Modelling of Sound Design Elements and User’s Emotional Evaluation

4.1. QTTI Prediction Model Construction and Analysis

QTTI is often used to establish a linear relationship between quantitative and qualitative variables in the form of a multiple linear regression equation (as shown in Equation (7)). The equation was constructed using the sound design elements (13 categories) as the independent variables and each group of users' emotional evaluation values as the dependent variable. The coefficient aab represents the category score point of the b-th category under the a-th item; X11, X12 …… X52, X53 represent the individual categories; and k denotes the constant term of the multiple linear regression equation for the x-th group of Kansei words [40].
$$Y_x = a_{11}X_{11} + a_{12}X_{12} + a_{13}X_{13} + a_{21}X_{21} + a_{22}X_{22} + a_{23}X_{23} + a_{31}X_{31} + a_{32}X_{32} + a_{33}X_{33} + a_{41}X_{41} + a_{42}X_{42} + a_{51}X_{51} + a_{52}X_{52} + k \quad (7)$$
The design element codes of the 30 experimental samples and user emotional evaluation values were analyzed by SPSS 23.0 for QTTI. A predictive model was constructed, and the results are shown in Table 4.
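The QTTI fit is, computationally, ordinary least squares on dummy-coded categories. A minimal numpy sketch (the dummy codes and evaluation values below are hypothetical, and only one two-category item is shown instead of the study's 13 categories):

```python
import numpy as np

def fit_qtti(X_dummy, y):
    """Fit the QTTI regression Y = sum(a_ab * X_ab) + k.
    X_dummy: n_samples x n_categories 0/1 dummy codes.
    Returns (category score points, constant term k, R^2)."""
    A = np.hstack([X_dummy, np.ones((X_dummy.shape[0], 1))])  # add constant
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)              # least squares
    pred = A @ beta
    ss_res = np.sum((y - pred) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return beta[:-1], beta[-1], 1 - ss_res / ss_tot

# Hypothetical dummy-coded sound design element (one item, two categories)
# for six samples, with illustrative emotional evaluation values
X = np.array([[1, 0], [0, 1], [1, 0], [0, 1], [1, 0], [0, 1]], dtype=float)
y = np.array([5.0, 3.0, 5.2, 2.8, 4.8, 3.2])
coeffs, k, r2 = fit_qtti(X, y)
```

Because the dummy columns of an item sum to the constant column, the design matrix is rank-deficient; `lstsq` resolves this with the minimum-norm solution, while the fitted values (and hence R²) are unaffected.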
The table above shows that the coefficient of determination R2 for "Annoyance-Comfortable" is 0.891, meaning that this multiple linear regression equation explains 89.1% of the variation in the dependent variable, while the coefficients of determination for the other two groups of Kansei words correspond to 89.9% and 85.9%, respectively; the three equations have a good fit.
By comparing the partial correlation coefficients of the items with the category score points, the degree of influence of each sound design element on the user’s imagery can be obtained.
(I) Item impact analysis
The partial correlation coefficient of the items indicates the relevance of each item to the Kansei words, with higher values indicating a stronger relevance to the imagery, i.e., a higher influence on that imagery.
The most relevant item for the word "Annoyance-Comfortable" is loudness, with a partial correlation coefficient of 0.837, the strongest correlation with this word. The order is loudness > sharpness > A-weighted sound pressure level > roughness > tonality. When conducting sound design, priority can be given to items with higher partial correlation coefficients. For the word "Inexpensive-Premium", the influence of each item is ranked as sharpness > loudness > roughness > tonality > A-weighted sound pressure level. For the word "Weak-Powerful", the influence of each item is ranked as loudness > sharpness > tonality > A-weighted sound pressure level > roughness.
(II) Category impact analysis
The category score points indicate the positive or negative correlation between each category and the word: larger positive values indicate a stronger association with the positive pole of the word pair, and larger negative values indicate a stronger association with the negative pole.
For example, in the case of the "Annoyance-Comfortable" word pair, the item with the highest partial correlation coefficient is loudness, where a positive category score point belongs to the 'comfortable' imagery, indicating that the lower the loudness, the more comfortable the imagery conveyed; conversely, the higher the loudness, the more annoying it is. The second-ranked item is sharpness, with a positive category score point indicating that the lower the sharpness, the more comfortable the conveyed imagery, and vice versa. The influence of the other categories on the word can be read in the same way (as shown in Table 4).

4.2. Multiple Linear Regression Equation Construction

Based on the QTTI multiple linear regression equation, three sets of equations between design elements and Kansei words were constructed using the 13 categories of the sample as independent variables, the users’ emotional evaluation values as dependent variables, and the category score points as coefficients, as shown in Equations (8)–(10).
Y1 "Annoyance-Comfortable"; Y2 "Inexpensive-Premium"; Y3 "Weak-Powerful":
$$Y_1 = 0.544X_{11} - 0.012X_{12} - 0.121X_{13} + 0.332X_{21} - 0.245X_{22} - 0.296X_{23} + 0.074X_{31} - 0.071X_{32} - 0.083X_{33} + 0.371X_{41} - 0.186X_{42} + 0.078X_{51} - 0.136X_{52} + 3.714 \quad (8)$$
$$Y_2 = 0.290X_{11} - 0.008X_{12} - 0.062X_{13} + 0.223X_{21} - 0.159X_{22} - 0.223X_{23} + 0.074X_{31} - 0.071X_{32} - 0.083X_{33} + 0.462X_{41} - 0.231X_{42} + 0.156X_{51} - 0.269X_{52} + 3.460 \quad (9)$$
$$Y_3 = 0.317X_{11} + 0.031X_{12} + 0.040X_{13} - 0.268X_{21} + 0.202X_{22} + 0.216X_{23} + 0.133X_{31} - 0.125X_{32} - 0.164X_{33} + 0.289X_{41} - 0.145X_{42} - 0.054X_{51} + 4.236X_{52} + 4.246 \quad (10)$$

4.3. GRNN Prediction Model Construction and Analysis

GRNN is a modified model of radial basis function networks based on mathematical and statistical foundations, using multiple linear regression analysis as its theoretical basis to approximate functions by activating neurons in order to complete predictions [14]. GRNN has strong mapping capability and high fault tolerance, so it can be used for simulation prediction with a smaller number of samples, because its approximation capability, classification capability, and learning speed are better than those of BPNN. The meaning of each symbol in the GRNN training model (Figure 8) is shown below:
P is the input vector; Q is the number of input vectors; b1 is the hidden layer threshold; ||dist|| is the distance function; R is the number of elements of each set of vectors; Lw1,1 is the weight of the input layer; Lw2,1 is the weight matrix; n2 is the output vector; a2 is the linear transfer function, as shown in Figure 8.
In this research, GRNN was used to investigate its reliability and accuracy in predicting the sound imagery of electric shavers. The spread of the model was set to 1, and 30 and 4 samples were used as training and validation samples, respectively. The prediction model was constructed after simulation training in MATLAB, as shown in Figure 9.
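A GRNN of this kind reduces to a Gaussian-kernel-weighted average of the training targets, with one pattern neuron per training sample and the spread as its only free parameter. The sketch below is an illustrative Python approximation (the study itself used MATLAB, and the data here are random placeholders):

```python
import numpy as np

def grnn_predict(X_train, y_train, x, spread=1.0):
    """GRNN prediction: kernel-weighted average of training targets."""
    # Squared Euclidean distance of the query to every training sample (||dist||).
    d2 = np.sum((X_train - x) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * spread ** 2))      # Gaussian pattern-neuron activations
    return np.dot(w, y_train) / np.sum(w)      # normalized weighted average

# Toy usage: 30 training samples with 13 dummy-coded design-element inputs
# and one Kansei evaluation value each (random placeholders, not study data).
rng = np.random.default_rng(0)
X_train = rng.integers(0, 2, size=(30, 13)).astype(float)
y_train = rng.uniform(1, 7, size=30)           # 7-point SD-scale evaluations
y_hat = grnn_predict(X_train, y_train, X_train[0], spread=1.0)
```

Because the output is a weighted average, the prediction always stays within the range of the training targets, which is one reason GRNN behaves robustly with few samples.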

4.4. BPNN Prediction Model Construction and Analysis

A three-layer network was determined, with 13 neural nodes in the input layer, 12 neural nodes in the hidden layer (determined by the hidden layer equation), and 3 neural nodes in the output layer, as shown in Table 5.
The ‘tansig’ and ‘purelin’ functions were defined as the transfer functions of the hidden and output layers, respectively. In addition, the ‘trainlm’ algorithm was selected as the training algorithm [20], and the 30 identified samples were used for training, with four samples as validation samples. Data in the input layer were normalized so that the BP neural network could recognize them. The maximum number of training epochs was set to 10,000, with a target error of 0.0001. On this basis, the prediction model was constructed after training, as shown in Figure 10.
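The forward pass of the 13-12-3 network described above can be sketched as follows. This is an illustrative Python outline rather than the authors' MATLAB network: ‘tansig’ is the hyperbolic tangent transfer function and ‘purelin’ the identity (matching MATLAB's naming), and the weights here are random placeholders rather than trained values:

```python
import numpy as np

def tansig(n):
    """MATLAB-style 'tansig' transfer function (hyperbolic tangent)."""
    return np.tanh(n)

def purelin(n):
    """MATLAB-style 'purelin' transfer function (identity)."""
    return n

rng = np.random.default_rng(42)
W1 = rng.normal(scale=0.5, size=(12, 13)); b1 = np.zeros(12)   # input -> hidden
W2 = rng.normal(scale=0.5, size=(3, 12));  b2 = np.zeros(3)    # hidden -> output

def forward(x):
    """One forward pass: 13 normalized inputs -> 12 hidden -> 3 Kansei outputs."""
    h = tansig(W1 @ x + b1)
    return purelin(W2 @ h + b2)

# Dummy-coded input vector for one sample (already in [0, 1]).
x = rng.integers(0, 2, size=13).astype(float)
y = forward(x)    # predicted values for the three Kansei word pairs
```

Training with ‘trainlm’ (Levenberg-Marquardt) would then iteratively adjust W1, b1, W2, and b2 to minimize the error against the users' evaluation values.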

4.5. GA-BPNN Prediction Model Construction and Analysis

Because the BPNN training algorithm seeks a locally optimal solution by gradient descent, different initial weights may cause the converging network to become trapped at local extreme points. To enhance the prediction performance of the BPNN, the weights and thresholds between BPNN neurons were therefore optimized using the GA in this research.
The GA-BPNN was built by tuning the neural network weights and thresholds through the fitness function, based on the previously constructed BPNN; the BP algorithm then corrected the network weights and thresholds along the negative gradient direction during training. The GA-BPNN algorithm flow is shown in Figure 11.
Selection in the GA-BPNN was implemented using the roulette wheel method, with two-point crossover and Gaussian mutation; the initial population size was 30, the maximum number of iterations was 50, and the mutation probability was 0.2. The number of training epochs was set to 1000, the learning rate to 0.01, and the minimum training error to 0.0001. The best fitness of the final generation was 1.9421 × 10−2, the mean fitness was 3.564 × 10−2, and the best validation performance was 1.8671 × 10−2 at epoch 4; the resulting GA-BPNN is shown in Figure 12.
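The GA stage can be sketched as follows. This illustrative Python outline uses the stated operators (roulette wheel selection, two-point crossover, Gaussian mutation) and the stated population size, iteration count, and mutation probability, but substitutes a toy fitness function for the network training error; it is not the authors' implementation, and a real GA-BPNN would decode each individual into the 13-12-3 network's weights and thresholds before BP fine-tuning:

```python
import numpy as np

def ga_minimize(fitness, dim, pop_size=30, generations=50, p_mut=0.2, seed=1):
    """Minimize `fitness` over real vectors of length `dim` with a simple GA."""
    rng = np.random.default_rng(seed)
    pop = rng.normal(size=(pop_size, dim))        # individuals = weight vectors
    for _ in range(generations):
        f = np.array([fitness(ind) for ind in pop])
        # Roulette wheel on inverted fitness: lower error -> larger slice.
        weights = (f.max() - f) + 1e-6
        probs = weights / weights.sum()
        parents = pop[rng.choice(pop_size, size=pop_size, p=probs)]
        # Two-point crossover between consecutive parent pairs.
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            a, b = sorted(rng.integers(0, dim, size=2))
            children[i, a:b], children[i + 1, a:b] = (
                parents[i + 1, a:b].copy(), parents[i, a:b].copy())
        # Gaussian mutation with probability p_mut per gene.
        mask = rng.random(children.shape) < p_mut
        children[mask] += rng.normal(scale=0.1, size=mask.sum())
        pop = children
    f = np.array([fitness(ind) for ind in pop])
    return pop[f.argmin()], f.min()

# Toy fitness: distance to a known target vector stands in for network MSE.
target = np.ones(5)
best, best_f = ga_minimize(lambda w: np.mean((w - target) ** 2), dim=5)
```

In the full pipeline, `best` would seed the BPNN's initial weights and thresholds, after which ‘trainlm’ refines them along the negative gradient direction.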
The design elements of the four test samples were coded as GA-BPNN input data for prediction, and the results showed that the GA-BPNN predictions were closer to the users' evaluation values than the BPNN predictions, as shown in Figure 13.

4.6. Comparative Analysis of Linear and Non-Linear Prediction Models

The experimental samples were simulated with the QTTI, GRNN, BPNN, and GA-BPNN models to obtain their respective predicted values. Paired-sample t-tests between the users' emotional evaluation values and the predicted values showed p-values all greater than 0.05, i.e., no significant difference between the predicted and assessed values, indicating that the QTTI multiple linear regression prediction and the GRNN, BPNN, and GA-BPNN non-linear predictions are all reliable.
The four validation samples were put into each of the four prediction models, the prediction results were compared with the users' emotional evaluation values, and the error comparison method was used to compare the four models' prediction accuracies. The relative error rate for each sample was calculated as (|users' evaluation value − predicted value| / users' evaluation value) × 100% [24], which was used to determine the better prediction model, as shown in Table 6.
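The error comparison amounts to a few lines of arithmetic; the sketch below illustrates it with hypothetical evaluation/prediction pairs (placeholders, not values from the study):

```python
def relative_error_rate(evaluated, predicted):
    """Relative error rate: |evaluated - predicted| / evaluated, as a percentage."""
    return abs(evaluated - predicted) / evaluated * 100.0

def average_error(evaluated_values, predicted_values):
    """Average relative error over paired evaluation/prediction values."""
    rates = [relative_error_rate(e, p)
             for e, p in zip(evaluated_values, predicted_values)]
    return sum(rates) / len(rates)

# Hypothetical 7-point SD-scale evaluations and model predictions for one
# Kansei word over four validation samples:
aev = average_error([4.0, 5.0, 3.5, 4.5], [4.2, 4.8, 3.5, 4.6])
```

Computing this average per model and per Kansei word yields the totals in the bottom rows of Table 6.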
The comparison shows that the average error values for GA-BPNN are 3.50%, 5.27%, and 7.37%; for QTTI, 4.10%, 5.40%, and 7.40%; for GRNN, 6.23%, 8.08%, and 8.43%; and for BPNN, 15.87%, 6.89%, and 8.65%.
The results showed that GA-BPNN has the best prediction accuracy, followed by QTTI. The four prediction models, ranked in order of accuracy, are: 1. GA-BPNN, 2. QTTI, 3. GRNN, 4. BPNN.

5. Discussion and Conclusions

This research was based on Kansei engineering and artificial neural networks. Web crawler and natural language processing techniques were used to mine consumers' Kansei words about the sound of electric shavers online, and cluster analysis was used to extract representative product samples and Kansei words. Through ArtemiS 13.6 analysis, the sound design elements were divided into five items and 13 categories. Lastly, QTTI multiple linear regression and the GRNN, BPNN, and GA-BPNN non-linear analyses were combined with the SD method to construct prediction models between sound design elements and users' emotional evaluation values, which were then compared to select the best. The results of this research are as follows:
The web crawler and Word2Vec neural network can collect a wide range of data on users’ imagery and feelings when using the product, which can compensate for the time-consuming and subjective problem of collecting data in the early stages of traditional Kansei engineering, while providing complete and effective preliminary data support.
The QTTI, GRNN, BPNN, and GA-BPNN methods were applied to the research of product sound imagery. Multiple linear regression and non-linear perceptual prediction models were constructed and finally proved to be reliable, with GA-BPNN being the best in terms of accuracy and the rest being QTTI, GRNN, and BPNN, in that order. GA-BPNN uses GA to optimize BPNN, which can help designers to more accurately grasp the users’ emotional evaluation of sound, while QTTI multiple linear regression modelling can provide designers with clear design indicators and references.
A linear correlation model between sound design elements and imagery vocabulary was constructed using QTTI. It was able to provide a ranking of the degree of influence of each design item and category on the imagery, and the influence of its items and categories on the imagery was as follows:
  • In the ranking of the impact of each item, the items in the “Annoyance-Comfortable” word are ranked in order of loudness > sharpness > A-weighted sound pressure level > roughness > tonality; in “Inexpensive-Premium” the items are ranked in order of influence as sharpness > loudness > roughness > tonality > A-weighted sound pressure level; in “Weak-Powerful” the items are ranked in order of influence as loudness > sharpness > tonality > A-weighted sound pressure level > roughness.
  • In the ranking of the impact of each category, the ‘loudness’ of “Annoyance-Comfortable” is taken as an example. Its score points indicate that the lower the loudness, the more comfortable the imagery conveyed, and, conversely, the higher the loudness, the more annoying it is. The second ranked item is sharpness, and its category score points indicate that the lower the sharpness, the more comfortable the imagery is conveyed, while the opposite is more annoying. The effects of the other categories can be followed in this way, as shown in Table 4.
In conclusion, this research adopted a systematic approach, applying Kansei engineering and artificial neural networks to the study of product sound imagery. It provides an objective and accurate understanding of the relationship between the sound of electric shavers and consumers' emotional evaluations, and offers designers explicit sound design indicators and references for optimal design.

Author Contributions

Conceptualization, Z.-H.L. and J.-C.W.; methodology, Z.-H.L. and J.-C.W.; software, Z.-H.L. and J.-C.W.; validation J.-C.W. and F.L.; formal analysis, Y.-T.C.; investigation, J.-C.W.; resources, Z.-H.L.; data curation, Z.-H.L.; writing—original draft preparation, Z.-H.L.; writing—review and editing, J.-C.W. and F.L.; visualization, Y.-T.C.; supervision, J.-C.W.; project administration, J.-C.W.; funding acquisition, J.-C.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by Fujian University of Technology (grant numbers GY-S21081, 2021), and Design Innovation Research Center of Humanities and Social Sciences Research Base of Colleges and Universities in Fujian Province.

Institutional Review Board Statement

Ethical review and approval were waived for this study, as such review is not applicable to studies of this kind.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.

Acknowledgments

The authors gratefully acknowledge the support of Fund GY-S21081, and thank the academic editors and anonymous reviewers for their review of and advice on this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Nagamachi, M. Kansei engineering: A new ergonomic consumer-oriented technology for product development. Int. J. Ind. Ergon. 1995, 15, 3–11. [Google Scholar] [CrossRef]
  2. Ji, J.L.; Heyes, S.B.; MacLeod, C.; Holmes, E.A. Emotional Mental Imagery as Simulation of Reality: Fear and Beyond—A Tribute to Peter Lang. Behav. Ther. 2015, 47, 702–719. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Lang, P.J. A Bio-Informational Theory of Emotional Imagery. Psychophysiology 1979, 16, 495–512. [Google Scholar] [CrossRef] [PubMed]
  4. Dudschig, C.; MacKenzie, I.G.; Strozyk, J.; Kaup, B.; Leuthold, H. The Sounds of Sentences: Differentiating the Influence of Physical Sound, Sound Imagery, and Linguistically Implied Sounds on Physical Sound Processing. Cogn. Affect. Behav. Neurosci. 2016, 16, 940–961. [Google Scholar] [CrossRef] [PubMed]
  5. Jung, H.; Wiltse, H.; Wiberg, M.; Stolterman, E. Metaphors, materialities, and affordances: Hybrid morphologies in the design of interactive artifacts. Des. Stud. 2017, 53, 24–46. [Google Scholar] [CrossRef]
  6. Yun, M.H.; Han, S.H.; Kim, K.J.; Han, S. Measuring Customer Perceptions on Product Usability: Development of Image and Impression Attributes of Consumer Electronic Products. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Houston, TX, USA, 27 September–1 October 1999; Volume 43, pp. 486–490. [Google Scholar] [CrossRef]
  7. Mele, M.; Campana, G. Prediction of Kansei engineering features for bottle design by a Knowledge Based System. Int. J. Interact. Des. Manuf. (IJIDeM) 2018, 12, 1201–1210. [Google Scholar] [CrossRef]
  8. Shin, G.W.; Park, S.; Kim, Y.M.; Lee, Y.; Yun, M.H. Comparing Semantic Differential Methods in Affective Engineering Processes: A Case Study on Vehicle Instrument Panels. Appl. Sci. 2020, 10, 4751. [Google Scholar] [CrossRef]
  9. Lee, Y.-C.; Huang, S.-Y. A new fuzzy concept approach for Kano’s model. Expert Syst. Appl. 2009, 36, 4479–4484. [Google Scholar] [CrossRef]
  10. Schütte, S.; Eklund, J. Design of rocker switches for work-vehicles—An application of Kansei Engineering. Appl. Ergon. 2005, 36, 557–567. [Google Scholar] [CrossRef]
  11. Liu, M.; Ben, L. Research on Demand Forecasting Method of Multi-user Group Based on Big Data. In Proceedings of the International Conference on Human-Computer Interaction, London, UK, 21–24 July 2022; Springer: Cham, Switzerland, 2022; pp. 45–64. [Google Scholar] [CrossRef]
  12. Wang, C.; Wu, F.; Shi, Z.; Zhang, D. Indoor positioning technique by combining RFID and particle swarm optimization-based back propagation neural network. Optik 2016, 127, 6839–6849. [Google Scholar] [CrossRef]
  13. Zhu, Y.; Chen, G. Research on the head form design of service robots based on Kansei engineering and BP neural network. In Proceedings of the Seventh International Conference on Electronics and Information Engineering, Shenzhen, China, 21–23 July 2017; Volume 10322, pp. 556–560. [Google Scholar] [CrossRef]
  14. Guo, Z.; Lin, L. Application of Group Cognitive Kansei Information Acquisition Based on Big Data. In Proceedings of the IEEE 2020 International Conference on Computer Information and Big Data Applications (CIBDA), Guiyang, China, 17–19 April 2020; pp. 99–102. [Google Scholar] [CrossRef]
  15. He, M.; Ma, C.; Wang, R. A Data-Driven Approach for University Public Opinion Analysis and Its Applications. Appl. Sci. 2022, 12, 9136. [Google Scholar] [CrossRef]
  16. Lilleberg, J.; Zhu, Y.; Zhang, Y. Support vector machines and word2vec for text classification with semantic features. In Proceedings of the 2015 IEEE 14th International Conference on Cognitive Informatics & Cognitive Computing (ICCI* CC), Beijing, China, 6–8 July 2015; pp. 136–140. [Google Scholar] [CrossRef]
  17. Lai, X.; Zhang, S.; Mao, N.; Liu, J.; Chen, Q. Kansei engineering for new energy vehicle exterior design: An internet big data mining approach. Comput. Ind. Eng. 2021, 165, 107913. [Google Scholar] [CrossRef]
  18. Frost, F.; Karri, V. Performance comparison of BP and GRNN models of the neural network paradigm using a practical industrial application. In Proceedings of the ICONIP’99. ANZIIS’99 & ANNES’99 & ACNN’99, 6th International Conference on Neural Information Processing, Perth, WA, Australia, 16–20 November 1999; Volume 3, No. 99EX378. pp. 1069–1074. [Google Scholar] [CrossRef]
  19. Cepowski, T. An estimation of motor yacht light displacement based on design parameters using computational intelligence techniques. Ocean Eng. 2021, 231, 109086. [Google Scholar] [CrossRef]
  20. Nazari-Shirkouhi, S.; Keramati, A.; Rezaie, K. Improvement of customers’ satisfaction with new product design using an adaptive neuro-fuzzy inference systems approach. Neural Comput. Appl. 2013, 23, 333–343. [Google Scholar] [CrossRef]
  21. Gao, Y. The application of artificial neural network in watch modeling design with network community media. J. Ambient Intell. Humaniz. Comput. 2020, 1–10. [Google Scholar] [CrossRef]
  22. Kim, Y.M.; Son, Y.; Kim, W.; Jin, B.; Yun, M.H. Classification of Children’s Sitting Postures Using Machine Learning Algorithms. Appl. Sci. 2018, 8, 1280. [Google Scholar] [CrossRef] [Green Version]
  23. Yeh, Y.-E. Prediction of Optimized Color Design for Sports Shoes Using an Artificial Neural Network and Genetic Algorithm. Appl. Sci. 2020, 10, 1560. [Google Scholar] [CrossRef] [Green Version]
  24. Dou, R.; Li, W.; Nan, G.; Wang, X.; Zhou, Y. How can manufacturers make decisions on product appearance design? A research on optimal design based on customers’ emotional satisfaction. J. Manag. Sci. Eng. 2021, 6, 177–196. [Google Scholar] [CrossRef]
  25. Lai, H.-H.; Lin, Y.-C.; Yeh, C.-H. Form design of product image using grey relational analysis and neural network models. Comput. Oper. Res. 2005, 32, 2689–2711. [Google Scholar] [CrossRef]
  26. Nilsson, M.E. A-weighted sound pressure level as an indicator of short-term loudness or annoyance of road-traffic sound. J. Sound Vib. 2007, 302, 197–207. [Google Scholar] [CrossRef]
  27. Glasberg, B.R.; Moore, B.C.J. A model of loudness applicable to time-varying sounds. J. Audio Eng. Soc. 2002, 50, 331–342. Available online: http://www.aes.org/e-lib/browse.cfm?elib=11081 (accessed on 3 May 2022).
  28. IEC 61672-1; Electroacoustics-Sound Level Meters-Part 1: Specifications. IEC: London, UK, 2002.
  29. Moon, S.; Park, S.; Park, D.; Kim, W.; Yun, M.H.; Park, D. A Study on Affective Dimensions to Engine Acceleration Sound Quality Using Acoustic Parameters. Appl. Sci. 2019, 9, 604. [Google Scholar] [CrossRef] [Green Version]
  30. Zwicker, E.; Fastl, H.; Widmann, U.; Kurakata, K.; Kuwano, S.; Namba, S. Program for calculating loudness according to DIN 45631 (ISO 532B). J. Acoust. Soc. Jpn. 1991, 12, 39–42. [Google Scholar] [CrossRef] [Green Version]
  31. Genuit, K.; Fiebig, A.; Schulte-Fortkamp, B. Relationship between environmental noise, sound quality, soundscape. J. Acoust. Soc. Am. 2012, 132, 1924. [Google Scholar] [CrossRef]
  32. Scherer, K. Vocal communication of emotion: A review of research paradigms. Speech Commun. 2003, 40, 227–256. [Google Scholar] [CrossRef]
  33. Note, A.; Psychoacoustics, I.I. Calculating Psychoacoustic Parameters in ArtemiS SUITE; HEAD Acoustics GmbH: Herzogenrath, Germany, 2016; pp. 1–9. [Google Scholar]
  34. Kwon, G.; Jo, H.; Kang, Y.J. Model of psychoacoustic sportiness for vehicle interior sound: Excluding loudness. Appl. Acoust. 2018, 136, 16–25. [Google Scholar] [CrossRef]
  35. Terhardt, E. On the perception of periodic sound fluctuations (roughness). J. Acta Acust. United Acust. 1974, 30, 201–213. [Google Scholar]
  36. Huang, Y.; Zheng, Q. Sound quality modelling of hairdryer noise. J. Appl. Acoust. 2022, 197, 108904. [Google Scholar] [CrossRef]
  37. Guski, R. Psychological methods for evaluating sound quality and assessing acoustic information. J. Acta Acust. United Acust. 1997, 83, 765–774. [Google Scholar]
  38. Atamer, S. Estimation of Electric Shaver Sound Quality using Artificial Neural Networks. In Proceedings of the INTER-NOISE and NOISE-CON Congress and Conference, Hamburg, Germany, 21–24 August 2016; Volume 253, No. 3. pp. 5185–5192. [Google Scholar]
  39. Kim, W.; Ryu, T.; Lee, Y.; Park, D.; Yun, M.H. 2C2-2 Modelling of the Auditory Satisfaction Function for the Automobile Door Opening Quality. Jpn. J. Ergon. 2015, 51, S478–S483. [Google Scholar] [CrossRef]
  40. Woo, J.C.; Luo, F.; Lin, Z.H.; Chen, Y.T. Research on the Sensory Feeling of Product Design for Electric Toothbrush Based on Kansei Engineering and Back Propagation Neural Network. J. Internet Technol. 2022, 23, 863–871. [Google Scholar]
Figure 1. Research flow chart.
Figure 2. Collection of 230 electric shaver samples.
Figure 3. Diagram of clustering of 80 samples to 34 representative samples.    is 17 cluster training sample cut-offs; ...... is 4 cluster validation sample cut-offs.
Figure 4. The 34 representative samples.
Figure 5. Clustering diagram of three groups of representative Kansei words.    is the 3-cluster training sample cut-off.
Figure 6. Flow chart for roughness index calculation.
Figure 7. Diagram of the ArtemiS 13.6 for electric shaver non-contact skin-based recording: (a) A-weighted sound pressure level, (b) loudness, (c) sharpness, (d) tonality, and (e) roughness.
Figure 8. GRNN model prediction flow chart.
Figure 9. GRNN model of (a) “Annoyance-Comfortable”, (b) “Inexpensive-Premium”, and (c) “Weak-Powerful”.
Figure 10. BPNN model prediction flow chart.
Figure 11. GA used to optimize BPNN model flow chart.
Figure 12. (a) GA iterative process, (b) GA-BPNN training completion chart.
Figure 13. Predicted results for (a) “Annoyance-Comfortable”, (b) “Inexpensive-Premium”, and (c) “Weak-Powerful” Kansei words.
Table 1. Results of factor analysis and shortest Euclidean distance method for three Kansei words.

Group  Kansei Word   Factor1  Factor2  Factor3  Euclidean Distance  Distance Squared
1      Coordinated   −0.669   0.028    −0.148   0.968               0.937
       Popular       0.132    −0.015   0.747    0.669               0.448
       Modern        0.167    −0.371   −0.605   1.004               1.008
       Perfect       0.264    0.708    0.215    0.992               0.984
       Premium       0.518    −0.044   0.046    0.325               0.106
       Center coordinates of cluster: (0.082, 0.061, 0.051)
2      Powerful      −0.561   0.541    0.013    0.014               0.012
       Stable        −0.130   0.112    0.112    0.099               0.010
       Rigid         −0.058   −0.025   −0.286   0.364               0.132
       Flashy        −0.015   0.668    0.111    0.769               0.591
       Pleasant      0.355    0.286    −0.793   0.147               0.022
       Hardy         −0.608   −0.022   0.270    0.355               0.126
       Center coordinates of cluster: (0.664, 0.26, −0.096)
3      Comfortable   0.703    0.135    −0.602   0.120               0.011
       Safety        0.205    0.047    0.245    0.281               0.079
       Pleasing      −0.015   −0.544   −0.072   0.848               0.719
       Relaxing      0.578    0.140    0.046    0.548               0.300
       Center coordinates of cluster: (0.368, −0.056, −0.096)
Table 2. Parameters corresponding to each item in the 34 representative samples.

No.   LA (dB(A))   Loudness (sone)   Tonality (tu)   Sharpness (acum)   Roughness (asper)
1     69           19.7              1.43            4.0                0.69
2     74           26.9              0.28            5.2                0.09
3     71           25.6              1.32            5.0                1.05
4     75           25.9              0.96            5.0                0.04
5     68           31.7              0.82            5.6                0.03
6     69           34.7              0.71            4.9                0.08
7     69           35.8              0.70            4.3                1.11
8     60           33.1              0.25            4.5                0.05
9     71           18.1              0.40            5.5                0.39
10    61           36.7              0.26            4.4                0.07
11    69           23.4              0.15            4.5                0.09
12    71           33.6              1.60            3.5                0.97
13    65           28.1              1.16            4.4                0.57
14    63           33.1              1.64            4.6                0.79
15    67           27.9              0.53            4.7                0.13
16    77           27.8              0.73            3.8                0.07
17    61           18.1              0.40            5.0                0.26
18    61           34.1              1.10            2.5                0.61
19    74           27.6              1.07            4.6                0.97
20    62           31.0              0.43            4.6                0.02
21    63           33.9              0.26            4.4                0.01
22    67           28.8              1.26            4.7                0.76
23    71           39.5              0.31            5.1                0.29
24    69           36.7              0.24            6.4                0.11
25    64           48.3              0.28            4.6                0.01
26    60           19.6              0.49            4.3                0.14
27    66           43.3              1.18            4.3                0.58
28    63           44.9              1.02            4.3                0.78
29    71           18.1              0.40            5.5                0.39
30    68           29.0              0.69            5.1                0.81
31    70           21.9              0.67            4.7                0.69
32    65           52.7              0.93            3.6                1.03
33    71           50.3              0.31            5.1                0.29
34    68           13.2              0.21            3.8                0.60
Table 3. Classification of items and categories of electric shaver sounds.

Item            Categories
LA (X1)         60 ≤ x < 65 (X11); 65 ≤ x < 70 (X12); 70 ≤ x < 75 (X13)
Loudness (X2)   10 ≤ x < 25 (X21); 25 ≤ x < 40 (X22); 40 ≤ x < 55 (X23)
Tonality (X3)   0.1 ≤ x < 0.6 (X31); 0.6 ≤ x < 1.2 (X32); 1.2 ≤ x < 1.7 (X33)
Sharpness (X4)  2.2 ≤ x < 4.4 (X41); 4.4 ≤ x < 6.5 (X42)
Roughness (X5)  0 ≤ x < 0.5 (X51); 0.5 ≤ x < 1.2 (X52)
Table 4. Kansei impact analysis for each item and category in the QTTI prediction model.

Item            Category         Y1 Annoyance-Comfortable   Y2 Inexpensive-Premium   Y3 Weak-Powerful
                                 C / P / S                  C / P / S                C / P / S
LA (X1)         60 ≤ x < 65      0.544 / 0.623 / 3rd        0.290 / 0.419 / 5th      −0.317 / 0.511 / 4th
                65 ≤ x < 70      −0.012                     −0.008                   0.031
                70 ≤ x < 75      −0.121                     −0.062                   0.040
Loudness (X2)   10 ≤ x < 25      0.332 / 0.837 / 1st        0.223 / 0.748 / 2nd      −0.268 / 0.845 / 1st
                25 ≤ x < 40      −0.245                     −0.159                   0.202
                40 ≤ x < 55      −0.296                     −0.223                   0.216
Tonality (X3)   0.1 ≤ x < 0.6    0.074 / 0.375 / 5th        0.129 / 0.608 / 4th      0.133 / 0.653 / 3rd
                0.6 ≤ x < 1.2    −0.071                     −0.125                   −0.125
                1.2 ≤ x < 1.7    −0.083                     −0.144                   −0.164
Sharpness (X4)  2.2 ≤ x < 4.4    0.371 / 0.712 / 2nd        0.462 / 0.809 / 1st      0.289 / 0.708 / 2nd
                4.4 ≤ x < 6.5    −0.186                     −0.231                   −0.145
Roughness (X5)  0 ≤ x < 0.5      0.078 / 0.419 / 4th        0.156 / 0.704 / 3rd      −0.054 / 0.352 / 5th
                0.5 ≤ x < 1.2    −0.136                     −0.269                   4.236
Constant term                    3.714                      3.460                    4.246
Multiple correlation coefficient (R)      0.952             0.948                    0.927
Coefficient of determination (R²)         0.891             0.899                    0.859

C/category score points; P/partial correlation coefficient; S/ranking.
Table 5. Neural nodes and corresponding information at each layer in the BPNN.

Network Layer   Neural Nodes   Meaning
Input layer     13             13 sound categories
Hidden layer    12             Processing data
Output layer    3              3 Kansei words
Table 6. Comparative analysis of prediction results of linear and non-linear prediction models.

Sample   Model     Metric   Y1 Annoyance-Comfortable   Y2 Inexpensive-Premium   Y3 Weak-Powerful
Test1    QTTI      AEV *    0.240                      0.400                    0.200
                   REV *    5.15%                      9.01%                    4.57%
         BPNN      AEV      0.529                      0.264                    0.347
                   REV      11.4%                      5.94%                    7.92%
         GA-BPNN   AEV      0.181                      0.179                    0.523
                   REV      3.89%                      4.04%                    11.9%
         GRNN      AEV      0.203                      0.373                    0.269
                   REV      4.37%                      8.42%                    6.15%
Test2    QTTI      AEV      0.210                      0.030                    0.400
                   REV      5.50%                      0.85%                    8.66%
         BPNN      AEV      0.819                      0.360                    0.131
                   REV      21.5%                      10.2%                    2.84%
         GA-BPNN   AEV      0.145                      0.094                    0.117
                   REV      3.80%                      2.65%                    2.54%
         GRNN      AEV      0.243                      0.163                    0.092
                   REV      6.39%                      4.64%                    2.01%
Test3    QTTI      AEV      0.110                      0.090                    0.300
                   REV      3.49%                      2.71%                    7.25%
         BPNN      AEV      0.799                      0.357                    0.340
                   REV      25.3%                      10.8%                    8.22%
         GA-BPNN   AEV      0.089                      0.172                    0.167
                   REV      2.83%                      5.19%                    4.03%
         GRNN      AEV      0.081                      0.068                    0.336
                   REV      2.56%                      2.06%                    8.14%
Test4    QTTI      AEV      0.100                      0.400                    0.300
                   REV      2.24%                      9.03%                    9.10%
         BPNN      AEV      0.240                      0.030                    0.515
                   REV      5.38%                      0.68%                    15.6%
         GA-BPNN   AEV      0.154                      0.407                    0.361
                   REV      3.46%                      9.18%                    10.9%
         GRNN      AEV      0.517                      0.761                    0.574
                   REV      11.6%                      17.2%                    17.4%
AEV      QTTI               4.10%                      5.40%                    7.40%
(Total)  BPNN               15.87%                     6.89%                    8.65%
         GA-BPNN            3.50%                      5.27%                    7.37%
         GRNN               6.23%                      8.08%                    8.43%

* REV/Relative error value; AEV/Average error value.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

Lin, Z.-H.; Woo, J.-C.; Luo, F.; Chen, Y.-T. Research on Sound Imagery of Electric Shavers Based on Kansei Engineering and Multiple Artificial Neural Networks. Appl. Sci. 2022, 12, 10329. https://doi.org/10.3390/app122010329
