Product Innovation Design Based on Deep Learning and Kansei Engineering

Huafeng Quan, Shaobo Li and Jianjun Hu

1 School of Mechanical Engineering, Guizhou University, Guiyang 550025, China
2 Department of Computer Science and Engineering, University of South Carolina, Columbia, SC 29208, USA
* Authors to whom correspondence should be addressed.
Appl. Sci. 2018, 8(12), 2397; https://doi.org/10.3390/app8122397
Submission received: 17 October 2018 / Revised: 12 November 2018 / Accepted: 20 November 2018 / Published: 26 November 2018
(This article belongs to the Special Issue Machine Learning and Compressed Sensing in Image Reconstruction)

Abstract

Creative product design is becoming critical to the success of many enterprises. However, the conventional product innovation process is hindered by two major challenges: the difficulty of capturing users’ preferences and the lack of intuitive approaches to visually inspire the designer, which is especially true in fashion design and in the form design of many other types of products. In this paper, we propose KENPI, a framework that combines Kansei engineering and deep learning for product innovation, which can automatically transfer the color, pattern, etc. of a style image to a product’s shape in real time. To capture user preferences, we combine Kansei engineering with back-propagation neural networks to establish a mapping model between product properties and styles. To address the inspiration issue in product innovation, convolutional neural network-based neural style transfer is adopted to reconstruct and merge the color and pattern features of the style image, which are then migrated to the target product. The generated new product image not only preserves the shape of the target product but also carries the features of the style image. The Kansei analysis shows that the semantics of the new product are enhanced relative to the target product, meaning that the new design can better meet the needs of users. Finally, the implementation of the proposed method is demonstrated in detail through a case study of female coat design.

1. Introduction

Creativity is increasingly valued as a key factor for success in all areas, including, but not limited to, politics, economics, culture, the arts, and product design. Effective approaches that inspire product designers are thus strongly desirable. One such approach is visualization. Steve Jobs said: “Creativity is just connecting things. When you ask creative people how they did something, they feel a little guilty because they didn’t really do it, they just saw something.” In this study, we propose a novel image-to-image visual generation approach for product innovation by combining a deep learning-based neural style transfer algorithm with Kansei engineering.
In recent years, deep learning methods, such as convolutional neural networks (CNN), have made breakthroughs in many fields, such as computer vision, and are widely applied in object recognition, detection and segmentation [1,2,3]. In the area of product design, Pedro et al. [4] proposed training a CNN with standard usability heuristics to evaluate the usability of thermostats from images. Pan et al. [5] used a scalable deep learning approach to predict and interpret customer perceptions of design attributes for heterogeneous markets. Wang et al. [6] presented a deep learning-based approach to automatically link customer needs to product design parameters. Zhu et al. [7,8] extended CNNs to generative adversarial networks (GAN) and built a system with three applications: (1) manipulating an existing product photo based on an underlying generative model to achieve different looks (shape and color); (2) “generative transformation” of one product image onto another product; (3) generating a new product image from scratch based on the user’s scribbles and a warping user interface (UI). In addition, Chai et al. [9] achieved automatic coloring of product sketches. Kim et al. [10] generated a new product image of one domain given an image from another domain; for example, taking a handbag (or shoe) image as input and generating the corresponding shoe (or handbag) image. These studies successfully generate product images directly; however, they focus solely on image generation while ignoring user requirements for the products. In this paper, we propose the Kansei engineering-based neural style transfer for product innovation (KENPI) framework, which can directly generate high-quality product images that meet user preferences.
Kansei engineering emerged in Japan in the 1970s with the purpose of connecting customers’ affective responses to the design process of products, in an attempt to translate emotions into measurable, physical design specifications [11]. Many scholars have pointed out that satisfying the requirements of users is the key to product design [12,13,14]. As a user-driven method, Kansei engineering has been widely applied to various product designs [15,16,17], such as USB flash drives, running shoes, and in-vehicle rubber keypads. Chang et al. [18] used Kansei engineering to construct a relationship model between user requirements and steering wheel design parameters, and through this model designed a steering wheel that meets users’ preferences. Although Kansei engineering can maximize user satisfaction, it only provides theoretical guidance for designers and cannot directly generate products, which is a serious limitation for product form design.
Deep learning-based neural style transfer technology can generate high-quality images. In 2015, Gatys et al. [19,20,21] demonstrated that the representations of content and style in a CNN are separable and can be manipulated independently. They proposed a neural style transfer algorithm to recombine the content of a given photograph and the style of well-known artworks. Although this method produces results of high perceptual quality, it relies on a slow and memory-consuming optimization process, which limits its practical application. Ulyanov et al. [22] used a feed-forward generative convolutional network to replace the optimization process, greatly improving speed and opening the door to real-time applications. Huang et al. [23] implemented arbitrary style transfer by introducing an adaptive instance normalization (AdaIN) layer. Inspired by the above research, we combine Kansei engineering with neural style transfer to develop a novel product innovation design approach.
An evaluation method is needed to verify whether the style migration is successful. In the product design field, back-propagation (BP) networks are usually used to establish a relationship between product parameters and users’ evaluations of the product. Chen et al. [24] developed an integrated design approach based on a numerical definition of product form to design a knife. They also used a BP network to model the relationship between product form features and consumers’ perception of the product image; based on this model, consumers’ evaluations of the knife can be predicted. Alibi et al. [25] used artificial neural networks to establish the relationship between functional properties and structural parameters of knitted fabrics.
The main contributions of this paper are summarized as follows:
(1)
We propose to use the deep learning-based neural style transfer technique for new product innovation by reconstructing and merging the color and pattern features of the style image, and then migrating them to the target product. The generated new product design can not only preserve the shape of the target product, but also have features of the style image.
(2)
To assess whether the style image has been migrated to the product or not, we introduce factor analysis into Kansei engineering and analyze product styles from four perspectives: occasion, fashion, age and structure. Then, we combine Kansei engineering with BP neural networks to establish a relationship model between product properties and styles. Employing the Kansei engineering approach to capture user preferences for neural style transfer-based product design is one of our major contributions here.
(3)
We applied the proposed KENPI framework to the female coat design problem to demonstrate the value of our method.
The overall structure of this paper is as follows. In Section 2, we first present the overall research framework and then describe the Kansei engineering method and the style transfer neural network algorithm. In Section 3, an example is given to verify the feasibility and effectiveness of the proposed framework, and the related experimental results are shown. Finally, we give our concluding remarks in Section 4.

2. Methods

2.1. Research Framework

In order to exploit neural style transfer for product innovation design, we combine Kansei engineering, BP networks, and a style transfer neural network model into a generative approach, KENPI. As shown in Figure 1, the KENPI framework consists of three parts. In part 1, factor analysis is used to compress Kansei words into product styles to span the product semantic space. Then morphological analysis is used to decompose the product shape into design elements to span the product property space. Finally, a BP network is used to construct a nonlinear mapping model between these two spaces. Because of the cognitive differences between the designer and users, building a quantitative model to obtain product semantics is more objective than direct subjective evaluation by the designer. Through this model, users can also understand their own preferred style. In part 2, the selection of style images is guided by the product semantics obtained from the BP model. Then the product image and style image are fed into the style transfer model to generate a new product image. In part 3, we compare the semantics of the product before and after style transfer to assess whether the transfer is successful. Due to the complexity involved in modeling the color and texture of the style image and the generated product image, we have chosen a user-oriented semantic differential (SD) method to evaluate the style images and generated products.

2.2. Kansei Engineering

Kansei engineering is one of the main areas of ergonomics (human factors). The term ‘‘Kansei” is a Japanese word that covers the meanings of sensibility, impression, and emotion. It is related to a customer’s physiological and psychological feelings and refers to the cognitive processes of human perception. Kansei engineering has been developed as a consumer-oriented technique to better understand customers’ emotional responses and further translate them into the design elements of a product. In Kansei engineering, consumers often use an adjective, which is called Kansei word [11], to describe their perceptions of products. Typically, Kansei engineering studies follow a model with four main steps:
(1) Choosing the product domain:
The task in the first step is to define the research object and collect data, including Kansei words and product form images. To achieve as complete a semantic and formal description as possible, we collect these from various sources, such as magazines, e-commerce platforms, the Internet, etc.
The number of Kansei words and the sample size affect the quality of the result. Typically, 50–600 Kansei words and samples are collected, so a reasonable reduction must be carried out [26]. Card sorting, in which invited experts screen the items, is one effective method. First, words (or sample images) are grouped by experts based on their affinity; then a representative for every group is chosen; finally, we obtain representative Kansei words and samples.
(2) Semantic space spanning:
In this step, we usually use cluster analysis, term frequency-inverse document frequency (TF-IDF), factor analysis, or other methods to filter the Kansei words, so that the semantic space is more rigorous. In Kansei engineering, cluster analysis and TF-IDF require word vector training. Since the adjectives we collect are separate, factor analysis is more suitable in this paper. The basic purpose of factor analysis is to use fewer factors to capture most of the information in the original variables. In order to obtain the data required for factor analysis, we constructed a questionnaire linking Kansei words and sample images, and invited users to complete it.
(3) Property space spanning:
Kansei engineering approaches usually apply morphological analysis to divide the product into independent items (product properties) and to subdivide the items into categories. For product design, product form features are commonly defined in graphical terms because graphics make complex shapes and patterns simple and comprehensible [27].
In this step, we need to divide the products and construct the questionnaire by combining the samples and Kansei words with the 7-point SD scale. The data generated by the questionnaire is applied to the next step, namely, relationship model building.
(4) Relationship model building:
In this step, we need to associate the property space with the semantic space. Commonly used methods are multiple regression analysis, Hayashi’s quantification method I, artificial neural networks, and so on. Because BP neural networks have high nonlinear mapping ability and good fault tolerance, they are well suited to building the relationship model between the property space (product items) and the semantic space (Kansei words). We used the data obtained in the previous step to train the model.
The details of the above steps are presented in Section 3 where we show the analysis step when applying the technique to female coat design.

2.3. Neural Style Transfer Network

Style transfer is the technique of recomposing one image in the style of another. A content image and a style image are used to create an output image, whose “content” mirrors the content image and whose style resembles that of the style image. Batch normalization (BN), instance normalization (IN) and AdaIN are commonly used in neural style transfer. BN calculates the mean and variance of each channel for a batch of samples, while IN independently calculates the mean and variance for each channel and sample. The AdaIN layer is similar to IN, but it has no learnable affine parameters. Instead, it adaptively computes the affine parameters from the feature representations of an arbitrary style image. Figure 2 shows an overview of our neural style transfer network. We adopt the “Encoder-AdaIN-Decoder” architecture.
Our style transfer network T takes a content image c and a style image s as inputs, and synthesizes an output image that recombines the content of the former and the style of the latter. The encoder f is a fixed VGG-19 (Visual Geometry Group) network pre-trained on the ImageNet dataset for image classification. The structure of VGG-19 is shown in Figure 3. In VGG-19, each layer takes the output of the previous layer to extract progressively more complex features until the object is identified. Each layer can be considered an extractor of many local features.
After encoding the content and style images in feature space, we feed both feature maps to an AdaIN layer that aligns the mean and variance of the content feature maps to those of the style feature maps, producing the target feature maps t:

t = \mathrm{AdaIN}(f(c), f(s)) = \sigma(f(s)) \left( \frac{f(c) - \mu(f(c))}{\sigma(f(c))} \right) + \mu(f(s))   (1)

where f is the encoder, c and s are the content and style images, and \mu and \sigma denote the channel-wise mean and standard deviation.
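For concreteness, the AdaIN operation of Equation (1) can be sketched in a few lines of NumPy. This is an illustrative sketch, not the authors’ implementation; the function name and the (batch, height, width, channels) tensor layout are our assumptions.

```python
import numpy as np

def adain(content_feat, style_feat, eps=1e-5):
    """Adaptive instance normalization, Equation (1).

    Both inputs are encoder feature maps of shape
    (batch, height, width, channels). Statistics are computed per
    sample and per channel over the spatial dimensions, as in
    instance normalization.
    """
    axes = (1, 2)  # spatial dimensions
    mu_c = content_feat.mean(axis=axes, keepdims=True)
    sigma_c = content_feat.std(axis=axes, keepdims=True) + eps  # eps avoids division by zero
    mu_s = style_feat.mean(axis=axes, keepdims=True)
    sigma_s = style_feat.std(axis=axes, keepdims=True)
    # Normalize the content features, then re-scale and re-shift
    # them with the style statistics.
    return sigma_s * (content_feat - mu_c) / sigma_c + mu_s
```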
A randomly initialized decoder g is trained to map t back to the image space, generating the stylized image T(c, s):

T(c, s) = g(t)   (2)
The decoder mostly mirrors the encoder, with all pooling layers replaced by nearest up-sampling to reduce checkerboard effects. We use reflection padding in both f and g to avoid border artifacts.
We use the pre-trained VGG-19 to compute the loss function to train the decoder:
L = \alpha L_c + \beta L_s   (3)

where L is the total loss; L_c and \alpha are the content loss and its weight; and L_s and \beta are the style loss and its weight.
The content loss is the Euclidean distance between the target features and the features of the output image. We use the AdaIN output t as the content target, instead of the commonly used feature responses of the content image:
L_c = \lVert f(g(t)) - t \rVert_2   (4)
Since our AdaIN layer only transfers the mean and variance of the style features, our style loss only matches these statistics:
L_s = \sum_{i=1}^{L} \lVert \mu(\phi_i(g(t))) - \mu(\phi_i(s)) \rVert_2 + \sum_{i=1}^{L} \lVert \sigma(\phi_i(g(t))) - \sigma(\phi_i(s)) \rVert_2   (5)
where each ϕ i denotes a layer in VGG-19 used to compute the style loss. The objective is to minimize the content and style losses.
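Under the same assumptions as the AdaIN sketch above, Equations (3)–(5) can be written out as follows; the default weights alpha and beta are illustrative placeholders, as the paper does not report its weight values.

```python
import numpy as np

def content_loss(f_g_t, t):
    # Equation (4): distance between the re-encoded output features
    # f(g(t)) and the AdaIN target features t.
    return np.linalg.norm(f_g_t - t)

def style_loss(out_feats, style_feats):
    # Equation (5): out_feats[i] = phi_i(g(t)) and style_feats[i] = phi_i(s)
    # are feature maps of shape (batch, H_i, W_i, C_i) taken from the
    # VGG-19 layers used for the style loss.
    loss = 0.0
    for o, s in zip(out_feats, style_feats):
        loss += np.linalg.norm(o.mean(axis=(1, 2)) - s.mean(axis=(1, 2)))
        loss += np.linalg.norm(o.std(axis=(1, 2)) - s.std(axis=(1, 2)))
    return loss

def total_loss(f_g_t, t, out_feats, style_feats, alpha=1.0, beta=10.0):
    # Equation (3); alpha and beta here are illustrative weights only.
    return alpha * content_loss(f_g_t, t) + beta * style_loss(out_feats, style_feats)
```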

3. Empirical Study

In this research, a case study of designing female coats was conducted to verify the practicality and effectiveness of the proposed framework. It had the following steps: (1) use factor analysis to extract Kansei words and divide the coat into eight styles; (2) adopt morphological analysis to obtain the properties of the coat; (3) use a BP network to establish the relationship model between the coat style and its properties; (4) use the neural style transfer model to transfer the style image to the target product; and (5) evaluate the new coat and check whether the transfer was successful.

3.1. Product Domain Selection

In this stage, we collected 100 Kansei words and 200 female coat images from magazines, e-commerce platforms and the Internet. Five fashion designers were invited to reduce the number of samples and Kansei words using the card sorting method. This left 100 coat samples and 30 Kansei words (Relaxed, Natural, Peaceful, Formal, Strict, Capable, Modern, Fashionable, Particular, Classical, Traditional, Conservative, Mature, Steady, Sweet, Young, Energetic, Simple, Plain, Delicate, Luxurious, Dynamic, Clear, Romantic, Warm, Soft, Noble, Female, Sexy and Elegant).

3.2. Semantic Space Spanning

Questionnaires were constructed by combining the 100 samples and 30 Kansei words. We invited 15 women (5 designers and 10 consumers) to evaluate the coat images on a 7-point Likert scale (7 corresponding to “strongly agree” that the Kansei word closely matches the image, and 1 corresponding to “strongly disagree”). Their ages ranged from 18 to 45. Through the questionnaire, we obtained dataset 1 with a dimension of 15 × 30 × 100, where 15 is the number of respondents, 30 the number of Kansei words, and 100 the number of sample images. We also averaged over the 100 images (based on dataset 1) to obtain dataset 2 (15 × 30).
Factor analysis is a statistical method used to convert many observable variables into a few latent factors. That is to say, several related variables are classified into the same class, and each class becomes a factor. The cumulative percentage of variance explained is used to determine the number of factors; generally, the factors should account for more than 60% of the total variance [28]. We used factor analysis to convert the 30 Kansei words into a few latent factors. In MATLAB (MathWorks, Natick, MA, USA), we first arranged dataset 2 as a 15 × 30 matrix, then used the principal component method as the extraction technique and varimax as the orthogonal rotation method [29]. The results are shown in Table 1.
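The analysis above was run in MATLAB; for readers working in Python, an equivalent computation can be sketched with scikit-learn (version 0.24 or later for varimax rotation). Note that scikit-learn extracts factors by maximum likelihood rather than the principal component method used here, and the random matrix below merely stands in for the questionnaire scores.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Placeholder for dataset 2: 15 respondents x 30 Kansei words
# (scores averaged over the 100 coat images).
rng = np.random.default_rng(0)
dataset2 = rng.uniform(1, 7, size=(15, 30))

# Four factors with varimax rotation, mirroring the MATLAB analysis.
fa = FactorAnalysis(n_components=4, rotation="varimax")
fa.fit(dataset2)

loadings = fa.components_.T  # 30 words x 4 factors, cf. Table 1
# Proportion of variance per factor (for standardized variables,
# the sum of squared loadings divided by the number of variables).
proportion = (loadings ** 2).sum(axis=0) / dataset2.shape[1]
print(loadings.round(2))
print("proportion:", proportion.round(2), "cumulative:", proportion.cumsum().round(2))
```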
As shown in Table 1, Factors 1–4 account for 17%, 17%, 15%, and 15% of the variance, respectively. The cumulative percentage of variance is 64%, which is more than 60%, so it is appropriate to divide the 30 adjectives into four main factors. The four factors are named by considering the Kansei words’ loading coefficients. Factor 1 describes the degree of professional or leisure style; it is defined based on the occasion of usage and comprises the adjectives “Relaxed”, “Natural”, “Peaceful”, “Formal”, “Strict” and “Capable”. Factor 2 describes the degree of vogue or classic style; it is defined based on the fashion degree and comprises “Modern”, “Fashionable”, “Particular”, “Classical”, “Traditional” and “Conservative”. Factor 3 describes the degree of grand or youth style; it is defined based on the age of users and comprises “Mature”, “Steady”, “Sweet”, “Young” and “Energetic”. Factor 4 describes the degree of simple or delicate style; it is defined from the structure of the coat and comprises “Simple”, “Plain”, “Delicate” and “Luxurious”. Taking the two opposite poles of each of the four factors, we obtain eight kinds of coat style.

3.3. Property Space Spanning

Since we are studying the form semantics of products at this stage, we need to remove interference from other information such as color, pattern, texture, and so on. By simple Photoshop processing, we obtained cutting illustrations of the 100 samples. Figure 4 shows some of the samples.
Morphological analysis is used to divide the female coat into seven items: model, waist, length, collar, sleeve, pocket and opening. These items are then subdivided into 24 categories, as shown in Table 2.
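Under this coding scheme, each coat sample becomes a seven-dimensional vector of category numbers, which later serves as the BP network input. The sketch below encodes one hypothetical coat; the specific category choices are ours for illustration.

```python
# A hypothetical coat: X model, high waist, short length, lapel collar,
# set-in sleeves, patch pockets, single-breasted opening (codes from Table 2).
coat = {"model": 1, "waist": 1, "length": 1, "collar": 4,
        "sleeve": 1, "pocket": 1, "opening": 1}
items = ("model", "waist", "length", "collar", "sleeve", "pocket", "opening")
x = [coat[k] for k in items]  # input vector (j1, ..., j7) for the BP network
print(x)  # [1, 1, 1, 4, 1, 1, 1]
```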
We constructed another questionnaire (Figure 5) by combining the cutting illustrations of the samples and the eight styles with the 7-point SD scale. Fifteen women were invited to complete the 100 questionnaires. By sorting the questionnaire data, we obtained dataset 3 with a dimension of 15 × 100 × 8. We then averaged over the 15 respondents to obtain dataset 4 (100 × 8).

3.4. Relationship Model Building

In this section, we establish a BP network-based relationship model between product parameters and styles. To make the structure of the relationship model easy to design and have good functional performance, a three-layer neural network structure was selected. Specific steps were as follows:
(1) Model Construction
The structure of BP network is shown in Figure 6. The input layer consists of seven coat parameters ( j 1 , j 2 , j 3 , j 4 , j 5 , j 6 , j 7 ). Hence, the number of neurons is seven. The output layer is composed of four groups of styles (professional-leisure, vogue-classic, grand-youth, simple-delicate), so the number of neurons is four. The empirical formula for the number of hidden neurons is:
p = \sqrt{n + q} + z   (6)

In Equation (6), p, n, and q are the numbers of neurons in the hidden, input, and output layers, respectively, and z is an empirical constant (1 ≤ z ≤ 10). The number of neurons in the hidden layer was determined through repeated trials; a minimal sketch of such a network is given below.
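This is an illustrative Keras sketch of the 7-p-4 network; the sigmoid hidden activation, mean-squared-error loss, and optimizer are our assumptions, since the paper does not specify them.

```python
import math
import tensorflow as tf

n, q = 7, 4            # coat items (j1..j7) in, four style pairs out
z = 8                  # empirical constant, 1 <= z <= 10; z = 8 gives p = 11
p = round(math.sqrt(n + q)) + z   # Equation (6)

bp_model = tf.keras.Sequential([
    tf.keras.layers.Dense(p, activation="sigmoid", input_shape=(n,)),
    tf.keras.layers.Dense(q),  # professional-leisure, vogue-classic,
                               # grand-youth, simple-delicate scores
])
bp_model.compile(optimizer="adam", loss="mse")
bp_model.summary()
```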
(2) Model training and results
As stated above, we had 100 samples in dataset 4. We used the k-fold cross-validation (CV) method to evaluate our model. First, the sample set was randomly divided into five subsets of 20 samples each. Each subset in turn served as the validation set while the remaining four subsets constituted the training set. The 100 samples and their corresponding style evaluation values (dataset 4) were used to train the BP networks. We repeated the trials with different numbers of hidden neurons (p). The comparison results are shown in Table 3.
As can be seen from Table 3, when p = 11, the minimum CV error of 0.324 is obtained. Since Kansei evaluation is a qualitative analysis and the value is a range, our result is satisfactory. The result shows that our relationship model has high reliability for female coat style prediction.
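The five-fold procedure can be sketched as follows; the random data stand in for dataset 4, and the epoch count is illustrative rather than the paper’s setting.

```python
import numpy as np
import tensorflow as tf
from sklearn.model_selection import KFold

def build_bp_model(n=7, p=11, q=4):
    # The 7-11-4 network sketched above (activations/loss are assumptions).
    m = tf.keras.Sequential([
        tf.keras.layers.Dense(p, activation="sigmoid", input_shape=(n,)),
        tf.keras.layers.Dense(q)])
    m.compile(optimizer="adam", loss="mse")
    return m

# Random placeholders stand in for dataset 4: 100 coats, 7 property
# codes each, with 4 averaged style scores per coat.
rng = np.random.default_rng(0)
X = rng.integers(1, 6, size=(100, 7)).astype("float32")
y = rng.uniform(1, 7, size=(100, 4)).astype("float32")

errors = []
for train_idx, val_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = build_bp_model()
    model.fit(X[train_idx], y[train_idx], epochs=300, verbose=0)
    errors.append(model.evaluate(X[val_idx], y[val_idx], verbose=0))
print("mean CV error:", float(np.mean(errors)))
```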

3.5. Female Coat Style Transfer

Our KENPI framework was developed in Python. Specifically, both the BP relationship model and the style transfer model were built using Python along with modules such as NumPy, TensorFlow, and SciPy. All experiments were run on a Dell Precision workstation (Dell Inc., Round Rock, TX, USA) with an Intel i9-7900X CPU and an Nvidia Titan Xp GPU, running the Ubuntu 16.04 operating system (Canonical Ltd., London, UK). We used the Microsoft common objects in context (MS-COCO) dataset [30] for the content images and datasets mostly collected from WikiArt for the style images, following the setting of [31]. Each dataset contained roughly 80,000 training examples. We used the Adam optimizer [32] and a batch size of eight content–style image pairs. During training, we first resized the smaller dimension of both images to 512 while preserving the aspect ratio, then randomly cropped regions of size 256 × 256. Since our network is fully convolutional, it can be applied to images of any size during testing.
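The resize-then-crop preprocessing described above can be expressed as a short TensorFlow function; this is a sketch of the described pipeline under the assumption of three-channel images, not the authors’ code.

```python
import tensorflow as tf

def preprocess(image):
    # Resize the smaller side to 512 while preserving the aspect ratio,
    # then take a random 256 x 256 crop, as described above.
    hw = tf.cast(tf.shape(image)[:2], tf.float32)
    scale = 512.0 / tf.reduce_min(hw)
    new_size = tf.cast(tf.math.round(hw * scale), tf.int32)
    image = tf.image.resize(image, new_size)
    return tf.image.random_crop(image, size=(256, 256, 3))
```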
We randomly selected six female coats as the content images (Figure 7a) and input their parameters into the BP model to obtain their styles. We then chose images of the same styles as the style images for transfer (Figure 7b). We constructed questionnaires, as shown in Figure 8, to obtain the images’ styles. Thirty female consumers were invited to complete the questionnaires. By sorting the questionnaire data, we obtained the final results shown in Table 4.
We inputted the content images (Figure 7a) and the style images (Figure 7b) into our style transfer model to obtain new product images.
To verify whether the style image was transferred successfully to the product, we constructed another questionnaire (Figure 9). Thirty female consumers were invited to complete it. By sorting the questionnaire data and calculating the averages, we obtained the results shown in Figure 10. For result 1, the leisure-style score changed from 2.5 to 2, which indicates that the leisure semantic is weakened while the professional semantic is enhanced. The classic-style score of result 1 changed from 4.3 to 5, which indicates that the vogue semantic is weakened while the classic semantic is enhanced. Although this change was the largest in result 1, it does not mean it is the strongest, because the closer a score is to the two ends of the scale, the harder it is for the semantics to be enhanced [33]; the score of 4.3, however, is close to the middle. The grand-style and simple-style scores of result 1 both changed from 3.2 to 3, which indicates that the grand and simple semantics are enhanced. Although they have the same value, the standard deviation of the grand-style (1.3) is greater than that of the simple-style (0.5), so the grand-style evaluations are more dispersed, indicating that the simple-style semantic is stronger. The same enhancements also occurred in results 2 to 6.
To further test the effectiveness of the proposed framework, we chose another 20 samples and repeated the experiment. The results are shown in Figure 11. Two samples (15 and 18) showed no score increase; the corresponding images are shown in Figure 12. Of the 20 samples, ten increased by 0–0.5, five by 0.5–1, and three by 1–3. About 90% of the samples had increased scores, while the remaining 10% neither increased nor decreased. Table 5 shows the evaluation results of the generated images of samples 15 and 18. For image 15, eight people considered it grand-style, while seven considered it youth-style. Similarly, six people considered image 18 vogue-style, while seven considered it classic-style. The votes were very close and the styles were difficult to decide. Finally, we defined image 15 as grand-style and image 18 as classic-style. The evaluation results of samples 15 and 18 are shown in Figure 13. Both show small changes, and their maximum standard deviation (1.7) is greater than that of the samples in Figure 10, indicating that the results of samples 15 and 18 are more dispersed. In summary, ambiguity in the style image affects the style evaluation of the generated product.
These conclusions are also consistent with human subjective analysis. The above results show that transferring the style image to a female coat’s form can enhance its semantics; that is, a new product generated from a product form and an image of the same style will have a stronger style semantic. The results show that the styles of the style images have been migrated to the target products successfully.

3.6. Other Results

To illustrate the universality of the proposed framework, we conducted another experiment. As shown in Figure 14, taking product images and style images as inputs, our framework automatically generates new product images that blend in the new style while preserving the basic design. We show style transfer results for a child’s shoe, a handbag, a dress, a sofa, and a car. Since our framework can change the color and texture of an image, it can also be used for automatic coloring of sketches. Furthermore, the framework allows the user to select arbitrary style images, so it can be applied to product customization and other pattern designs, such as packaging box design, fashion design, advertisement design, etc.

4. Conclusions

As customer demands become more diversified and personalized, designers are motivated to break with conventional wisdom and seek new approaches to innovative product design. In this research, we propose KENPI, a deep learning and Kansei engineering-based framework for product innovation design. First, we use Kansei engineering to obtain user preferences and establish the BP mapping model between product properties and semantics. Through the BP model, we obtain the semantics of the selected content image and use them to guide the selection of the style image. Second, we construct a style transfer model to transfer the style image to the content image and generate the new product. Finally, we compare the semantics of the product before and after transfer to assess whether the style image has been migrated to the product. Taking the female coat as an example, we demonstrate the effectiveness and feasibility of KENPI. While deep learning-based neural style transfer has been used to generate product or regular images before, our work is the first to combine it with user preferences captured via Kansei engineering, which provides a solid foundation for neural style transfer-based product design.
Although our framework can automatically generate new products without human intervention, the evaluation of style images is based on questionnaires rather than objective models, which introduces a certain degree of subjectivity. In future work, we will therefore focus on improving the objectivity of style image evaluation.

Author Contributions

H.Q., J.H. and S.L. conceived of and designed the study, performed the experiments and analyzed the data. H.Q. and J.H. wrote the manuscript. H.Q. and J.H. revised and polished the manuscript. All authors have read and approved the final manuscript.

Funding

This work was supported by the National Natural Science Foundation of China under Grant Nos. 51475097, 91746116 and 51741101, and the Science and Technology Foundation of Guizhou Province under Grant Nos. [2015]4011, [2016]5013, [2015]02 and [2017]239.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Chen, L.C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 834–848.
2. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149.
3. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. Commun. ACM 2017, 60, 84–90.
4. Pedro, P.; Balderas, D.; Peffer, T.; Arturo, M. Deep Learning for Automatic Usability Evaluations Based on Images: A Case Study of the Usability Heuristics of Thermostats. Energy Build. 2018, 163, 111–120.
5. Pan, Y.; Burnap, A.; Hartley, J.; Gonzalez, R.; Papalambros, P.Y. Deep Design: Product Aesthetics for Heterogeneous Markets. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Halifax, NS, Canada, 13–17 August 2017; pp. 1961–1970.
6. Wang, Y.; Mo, D.Y.; Tseng, M.M. Mapping Customer Needs to Design Parameters in the Front End of Product Design by Applying Deep Learning. CIRP Ann. Manuf. Technol. 2018, 67, 145–148.
7. Zhu, J.Y.; Krähenbühl, P.; Shechtman, E.; Efros, A.A. Generative Visual Manipulation on the Natural Image Manifold. ECCV 2016, 9909, 597–613.
8. Isola, P.; Zhu, J.Y.; Zhou, T.; Efros, A.A. Image-to-Image Translation with Conditional Adversarial Networks. In Proceedings of the Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 5967–5976.
9. Chai, C.L.; Liao, J.; Zou, N.; Sun, L.Y. A One-to-Many Conditional Generative Adversarial Network Framework for Multiple Image-to-Image Translations. Multimed. Tools Appl. 2018, 1–28.
10. Kim, T.; Cha, M.; Kim, H.; Lee, J.K.; Kim, J. Learning to Discover Cross-Domain Relations with Generative Adversarial Networks. Available online: https://arxiv.org/abs/1703.05192v1 (accessed on 6 May 2018).
11. Nagamachi, M. Kansei engineering as a powerful consumer-oriented technology for product development. Appl. Ergon. 2002, 33, 289–294.
12. Liu, C.Y.; Tong, L.I. Developing Automatic Form and Design System Using Integrated Grey Relational Analysis and Affective Engineering. Appl. Sci. 2018, 8, 91.
13. Chen, H.Y.; Chang, H.C. Consumers’ perception-oriented product form design using multiple regression analysis and backpropagation neural network. AI EDAM 2016, 30, 64–77.
14. Izabela, K.P. Application of neural network in QFD matrix. J. Intell. Manuf. 2013, 24, 397–404.
15. Chou, J.R. A Kansei evaluation approach based on the technique of computing with words. Adv. Eng. Inform. 2016, 30, 1–15.
16. Shieh, M.D.; Yeh, Y.E. Developing a design support system for the exterior form of running shoes. Comput. Ind. Eng. 2013, 65, 704–718.
17. Vieira, J.; Osório, J.M.A.; Mouta, S.; Delgado, P.; Portinha, A.; Meireles, J.F.; Santos, J.A. Kansei engineering as a tool for the design of in-vehicle rubber keypads. Appl. Ergon. 2017, 61, 1–11.
18. Chang, Y.M.; Chen, C.W. Kansei assessment of the constituent elements and the overall interrelations in car steering wheel design. Int. J. Ind. Ergon. 2016, 56, 97–105.
19. Gatys, L.A.; Ecker, A.S.; Bethge, M. A Neural Algorithm of Artistic Style. J. Vis. 2016, 16, 326.
20. Gatys, L.A.; Ecker, A.S.; Bethge, M. Image Style Transfer Using Convolutional Neural Networks. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 2414–2423.
21. Gatys, L.A.; Ecker, A.S.; Bethge, M. Texture Synthesis Using Convolutional Neural Networks. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016.
22. Ulyanov, D.; Lebedev, V.; Vedaldi, A.; Lempitsky, V. Texture Networks: Feed-forward Synthesis of Textures and Stylized Images. Comput. Vis. Pattern Recogn. 2017.
23. Huang, X.; Belongie, S. Arbitrary Style Transfer in Real-Time with Adaptive Instance Normalization. arXiv 2017, arXiv:1703.06868v2.
24. Chen, H.Y.; Chang, Y.M. Development of a Computer Aided Product-Form Design Tool Based on Numerical Definition Scheme and Neural Network. J. Adv. Mech. Des. Syst. Manuf. 2014, 8, JAMDSM0033.
25. Alibi, H.; Fayala, F.; Bhouri, N.; Jemni, A.; Zeng, X. An Optimal Artificial Neural Network System for Designing Knit Stretch Fabrics. J. Text. Inst. 2013, 104, 766–783.
26. Schütte, S.T.W.; Eklund, J.; Axelsson, J.R.C.; Nagamachi, M. Concepts, methods and tools in Kansei engineering. Theor. Issues Ergon. Sci. 2004, 5, 214–231.
27. Tang, C.Y.; Fung, K.Y.; Lee, E.W.M.; Ho, G.T.S.; Siu, K.W.M.; Mou, W.L. Product Form Design Using Customer Perception Evaluation by a Combined Superellipse Fitting and ANN Approach. Adv. Eng. Inform. 2013, 27, 386–394.
28. Shieh, M.; Li, Y.; Yang, C. Comparison of Multi-Objective Evolutionary Algorithms in Hybrid Kansei Engineering System for Product Form Design. Adv. Eng. Inform. 2018, 36, 31–42.
29. Ma, M.Y.; Chen, C.Y.; Wu, F.G. A design decision-making support model for customized product color combination. Comput. Ind. 2007, 58, 504–518.
30. Lin, T.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; Zitnick, C.L. Microsoft COCO: Common Objects in Context. Lect. Notes Comput. Sci. 2014, 740–755.
31. Chen, T.Q.; Schmidt, M. Fast Patch-Based Style Transfer of Arbitrary Style. arXiv 2016, arXiv:1612.04337v1.
32. Kingma, D.P.; Ba, J.L. Adam: A Method for Stochastic Optimization. arXiv 2015, arXiv:1412.6980v4.
33. Mindak, W.A. Fitting the semantic differential to the marketing problem. J. Mark. 1961, 24, 28–33.
Figure 1. The Kansei engineering-based neural style transfer for product innovation (KENPI) framework. The framework consists of three parts. In part 1, a relationship model between the product semantics and the product properties is constructed; by training this model, we can predict product style from its properties. In part 2, a style transfer model is constructed; by training this model, we can turn the content image and style image into a stylized image, which is the generated new product. In part 3, the semantics of the products are compared.

Figure 2. Style transfer algorithm.

Figure 3. The structure of VGG-19.

Figure 4. Sample cutting illustrations.

Figure 5. One of the questionnaires.

Figure 6. The structure of the back-propagation (BP) neural network relationship model.

Figure 7. Product designs based on neural style transfer. (a) The content images. (b) The style images. (c) The results. We input (a,b) into the style transfer model, after simple processing such as removing background color not related to the product, to obtain the results (c). The results retained the shape of (a) while obtaining details such as color and pattern similar to (b).

Figure 8. One of the questionnaires.

Figure 9. One of the questionnaires.

Figure 10. New product evaluation results. We compared the new coat’s style scores from the BP model and the questionnaires. The “Before” scores were obtained by inputting the parameters of the coat into the trained BP neural network model. The “Std-After” and “Ave-After” scores were obtained through questionnaires; the former is the standard deviation and the latter the average.

Figure 11. The statistical results of 20 samples.

Figure 12. Style transfer results of samples 15 and 18.

Figure 13. The evaluation results of samples 15 and 18.

Figure 14. Examples of style transfer.
Table 1. Factor loadings of 30 Kansei words using four factors.

Kansei Word     Factor 1 (Occasion)   Factor 2 (Fashion)   Factor 3 (Age)   Factor 4 (Structure)
Relaxed          0.93                  0.11                −0.10             −0.04
Natural          0.91                  0.07                 0.14              0.12
Peaceful         0.84                  0.07                 0.10              0.08
Formal          −0.78                 −0.28                 0.07              0.02
Strict          −0.86                 −0.24                −0.10             −0.14
Capable         −0.81                 −0.25                 0.00             −0.12
Modern           0.01                  0.84                −0.03              0.23
Fashionable      0.24                  0.72                −0.11              0.22
Particular       0.00                  0.86                 0.13              0.13
Classical       −0.25                 −0.69                 0.12             −0.04
Traditional     −0.05                 −0.94                 0.03             −0.17
Conservative    −0.16                 −0.84                 0.06             −0.14
Mature           0.04                  0.03                 0.92             −0.13
Steady          −0.01                 −0.12                 0.92             −0.05
Sweet            0.06                  0.12                −0.86              0.25
Young            0.02                  0.13                −0.90              0.11
Energetic       −0.06                  0.16                −0.79              0.20
Simple           0.00                 −0.10                 0.04             −0.67
Plain           −0.12                 −0.09                 0.05             −0.84
Delicate         0.10                  0.13                −0.04              0.91
Luxurious        0.11                  0.12                 0.08              0.90
Dynamic          0.06                 −0.02                 0.19             −0.43
Clear           −0.04                 −0.31                 0.20              0.07
Romantic         0.34                  0.39                −0.11              0.41
Warm             0.20                  0.29                −0.17              0.47
Soft             0.29                  0.32                −0.22              0.34
Noble            0.36                 −0.06                 0.03              0.18
Female           0.30                  0.52                −0.04              0.38
Sexy             0.23                  0.27                 0.15              0.56
Elegant          0.17                  0.11                 0.59              0.26
Proportion       0.17                  0.17                 0.15              0.15
Cumulative       0.17                  0.34                 0.49              0.64

The largest loadings (in absolute value) indicate the groups of adjectives associated with Factors 1–4.
Table 2. Female coat design elements.

Model (j1)   Waist (j2)   Length (j3)   Collar (j4)     Sleeve (j5)   Pocket (j6)      Opening (j7)
X: 1         High: 1      Short: 1      Collarless: 1   Set-in: 1     Patch: 1         Single breasted: 1
H: 2         Normal: 2    Mid: 2        Stand: 2        Raglan: 2     Vertical: 2      Double breasted: 2
A: 3         Low: 3       Long: 3       Turnover: 3                   Pocketless: 3    Zipper: 3
No: 4                                   Lapel: 4                                       Hidden placket: 4
                                        Hoodie: 5
Table 3. K-fold cross-validation (CV) results.

Fold   Training Set   Validation Set
1      2, 3, 4, 5     1
2      1, 3, 4, 5     2
3      1, 2, 4, 5     3
4      1, 2, 3, 5     4
5      1, 2, 3, 4     5

CV error by number of hidden neurons p:
p        4       5      6      7       8       9       10      11      12     13
Error    0.565   0.47   0.48   0.498   0.534   0.505   0.459   0.324   0.49   0.387

The minimum error (0.324) is obtained at p = 11.
Table 4. Image style evaluation results for Style Images 1–6, each rated on the eight styles: professional, leisure, vogue, classic, grand, youth, simple, and delicate.
Table 5. The style evaluation results of images 15 and 18 (number of votes from the 30 respondents; * marks the style finally assigned).

           Professional  Leisure  Vogue  Classic  Grand  Youth  Simple  Delicate
Image 15   0             5        3      2        8 *    7      4       1
Image 18   2             1        6      7 *      5      2      3       4
