Article

Can Stylized Products Generated by AI Better Attract User Attention? Using Eye-Tracking Technology for Research

College of Furnishings and Industrial Design, Nanjing Forestry University, Nanjing 210037, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(17), 7729; https://doi.org/10.3390/app14177729
Submission received: 18 July 2024 / Revised: 22 August 2024 / Accepted: 28 August 2024 / Published: 2 September 2024

Abstract

The emergence of AIGC has significantly improved design efficiency, enriched creativity, and promoted innovation in the design industry. However, whether the content generated from its training data meets the preferences of target users still needs to be determined through testing. This study investigates the appeal of AI-generated stylized products to users, utilizing 12 images as stimuli in conjunction with eye-tracking technology. The stimuli combine top-selling gender-stylized Bluetooth earphones from the Taobao shopping platform with gender-stylized earphones generated by the AIGC software GPT4.0, categorized into three experimental groups. An eye-tracking experiment was conducted in which 44 participants (22 males and 22 females; mean age = 21.75, SD = 2.45, range 18–27 years) observed the three stimuli groups while their eye movements were recorded. The results indicated that variations in stimulus category and gender produced differences in fixation durations and counts. When a mix of the two types of earphones (AIGC-generated and from the Taobao shopping platform) was presented, both gender groups showed a significant effect on fixation duration, with F(2, 284) = 3.942, p = 0.020, and η = 0.164 for the female group and F(2, 302) = 8.824, p < 0.001, and η = 0.235 for the male group; both groups fixated longer on the AI-generated earphones. When only the two types of AI-generated gender-stylized earphones were presented, there was also a significant effect on fixation duration, with F(2, 579) = 4.866, p = 0.008, and η = 0.129; the earphones generated for females attracted longer fixations. Analyzed from a gender perspective, there was no significant effect when the male participants observed the earphones (F(2, 304) = 1.312, p = 0.271), but there was a significant difference in fixation duration when the female participants observed them (F(2, 272) = 4.666, p = 0.010, η = 0.182). The female participants fixated longer on the earphones that the AI generated for females.

1. Introduction

Artificial intelligence-generated content (AIGC) represents a production method grounded in artificial intelligence (AI) technology that finds rules in data and automatically generates content [1]. This marks a pivotal advancement in science and technology, encouraging people to shift from sensing and understanding the world to actively creating and generating it [2]. The synergistic integration of GAN [3], CLIP [4,5], Transformer [6], Diffusion [7,8], pre-trained models [9], and other algorithmic techniques has catalyzed exponential growth in the domain of AIGC. The development of AIGC can be systematically categorized into three distinct phases. During the initial germination phase, researchers used rudimentary programming techniques to make computers output content [10], and AIGC was limited to small-scale experiments. In 1957, the first computer-created string quartet was completed. Then, the world’s first human–computer interactive robot, Eliza, came out in 1966. In the mid-1980s, IBM created Tangora, a voice-controlled typing robot. In the subsequent precipitation and accumulation phase, AIGC transitioned from experimental to practical applications, although algorithmic bottlenecks constrained its capacity to generate diverse content [10]. In 2007, 1 The Road, the world’s first novel created by artificial intelligence, was released. After that, Microsoft demonstrated a fully automatic simultaneous interpretation system in 2012 that was capable of translating speech from English to Chinese quickly and with high accuracy [11]. The third phase, commencing in 2010, marked AIGC’s entrance into a period of rapid development. Goodfellow proposed the Generative Adversarial Network (GAN) in 2014, a model that utilizes existing data to generate images. NVIDIA released the StyleGAN model in 2018, which can autonomously generate high-quality images. DeepMind released the DVD-GAN model for generating continuous videos in 2019. OpenAI launched DALL-E for the interactive generation of text and images in 2021 [12]. In the past two years, an increasing number of AIGC tools, including ChatGPT, Midjourney, and Stable Diffusion, have gained public visibility. For example, ChatGPT, launched by OpenAI, an AI research lab in the US, garnered over 100 million active users within about two months of its release in late 2022, making it the fastest-growing application ever in terms of users. The popularity of ChatGPT is a representation of the profound impact of artificial intelligence technology on human production and life [13].

2. Literature Review

2.1. Research Attempts with AIGC

AIGC encompasses the generation of text, images, videos, 3D assets, and other media forms through AI algorithms, facilitating the automation of content creation [3]. Research on AIGC is continuously emerging across various fields. Guo et al. (2023) explored AI-assisted design for automatically collecting data from sources such as published documents, predicting UHPC properties, and optimizing UHPC designs to reduce carbon footprint and cost [14]. Chen and Ma (2023) utilized Stable Diffusion to generate images of AI models that swiftly adapted clothing products, decreasing both production costs and the time required for creating image materials of clothing products for e-commerce companies [2]. Han et al. (2024) highlighted the potential of GPT-4 in predicting CVD risks across varied ethnic datasets, suggesting its broad future applications in medical practice [15]. Leng et al. (2024) applied AIGC techniques to create a dataset named CODP-1200, providing a standardized learning approach for child language acquisition [16]. Xu et al. (2023) took textiles and carbon microspheres as examples, using ChatGPT to translate requirements into code and partnering it with Stable Diffusion for concept visualization in textile design [17]. Wang et al. (2023) created a simulation-based framework for earthquake-resistant design using AIGC algorithms and verified its feasibility through experiments [18]. AIGC also possesses great potential in marketing. Zhang and Prebensen (2024) showed the effectiveness of applying generative AI like ChatGPT to produce tourism marketing materials in two online experiments [19]. BlueFocus, a high-tech enterprise, employed AIGC technology to generate virtual human personas such as “Su Xiaomei” and “K”, offering customers varied marketing scenarios and interactive experiences [20]. Anta’s “virtual catwalk” with C++, the first digital virtual idol powered by AIGC, demonstrated the latest creativity of Anta’s digital products and brought a novel consumer impact to users [21].
In addition to the above, the rapid development of AIGC has also fostered research and reflection within the realms of art and design, bringing numerous opportunities and challenges. In product design, AIGC provides designers with inspirational references and enhances design efficiency. Song et al. (2023) utilized AIGC technology to efficiently generate related cultural and creative products by analyzing and processing ink paintings [22]. Wang et al. (2023) utilized Midjourney and Shap-E (a 3D modeling AI tool from OpenAI) for the innovative design of ceramics, resulting in a butterfly-shaped ceramic container [23]. Chen et al. (2023) combined ChatGPT and Midjourney to generate a multitude of home design solutions, reducing the time required for designers to obtain inspiration [24]. Wu et al. (2023) used the household vacuum cleaner as a case study to introduce an AIGC-empowered methodology for product color-matching design [25]. In graphic design, AIGC activates design thinking, enriches user experience, and contributes to cultural promotion. Miao and Yang (2023) aimed to use text-to-image AI tools as assistants to produce stunning images and create novel mementos of travel, which can help deepen a traveler’s impression of the destination [26]. Zhang and Romainoor (2023) innovatively combined an AIGC algorithm with computer vision for image post-processing, giving Yangliuqing New Year prints a high-quality pop-art style and presenting a novel method for promoting New Year prints [27]. Netflix utilized AIGC technology to develop Auto-Art, an automated cover generation system that rapidly creates diverse cover art, providing more choice and variety [28]. Fan et al. empowered farmer paintings with AIGC and guided the public to participate in creating Jinshan farmer paintings by establishing a farmer painting database and training derivative generation algorithms [29]. Chung and Huang (2022) utilized the AIGC algorithm to transform Chinese ink paintings into realistic images and provide references for different styles [30]. Furthermore, Wang (2023) explored the use of Midjourney to examine the application of AIGC drawing tools in UI interface design, with the generated outcomes providing design teams with valuable inspiration [31]. Zhang et al. (2023) chose Midjourney to co-create the Yongle Palace digital exhibition center in Shanxi Province, using a large number of effect diagrams to illustrate and disseminate the design and offering insights into innovative design schemes and processes in traditional cultural inheritance [32]. Creative intelligence has emerged with the rise of artificial intelligence, generating unexpected effects under the framework of the algorithm [33].

2.2. Research Gap and Eye-Tracking Technology

AIGC has exhibited robust data processing and prediction capabilities in practical applications, with its algorithms playing a vital role in facilitating the solution of research problems through deep learning-based data analysis. These studies effectively illuminate the “instrumental nature” [33] of AIGC. Although the results generated by AIGC serve as inspiration for designers, limited research has investigated whether products generated by AIGC meet the preferences of their intended users and whether they can enhance users’ attention to the product and thus, to a certain degree, promote product sales. This study addresses this gap using eye-tracking methodology.
This study employs eye-tracking technology to examine user attention towards AI-generated products. Eye-tracking technology is typically used to observe what attracts a user’s attention and to examine the user’s behavior [34]. Compared with questionnaires, interviews, scales, and other conventional survey methods, data obtained from eye-tracking techniques are less susceptible to the study population’s subjective biases, competence levels, and environmental factors [35,36]. Eye movements can reveal users’ original feelings, since visual perception is closely linked with emotional response and users are often unable to voluntarily control their physiological responses [36,37,38]. Eye-tracking technologies capture signals from the movements and activities of the pupil, cornea, sclera, iris, retina, and other eye components using various methods, including shape-based, appearance-based, feature-based, and hybrid methods [39,40]. The eye indicators of an eye tracker include time to first fixation, first fixation duration, fixation count, fixation duration, visit count, and visit duration, among others [41]. The number of usability studies incorporating eye-tracking methodology has increased in the past decade [42]. Zhou et al. (2023) revealed differences in people’s visual perceptions and preferences towards waterfront parks via eye movement experiments, which informed more judicious landscape element selections and park space arrangements [43]. Liu et al. (2021) conducted a study on users’ visual search efficiency and their experience with application icons varying in color and border shape through the use of eye-tracking techniques [44]. Qu and Guo (2019) investigated the correlation between eye movements and users’ emotional responses to product features using images of various SUVs and accounting for gender differences [45]. Liao et al. (2019) employed eye-tracking technology to examine children’s visual attention towards texts and images of equivalent areas within storybooks, offering a scientific basis for the layout design of electronic storybooks [46].
The preceding analyses demonstrate AIGC’s broad applicability in various fields and its vital role in facilitating the solution of research problems. As an emerging AI productivity engine, AIGC has also triggered in-depth thinking in the design field. This study aims to explore the potential of AIGC in design by analyzing male and female attention towards AI-generated products, thereby verifying its assistance in the design process. Additionally, this study accounts for gender differences. The entry point for the study is wireless Bluetooth in-ear earphones. We used GPT4.0 for image generation and combined it with eye-tracking technology to analyze male and female attention to the corresponding images. The research outcomes will, to a certain degree, reflect AIGC’s capability for creation and comprehension. These findings will be valuable in accelerating product design output, enhancing product attractiveness, and facilitating product sales.

3. Methods

3.1. Stimuli

We conducted a pre-experiment to determine the type of stimuli. Based on market popularity, we generated five products using the AIGC software GPT4.0: an electric toothbrush, a projector, a Bluetooth earphone, a wireless mouse, and a watch. A 5-point Likert scale (with 1 representing the worst and 5 representing the best) was used to rate each image on five dimensions: clarity, detailing, realism, attractiveness, and overall satisfaction. The questionnaires were distributed on social media platforms; 63 valid questionnaires were returned, and the reliability of the scale was α = 0.975. The scores of each stimulus are shown in Table 1. Three graduate students who majored in industrial design and had at least four years of design experience were invited to form an expert panel. Based on the scale results and the panel’s subjective evaluations, we finally chose Bluetooth earphones as the experimental subject.
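For readers who wish to reproduce the reliability check, the following minimal Python sketch shows one standard way to compute Cronbach’s α from a respondents × items rating matrix; the ratings below are randomly generated placeholders, not the actual questionnaire data.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for an (n_respondents, n_items) rating matrix."""
    scores = np.asarray(item_scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # per-item variance
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of respondent totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Placeholder 5-point Likert ratings: 63 respondents x 5 dimensions
# (clarity, detailing, realism, attractiveness, overall satisfaction).
rng = np.random.default_rng(0)
ratings = rng.integers(1, 6, size=(63, 5))
print(f"Cronbach's alpha = {cronbach_alpha(ratings):.3f}")
```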
Twelve pairs of wireless Bluetooth in-ear earphones were selected as stimuli for the experiment. The earphones came from two sources. One part was searched through the Taobao shopping platform using the keywords “Wireless Bluetooth in-ear earphones for male” and “Wireless Bluetooth in-ear earphones for female.” Both search results were filtered by sales volume, with the top three earphones being selected as stimuli. Figure 1 shows the search process on the Taobao shopping platform, and Figure 2 summarizes the six pairs of earphones. The remaining six pairs were generated using the AIGC software GPT4.0 with specific commands. For preferences associated with males, the command “Please generate wireless Bluetooth in-ear earphones favored by males, featuring a white background and earphone compartment” was used, resulting in three pairs of earphones. This process was replicated for female preferences with the command “Please generate wireless Bluetooth in-ear earphones favored by females, featuring a white background and earphone compartment”, resulting in another three pairs of earphones. Figure 3 shows the process of generating stylized earphones for males and females using the AIGC software GPT4.0, and Figure 4 summarizes the six pairs of earphones.
The 12 pairs of earphones obtained through the above steps were categorized into three stimuli groups following the removal of the background and non-target elements using Photoshop 2020. Stimuli group 1 presents three pairs of Bluetooth earphones for females obtained from the Taobao shopping platform and three pairs generated for females by the AIGC software GPT4.0. The earphones are systematically arranged in an alternating sequence, where each pair from Taobao is immediately followed by a pair from the AIGC software, maintaining a consistent alternating order throughout the group. Stimuli group 2 is composed similarly but contains the Bluetooth earphones for males. Stimuli group 3 exclusively contains the six Bluetooth earphones generated by the AIGC software GPT4.0. These earphones are arranged in an alternating sequence, starting with those generated for females, followed by those generated for males, and continuing in this pattern throughout the group. Figure 5 illustrates the three stimuli groups.

3.2. Participants

According to a report on Bluetooth earphone consumption trends and market development published on the Baidu website for the 2022–2023 period, the consumer group for this product mainly comprises the younger generation, especially those aged 18 to 29, and willingness to buy Bluetooth earphones decreases significantly with age. Therefore, the data collected in this study mainly came from an audience group comprising individuals aged 18–29.
A total of 44 Chinese undergraduate students from Nanjing Forestry University (22 males and 22 females; mean age = 21.75, SD = 2.45, range 18–27 years) were recruited as participants. All participants had normal or corrected-to-normal vision, and all expressed willingness to cooperate in completing the experiment.

3.3. Apparatus

The eye-tracking experiment was conducted in the Human Factors and Ergonomics Laboratory using the Tobii Pro X series non-contact eye tracker; Figure 6 shows the equipment used. The equipment included a mainframe computer (Dell OptiPlex 7000 Compact Computer, Dell Inc., Xiamen, China) and a display screen (DELL E2423H, with a display area 53.15 cm wide and 29.90 cm high and a screen resolution of 1920 × 1080 pixels, Dell Inc., Xiamen, China). Stimuli were presented at the center of the screen on a white background. Ergolab 3.0 software, installed on the mainframe computer, assisted the eye tracker in recording the experimental process and in collecting, displaying, and exporting each participant’s eye movement data. Laboratory conditions were maintained at a temperature of 25 °C, a humidity of 40%, and suitable lighting.

3.4. Eye-Tracking Measures

To analyze the eye-tracking data, areas of interest (AOIs) were defined for each group before the experiment. For AI-generated earphones, those intended for females were labeled A1, A2, and A3, while those intended for males were labeled A4, A5, and A6. Similarly, for earphones searched from Taobao, those intended for females were labeled T1, T2, and T3, and those intended for males were labeled T4, T5, and T6. The AOIs and corresponding numbers for each stimulus are shown in Figure 7. For each stimuli group, several AOI measures were calculated using Ergolab 3.0: fixation duration, fixation counts, total fixation duration, and total fixation counts.
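Although the AOI measures in this study were computed in Ergolab 3.0, the aggregation logic can be illustrated with a short Python sketch; the AOI bounding boxes and fixation records below are hypothetical stand-ins for the exported eye-movement data.

```python
from dataclasses import dataclass

@dataclass
class Fixation:
    x: float          # gaze position on screen, in pixels
    y: float
    duration: float   # fixation duration, in seconds

# Hypothetical AOI bounding boxes: (left, top, right, bottom) in pixels.
AOIS = {"A1": (100, 200, 400, 500), "T1": (500, 200, 800, 500)}

def aoi_metrics(fixations):
    """Accumulate fixation count and total fixation duration per AOI."""
    metrics = {name: {"count": 0, "duration": 0.0} for name in AOIS}
    for f in fixations:
        for name, (left, top, right, bottom) in AOIS.items():
            if left <= f.x <= right and top <= f.y <= bottom:
                metrics[name]["count"] += 1
                metrics[name]["duration"] += f.duration
    return metrics

print(aoi_metrics([Fixation(250, 300, 1.2), Fixation(600, 350, 0.8)]))
```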

3.5. Procedure

The experiment was conducted on the three sets of stimuli separately, with participants viewing the stimuli from 60 cm away from the center of the display. Each set of stimuli was centrally displayed in a rectangle of 1920 × 1080 pixels. Following a pilot viewing test with different people before the formal experiment, the viewing duration for each stimulus group was set at 25 s. To exclude the influence of irrelevant variables on this study, each participant was briefed on the experimental procedure and relevant precautions before the experiment, and measures were taken to keep those who had not yet participated from observing the experiment in advance.
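As a note on the viewing geometry, the size of a stimulus in degrees of visual angle can be estimated from the display width reported in Section 3.3 and the 60 cm viewing distance; the 300-pixel stimulus width below is a hypothetical example, not a measured value.

```python
import math

SCREEN_WIDTH_CM = 53.15    # DELL E2423H display width (Section 3.3)
SCREEN_WIDTH_PX = 1920     # horizontal resolution
VIEWING_DISTANCE_CM = 60   # participant-to-screen distance

def visual_angle_deg(width_px):
    """Visual angle subtended by a stimulus of the given width in pixels."""
    width_cm = width_px * SCREEN_WIDTH_CM / SCREEN_WIDTH_PX
    return 2 * math.degrees(math.atan(width_cm / (2 * VIEWING_DISTANCE_CM)))

print(f"A 300 px wide stimulus subtends {visual_angle_deg(300):.1f} degrees")
```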
The 44 participants were divided into three groups to conduct the experiments sequentially. All of them were instructed to focus on the parts of the stimuli they were interested in, without any other actions. Twelve female students were assigned to view Stimuli group 1, eleven male students were assigned to view Stimuli group 2, and twenty-one students (ten females and eleven males) were assigned to view Stimuli group 3. Participants who had finished the experiment were prohibited from divulging specific information to those who had not yet participated. The general process of the experiment is shown in Figure 8. The variable labels used in this paper and the critical factors of the experiment are shown in Table 2 and Table 3.

4. Results

4.1. Analysis of Female Participants’ Attention to Stimuli Group 1

Without considering interactions among factors, this section uses a one-factor ANOVA, with stimulus category as the influencing factor, to analyze the difference in female attention towards the two types of stimuli. Figure 9 shows the heat map obtained from the 12 female participants after viewing Stimuli group 1.
Regarding the three stimuli generated by the AIGC software GPT4.0, the fixation duration of A1 is 100.697 s, that of A2 is 39.064 s, and that of A3 is 30.601 s. A1 is higher than A2 and A3. The fixation count of A1 is 68, that of A2 is 33, and that of A3 is 30. The average duration of each fixation is 1.480 s for A1, 1.185 s for A2, and 1.020 s for A3. Regarding the stimuli retrieved from the Taobao shopping platform, the fixation duration of T1 is 50.674 s, that of T2 is 25.683 s, and that of T3 is 47.270 s. T1 is higher than T2 and T3. The fixation count of T1 is 46, that of T2 is 36, and that of T3 is 46. The average duration of each fixation is 1.102 s for T1, 0.713 s for T2, and 1.028 s for T3. The total fixation duration of AF is 170.363 s, and that of TF is 123.627 s; AF is much higher than TF. The total fixation count of AF is 131, and that of TF is 128; the average fixation duration per time is 1.300 s for AF and 0.966 s for TF. The specific data are shown in Table 4 and Figure 10a,b.
For fixation duration, the stimulus category (AF vs. TF) had a significant effect, with F(2, 284) = 3.942, p = 0.020, and η = 0.164. These results are visually presented in Figure 10c, indicating that the stimuli generated by the AIGC software GPT4.0 are more attractive to the female participants.
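A one-factor ANOVA of this kind can also be reproduced outside Ergolab. The sketch below uses SciPy on synthetic per-fixation durations (gamma-distributed placeholders, not the recorded data) and computes an eta-squared effect size as SS_between/SS_total; the exact degrees of freedom reported above depend on how Ergolab structures the observations.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic per-fixation durations (s), pooled over the three stimuli in
# each category; real values would come from the Ergolab export.
af = rng.gamma(shape=2.0, scale=0.65, size=131)  # AI-generated (AF)
tf = rng.gamma(shape=2.0, scale=0.48, size=128)  # Taobao (TF)

f_stat, p_val = stats.f_oneway(af, tf)

# Effect size: eta squared = SS_between / SS_total.
pooled = np.concatenate([af, tf])
grand = pooled.mean()
ss_between = len(af) * (af.mean() - grand) ** 2 + len(tf) * (tf.mean() - grand) ** 2
ss_total = ((pooled - grand) ** 2).sum()
print(f"F = {f_stat:.3f}, p = {p_val:.4f}, eta^2 = {ss_between / ss_total:.3f}")
```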

4.2. Analysis of Male Participants’ Attention to Stimuli Group 2

Without considering interactions among factors, this section uses a one-factor ANOVA, with stimulus category as the influencing factor, to analyze the difference in male attention towards the two types of stimuli. Figure 11 shows the heat map obtained from the 11 male participants after viewing Stimuli group 2.
Regarding the three stimuli generated by the AIGC software GPT4.0, the fixation duration of A4 is 29.472 s, that of A5 is 60.681 s, and that of A6 is 71.729 s. A6 is higher than A5 and A4. The fixation count of A4 is 49, that of A5 is 37, and that of A6 is 36. The average duration of each fixation is 0.601 s for A4, 1.640 s for A5, and 1.992 s for A6. Regarding the stimuli retrieved from the Taobao shopping platform, the fixation duration of T4 is 20.267 s, that of T5 is 53.771 s, and that of T6 is 22.9 s. T5 is higher than T4 and T6. The fixation count of T4 is 32, that of T5 is 55, and that of T6 is 40. The average duration of each fixation is 0.633 s for T4, 0.978 s for T5, and 0.573 s for T6. The total fixation duration of AM is 161.882 s, and that of TM is 96.938 s; AM is much higher than TM. The total fixation count of AM is 122, and that of TM is 127; the average fixation duration per time is 1.327 s for AM and 0.763 s for TM. The specific data are shown in Table 5 and Figure 12a,b.
For fixation duration, the stimulus category (AM vs. TM) had a significant effect, with F(2, 302) = 8.824, p < 0.001, and η = 0.235. These results are visually presented in Figure 12c, indicating that the stimuli generated by the AIGC software GPT4.0 are more attractive to the male participants.

4.3. Analysis of Male and Female Participants’ Attention to Stimuli Group 3

This section analyzes the difference in male and female participants’ attention based on gender and stimulus category through a two-factor ANOVA. Figure 13 shows the heat maps obtained from the 11 male and 10 female participants after viewing Stimuli group 3, respectively.
Based on the subjective ratings of the product images by the twenty-one participants, a 2 × 2 (gender × stimulus category) mixed-model ANOVA revealed no significant effect of gender on the subjective emotional response, with F(1, 576) = 0.995 and p = 0.319. The data indicated a significant effect of stimulus category, F(2, 576) = 5.838, p = 0.003, on the subjective emotional response (AM M = 0.94, AF M = 0.97). The gender × stimulus category interaction was not significant, with F(2, 576) = 1.248 and p = 0.288.
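For illustration, a plain two-factor ANOVA with interaction (not a full mixed model) can be expressed in statsmodels as below; the long-format dataset is randomly generated, with one row per fixation and factors for participant gender and stimulus category (AM vs. AF), and is not the experimental data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(2)
n = 400  # illustrative number of fixation observations
df = pd.DataFrame({
    "gender": rng.choice(["male", "female"], size=n),
    "category": rng.choice(["AM", "AF"], size=n),
    "fixation_duration": rng.gamma(2.0, 0.5, size=n),
})

# Two-factor ANOVA with interaction, mirroring the gender x category design.
model = ols("fixation_duration ~ C(gender) * C(category)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```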
A one-factor ANOVA was used to analyze the differences in attention among the participants across two categories, and the results are presented in Table 6. The fixation duration of AM is 243.4 s, and that of AF is 247.433 s; AF is higher than AM. The fixation count of AM is 260, and that of AF is 255; the average duration of each fixation is 0.936 s for AM and 0.970 s for AF. For fixation duration, there was a significant effect between AM and AF, with F (2,579) = 4.866, p = 0.008 < 0.05, and η = 0.129, as displayed in Figure 14a, indicating that the stimuli generated for females attracted more attention.
Analyses were further conducted from a gender perspective. For fixation duration, there was no significant difference between the AM and AF categories when the male participants observed the stimuli, with F (2, 304) = 1.312 and p = 0.271. However, there was a significant difference between the two categories when the female participants observed the stimuli, with F (2, 272) = 4.666, p = 0.010 < 0.05, and η = 0.182, as displayed in Figure 14b. The fixation duration of AM is 120.716 s, and that of AF is 123.801 s; AF is higher than AM. The fixation count of AM is 130, and that of AF is 122; the average duration of each fixation is 0.929 s for AM and 1.015 s for AF, indicating that the female participants paid more attention to the stimuli that the AI generated for females.

5. Discussion

This study analyzes whether AI-generated stylized products can attract users’ attention more effectively and thus improve product attractiveness, choosing wireless Bluetooth in-ear earphones as the entry point. It combined the existing top-selling gender-stylized earphones on the Taobao shopping platform with AI-generated gender-stylized earphones. Eye-tracking technology converted the users’ eye-movement physiological data into numerical data, and the users’ attention to the products was analyzed through these data to assess the attractiveness of the AI-generated products.
In the experiments combining the AI-generated earphones with the existing earphones from the Taobao shopping platform (Stimuli groups 1 and 2), there was a significant effect on fixation duration between the two types of earphones; participants of both genders paid more attention to the AI-generated earphones, which suggests that the AI-generated stylized earphones are more attractive than the existing products.
In the experiment involving the two types of AI-generated gender-stylized earphones (Stimuli group 3), there was also a significant effect of stimulus category on fixation duration. The AI-generated earphones with a female style were more attractive to the younger group overall. Viewed from a gender perspective, the female participants paid more attention to the AI-generated earphones for females, whereas the male participants showed no difference in their attention to the two types of earphones.
This study found that AI-generated earphones have greater appeal than the existing products. Among the AI-generated earphones designed for different genders, the Bluetooth earphones generated for the female group were more popular. This study demonstrates that using AIGC in product design significantly enhances the attractiveness of products, and its creativity and insight into product styling can provide designers with valuable references and assist their design process to a certain degree. Furthermore, beyond earphones, future research will delve deeper into other products to further investigate the potential of integrating AIGC into design practices.

6. Limitations and Future Work

This study has several limitations that need to be considered. Firstly, the experiment used wireless Bluetooth in-ear earphones as the stimuli without extending the research to other products, thus limiting the general applicability of the findings. Secondly, the experiment participants were mainly students from the same school, aged 18 to 27. The limited age range and homogeneity in background do not adequately represent the esthetic preferences of other audiences in different regions. Thirdly, eye-tracking technology was mainly used to measure and analyze the data. Although this method lends the experimental results a degree of objectivity, an individual’s product appreciation is a multifaceted physiological and psychological process influenced by factors including emotions, experiences, cultural background, and socioeconomic status, which may be overlooked in a purely quantitative analysis. Fourthly, the absence of a control group with neutral earphone images limits this study’s ability to isolate the impact of stylization on user attention. Furthermore, the structural designs generated by AI tend to be rough, making it challenging to ensure the rationality of local design details [47].
To address the limitations of this study, expanding and diversifying the sample will improve the general applicability of the findings. Future studies should encompass a broader range of products beyond Bluetooth earphones and invite participants from varied ages, regions, and cultural backgrounds. Regarding the research methodology, in addition to eye-tracking technology, interviews, questionnaires, and other methods should be combined to delve into participants’ preferences and experiences.

7. Conclusions

In this study, eye-tracking technology was used to analyze participants’ attention towards AI-generated stylized earphones as well as stylized earphones from the Taobao shopping platform, using the participants’ eye-tracking data to determine whether the AI-generated products are more attractive. The experimental results are as follows: (1) Within the category of female-styled Bluetooth earphones—including those searched on the Taobao shopping platform and those generated by AI—the female participants paid more attention to the AI-generated female-stylized earphones. (2) Within the category of male-styled Bluetooth earphones—including those searched on the Taobao shopping platform and those generated by AI—the male participants paid more attention to the AI-generated male-stylized earphones. (3) The AI-generated female-styled earphones were more attractive to the younger group as a whole: the male participants’ attention did not differ between the two AI-generated styles, while the female participants paid more attention to the AI-generated female-styled earphones.

Author Contributions

Conceptualization, C.C.; methodology, C.C. and Y.T.; validation, Y.T.; formal analysis, Y.T.; investigation, Y.T.; data curation, Y.T.; writing—original draft preparation, Y.T.; writing—review and editing, C.C.; visualization, Y.T.; supervision, C.C.; project administration, C.C. All authors have read and agreed to the published version of the manuscript.

Funding

This paper was supported by the National Nature Science Foundation of China Grant (No. 72201128) and the China Postdoctoral Science Foundation (No. 2023M730483).

Institutional Review Board Statement

Ethical review and approval were waived for this study due to the low-risk nature of the research and the use of fully anonymized data. The approving agency for the exemption is Nanjing Forestry University.

Informed Consent Statement

Informed consent was obtained from all subjects involved in this study. This study was approved by the school ethics committee.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy.

Acknowledgments

We thank the Laboratory of Human Factors and Ergonomics of NJFU for supporting the experiments. We are grateful to Jing Zhang for providing significant support in the funding acquisition, and we thank Yan Qiu and Yuxi Lin for their assistance during the dissertation process.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Guo, D.; Chen, H.; Wu, R.; Wang, Y. AIGC challenges and opportunities related to public safety: A case study of ChatGPT. J. Saf. Sci. Resil. 2023, 4, 329–339. [Google Scholar] [CrossRef]
  2. Chen, Y.; Ma, H. AIGC’s divine assistance in the field of art and design majors: An example of Stable Diffusion. Fash. China 2024, 24, 73–84. [Google Scholar] [CrossRef]
  3. Foo, L.G.; Rahmani, H.; Liu, J. Ai-generated content (aigc) for various data modalities: A survey. arXiv 2023, arXiv:2308.14177. [Google Scholar]
  4. Radford, A.; Kim, J.W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J. Learning transferable visual models from natural language supervision. In Proceedings of the International Conference on Machine Learning, Virtual, 18–24 July 2021; pp. 8748–8763. [Google Scholar]
  5. Xia, X.; Dong, G.; Li, F.; Zhu, L.; Ying, X. When CLIP meets cross-modal hashing retrieval: A new strong baseline. Inf. Fusion 2023, 100, 101968. [Google Scholar] [CrossRef]
  6. Cao, Y.; Li, S.; Liu, Y.; Yan, Z.; Dai, Y.; Yu, P.S.; Sun, L. A comprehensive survey of ai-generated content (aigc): A history of generative ai from gan to chatgpt. arXiv 2023, arXiv:2303.04226. [Google Scholar]
  7. Grechka, A.; Couairon, G.; Cord, M. GradPaint: Gradient-guided inpainting with diffusion models. Comput. Vis. Image Underst. 2024, 240, 103928. [Google Scholar] [CrossRef]
  8. Liu, J.; Cheng, P.; Dai, J.; Liu, J. DiffuCom: A novel diffusion model for comment generation. Knowl. Based Syst. 2023, 281, 111069. [Google Scholar] [CrossRef]
  9. Bu, K.; Liu, Y.; Ju, X. Efficient Utilization of Pre-trained Models: A Review of Sentiment Analysis via Prompt Learning. Knowl. Based Syst. 2023, 111148. [Google Scholar] [CrossRef]
  10. Wu, J.; Gan, W.; Chen, Z.; Wan, S.; Lin, H. Ai-generated content (aigc): A survey. arXiv 2023, arXiv:2304.06632. [Google Scholar]
  11. Joshi, R.M.; Tao, S.; Aaron, P.; Quiroz, B. Cognitive component of componential model of reading applied to different orthographies. J. Learn. Disabil. 2012, 45, 480–486. [Google Scholar] [CrossRef]
  12. Lu, Z.; Song, X.; Jin, Y. State of arts and development of intelligent design methods under the AIGC trend. Packag. Eng. 2023, 44, 18–33+13. [Google Scholar] [CrossRef]
  13. Wang, B.; Niu, C. From ChatGPT to GovGPT: Generative Artificial Intelligence-driven Government Service Ecosystem Construction. E-Government 2023, 25–38. [Google Scholar] [CrossRef]
  14. Guo, P.; Mahjoubi, S.; Liu, K.; Meng, W.; Bao, Y. Self-updatable AI-assisted design of low-carbon cost-effective ultra-high-performance concrete (UHPC). Case Stud. Constr. Mater. 2023, 19, e02625. [Google Scholar] [CrossRef]
  15. Han, C.; Kim, D.W.; Kim, S.; You, S.C.; Park, J.Y.; Bae, S.; Yoon, D. Evaluation of GPT-4 for 10-year cardiovascular risk prediction: Insights from the UK Biobank and KoGES data. Iscience 2024, 27, 109022. [Google Scholar] [CrossRef] [PubMed]
  16. Leng, G.; Zhang, G.; Xiong, Y.-J.; Chen, J. CODP-1200: An AIGC based benchmark for assisting in child language acquisition. Displays 2024, 82, 102627. [Google Scholar] [CrossRef]
  17. Xu, Y.; Zhi, C.; Guo, H.; Zhang, M.; Wu, H.; Sun, R.; Dong, Z.; Yu, L. ChatGPT for textile science and materials: A perspective. Mater. Today Commun. 2023, 37, 107101. [Google Scholar] [CrossRef]
  18. Wang, C.; Zhao, J.; Chan, T.-M. Artificial intelligence (AI)-assisted simulation-driven earthquake-resistant design framework: Taking a strong back system as an example. Eng. Struct. 2023, 297, 116892. [Google Scholar] [CrossRef]
  19. Zhang, Y.; Prebensen, N.K. Co-creating with ChatGPT for tourism marketing materials. Ann. Tour. Res. Empir. Insights 2024, 5, 100124. [Google Scholar] [CrossRef]
  20. Han, G.; Zhang, K. AIGC Marketing: Human-machine symbiotic marketing model promotes digital marketing to leapfrog to digital intelligence. Enterp. Econ. 2024, 43, 111–124. [Google Scholar] [CrossRef]
  21. Li, Y. With AIGC on the rise, it’s time for brand marketing to change its game again. PR Mag. 2023, 47–48. [Google Scholar] [CrossRef]
  22. Song, Y.; Qian, X.; Peng, L.; Ye, Z.; Qin, J. Cultural and creative design of AIGC Chinese aesthetic. Packag. Eng. 2023, 44, 1–8+33. [Google Scholar] [CrossRef]
  23. Wang, Y.; Gong, X.; Zhu, H.; Li, G. Research on creative design of ceramics under AIGC technology. Ceram. Sci. Art 2023, 57, 84–87. [Google Scholar] [CrossRef]
  24. Chen, Y.; Fang, Q.; Xiong, Z.; Yi, X.; Wang, Q. Opportunities and challenges of the application of ChatGPT and MJ in the field of home design. Furnit. Inter. Des. 2023, 30, 51–55. [Google Scholar] [CrossRef]
  25. Wu, F.; Hsiao, S.-W.; Lu, P. An AIGC-empowered methodology to product color matching design. Displays 2024, 81, 102623. [Google Scholar] [CrossRef]
  26. Miao, L.; Yang, F.X. Text-to-image AI tools and tourism experiences. Ann. Tour. Res. 2023, 102, 103642. [Google Scholar] [CrossRef]
  27. Zhang, B.; Romainoor, N.H. Research on artificial intelligence in new year prints: The application of the generated pop art style images on cultural and creative products. Appl. Sci. 2023, 13, 1082. [Google Scholar] [CrossRef]
  28. Liu, X. Application of AIGC technology in dynamic graphic design. Shanghai Packag. 2023, 30–32. [Google Scholar] [CrossRef]
  29. Chai, J.; Ding, H. AIGC and craftwork design. Shanghai Arts Crafts 2023, 75–77. [Google Scholar]
  30. Chung, C.-Y.; Huang, S.-H. Interactively transforming Chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimed. Tools Appl. 2023, 82, 11663–11696. [Google Scholar] [CrossRef]
  31. Wang, L. An exploration of the application of AIGC drawing tools in UI interface design—Taking Midjourney as an example. Comput. Knowl. Technol. 2023, 19, 108–111. [Google Scholar] [CrossRef]
  32. Zhang, J.; Wang, Y.; Yuan, Z. AIGC empowered traditional culture inheritance design method and practice—Taking the design of digital exhibition center of Yongle Gong in Shanghai province as an example. Design 2023, 36, 30–33. [Google Scholar] [CrossRef]
  33. Tao, W.; Gao, S.; Yuan, Y. Boundary crossing: An experimental study of individual perceptions toward AIGC. Front. Psychol. 2023, 14, 1185880. [Google Scholar] [CrossRef] [PubMed]
  34. Kuhar, M.; Merčun, T. Exploring user experience in digital libraries through questionnaire and eye-tracking data. Libr. Inf. Sci. Res. 2022, 44, 101175. [Google Scholar] [CrossRef]
  35. Ariely, D.; Berns, G.S. Neuromarketing: The hope and hype of neuroimaging in business. Nat. Rev. Neurosci. 2010, 11, 284–292. [Google Scholar] [CrossRef]
  36. Guo, F.; Ding, Y.; Liu, W.; Liu, C.; Zhang, X. Can eye-tracking data be measured to assess product design?: Visual attention mechanism should be considered. Int. J. Ind. Ergon. 2016, 53, 229–235. [Google Scholar] [CrossRef]
  37. Ho, C.-H.; Lu, Y.-N. Can pupil size be measured to assess design products? Int. J. Ind. Ergon. 2014, 44, 436–441. [Google Scholar] [CrossRef]
  38. Ho, H.-F. The effects of controlling visual attention to handbags for women in online shops: Evidence from eye movements. Comput. Hum. Behav. 2014, 30, 146–152. [Google Scholar] [CrossRef]
  39. Hansen, D.W.; Ji, Q. In the eye of the beholder: A survey of models for eyes and gaze. IEEE Trans. Pattern Anal. Mach. Intell. 2009, 32, 478–500. [Google Scholar] [CrossRef]
  40. Majaranta, P.; Bulling, A. Eye tracking and eye-based human–computer interaction. In Advances in Physiological Computing; Springer: Berlin/Heidelberg, Germany, 2014; pp. 39–65. [Google Scholar]
  41. Ilhan, A.E.; Togay, A. Pursuit of methodology for data input related to taste in design: Using eye tracking technology. Displays 2023, 76, 102335. [Google Scholar] [CrossRef]
  42. Almourad, M.B.; Bataineh, E.; Hussain, M.; Wattar, Z. Usability Assessment of a University Academic Portal using Eye Tracking Technology. Procedia Comput. Sci. 2023, 220, 323–330. [Google Scholar] [CrossRef]
  43. Zhou, X.; Cen, Q.; Qiu, H. Effects of urban waterfront park landscape elements on visual behavior and public preference: Evidence from eye-tracking experiments. Urban For. Urban Green. 2023, 82, 127889. [Google Scholar] [CrossRef]
  44. Liu, W.; Cao, Y.; Proctor, R.W. How do app icon color and border shape influence visual search efficiency and user experience? Evidence from an eye-tracking study. Int. J. Ind. Ergon. 2021, 84, 103160. [Google Scholar] [CrossRef]
  45. Qu, Q.-X.; Guo, F. Can eye movements be effectively measured to assess product design?: Gender differences should be considered. Int. J. Ind. Ergon. 2019, 72, 281–289. [Google Scholar] [CrossRef]
  46. Liao, C.-N.; Chang, K.-E.; Huang, Y.-C.; Sung, Y.-T. Electronic storybook design, kindergartners’ visual attention, and print awareness: An eye-tracking investigation. Comput. Educ. 2020, 144, 103703. [Google Scholar] [CrossRef]
  47. Liao, W.; Lu, X.; Fei, Y.; Gu, Y.; Huang, Y. Generative AI design for building structures. Autom. Constr. 2024, 157, 105187. [Google Scholar] [CrossRef]
Figure 1. The process of searching for earphones on the Taobao shopping platform.
Figure 2. Six pairs of earphones obtained from the Taobao shopping platform.
Figure 3. The process of generating stylized earphones using the AIGC software GPT4.0.
Figure 4. Six pairs of earphones generated by the AIGC software GPT4.0.
Figure 5. Three stimuli groups.
Figure 6. Experimental instrument.
Figure 7. AOIs and numbers of each stimulus.
Figure 8. The process of studying whether AI-generated products are more attractive.
Figure 9. The heat map for Stimuli group 1.
Figure 10. Stimuli group 1: (a) the female participants’ fixation counts for each stimulus; (b) the female participants’ fixation duration for each stimulus; (c) the fixation duration for the two categories among the female participants.
Figure 11. The heat map for Stimuli group 2.
Figure 12. Stimuli group 2: (a) the male participants’ fixation counts for each stimulus; (b) the male participants’ fixation duration for each stimulus; (c) the fixation duration for the two categories among the male participants.
Figure 13. The heat maps for Stimuli group 3.
Figure 14. (a) The fixation duration for the two categories among all participants. (b) The fixation duration for the two categories among the female participants.
Table 1. The scores of each stimulus.

Category | Clarity | Detailing | Realism | Attractiveness | Overall Satisfaction | Total
Electric toothbrush | 3.49 | 3.57 | 3.68 | 3.52 | 3.60 | 17.86
Projector | 3.48 | 3.65 | 3.37 | 3.60 | 3.56 | 17.66
Bluetooth earphone | 3.71 | 3.65 | 3.83 | 3.78 | 3.75 | 18.72
Wireless mouse | 3.68 | 3.56 | 3.68 | 3.57 | 3.51 | 18.00
Watch | 3.40 | 3.52 | 3.52 | 3.52 | 3.51 | 17.47
Table 2. List of labels for each variable.

Independent Variable | Acronym | Segmentation | Meaning
GPT4.0-generated wireless Bluetooth in-ear earphones | AF | A1, A2, A3 | GPT4.0-generated Bluetooth earphones for females.
GPT4.0-generated wireless Bluetooth in-ear earphones | AM | A4, A5, A6 | GPT4.0-generated Bluetooth earphones for males.
The top three selling wireless Bluetooth in-ear earphones searched from the Taobao shopping platform | TF | T1, T2, T3 | The top three selling Bluetooth earphones for females on the Taobao shopping platform.
The top three selling wireless Bluetooth in-ear earphones searched from the Taobao shopping platform | TM | T4, T5, T6 | The top three selling Bluetooth earphones for males on the Taobao shopping platform.
Gender | G | GF | Female
Gender | G | GM | Male
Table 3. Experimental critical factors.

Implicit Variable | Acronym | Meaning
Fixation counts | FC | The number of times the gaze is fixated on the area of interest.
Fixation duration | FD | How long the gaze is fixated on the area of interest.
Total fixation counts | TFC | The total number of times the gaze passes over the area of interest.
Total fixation duration | TFD | The total time the gaze passes over the area of interest.
Table 4. Data analysis of female participants’ attention to Stimuli group 1.

Category | Category Segmentation | FC | FD (s) | TFC | TFD (s) | F | p | η
AF | A1 | 68 | 100.697 | 131 | 170.363 | 3.942 | 0.020 | 0.164
AF | A2 | 33 | 39.064 | | | | |
AF | A3 | 30 | 30.601 | | | | |
TF | T1 | 46 | 50.674 | 128 | 123.627 | | |
TF | T2 | 36 | 25.683 | | | | |
TF | T3 | 46 | 47.270 | | | | |
Table 5. Data analysis of male participants’ attention to Stimuli group 2.

Category | Category Segmentation | FC | FD (s) | TFC | TFD (s) | F | p | η
AM | A4 | 49 | 29.472 | 122 | 161.882 | 8.824 | <0.001 | 0.235
AM | A5 | 37 | 60.681 | | | | |
AM | A6 | 36 | 71.729 | | | | |
TM | T4 | 32 | 20.267 | 127 | 96.938 | | |
TM | T5 | 55 | 53.771 | | | | |
TM | T6 | 40 | 22.900 | | | | |
Table 6. Data analysis of male and female participants’ attention to Stimuli group 3.

Category | Category Segmentation | FC | FD (s) | F | p | η
G | AM | 260 | 243.400 | 4.866 | 0.008 | 0.129
G | AF | 255 | 247.433 | | |
GM | AM | 130 | 122.684 | 1.312 | 0.271 | 0.092
GM | AF | 133 | 123.633 | | |
GF | AM | 130 | 120.716 | 4.666 | 0.010 | 0.182
GF | AF | 122 | 123.801 | | |
