Search Results (16)

Search Parameters:
Keywords = imagery style evaluation

22 pages, 11395 KB  
Article
A SHDAViT-MCA Block-Based Network for Remote-Sensing Semantic Change Detection
by Weiqi Ren, Zhigang Zhang, Shaowen Liu, Haoran Xu, Zheng Ma, Rui Gao, Qingming Kong, Shoutian Dong and Zhongbin Su
Remote Sens. 2025, 17(17), 3026; https://doi.org/10.3390/rs17173026 - 1 Sep 2025
Viewed by 447
Abstract
This study addresses the challenge of accurately detecting agricultural land-use changes in bi-temporal remote sensing imagery, which is hindered by cross-temporal interference, multi-scale feature modeling limitations, and poor large-area scalability. It proposes the Semantic Change Detection (SCD) with Single-Head Dual-Attention Vision Transformer (SHDAViT) and Multidimensional Collaborative Attention (MCA) Block-Based Network (SMBNet). The SHDAViT module enhances local-global feature aggregation through a single-head self-attention mechanism combined with channel–spatial dual attention. The MCA module mitigates cross-temporal style discrepancies by modeling cross-dimensional feature interactions, fusing bi-temporal information to accentuate true change regions. SHDAViT extracts discriminative features from each phase image, while MCA aligns and fuses these features to suppress noise and amplify effective change signals. Evaluated on the newly developed AgriCD dataset and the JL1 benchmark, SMBNet outperforms five mainstream methods (BiSRNet, Bi-SRUNet++, HRSCD.str3, HRSCD.str4, and CDSC), achieving state-of-the-art performance, with F1 scores of 91.18% (AgriCD) and 86.44% (JL1), demonstrating superior accuracy in detecting subtle farmland transitions. Experimental results confirm the framework’s robustness against label imbalance and environmental variations, offering a practical solution for agricultural monitoring. Full article
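As a rough illustration of the bi-temporal pattern this abstract describes (a shared encoder per acquisition date, attention-refined features, and a fusion step feeding a per-pixel change head), here is a minimal PyTorch sketch. It is not the authors' SHDAViT/MCA implementation; all module names, layer sizes, and the attention design are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    """Stand-in for the channel-spatial dual attention mentioned in the abstract."""
    def __init__(self, channels):
        super().__init__()
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        self.spatial_gate = nn.Sequential(nn.Conv2d(channels, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, x):
        x = x * self.channel_gate(x)
        return x * self.spatial_gate(x)

class BiTemporalChangeNet(nn.Module):
    def __init__(self, in_ch=3, feat=64, num_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(                        # shared weights for both dates
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU())
        self.attn = ChannelSpatialAttention(feat)
        self.fuse = nn.Conv2d(2 * feat, feat, 3, padding=1)  # bi-temporal fusion
        self.head = nn.Conv2d(feat, num_classes, 1)          # per-pixel change logits

    def forward(self, img_t1, img_t2):
        f1 = self.attn(self.encoder(img_t1))
        f2 = self.attn(self.encoder(img_t2))
        fused = torch.relu(self.fuse(torch.cat([f1, f2], dim=1)))
        return self.head(fused)

if __name__ == "__main__":
    net = BiTemporalChangeNet()
    t1, t2 = torch.randn(1, 3, 256, 256), torch.randn(1, 3, 256, 256)
    print(net(t1, t2).shape)  # torch.Size([1, 2, 256, 256])
```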

23 pages, 6315 KB  
Article
A Kansei-Oriented Morphological Design Method for Industrial Cleaning Robots Integrating Extenics-Based Semantic Quantification and Eye-Tracking Analysis
by Qingchen Li, Yiqian Zhao, Yajun Li and Tianyu Wu
Appl. Sci. 2025, 15(15), 8459; https://doi.org/10.3390/app15158459 - 30 Jul 2025
Viewed by 325
Abstract
In the context of Industry 4.0, user demands for industrial robots have shifted toward diversification and experience-orientation. Effectively integrating users’ affective imagery requirements into industrial-robot form design remains a critical challenge. Traditional methods rely heavily on designers’ subjective judgments and lack objective data on user cognition. To address these limitations, this study develops a comprehensive methodology grounded in Kansei engineering that combines Extenics-based semantic analysis, eye-tracking experiments, and user imagery evaluation. First, we used web crawlers to harvest user-generated descriptors for industrial floor-cleaning robots and applied Extenics theory to quantify and filter key perceptual imagery features. Second, eye-tracking experiments captured users’ visual-attention patterns during robot observation, allowing us to identify pivotal design elements and assemble a sample repository. Finally, the semantic differential method collected users’ evaluations of these design elements, and correlation analysis mapped emotional needs onto stylistic features. Our findings reveal strong positive correlations between four core imagery preferences—“dignified,” “technological,” “agile,” and “minimalist”—and their corresponding styling elements. By integrating qualitative semantic data with quantitative eye-tracking metrics, this research provides a scientific foundation and novel insights for emotion-driven design in industrial floor-cleaning robots. Full article
(This article belongs to the Special Issue Intelligent Robotics in the Era of Industry 5.0)
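The final mapping step described in this abstract (correlating semantic-differential ratings of imagery words with ratings of styling elements) can be pictured with a small pandas/SciPy sketch. The column names and rating values below are hypothetical, not the study's data.

```python
import pandas as pd
from scipy.stats import pearsonr

# 5-point semantic-differential ratings per robot sample (hypothetical values)
ratings = pd.DataFrame({
    "dignified":      [4.2, 3.1, 4.8, 2.9, 3.7],
    "technological":  [4.5, 2.8, 4.9, 3.0, 3.9],
    "enclosed_body":  [4.0, 3.0, 4.6, 2.7, 3.5],   # rating of a styling element
    "angular_edges":  [4.4, 2.9, 4.7, 3.2, 3.8],   # rating of another styling element
})

for word, element in [("dignified", "enclosed_body"), ("technological", "angular_edges")]:
    r, p = pearsonr(ratings[word], ratings[element])
    print(f"{word:>14} vs {element:<14} r = {r:.2f}, p = {p:.3f}")
```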

29 pages, 13423 KB  
Article
Deep Learning-Based Imagery Style Evaluation for Cross-Category Industrial Product Forms
by Jianmin Zhang, Yuliang Li, Mingxing Zhou and Sixuan Chu
Appl. Sci. 2025, 15(11), 6061; https://doi.org/10.3390/app15116061 - 28 May 2025
Viewed by 493
Abstract
The evaluation of imagery style in industrial product design is inherently subjective, making it difficult for designers to accurately capture user preferences. This ambiguity often results in suboptimal market positioning and design decisions. Existing methods, primarily limited to single product categories, rely on labor-intensive user surveys and computationally expensive data processing techniques, thus failing to support cross-category collaboration. To address this, we propose an Imagery Style Evaluation (ISE) method that enables rapid, objective, and intelligent assessment of imagery styles across diverse industrial product forms, assisting designers in better capturing user preferences. By combining Kansei Engineering (KE) theory with four key visual morphological features—contour lines, edge transition angles, visual directions and visual curvature—we define six representative style paradigms: Naturalness, Technology, Toughness, Steadiness, Softness, and Dynamism (NTTSSD), enabling quantification of the mapping between product features and user preferences. A deep learning-based ISE architecture was constructed by integrating the NTTSSD paradigms into an enhanced YOLOv5 network with a Convolutional Block Attention Module (CBAM) and semantic feature fusion, enabling effective learning of morphological style features. Experimental results show the method improves mean average precision (mAP) by 1.4% over state-of-the-art baselines across 20 product categories. Validation on 40 product types confirms strong cross-category generalization with a root mean square error (RMSE) of 0.26. Visualization through feature maps and Gradient-weighted Class Activation Mapping (Grad-CAM) further verifies the accuracy and interpretability of the ISE model. This research provides a robust framework for cross-category industrial product style evaluation, enhancing design efficiency and shortening development cycles. Full article
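The Convolutional Block Attention Module (CBAM) referenced in this abstract is a published, generic component; a compact PyTorch version is sketched below to show the channel-then-spatial attention it adds to a detector backbone. The reduction ratio and kernel size follow common defaults and are assumptions, not the paper's exact settings.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Channel attention (shared MLP over avg/max pooled features) followed by
    spatial attention (7x7 conv over channel-wise avg/max maps)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1, bias=False))
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3, bias=False)

    def forward(self, x):
        # channel attention
        ca = torch.sigmoid(self.mlp(x.mean((2, 3), keepdim=True)) +
                           self.mlp(x.amax((2, 3), keepdim=True)))
        x = x * ca
        # spatial attention
        sa = torch.sigmoid(self.spatial(torch.cat(
            [x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)))
        return x * sa

if __name__ == "__main__":
    print(CBAM(64)(torch.randn(2, 64, 32, 32)).shape)  # torch.Size([2, 64, 32, 32])
```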

28 pages, 9332 KB  
Article
Contrastive Learning-Based Cross-Modal Fusion for Product Form Imagery Recognition: A Case Study on New Energy Vehicle Front-End Design
by Yutong Zhang, Jiantao Wu, Li Sun and Guoan Yang
Sustainability 2025, 17(10), 4432; https://doi.org/10.3390/su17104432 - 13 May 2025
Viewed by 744
Abstract
Fine-grained feature extraction and affective semantic mapping remain significant challenges in product form analysis. To address these issues, this study proposes a contrastive learning-based cross-modal fusion approach for product form imagery recognition, using the front-end design of new energy vehicles (NEVs) as a case study. The proposed method first employs the Biterm Topic Model (BTM) and Analytic Hierarchy Process (AHP) to extract thematic patterns and compute weight distributions from consumer review texts, thereby identifying key imagery style labels. These labels are then leveraged for image annotation, facilitating the construction of a multimodal dataset. Next, ResNet-50 and Transformer architectures serve as the image and text encoders, respectively, to extract and represent multimodal features. To ensure effective alignment and deep fusion of textual and visual representations in a shared embedding space, a contrastive learning mechanism is introduced, optimizing cosine similarity between positive and negative sample pairs. Finally, a fully connected multilayer network is integrated at the output of the Transformer and ResNet with Contrastive Learning (TRCL) model to enhance classification accuracy and reliability. Comparative experiments against various deep convolutional neural networks (DCNNs) demonstrate that the TRCL model effectively integrates semantic and visual information, significantly improving the accuracy and robustness of complex product form imagery recognition. These findings suggest that the proposed method holds substantial potential for large-scale product appearance evaluation and affective cognition research. Moreover, this data-driven fusion underpins sustainable product form design by streamlining evaluation and optimizing resource use. Full article
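The contrastive alignment step described here can be summarized by a standard symmetric InfoNCE-style loss on cosine similarities, sketched below in PyTorch; the embedding dimension, temperature, and batch size are placeholder assumptions rather than the TRCL model's actual configuration.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE-style loss: matched image/text pairs on the diagonal are
    positives; all other pairs in the batch act as negatives."""
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature      # cosine similarities
    targets = torch.arange(image_emb.size(0))
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

if __name__ == "__main__":
    img = torch.randn(8, 512)   # e.g., projected ResNet-50 image features
    txt = torch.randn(8, 512)   # e.g., projected Transformer text features
    print(contrastive_loss(img, txt).item())
```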

25 pages, 24138 KB  
Article
A Method for the Front-End Design of Electric SUVs Integrating Kansei Engineering and the Seagull Optimization Algorithm
by Yutong Zhang, Jiantao Wu, Li Sun, Qi Wang, Xiaotong Wang and Yiming Li
Electronics 2025, 14(8), 1641; https://doi.org/10.3390/electronics14081641 - 18 Apr 2025
Cited by 1 | Viewed by 611
Abstract
With the rapid expansion of the Electric Sport Utility Vehicle (ESUV) market, capturing consumer aesthetic preferences and emotional needs through front-end styling has become a key issue in automotive design. However, traditional Kansei Engineering (KE) approaches suffer from limited timeliness, subjectivity, and low predictive accuracy when extracting affective vocabulary and modeling the nonlinear relationship between product form and Kansei imagery. To address these challenges, this study proposes an improved KE-based ESUV styling framework that integrates data mining, machine learning, and generative AI. First, real consumer reviews and front-end styling samples are collected via Python-based web scraping. Next, the Biterm Topic Model (BTM) and Analytic Hierarchy Process (AHP) are used to extract representative Kansei vocabulary. Subsequently, the Back Propagation Neural Network (BPNN) and Support Vector Regression (SVR) models are constructed and optimized using the Seagull Optimization Algorithm (SOA) and Particle Swarm Optimization (PSO). Experimental results show that SOA-BPNN achieves superior predictive accuracy. Finally, Stable Diffusion is applied to generate ESUV design schemes, and the optimal model is employed to evaluate their Kansei imagery. The proposed framework offers a systematic and data-driven approach for predicting consumer affective responses in the conceptual styling stage, effectively addressing the limitations of conventional experience-based design. Thus, this study offers both methodological innovation and practical guidance for integrating affective modeling into ESUV styling design. Full article
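To make the model-selection idea concrete, the sketch below tunes an SVR Kansei-score predictor with a generic particle swarm over (C, gamma). It deliberately uses plain PSO as a stand-in (the Seagull Optimization Algorithm is not reproduced here), and the training data are random placeholders.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 6))                                  # 6 styling-feature codes per sample
y = X @ rng.normal(size=6) + rng.normal(scale=0.1, size=60)   # placeholder Kansei scores

def fitness(params):
    C, gamma = np.exp(params)                                 # search in log space
    model = SVR(C=C, gamma=gamma)
    return cross_val_score(model, X, y, cv=5, scoring="neg_mean_squared_error").mean()

n, iters = 10, 20
pos = rng.uniform(-3, 3, size=(n, 2)); vel = np.zeros((n, 2))
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, 1)), rng.random((n, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -3, 3)
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmax()].copy()

print("best (C, gamma):", np.exp(gbest), "CV MSE:", -pbest_fit.max())
```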

15 pages, 1779 KB  
Article
Romanian Style Chinese Modern Poetry Generation with Pre-Trained Model and Direct Preference Optimization
by Li Zuo, Dengke Zhang, Yuhai Zhao and Guoren Wang
Electronics 2025, 14(2), 294; https://doi.org/10.3390/electronics14020294 - 13 Jan 2025
Viewed by 1091
Abstract
The poetry of a distant country with a different culture and language is always distinctive and fascinating. Chinese and Romanian belong, respectively, to the Sinitic languages of the Sino-Tibetan family and the Romance languages of the Indo-European family, and their syntax and typical literary imagery differ considerably. In this study, we therefore attempt something rarely addressed in previous poetry-generation research: using modern Chinese as the carrier, we generate Romanian-style modern poetry based on a pre-trained model and direct preference optimization. Using a 5-point grading system, human evaluators awarded scores ranging from 3.21 to 3.83 across seven evaluation perspectives for the generated poems, reaching 76.2% to 91.6% of the comparable scores for Chinese translations of authentic Romanian poems. The overlap between the top 30 to 50 most frequently occurring poetic images in the generated poems and in Romanian poems reaches 58.0–63.3%. Human evaluation and comparative statistics on poetic imagery show that direct preference optimization substantially improves the degree of stylization and that the model can successfully create modern Chinese poems in a Romanian style. Full article
(This article belongs to the Special Issue Emerging Theory and Applications in Natural Language Processing)
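The direct preference optimization objective mentioned above has a compact standard form: the policy is rewarded for preferring the "chosen" (more Romanian-style) completion over the "rejected" one relative to a frozen reference model. The sketch below shows that generic DPO loss on placeholder log-probabilities; it is not the paper's full training setup.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    # implicit rewards: log-prob ratios between the policy and the frozen reference
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    return -F.logsigmoid(chosen_reward - rejected_reward).mean()

if __name__ == "__main__":
    # placeholder sequence log-probabilities, not real model outputs
    print(dpo_loss(torch.tensor([-12.0]), torch.tensor([-15.0]),
                   torch.tensor([-13.0]), torch.tensor([-14.0])).item())
```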

23 pages, 5887 KB  
Article
Exploring Multiple Pathways of Product Design Elements Using the fsQCA Method
by Yi Wang, Lijuan Sang, Weiwei Wang, Jian Chen, Xiaoyan Yang, Jun Liu, Zhiqiang Wen and Qizhao Peng
Appl. Sci. 2024, 14(20), 9435; https://doi.org/10.3390/app14209435 - 16 Oct 2024
Cited by 1 | Viewed by 2292
Abstract
To address current product styling design issues, such as ignoring the joint effects of multiple styling elements when constructing perceptual imagery fitting models and thus failing to effectively identify the relationships between styling elements, a product styling design method based on fuzzy set qualitative comparative analysis (fsQCA) is proposed. This method first uses semantic differential and statistical methods to obtain users’ evaluative vocabulary for the product’s perceptual imagery. Then, morphological analysis and cluster analysis are employed to establish typical product samples and extract styling elements to create a styling feature library. Perceptual imagery ratings of these styling features are obtained through expert evaluation. fsQCA is then used to analyze the different grouping relationships between styling elements and their influence on product styling imagery, aiming to match user intentions through different element combination paths. The results show that this method achieves a consistency value of 0.9 for the most optimal styling configurations, demonstrating that fsQCA can effectively identify the multiple paths of product styling elements that meet users’ needs. The contributions of this study to the related fields are: (1) providing a new perspective on the relationship between user perceptual imagery and predicted product styling elements, and (2) advancing the theoretical basis for studying multiple paths of product styling elements. The research results demonstrate that using the fsQCA-based product styling design method can accurately portray the multiple paths of product styling elements that meet users’ needs, thereby effectively improving design efficiency. Finally, a teapot styling design study is used as an example to further verify the method’s feasibility. Full article
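The consistency value of 0.9 cited above is the standard fsQCA measure, consistency(X ⊆ Y) = Σ min(xᵢ, yᵢ) / Σ xᵢ, where X is the fuzzy membership in a styling-element configuration and Y the membership in the target imagery outcome. A tiny NumPy sketch with hypothetical membership scores follows.

```python
import numpy as np

def consistency(X, Y):
    return np.minimum(X, Y).sum() / X.sum()

def coverage(X, Y):
    return np.minimum(X, Y).sum() / Y.sum()

config  = np.array([0.9, 0.8, 0.7, 0.6, 0.9])   # membership in the element combination (hypothetical)
outcome = np.array([0.95, 0.85, 0.6, 0.7, 0.9]) # membership in the target imagery (hypothetical)
print(f"consistency = {consistency(config, outcome):.2f}, coverage = {coverage(config, outcome):.2f}")
```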

19 pages, 4475 KB  
Article
A Multi-Level Cross-Attention Image Registration Method for Visible and Infrared Small Unmanned Aerial Vehicle Targets via Image Style Transfer
by Wen Jiang, Hanxin Pan, Yanping Wang, Yang Li, Yun Lin and Fukun Bi
Remote Sens. 2024, 16(16), 2880; https://doi.org/10.3390/rs16162880 - 7 Aug 2024
Cited by 4 | Viewed by 3019
Abstract
Small UAV target detection and tracking based on cross-modality image fusion have gained widespread attention. Due to the limited feature information available from small UAVs in images, where they occupy a minimal number of pixels, the precision required for detection and tracking algorithms is particularly high in complex backgrounds. Image fusion techniques can enrich the detailed information for small UAVs, showing significant advantages under extreme lighting conditions. Image registration is a fundamental step preceding image fusion. It is essential to achieve accurate image alignment before proceeding with image fusion to prevent severe ghosting and artifacts. This paper specifically focused on the alignment of small UAV targets within infrared and visible light imagery. To address this issue, this paper proposed a cross-modality image registration network based on deep learning, which includes a structure preservation and style transformation network (SPSTN) and a multi-level cross-attention residual registration network (MCARN). Firstly, the SPSTN is employed for modality transformation, transferring the cross-modality task into a single-modality task to reduce the information discrepancy between modalities. Then, the MCARN is utilized for single-modality image registration, capable of deeply extracting and fusing features from pseudo infrared and visible images to achieve efficient registration. To validate the effectiveness of the proposed method, comprehensive experimental evaluations were conducted on the Anti-UAV dataset. The extensive evaluation results validate the superiority and universality of the cross-modality image registration framework proposed in this paper, which plays a crucial role in subsequent image fusion tasks for more effective target detection. Full article
(This article belongs to the Special Issue Deep Learning and Computer Vision in Remote Sensing-III)
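As a rough picture of the cross-attention idea behind the registration stage, the sketch below lets (pseudo-)infrared features attend to visible-image features before a small head regresses affine alignment parameters; the shapes and the affine head are illustrative assumptions, not the authors' MCARN.

```python
import torch
import torch.nn as nn

class CrossAttentionRegistration(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Linear(dim, 6)          # 2x3 affine transform parameters

    def forward(self, feat_ir, feat_vis):
        # feat_*: [B, C, H, W] feature maps -> [B, H*W, C] token sequences
        q = feat_ir.flatten(2).transpose(1, 2)
        kv = feat_vis.flatten(2).transpose(1, 2)
        fused, _ = self.attn(q, kv, kv)         # IR tokens attend to visible tokens
        return self.head(fused.mean(dim=1))     # one affine estimate per image pair

if __name__ == "__main__":
    reg = CrossAttentionRegistration()
    ir, vis = torch.randn(2, 64, 16, 16), torch.randn(2, 64, 16, 16)
    print(reg(ir, vis).shape)  # torch.Size([2, 6])
```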

18 pages, 1584 KB  
Article
Automatic Generation and Evaluation of French-Style Chinese Modern Poetry
by Li Zuo, Dengke Zhang, Yuhai Zhao and Guoren Wang
Electronics 2024, 13(13), 2659; https://doi.org/10.3390/electronics13132659 - 6 Jul 2024
Viewed by 1491
Abstract
Literature has a strong cultural imprint and regional color, including poetry. Natural language itself is part of the poetry style. It is interesting to attempt to use one language to present poetry in another language style. Therefore, in this study, we propose a method to fine-tune a pre-trained model in a targeted manner to automatically generate French-style modern Chinese poetry and conduct a multi-faceted evaluation of the generated results. In a five-point scale based on human evaluation, judges assigned scores between 3.29 and 3.93 in seven dimensions, which reached 80.8–93.6% of the scores of the Chinese versions of real French poetry in these dimensions. In terms of the high-frequency poetic imagery, the consistency of the top 30–50 high-frequency poetic images between the poetry generated by the fine-tuned model and the French poetry reached 50–60%. In terms of the syntactic features, compared with the poems generated by the baseline model, the distribution frequencies of three special types of words that appear relatively frequently in French poetry increased by 12.95%, 15.81%, and 284.44% per 1000 Chinese characters in the poetry generated by the fine-tuned model. The human evaluation, poetic image distribution, and syntactic feature statistics show that the targeted fine-tuned model is helpful for the spread of language style. This fine-tuned model can successfully generate modern Chinese poetry in a French style. Full article
(This article belongs to the Special Issue Data Mining Applied in Natural Language Processing)
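The poetic-imagery comparison reported above (overlap between the top-N high-frequency image words in generated and reference poems) reduces to simple counting; a short sketch with hypothetical word lists is given below.

```python
from collections import Counter

# hypothetical image-word sequences standing in for the two corpora
generated = "moon river rose night sea moon rose wind night sea".split()
reference = "rose sea night star moon rain sea rose night wind".split()

def top_n_overlap(a, b, n=5):
    top_a = {w for w, _ in Counter(a).most_common(n)}
    top_b = {w for w, _ in Counter(b).most_common(n)}
    return len(top_a & top_b) / n

print(f"top-5 imagery overlap: {top_n_overlap(generated, reference):.0%}")
```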

20 pages, 6178 KB  
Article
Boosting SAR Aircraft Detection Performance with Multi-Stage Domain Adaptation Training
by Wenbo Yu, Jiamu Li, Zijian Wang and Zhongjun Yu
Remote Sens. 2023, 15(18), 4614; https://doi.org/10.3390/rs15184614 - 20 Sep 2023
Cited by 5 | Viewed by 2432
Abstract
Deep learning has achieved significant success in various synthetic aperture radar (SAR) imagery interpretation tasks. However, automatic aircraft detection is still challenging due to the high labeling cost and limited data quantity. To address this issue, we propose a multi-stage domain adaptation training framework to efficiently transfer the knowledge from optical imagery and boost SAR aircraft detection performance. To overcome the significant domain discrepancy between optical and SAR images, the training process can be divided into three stages: image translation, domain adaptive pretraining, and domain adaptive finetuning. First, CycleGAN is used to translate optical images into SAR-style images and reduce global-level image divergence. Next, we propose multilayer feature alignment to further reduce the local-level feature distribution distance. By applying domain adversarial learning in both the pretrain and finetune stages, the detector can learn to extract domain-invariant features that are beneficial to the learning of generic aircraft characteristics. To evaluate the proposed method, extensive experiments were conducted on a self-built SAR aircraft detection dataset. The results indicate that by using the proposed training framework, the average precision of Faster RCNN gained an increase of 2.4, and that of YOLOv3 was improved by 2.6, which outperformed other domain adaptation methods. By reducing the domain discrepancy between optical and SAR in three progressive stages, the proposed method can effectively mitigate the domain shift, thereby enhancing the efficiency of knowledge transfer. It greatly improves the detection performance of aircraft and offers an effective approach to address the limited training data problem of SAR aircraft detection. Full article
(This article belongs to the Special Issue Advances in Radar Imaging with Deep Learning Algorithms)
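The domain-adversarial ingredient this framework relies on is commonly realized with a gradient reversal layer feeding a domain classifier, so the backbone is pushed toward domain-invariant (optical vs. SAR-style) features. The sketch below shows only that generic DANN-style mechanism, not the paper's full three-stage pipeline.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None   # reverse (and scale) the gradient

class DomainClassifier(nn.Module):
    def __init__(self, feat_dim=256, lam=1.0):
        super().__init__()
        self.lam = lam
        self.net = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, features):
        return self.net(GradReverse.apply(features, self.lam))

if __name__ == "__main__":
    feats = torch.randn(4, 256, requires_grad=True)           # placeholder backbone features
    loss = nn.functional.cross_entropy(DomainClassifier()(feats), torch.tensor([0, 0, 1, 1]))
    loss.backward()
    print(feats.grad.shape)  # reversed gradients flow back into the backbone
```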

19 pages, 6464 KB  
Article
Research on Green Consumption Based on Visual Evaluation Method—Evidence from Stone Flooring Industry
by Hanzhe Li and Hui Chen
Sustainability 2023, 15(13), 10453; https://doi.org/10.3390/su151310453 - 3 Jul 2023
Viewed by 1661
Abstract
Blind consumption occurs when stone flooring, once installed, does not produce the visually anticipated impression, leading to additional time costs and wasted stone flooring. Consumers cannot clearly articulate their visual imagery needs when purchasing stone flooring, and because they lack an understanding of the visual imagery styles of decorative stone flooring, manufacturers cannot produce a wider range of visual styles in response to consumer demand; this results in a disorganized production process and the waste of stone resources. Manufacturers also receive little feedback on market demand, which hampers communication with sales teams. In this study, ten interior design professionals with experience in decorative stone applications narrowed a pool of 110 adjectives to the 40 considered most appropriate for the visual imagery evaluation of stone. A semantic differential questionnaire survey of general consumers and factor analysis of the questionnaire data were then used to create 10 sets of visual imagery adjectives for marble flooring, corresponding to 10 different types of marble flooring. Using computer-simulated renderings and a questionnaire designed around the 10 groups of visual imagery adjectives, consumers completed a visual imagery evaluation survey; 304 valid questionnaires were collected, and triangular fuzzy number operations from fuzzy theory were used to derive visual imagery evaluation scores for the 10 marble floors. Dividing high-selling stone flooring on the market by visual style clarifies current consumer demand for stone floor imagery, guides consumers to efficient choices that match their visual needs, and improves communication between consumers and sellers. It also helps enterprises clarify market demand for orderly production, achieving green consumption and ensuring the sustainable development of the decorative stone flooring market. Full article
(This article belongs to the Special Issue Circular Economy in Green Supply Chain and Digital Manufacturing)
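The triangular-fuzzy-number scoring step can be illustrated briefly: map each linguistic rating to a triangular fuzzy number (l, m, u), average per flooring sample, and defuzzify with the centroid. The scale and ratings in the sketch below are hypothetical, not the study's data.

```python
import numpy as np

SCALE = {                      # 5-point linguistic scale -> (l, m, u), hypothetical
    "very weak":   (0.0, 0.0, 0.25),
    "weak":        (0.0, 0.25, 0.5),
    "moderate":    (0.25, 0.5, 0.75),
    "strong":      (0.5, 0.75, 1.0),
    "very strong": (0.75, 1.0, 1.0),
}

def fuzzy_score(ratings):
    tfns = np.array([SCALE[r] for r in ratings])
    l, m, u = tfns.mean(axis=0)            # average triangular fuzzy number
    return (l + m + u) / 3                 # centroid defuzzification

ratings_for_floor = ["strong", "very strong", "moderate", "strong"]
print(f"visual imagery score: {fuzzy_score(ratings_for_floor):.3f}")
```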

20 pages, 5322 KB  
Article
SAM-GAN: Supervised Learning-Based Aerial Image-to-Map Translation via Generative Adversarial Networks
by Jian Xu, Xiaowen Zhou, Chaolin Han, Bing Dong and Hongwei Li
ISPRS Int. J. Geo-Inf. 2023, 12(4), 159; https://doi.org/10.3390/ijgi12040159 - 7 Apr 2023
Cited by 10 | Viewed by 4411
Abstract
Accurate translation of aerial imagery to maps is a direction of great value and challenge in mapping, a method of generating maps that does not require using vector data as traditional mapping methods do. The tremendous progress made in recent years in image translation based on generative adversarial networks has led to rapid progress in aerial image-to-map translation. Still, the generated results could be better regarding quality, accuracy, and visual impact. This paper proposes a supervised model (SAM-GAN) based on generative adversarial networks (GAN) to improve the performance of aerial image-to-map translation. In the model, we introduce a new generator and multi-scale discriminator. The generator is a conditional GAN model that extracts the content and style space from aerial images and maps and learns to generalize the patterns of aerial image-to-map style transformation. We introduce image style loss and topological consistency loss to improve the model’s pixel-level accuracy and topological performance. Furthermore, using the Maps dataset, a comprehensive qualitative and quantitative comparison is made between the SAM-GAN model and previous methods used for aerial image-to-map translation in combination with excellent evaluation metrics. Experiments showed that SAM-GAN outperformed existing methods in both quantitative and qualitative results. Full article
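An image style loss of the Gram-matrix kind is a common choice in translation GANs: match second-order feature statistics between the generated and target maps. The sketch below illustrates that general idea only; the paper's exact style and topological-consistency losses are not reproduced here.

```python
import torch
import torch.nn.functional as F

def gram_matrix(feat):
    # feat: [B, C, H, W] feature maps from some encoder layer
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_loss(gen_feat, target_feat):
    return F.mse_loss(gram_matrix(gen_feat), gram_matrix(target_feat))

if __name__ == "__main__":
    gen, target = torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32)
    print(style_loss(gen, target).item())
```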

12 pages, 756 KB  
Article
The Relationships between Cognitive Styles and Creativity: The Role of Field Dependence-Independence on Visual Creative Production
by Marco Giancola, Massimiliano Palmiero, Laura Piccardi and Simonetta D’Amico
Behav. Sci. 2022, 12(7), 212; https://doi.org/10.3390/bs12070212 - 25 Jun 2022
Cited by 20 | Viewed by 4875
Abstract
Previous studies explored the relationships between field dependent-independent cognitive style (FDI) and creativity, providing misleading and unclear results. The present research explored this problematic interplay through the lens of the Geneplore model, employing a product-oriented task: the Visual Creative Synthesis Task (VCST). The latter requires creating objects belonging to pre-established categories, starting from triads of visual components and consists of two steps: the preinventive phase and the inventive phase. Following the Amabile’s consensual assessment technique, three independent judges evaluated preinventive structures in terms of originality and synthesis whereas inventions were evaluated in terms of originality and appropriateness. The Embedded Figure Test (EFT) was employed in order to measure the individual’s predisposition toward the field dependence or the field independence. Sixty undergraduate college students (31 females) took part in the experiment. Results revealed that field independent individuals outperformed field dependent ones in each of the four VCST scores, showing higher levels of creativity. Results were discussed in light of the better predisposition of field independent individuals in mental imagery, mental manipulation of abstract objects, as well as in using their knowledge during complex tasks that require creativity. Future research directions were also discussed. Full article
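The group comparison reported here (field-independent participants outperforming field-dependent ones on the VCST scores) is the kind of result typically checked with an independent-samples test; the SciPy sketch below uses synthetic placeholder scores, not the study's data.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
field_independent = rng.normal(loc=4.1, scale=0.6, size=30)   # judge-rated originality (placeholder)
field_dependent = rng.normal(loc=3.5, scale=0.6, size=30)     # placeholder scores

t, p = ttest_ind(field_independent, field_dependent)
print(f"t = {t:.2f}, p = {p:.4f}")
```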

26 pages, 5936 KB  
Article
Recognition of Car Front Facing Style for Machine-Learning Data Annotation: A Quantitative Approach
by Lisha Ma, Yu Wu, Qingnan Li and Xiaofang Yuan
Symmetry 2022, 14(6), 1181; https://doi.org/10.3390/sym14061181 - 8 Jun 2022
Cited by 9 | Viewed by 3379
Abstract
Car front facing style (CFFS) recognition is crucial to enhancing a company’s market competitiveness and brand image. However, there is a problem impeding its development: with the sudden increase in style design information, the traditional methods, based on feature calculation, are insufficient to quickly handle style analysis with a large volume of data. Therefore, we introduced a deep feature-based machine learning approach to solve the problem. Datasets are the basis of machine learning, but there is a lack of references for car style data annotations, which can lead to unreliable style data annotation. Therefore, a CFFS recognition method was proposed for machine-learning data annotation. Specifically, this study proposes a hierarchical model for analyzing CFFS style from the morphological perspective of layout, surface, graphics, and line. Based on the quantitative percentage of the three elements of style, this paper categorizes the CFFS into eight basic types of style and distinguishes the styles by expert analysis to summarize the characteristics of each layout, shape surface, and graphics. We use imagery diagrams and typical CFFS examples and characteristic laws of each style as annotation references to guide manual annotation data. This investigation established a CFFS dataset with eight types of style. The method was evaluated from a design perspective; we found that the accuracy obtained when using this method for CFFS data annotation exceeded that obtained when not using this method by 32.03%. Meanwhile, we used Vgg19, ResNet, ViT, MAE, and MLP-Mixer, five classic classifiers, to classify the dataset; the average accuracy rates were 76.75%, 78.47%, 78.07%, 75.80%, and 81.06%. This method effectively transforms human design knowledge into machine-understandable structured knowledge. There is a symmetric transformation of knowledge in the computer-aided design process, providing a reference for machine learning to deal with abstract style problems. Full article
(This article belongs to the Special Issue Computer Vision, Pattern Recognition, Machine Learning, and Symmetry)
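The benchmarking setup implied here (standard classifiers fine-tuned on the eight annotated CFFS style classes) can be pictured with a short torchvision sketch; the backbone choice, hyperparameters, and dummy tensors below are placeholders, not the authors' configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

# swap weights=None for models.ResNet18_Weights.DEFAULT to start from ImageNet pretraining
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 8)        # eight CFFS style classes

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# one illustrative training step on dummy tensors standing in for CFFS images
images, labels = torch.randn(4, 3, 224, 224), torch.randint(0, 8, (4,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"step loss: {loss.item():.3f}")
```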

18 pages, 5717 KB  
Article
The Use of Low-Altitude UAV Imagery to Assess Western Juniper Density and Canopy Cover in Treated and Untreated Stands
by Nicole Durfee, Carlos G. Ochoa and Ricardo Mata-Gonzalez
Forests 2019, 10(4), 296; https://doi.org/10.3390/f10040296 - 29 Mar 2019
Cited by 26 | Viewed by 5774
Abstract
Monitoring vegetation characteristics and ground cover is crucial to determine appropriate management techniques in western juniper (Juniperus occidentalis Hook.) ecosystems. Remote-sensing techniques have been used to study vegetation cover; yet, few studies have applied these techniques using unmanned aerial vehicles (UAV), specifically in areas of juniper woodlands. We used ground-based data in conjunction with low-altitude UAV imagery to assess vegetation and ground cover characteristics in a paired watershed study located in central Oregon, USA. The study was comprised of a treated watershed (most juniper removed) and an untreated watershed. Research objectives were to: (1) evaluate the density and canopy cover of western juniper in a treated (juniper removed) and an untreated watershed; and, (2) assess the effectiveness of using low altitude UAV-based imagery to measure juniper-sapling population density and canopy cover. Ground- based measurements were used to assess vegetation features in each watershed and as a means to verify analysis from aerial imagery. Visual imagery (red, green, and blue wavelengths) and multispectral imagery (red, green, blue, near-infrared, and red-edge wavelengths) were captured using a quadcopter-style UAV. Canopy cover in the untreated watershed was estimated using two different methods: vegetation indices and support vector machine classification. Supervised classification was used to assess juniper sapling density and vegetation cover in the treated watershed. Results showed that vegetation indices that incorporated near-infrared reflectance values estimated canopy cover within 0.7% to 4.1% of ground-based calculations. Canopy cover estimates at the untreated watershed using supervised classification were within 0.9% to 2.3% of ground-based results. Supervised classification applied to fall imagery using multispectral bands provided the best estimates of juniper sapling density compared to imagery taken in the summer or to using visual imagery. Study results suggest that low-altitude multispectral imagery obtained using small UAV can be effectively used to assess western juniper density and canopy cover. Full article
(This article belongs to the Special Issue Forestry Applications of Unmanned Aerial Vehicles (UAVs) 2019)
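The two cover-estimation routes mentioned above, a near-infrared vegetation index and supervised per-pixel classification, can be sketched in a few lines; the band arrays, threshold, and training labels below are synthetic placeholders, not the study's imagery.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
red = rng.uniform(0.02, 0.4, size=(100, 100))    # synthetic red-band reflectance
nir = rng.uniform(0.1, 0.6, size=(100, 100))     # synthetic near-infrared reflectance

# (1) vegetation-index route
ndvi = (nir - red) / (nir + red + 1e-9)
print(f"index-based canopy cover: {(ndvi > 0.3).mean():.1%}")   # threshold is illustrative

# (2) supervised-classification route on labelled training pixels
X_train = rng.uniform(0, 1, size=(200, 2))                       # [red, nir] per training pixel
y_train = (X_train[:, 1] - X_train[:, 0] > 0.2).astype(int)      # 1 = canopy (synthetic labels)
clf = SVC(kernel="rbf").fit(X_train, y_train)
pixels = np.stack([red.ravel(), nir.ravel()], axis=1)
print(f"SVM-based canopy cover: {clf.predict(pixels).mean():.1%}")
```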
