Article

Exploring the Influence of Object, Subject, and Context on Aesthetic Evaluation through Computational Aesthetics and Neuroaesthetics

1 College of Mechanical Engineering and Automation, Huaqiao University, Xiamen 361021, China
2 Xiamen Academy of Arts and Design, Fuzhou University, Xiamen 361021, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(16), 7384; https://doi.org/10.3390/app14167384
Submission received: 17 July 2024 / Revised: 16 August 2024 / Accepted: 19 August 2024 / Published: 21 August 2024

Abstract

Background: In recent years, computational aesthetics and neuroaesthetics have provided novel insights into understanding beauty. Building upon the findings of traditional aesthetics, this study aims to combine these two research methods to explore an interdisciplinary approach to studying aesthetics. Method: Abstract artworks were used as experimental materials. Drawing on traditional aesthetics, features of composition, tone, and texture were selected. Computational aesthetic methods were then employed to map these features to physical quantities: blank space, gray histogram, Gray Level Co-occurrence Matrix (GLCM), Local Binary Pattern (LBP), and Gabor filters. An electroencephalogram (EEG) experiment was carried out, in which participants conducted aesthetic evaluations of the experimental materials in different contexts (genuine, fake), and their EEG data were recorded to analyze the impact of various feature classes in the aesthetic evaluation process. Finally, a Support Vector Machine (SVM) was utilized to model the feature data, Event-Related Potentials (ERPs), context data, and subjective aesthetic evaluation data. Result: Behavioral data revealed higher aesthetic ratings in the genuine context. ERP data indicated that genuine contexts elicited more negative deflections in the prefrontal lobes between 200 and 1000 ms. Class II compositions demonstrated more positive deflections in the parietal lobes at 50–120 ms, while Class I tones evoked more positive amplitudes in the occipital lobes at 200–300 ms. Gabor features showed significant variations in the parieto-occipital area at an early stage. Class II LBP elicited a prefrontal negative wave with a larger amplitude. The results of the SVM models indicated that the model incorporating aesthetic subject and context data (ACC = 0.76866) outperformed the model using only parameters of the aesthetic object (ACC = 0.68657). Conclusion: A positive context tends to provide participants with a more positive aesthetic experience, but abstract artworks may not respond to this positivity. During aesthetic evaluation, the ERP data activated by different features show a trend from global to local. The SVM model based on multimodal data fusion effectively predicts aesthetics, further demonstrating the feasibility of the combined research approach of computational aesthetics and neuroaesthetics.

1. Introduction

Since ancient times, human beings have been in pursuit of the essence of aesthetics. Alexander Gottlieb Baumgarten emphasized the subjectivity and individualization of the aesthetic experience, and he believed that aesthetic feeling is the core of aesthetics, which refers to the direct experience and perception of beauty by human beings [1]. Aesthetics research encompasses a wide range of disciplines, including art, philosophy, and cultural studies. With the advancement of psychology, there has been a burgeoning interest in studying the brain’s activities during the appreciation of art and the experience of beauty, with the goal of unraveling the neural mechanisms underlying aesthetic experience. Zeki first put forward the concept of neuroaesthetics in the 1990s with the aim of explaining and understanding aesthetic experience at the neural level through neuroscience [2]. Leder et al. [3,4] hold the belief that aesthetic experience is a dynamic process, encompassing the initial perception of sensory features such as color, shape, and texture, the understanding, interpretation, and evaluation of artworks, and the interaction and dynamic changes of emotions throughout the aesthetic process.
Research has indicated that there are certain commonalities in human visual cognitive mechanisms. For instance, stimulus symmetry may give rise to selective attention toward the overall characteristics of visual stimuli and facilitate higher-level cognitive processing in infants [5]. In adults, symmetry judgment exhibits a strong positive correlation with aesthetic evaluation [6]. Certainly, this does not imply that whatever is symmetrical is necessarily beautiful [7]. However, this inherent condition will, to a certain extent, affect people’s aesthetic cognition. Additionally, acquired learning and the accumulation of experience will also have an impact on aesthetics. Untrained viewers tend to focus more on individual objects rather than the relationships between graphic elements in paintings. Art expertise plays an important role in art appreciation in general, particularly for judgments of the beauty or importance of an artwork: art experts rated artworks higher than novices on aesthetic facets [8]. Zeki classified aesthetic experiences into two categories: biological beauty, which is determined by innate brain concepts that remain unchanged even with experience, and artificial beauty, which is determined by acquired concepts [9]. Aside from aesthetic experiences derived from sensory sources, factors like the environmental context of aesthetics can also have an impact on people’s aesthetic perception. Research has indicated that the aesthetic scores assigned by subjects who viewed stimuli in a “gallery” setting were notably higher compared to those of subjects who viewed the stimuli in a “computer” environment [10]. Artworks displayed in museums tend to be more popular than those exhibited in laboratories [11].
It is commonly accepted that beauty is subjective, and there is no absolute standard. Nevertheless, previous research has demonstrated that when experiencing beautiful works, different observers tend to share certain similarities, and they show a preference for works with specific characteristics. George David Birkhoff put forth an attempt to quantify aesthetics: $M = Order/Complexity$ [12]. The two physical quantities of complexity and order (entropy) can be linked to traditional beauty concepts in art history. Liu et al. [13] introduced the concept of “Engineering Aesthetics”, which integrates the aesthetic dimension into ergonomics. The objective is to employ engineering and scientific approaches to investigate aesthetics and incorporate these methods into the aesthetic design and evaluation process. In such instances, methods of statistical physics and network science can be used to quantify and better understand what it is that evokes that pleasant feeling [14]. In 2005, at the international academic conference on Computational Aesthetics in Graphics, Visualization, and Imaging of the Eurographics Association, computational aesthetics was initially put forward. Computational aesthetics is the study of computational methods that can make applicable aesthetic decisions in a similar fashion to humans [15]. The advantage of computational aesthetics lies in its ability to quantify various abstract aesthetic features, such as style, texture features, and so on [16]. By investigating the connection between these features and beauty, researchers are able to undertake tasks such as classification learning through the utilization of modeling techniques like Convolutional Neural Networks (CNNs), SVMs, and Back Propagation (BP) neural networks [17,18,19]. In addition to quantifying the aesthetic object, researchers have also started to incorporate more factors related to the aesthetic subject, including combining visual and text data for quantification [20], analyzing the emotions in paintings [21], and optimizing models by integrating eye movement data [22].
It can be observed that computational aesthetics excels at using mathematical methods to quantify the features of aesthetic objects. Subsequently, various mathematical techniques are employed to build models that simulate human aesthetic evaluations. Neuroaesthetics, on the other hand, primarily focuses on the study of the aesthetic subject. It employs psychological methods to investigate different aesthetic mechanisms and the impact of features, contexts, and other factors on aesthetics. This study proposes to integrate the research methods of computational aesthetics and neuroaesthetics to explore a comprehensive research approach. This approach involves delving into the internal relationships between features and aesthetics, as well as integrating and modeling the multimodal data of the aesthetic subject, aesthetic object, and aesthetic context. By leveraging the strengths of computational aesthetics in feature extraction and the advantages of neuroaesthetics in cognitive research, we aim to develop a more holistic understanding of aesthetic evaluation.
Representational paintings have been shown to receive higher judgments on the perceptual dimensions associated with the semantic dimension of the Illusion of Reality [23]. Therefore, this study employed abstract artworks as experimental materials. The specific methods are as follows:
First, based on traditional concepts in art history, computational aesthetics is used to analyze and quantify the physical quantities corresponding to different features. Previous research has indicated that in the process of human visual perception, global processing is completed before more local analysis, that is, the Global Precedence Hypothesis: People typically perceive global features first in the visual processing and then notice local details [24,25,26,27]. Love et al. [28] discovered via eye movement research that even when the local and global forms have equal conspicuity, there remains an advantage in the processing of the overall form. Of course, the global advantage also has individual differences. Expertise would affect saccade metrics, biasing them toward larger saccade amplitudes. Artists adopted a more global scan path strategy when performing a task related to art expertise [29,30,31]. Global features encompass the arrangement and interrelationships of diverse visual elements within an artwork, and this overall relationship can be summarized as the composition of the picture [32,33]. Different compositions yield different visual effects. A suitable composition necessitates the rational allocation of subject space and negative space to achieve simplicity and balance. For instance, in traditional Chinese painting, a significant amount of blank space is frequently utilized to guide the viewer’s attention, create dynamic visual balance, enhance the rhythm of the picture, and generate a simple and ethereal ambiance [34,35].
In color theory, a tone is produced either by mixing a color with gray or by both tinting and shading. A mixture of a color with white increases lightness, while a shade is a mixture with black, which increases darkness [36]. Tone is a crucial global feature. It indicates the lightness and darkness within the grayscale of an image. By manipulating the tone’s lightness and darkness, contrast, and distribution, the overall style and emotional expression of the work can be influenced. Understanding and mastering the harmony of tone values and colors is of great significance for artists in capturing beauty. Furthermore, the tone value of an artwork can be influenced by the environment, and the same tone value work may also generate different aesthetic appraisals and viewing comforts under various lighting conditions [37,38].
Aside from the overall composition and tone, various detailed textures in artworks can also have an impact on aesthetics. While the texture in traditional art aesthetics and the texture in computational aesthetics both describe and analyze the structure and pattern of images, they differ significantly in terms of methods, applications, and evaluation criteria. The texture in traditional aesthetics places greater emphasis on subjective experience and artistic expression, whereas the texture in computational aesthetics emphasizes objective analysis and computational feature extraction. Computational aesthetics can correlate abstract artistic features with those that impact aesthetics in a quantitative manner, such as using the Gray Level Co-occurrence Matrix (GLCM) to identify the texture/surface quality in traditional aesthetics that reflects the visual and tactile sensations of the surface (smoothness, roughness, softness, hardness, etc.) [39], using the Local Binary Patterns (LBPs) to quantify the details/intricacy in art works [40], and using Gabor filters to extract repetition and rhythm in the picture through frequency domain analysis [41], etc.
This study adheres to the general principles of the Global Precedence Hypothesis and quantifies the features of artworks from three aspects: composition, tone, and texture (both global and local).
Second, based on the quantitative results, cluster analysis of the stimulus materials is performed, and the tagging method is used to classify the stimulus materials.
Third, using the research method of neuroaesthetics, an EEG experiment is conducted to analyze the influence of different classes of features in the process of aesthetic evaluation.
Finally, by utilizing the research method of computational aesthetics, we fuse and model the multimodal data, incorporating the feature data of the stimulus materials (aesthetic object), the ERP data of the participants (aesthetic subject), and the aesthetic context data, to construct a machine learning model. By employing this interdisciplinary approach, we enhance our understanding of the cognitive processes involved in the evaluation of different types of features and build machine learning models to simulate and predict human aesthetic preferences.

2. Materials and Methods

In this study, Visual Studio 2019 and the Open Source Computer Vision Library (OpenCV) are utilized to perform grayscale processing on artworks, followed by feature quantization.

2.1. Features

2.1.1. Composition

In an artistic creation, a rational composition of the image can be attained by appropriately distributing the foreground, background, primary and secondary objects, etc. Some research has adopted the QuadTrees approach to quantify the blank space characteristics of images, and experiments have demonstrated that it is related to the complexity of the picture and will have an impact on the viewer’s attention [42]. QuadTrees is a commonly used data structure that recursively subdivides a two-dimensional space into four quadrants or regions. By dividing the region into four equal quadrants, then sub-quadrants, and so on, each leaf node comes to contain the data corresponding to a specific sub-region [43]. Based on the characteristics of abstract artworks, this study quantifies the blank space composition features using QuadTrees. The specific process is as follows: (1) Employ the average gray level of the image as the binarization threshold for image binarization processing; (2) Calculate the proportion of the black and white areas in the binarized image. Considering that the visual weight of dark colors is relatively large, in this experiment, if the black area in the binarized image exceeds 45%, then black is used as the blank space; otherwise, white is used as the blank space; (3) Utilize the QuadTrees method to quantify the ratio of the blank area to the total area of the image, and obtain the blank area ratio (as shown in Figure 1). A minimal sketch of this procedure is given below.
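For illustration only, the blank-space quantification can be sketched in Python with OpenCV as follows. This is not the authors' implementation (which used Visual Studio 2019/OpenCV); the minimum block size of the quadtree recursion and the file handling are illustrative assumptions not specified in the text.

```python
import cv2
import numpy as np

def blank_space_ratio(gray, min_block=8):
    """Estimate the blank-space ratio of a grayscale artwork with a quadtree.

    Follows the three steps described above: binarize at the mean gray level,
    choose black or white as the "blank" tone (black if it covers more than
    45% of the binarized image), then recursively subdivide the image and
    accumulate the area of quadrants that are uniformly blank.
    """
    # (1) Binarize using the mean gray level as the threshold.
    _, binary = cv2.threshold(gray, float(gray.mean()), 255, cv2.THRESH_BINARY)

    # (2) Decide which tone counts as blank space (45% rule from the text).
    blank_value = 0 if np.mean(binary == 0) > 0.45 else 255
    blank_mask = binary == blank_value

    # (3) Quadtree recursion: only uniformly blank leaf blocks contribute area.
    def recurse(mask):
        h, w = mask.shape
        if mask.all():                                   # uniformly blank block
            return h * w
        if not mask.any() or min(h, w) <= min_block:     # uniformly non-blank or too small
            return 0
        h2, w2 = h // 2, w // 2
        return (recurse(mask[:h2, :w2]) + recurse(mask[:h2, w2:]) +
                recurse(mask[h2:, :w2]) + recurse(mask[h2:, w2:]))

    return recurse(blank_mask) / blank_mask.size

# Hypothetical usage:
# gray = cv2.imread("artwork_001.png", cv2.IMREAD_GRAYSCALE)
# print(blank_space_ratio(gray))
```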

2.1.2. Tone

In this study, the tone is quantified through the gray histogram, which counts the number of pixels at each gray level in the image and presents these counts in the form of a histogram. By quantifying the characteristics of the gray histogram, the tone of the image can be reflected from multiple facets.
Mean: The average of the gray levels in the histogram, which represents the overall brightness level of the image.

$$Mean = \sum_{i=0}^{N-1} i \cdot p_i$$

Variance: Describes the degree of dispersion of the histogram, that is, the extent to which the gray levels deviate from the mean of the histogram.

$$Variance = \sum_{i=0}^{N-1} (i - Mean)^2 \cdot p_i$$

Skewness: Describes the asymmetry of the histogram. A positive skewness indicates that the tone value in the image is higher; a negative skewness indicates that the tone value in the image is lower.

$$Skewness = \frac{\sum_{i=0}^{N-1} (i - Mean)^3 \cdot p_i}{Variance^{3/2}}$$

Kurtosis: Describes the peakedness of the histogram. A positive kurtosis indicates that the gray values in the image are concentrated in certain specific gray levels. A kurtosis of 0 indicates that the peak of the data distribution is similar to that of the standard normal distribution.

$$Kurtosis = \frac{\sum_{i=0}^{N-1} (i - Mean)^4 \cdot p_i}{Variance^{2}} - 3$$

Energy: The sum of the squares of the pixel counts at each gray level in the histogram, which reflects the uniformity of the gray level distribution in the image. A higher energy indicates that the pixels are concentrated in fewer gray levels, i.e., a less even distribution.

$$Energy = \frac{1}{S} \sum_{i=0}^{N-1} p_i^{2}$$

Here, $N$ represents the number of gray levels, $p_i$ denotes the frequency of the $i$-th gray level, and $S$ indicates the size of the picture (total number of pixels).
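As a concrete illustration (a minimal Python sketch assuming an 8-bit grayscale image, not the authors' C++/OpenCV code), the five descriptors can be computed directly from the histogram; here the moments use relative frequencies, while the energy term keeps the $1/S$ scaling of Formula (5) applied to the raw counts:

```python
import cv2
import numpy as np

def gray_histogram_features(gray):
    """Mean, variance, skewness, kurtosis, and energy of the gray histogram."""
    S = gray.size
    counts = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
    p = counts / S                                 # relative frequency per gray level
    i = np.arange(256, dtype=np.float64)

    mean = np.sum(i * p)
    variance = np.sum((i - mean) ** 2 * p)
    skewness = np.sum((i - mean) ** 3 * p) / variance ** 1.5
    kurtosis = np.sum((i - mean) ** 4 * p) / variance ** 2 - 3
    energy = np.sum(counts ** 2) / S               # 1/S scaling as in Formula (5)
    return mean, variance, skewness, kurtosis, energy
```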

2.1.3. Global Texture Features

A large number of local features, through rational distribution and integration, become the global texture features of the work. The overall texture features can be reflected through the spatial frequency. The spatial frequency of a picture describes the frequency of changes in texture, details, structure, etc., at various spatial locations in the image.
In this study, Gabor filters are utilized to extract the overall texture features of the image. By applying Gabor filters on different scales and orientations, the texture features of the entire image are obtained. The Gabor operator can acquire texture features of different frequencies and angles and can capture local spatial frequency and orientation information in the image:
Complex:

$$g(x, y; \lambda, \theta, \psi, \sigma, \gamma) = \exp\left(-\frac{x'^2 + \gamma^2 y'^2}{2\sigma^2}\right) \exp\left(i\left(2\pi\frac{x'}{\lambda} + \psi\right)\right)$$

Real:

$$g_{real}(x, y; \lambda, \theta, \psi, \sigma, \gamma) = \exp\left(-\frac{x'^2 + \gamma^2 y'^2}{2\sigma^2}\right) \cos\left(2\pi\frac{x'}{\lambda} + \psi\right)$$

Imaginary:

$$g_{imag}(x, y; \lambda, \theta, \psi, \sigma, \gamma) = \exp\left(-\frac{x'^2 + \gamma^2 y'^2}{2\sigma^2}\right) \sin\left(2\pi\frac{x'}{\lambda} + \psi\right)$$

where

$$x' = x\cos\theta + y\sin\theta, \qquad y' = -x\sin\theta + y\cos\theta$$

Here, $\lambda$ represents the wavelength, $\theta$ represents the direction, $\psi$ represents the phase offset, $\gamma$ represents the aspect ratio of the space, and $\sigma$ is the variance of the Gaussian filter. The mathematical expression for applying the Gabor filter to the image $I(x, y)$ is a convolution operation:

$$I_g(x, y) = I(x, y) * g(x, y; \lambda, \theta, \psi, \sigma, \gamma)$$

Here, $*$ represents the convolution operation, and $I_g(x, y)$ is the filtered image. The parameters of the filter are set as follows: $\psi = 0$, $\sigma = 2\pi$, $\gamma = 0.5$, and different wavelengths $\lambda$ and angles $\theta$ are used, as shown in Figure 2. The mean, variance, and energy of the filtered image are calculated (refer to calculation Formulas (1), (2) and (5)).
The mean value indicates the presence of obvious edges or textures in the original image at that wavelength and angle. The variance of the processed image reflects the degree of dispersion of the image gray level, that is, the complexity of the texture. The energy value of the processed image reflects the uniformity of the image texture.
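For illustration, the filtering step can be sketched with OpenCV's built-in Gabor kernels. The fixed parameters follow the text ($\psi = 0$, $\sigma = 2\pi$, $\gamma = 0.5$), but the wavelength/orientation grid and kernel size below are illustrative assumptions (the exact values of Figure 2 are not reproduced here), and the statistics are computed directly on the filter response rather than on its histogram:

```python
import cv2
import numpy as np

def gabor_features(gray, wavelengths=(4, 8, 16),
                   orientations=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Mean, variance, and energy of Gabor-filtered images."""
    img = gray.astype(np.float32) / 255.0
    features = []
    for lam in wavelengths:
        for theta in orientations:
            kernel = cv2.getGaborKernel(ksize=(31, 31), sigma=2 * np.pi,
                                        theta=theta, lambd=lam, gamma=0.5,
                                        psi=0, ktype=cv2.CV_32F)
            # Correlation with the even-symmetric real kernel (equivalent to convolution here).
            response = cv2.filter2D(img, cv2.CV_32F, kernel)
            features.append((lam, theta,
                             response.mean(),                         # edge/texture strength
                             response.var(),                          # texture complexity
                             np.sum(response ** 2) / response.size))  # texture uniformity (energy)
    return features
```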

2.1.4. Local Texture Features

In the realm of artistic creation, creators manipulate the texture of each area to generate diverse visual effects in the picture. Through the integration of the interrelationships among various components, a complete work is ultimately fashioned. For the extraction of local texture features, this study makes use of two extraction techniques: the GLCM and the LBP.
  • Gray Level Co-occurrence Matrix (GLCM);
Through the analysis of the gray level relationship between pixel pairs (or pixel neighbor pairs) in the image, the texture information of the image is captured. This reflects the spatial distribution relationship between different gray levels in the image.
Suppose the gray level range of the image is from 0 to L − 1; the GLCM is an L × L matrix, where L represents the number of gray levels. The element $GLCM(i, j)$ indicates the frequency of occurrence of gray level $i$ and gray level $j$ in a specific spatial relationship within the image. For each pixel $I(x, y)$, the neighboring pixel $I(x', y')$ at a distance $d$ is examined. The frequency of occurrence of the pixel pair $(I(x, y), I(x', y'))$ is counted, and the count at the corresponding position in the GLCM is incremented by 1:
$$\begin{cases} 1, & \text{if } I(x, y) = i \text{ and } I(x', y') = j \text{ at distance } d \text{ and direction } \theta \\ 0, & \text{otherwise} \end{cases}$$
where $(x', y')$ denotes the coordinates of the neighboring pixel of the pixel $(x, y)$ at a distance $d$ and in the direction $\theta$. By quantifying the features of the GLCM, the texture of the image is reflected from multiple aspects. Contrast: The degree of contrast between pixels with higher gray levels and pixels with lower gray levels in the GLCM, which reflects the contrast and roughness of the texture of the image.
$$Contrast = \sum_{i=0}^{L-1} \sum_{j=0}^{L-1} (i - j)^2 \cdot GLCM(i, j)^2$$
Energy: The sum of the squares of the probabilities between pixel pairs in the GLCM, which reflects the uniformity of the image texture (refer to calculation Formula (5)).
Entropy: The uncertainty of the probability distribution between pixel pairs in the GLCM, which reflects the complexity of the image texture.
$$Entropy = -\sum_{i=0}^{L-1} \sum_{j=0}^{L-1} GLCM(i, j) \log\big(GLCM(i, j) + \epsilon\big)$$
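As an illustration, the GLCM descriptors can be computed with scikit-image (a minimal Python sketch, not the authors' implementation). graycomatrix performs the pixel-pair counting described above with the angle given in radians; the descriptors below use the common normalized-matrix definitions, so their absolute scale differs from the unnormalized formulas in the text:

```python
import numpy as np
from skimage.feature import graycomatrix

def glcm_features(gray, distance=1, angle=0.0, levels=256):
    """Contrast, energy, and entropy of the GLCM at one distance/direction."""
    glcm = graycomatrix(gray, distances=[distance], angles=[angle],
                        levels=levels, symmetric=False, normed=False)
    G = glcm[:, :, 0, 0].astype(np.float64)
    P = G / G.sum()                                   # co-occurrence probabilities

    i, j = np.meshgrid(np.arange(levels), np.arange(levels), indexing="ij")
    contrast = np.sum((i - j) ** 2 * P)               # texture contrast/roughness
    energy = np.sum(P ** 2)                           # angular second moment (uniformity)
    entropy = -np.sum(P * np.log(P + 1e-12))          # texture complexity
    return contrast, energy, entropy

# Horizontal (0 deg) and diagonal (135 deg) GLCMs as used in the study:
# horiz = glcm_features(gray, angle=0.0)
# diag = glcm_features(gray, angle=3 * np.pi / 4)
```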
  • Local Binary Pattern (LBP);
The fundamental principle of the LBP feature is to obtain a local binary coding for each pixel in the image by comparing its gray value with the neighboring pixels in the surrounding area. This is used to represent the texture feature of the pixel. For a specific pixel point ( x , y ) in the image, a circular neighborhood with a radius of R is defined, which contains P sampling points and can be expressed as follows:
$$(x_i, y_i) = \left(x + R\cos\frac{2\pi i}{P},\; y - R\sin\frac{2\pi i}{P}\right)$$

where $i = 0, 1, \ldots, P - 1$.
For each pixel point $x$ in the image, the LBP code $LBP(x)$ is computed as follows:

$$LBP(x) = \sum_{i=0}^{P-1} s\big(g(p_i) - g(x)\big) \times 2^i$$
where $g(p_i)$ represents the gray value of the pixel point $p_i$, $g(x)$ represents the gray value of the central pixel $x$, and $s(v)$ is a step function that compares the gray value of each neighboring pixel with the gray value of the central pixel. If the gray value of the neighboring pixel is greater than or equal to the gray value of the central pixel, the pixel is marked as 1; otherwise, it is marked as 0:

$$s(v) = \begin{cases} 1, & \text{if } v \ge 0 \\ 0, & \text{if } v < 0 \end{cases}$$
In this study, the 8-neighborhood LBP operator with R = 1 is utilized, so the LBP coding value range is 0–255. The histogram quantization method is employed to quantify the mean, variance, skewness, kurtosis, and energy of LBP as the quantization result of LBP, and then the image texture features are analyzed (refer to calculation Formulas (1)–(5)).
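A minimal Python sketch of this step, using scikit-image's LBP implementation rather than the authors' own code; the "default" variant with P = 8 and R = 1 yields integer codes in 0–255 as described above, and the five statistics mirror Formulas (1)–(5):

```python
import numpy as np
from scipy import stats
from skimage.feature import local_binary_pattern

def lbp_features(gray, P=8, R=1):
    """Histogram statistics of the LBP code image (8-neighborhood, R = 1)."""
    codes = local_binary_pattern(gray, P=P, R=R, method="default").ravel()
    counts, _ = np.histogram(codes, bins=256, range=(0, 256))

    mean = codes.mean()
    variance = codes.var()
    skewness = stats.skew(codes)
    kurtosis = stats.kurtosis(codes)   # Fisher definition: normal distribution -> 0
    energy = np.sum(counts.astype(np.float64) ** 2) / codes.size
    return mean, variance, skewness, kurtosis, energy
```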

2.2. Stimulus

The stimulus images for this study were primarily sourced from the website https://www.wikiart.org (accessed on 4 March 2024). Details of the experimental artworks, including their titles and sources, can be found in Appendix C. To minimize the influence of elements such as people, expressions, scenes, and landscapes in the images, this experiment selected abstract artworks as stimulus materials. Artworks were retrieved using keywords such as Constructivism, Minimalism, Neoplasticism, and Suprematism, resulting in a collection of stylistically similar works primarily composed of geometric patterns. These images were then curated by the research team. Ultimately, 249 images that met the experimental criteria were retained as the final set of stimuli. To ensure that extraneous factors such as image size and color did not influence the experimental outcomes, all images were uniformly processed using Visual Studio 2019 and OpenCV. The images were resized to fit within a maximum of 500 × 600 pixels, maintaining their aspect ratio. This ensured that the width did not exceed 500 pixels and the height did not exceed 600 pixels, with a resolution of 300 dpi. Each image was converted to grayscale using the formula $Gray = 0.299 \times Red + 0.587 \times Green + 0.114 \times Blue$.
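A minimal sketch of this preprocessing, assuming OpenCV in Python rather than the original Visual Studio 2019/C++ setup; the 300 dpi metadata step is omitted because it does not affect pixel content, and the assumption that images are never upscaled is illustrative:

```python
import cv2

def preprocess(path, max_w=500, max_h=600):
    """Resize an image to fit within 500 x 600 pixels (preserving the aspect
    ratio) and convert it to grayscale. cv2.cvtColor applies the same weights
    as the formula above (0.299 R + 0.587 G + 0.114 B)."""
    img = cv2.imread(path, cv2.IMREAD_COLOR)
    h, w = img.shape[:2]
    scale = min(max_w / w, max_h / h, 1.0)   # assumption: images are never upscaled
    if scale < 1.0:
        img = cv2.resize(img, (int(w * scale), int(h * scale)),
                         interpolation=cv2.INTER_AREA)
    return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
```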
After quantifying the different feature data of 249 materials, a second-order cluster analysis was performed on the data of different features, and each image was classified using the label method. The results are shown in Table 1, Figure 3, Appendix A and Appendix B.

2.3. EEG and ERP

In 1924, Hans Berger discovered EEG by positioning electrodes on the scalp and amplifying the signals to measure the electrical activity of the human brain [44]. In 1964, Gray Walter and his colleagues reported the first cognitive ERP component, namely, the contingent negative variation [45]. EEG reflects thousands of concurrent brain processes, among which the brain’s response to a single stimulus or point of interest is typically undetectable in a single-trial EEG recording. By performing multiple trials and superimposing and averaging the results, the random brain activity can be averaged out, and the relevant waveforms, namely ERP, can be retained.
The electrode distribution selected for the EEG experiment is the international 10–20 system, and each electrode placement site has a letter to identify the corresponding brain lobe or region: prefrontal (Fp), frontal (F), temporal (T), parietal (P), central (C), and occipital (O). “Z” (zero) refers to the electrodes placed on the midline sagittal plane of the skull (Fpz, Fz, Cz, Oz), and even electrodes (2, 4, 6, 8) indicate that the electrodes are placed on the right side of the head, while odd electrodes (1, 3, 5, 7) indicate that the electrodes are placed on the left; M (also known as A) refers to the mastoid process, a prominent bony protuberance usually found behind the outer ear, and M1 and M2 are used for contralateral reference of all EEG electrodes.

2.4. Participants

In this study, 16 graduate students from the College of Mechanical Engineering and Automation, Huaqiao University, all of whom were Chinese nationals, participated. The group comprised 12 males and 4 females, aged between 22 and 30 years. All participants were right-handed, had no history of mental illness, and had normal or corrected-to-normal vision. During recruitment, researchers interviewed participants to ensure they had not received systematic art education, rarely engaged with the art field and had limited familiarity with abstract art. This was their first exposure to the stimulus images and their first participation in this experiment.
Before the experiment, researchers communicated thoroughly with the participants to confirm the experimental schedule. Participants were instructed not to stay up late the night before, to have a full meal, to avoid beverages such as coffee, tea, and alcohol that could affect their alertness, and to refrain from engaging in strenuous physical exercise before the experiment. The experiment was conducted during the day, either after 9:00 a.m. or after 3:00 p.m. Participants read and signed the informed consent form in detail before the experiment. Each participant received 50 CNY as compensation for their participation. The experiment was approved by the School of Medicine, Huaqiao University.

2.5. Apparatus

The experiment was conducted on a ThinkCentre E74 computer running Windows 10, with the experimental protocol implemented using E-Prime 3.0 software. Stimuli were displayed on a ThinkVision T2224rF monitor (The equipment was sourced from Lenovo (Beijing) Co., Ltd., Haidian District, Beijing, China). EEG data were recorded using a cap-mounted array of Ag/AgCl electrodes. The NeuroScan SynAmps2 EEG amplifier was employed for signal amplification and digitization, and recording was managed using NeuroScan Acquire software. Offline signal processing was performed using CURRY 8.0 software on a computer running Windows 10.

2.6. Procedure

In the EEG experiment, the participants placed their right hands on the mouse and were informed that the experimental materials were from the works of artists (“Genuine”) and the works copied by students (“Fake”) (in fact, all the works were from the works of artists). After fully understanding the content of the experiment, the participants needed to concentrate. First, the participants fixated on a white cross on a gray background for 800 ms. Then, a context cue word (Genuine/Fake) would appear in the center of the screen for 1000 ms. Next, an artwork would be presented on the screen for 2000 ms. After the participants appreciated it, a rating scale ranging from 1 to 5 appeared below the image, and participants used a mouse to click and evaluate the artwork’s appeal (1: very unappealing; 5: very appealing). The scoring time was not limited, so the participants had sufficient time to respond. After the subjects clicked, an 800 ms blank stimulus would appear, and the subjects could take an appropriate rest before entering the next cycle (as shown in Figure 4). The entire experiment was divided into 3 blocks, each containing 80 trials, and there was a rest of at least 5 min between each block. Before the main experiment, there was a practice block containing 10 artworks (which would not appear in the main experiment) to familiarize the participants with the experimental process.

2.7. EEG Recordings and Data Analysis

The EEG was continuously recorded from 60 sites according to the extended 10–20 system. The signal was digitized with a 24-bit resolution at a sampling rate of 1000 Hz. The ground electrode was placed on the top of the head between CP and CPz. Two pairs of electrodes were used to record the vertical electrooculogram (VEOG) and horizontal electrooculogram (HEOG). The VEOG electrodes were positioned above and below the left eye, while the HEOG electrodes were placed 1 cm from the outer corner of each eye. Throughout the experiment, the impedance between the electrodes and the scalp was kept below 10 kΩ.
In the data processing phase, all continuous EEG recordings were filtered offline using a band-pass finite impulse response (FIR) filter with the following specifications: a critical low-pass frequency of 30 Hz and a slope of 8 dB/octave. The ocular artifact detection time window was set from −200 ms to 500 ms, with a voltage threshold ranging from −200 µV to 0 µV. An artifact model was created based on the covariance matrix of the relevant electrodes, and covariance analysis was applied to identify and remove ocular artifacts. Bad block detection was used to remove additional artifacts. The voltage thresholds for artifact detection were set at −100 µV and 100 µV, with signals exceeding these limits identified as artifacts. The time window for artifact detection was defined as −200 ms to 500 ms relative to the event. After these processing steps, ERPs were computed for each condition with a length of 1200 ms, including a 200 ms pre-stimulus baseline, separately for each individual participant. The ERP data were then averaged across trials for each condition and each participant to obtain condition-specific ERPs.
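The offline processing above was performed in CURRY 8.0. Purely as an illustrative sketch (not the authors' pipeline), a roughly equivalent workflow in MNE-Python could look as follows, assuming a loaded Raw recording, an events array, and a hypothetical event_id mapping such as {"genuine": 1, "fake": 2} are already available; note that MNE's reject argument is a peak-to-peak criterion (an approximation of the ±100 µV window), and the covariance-based ocular-artifact removal is omitted:

```python
import mne

def condition_erps(raw, events, event_id):
    """Illustrative ERP pipeline approximating the steps above: 30 Hz low-pass,
    epochs from -200 to 1000 ms, amplitude-based artifact rejection, baseline
    correction over the 200 ms pre-stimulus window, and per-condition averaging."""
    raw = raw.copy().filter(l_freq=None, h_freq=30.0, fir_design="firwin")
    epochs = mne.Epochs(raw, events, event_id=event_id,
                        tmin=-0.2, tmax=1.0, baseline=(-0.2, 0.0),
                        reject=dict(eeg=100e-6),   # peak-to-peak criterion approximating +/-100 uV
                        preload=True)
    return {cond: epochs[cond].average() for cond in event_id}
```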
All analyses were based on individual mean amplitudes within the specified time window at the respective electrode locations. Both the grand-average ERPs and isopotential contour plots were inspected to identify the electrodes and time windows influenced by the different experimental conditions. For each of these time windows, a repeated measures analysis of variance (ANOVA) was conducted with the following factors: context (genuine, fake), features (Class I, Class II), and 60 channels. The Greenhouse–Geisser correction was applied for nonsphericity, and multiple comparisons were corrected using the Bonferroni method where appropriate. The statistical analyses were performed using SPSS statistical software (IBM SPSS Statistics 26).

3. Results

Since the EEG data availability rate of 4 participants was less than 75%, only the EEG data of 12 participants were retained for subsequent analysis (with an average data availability of 91.8%).

3.1. Behavioral Data

A repeated measures ANOVA was performed to examine the effect of different contexts on the final aesthetic evaluation. The results demonstrated a significant difference in the aesthetic evaluation results across different contexts [F (1, 11) = 5.670, p = 0.036]. The average score in the genuine context (3.081 ± 0.097) was significantly higher than that in the fake context (2.890 ± 0.071).

3.2. EEG Data

3.2.1. Result of Context

As depicted in Figure 5, a more negative deflection for genuine context was observed in the prefrontal lobes within the time window between 200 and 1000 ms. A repeated measures ANOVA was executed on the time window with factors of channel (60) × context (genuine, fake). The findings indicated that the main effect of context was significant [F (1, 11) = 4.941, p = 0.048 < 0.05], while the interaction of channel × context was also significant [F (59, 649) = 2.832, p < 0.01]. Further analysis revealed that in the FPZ channel, in comparison to the fake context, the genuine context would evoke a more negative amplitude [F (1, 11) = 11.902, p = 0.005 < 0.01], and a comparable waveform also manifested in the FP1 channel [F (1, 11) = 5.274, p = 0.042] and FP2 channel [F (1, 11) = 10.213, p = 0.009 < 0.01].

3.2.2. Result of Composition

As depicted in Figure 6, a more positive deflection for Class II composition was observed in the parietal lobes within the time window between 50 and 120 ms. A repeated measures ANOVA was performed with factors of channel (60) × context (genuine, fake) × composition (Class I, Class II). The findings demonstrated that the main effects of context (F < 1) and composition (F < 1) were not significant, and neither was the interaction (F < 1). However, the interaction between channel and composition was significant [F (59, 649) = 2.657, p < 0.01], as was the interaction between channel × context × composition [F (59, 649) = 2.345, p < 0.01]. Further analysis revealed that in the PZ channel, the amplitude of the Class II composition was greater [F (1, 11) = 7.502, p = 0.019], and this was more pronounced in the genuine context [F (1, 11) = 14.784, p = 0.003]. Similarly, in the POZ channel, the amplitude of the Class II composition was also larger [F (1, 11) = 7.911, p = 0.017], and it was again more significant in the genuine context [F (1, 11) = 9.884, p = 0.009].

3.2.3. Result of Tone

As depicted in Figure 7, a more positive deflection for Class I tone was observed in the occipital lobes within the time window between 200 and 300 ms. A repeated measures ANOVA was performed with factors of channel (60) × context (genuine, fake) × tone (Class I, Class II). The results indicated that the main effect of tone (F < 1), the interaction of context × tone (F < 1), and the interaction of channel × context × tone (F < 1) were not significant, but the interaction of channel × tone was significant [F (59, 649) = 1.464, p = 0.016 < 0.05]. Further analysis revealed that in the OZ channel, Class I tone evoked a more positive-going amplitude relative to Class II [F (1, 11) = 5.288, p = 0.042].

3.2.4. Result of Gabor-Mean

As depicted in Figure 8, in the center and left side of the parietal–occipital lobe, a more positive-going deflection for Class II Gabor-Mean was observed within the time window between 70 and 130 ms. A repeated measures ANOVA was conducted with factors of channel (60) × context (genuine, fake) × Gabor-Mean (Class I, Class II). The results suggested that the interaction of channel × Gabor-Mean was significant [F (59, 649) = 1.444, p = 0.020]. Further analysis revealed that in the PO3 channel, in comparison to the Class I Gabor-Mean, Class II evoked a more positive amplitude [F (1, 11) = 5.972, p = 0.033]. A similar waveform was also observed in the PO5 channel [F (1, 11) = 6.651, p = 0.026], PO7 channel [F (1, 11) = 6.252, p = 0.029], and P7 [F (1, 11) = 5.518, p = 0.039]. No other effect was significant.

3.2.5. Result of Gabor-Variance

As depicted in Figure 9, in the center of the parietal lobe and parieto-occipital lobe, a more positive-going deflection for Class II Gabor-Variance was observed within the time window between 70 and 130 ms. A repeated measures ANOVA was conducted with factors of channel (60) × context (genuine, fake) × Gabor-Variance (Class I, Class II). The results suggested that the interaction of channel × Gabor-Variance was significant [F (59, 649) = 1.590, p = 0.004]. Further analysis disclosed that in the PZ channel, in comparison to the Class I Gabor-Variance, Class II would evoke a more positive amplitude [F (1, 11) = 4.937, p = 0.048]. There was a marginally significant difference in the POZ channel [F (1, 11) = 4.722, p = 0.053]. No other effect was significant.

3.2.6. Result of Gabor-Energy

As depicted in Figure 10, in the center and right side of the parietal lobe and parieto-occipital lobe, a more positive-going deflection for Class I Gabor-Energy was observed within the time window between 70 and 130 ms. A repeated measures ANOVA was conducted with factors of channel (60) × context (genuine, fake) × Gabor-Energy (Class I, Class II). The results suggested that the interaction of channel × Gabor-Energy was significant [F (59, 649) = 2.020, p < 0.01]. Further analysis disclosed that in the PZ channel, in comparison to the Class II Gabor-Energy, Class I would evoke a more positive amplitude [F (1, 11) = 14.109, p = 0.003]. A similar waveform was also observed in the POZ channel [F (1, 11) = 17.687, p = 0.001], P2 channel [F (1, 11) = 9.158, p = 0.012], P4 channel [F (1, 11) = 9.140, p = 0.012], PO4 channel [F (1, 11) = 7.874, p = 0.017], and PO6 channel [F (1, 11) = 4.914, p = 0.049]. No other effect was significant.
A more positive-going deflection for Class I Gabor-Energy was observed in the center of the parieto-occipital lobe within the time window between 200 and 300 ms. A repeated measures ANOVA was conducted with factors of channel (60) × context (genuine, fake) × Gabor-Energy (Class I, Class II). The results suggested that the interaction of channel × Gabor-Energy was significant [F (59, 649) = 1.712, p = 0.001]. Further analysis disclosed that in the POZ channel, in comparison to the Class II Gabor-Energy, Class I would evoke a more positive amplitude [F (1, 11) = 4.768, p = 0.052]. No other effect was significant.

3.2.7. Result of Horizontal GLCM (θ = 0°)

As depicted in Figure 11, a more negative deflection for Class I horizontal GLCM was observed in the occipital lobe within the time window between 500 and 1000 ms. A repeated measures ANOVA was conducted with factors of channel (60) × context (genuine, fake) × horizontal GLCM (Class I, Class II), which revealed an effect of horizontal GLCM [F (1, 11) = 5.112, p = 0.045]. Class I horizontal GLCM evoked a more negative-going amplitude compared to Class II. The interaction of channel × horizontal GLCM was significant [F (59, 649) = 1.516, p < 0.01]. Further analysis uncovered that in the OZ channel, Class I horizontal GLCM evoked a more negative-going amplitude [F (1, 11) = 5.332, p = 0.041]. No other effect was significant.

3.2.8. Result of Diagonal GLCM (θ = 135°)

As depicted in Figure 12, a more negative deflection for Class I diagonal GLCM was observed in the left side of the parietal–occipital lobe within the time window between 70 and 140 ms. A repeated measures ANOVA was conducted with the factors channel (60), context (genuine, fake), and diagonal GLCM (Class I, Class II); the interaction of channel × diagonal GLCM was significant [F (59, 649) = 1.444, p = 0.020]. Further analysis uncovered that in the PO5 channel, the amplitude of Class I diagonal GLCM was greater [F (1, 11) = 6.651, p = 0.026]. Similarly, in the PO7 channel, the amplitude of the Class I diagonal GLCM was also larger [F (1, 11) = 6.252, p = 0.029]. No other effect was significant.
A more negative deflection for Class I diagonal GLCM was observed in the occipital lobes within the time window between 300 and 1000 ms. A repeated measures ANOVA was conducted with factors of channel (60) × context (genuine, fake) × diagonal GLCM (Class I, Class II), which revealed an effect of diagonal GLCM [F (1, 11) = 5.070, p = 0.046]. Class I diagonal GLCM evoked a more negative-going amplitude compared to Class II. The interaction of channel × diagonal GLCM was significant [F (59, 649) = 1.658, p = 0.02]. Further analysis uncovered that in the OZ channel, the wave amplitude induced by Class I was larger [F (1, 11) = 6.495, p = 0.027]. No other effect was significant.

3.2.9. Result of LBP

As depicted in Figure 13, a more negative-going deflection for Class II LBP was observed in the center and left side of the frontal lobe within the time window between 300 and 1000 ms. A repeated measures ANOVA was executed on the time window with factors of channel (60) × context (genuine, fake) × LBP (Class I, Class II). The findings indicated that the interaction of channel × LBP was significant [F (59, 649) = 2.432, p < 0.01]. Further analysis disclosed that in the FPZ channel of the prefrontal lobe, in comparison to the Class I LBP, the Class II LBP would evoke a more negative amplitude [F (1, 11) = 6.712, p = 0.025], and a comparable waveform also manifested in the FP1 channel [F (1, 11) = 8.869, p = 0.013], AF3 channel [F (1, 11) = 9.097, p = 0.012], and F5 channel [F (1, 11) = 6.195, p = 0.030]. No other effect was significant.

4. Machine Learning Model

Through the interaction of the aesthetic context, aesthetic object, and aesthetic subject, the participants formed the final aesthetic evaluation. Based on the experimental results, a model of the experimental data was constructed. The aesthetic evaluations of the experimental materials by the participants were categorized into three classes: beautiful (scores 4–5), not beautiful (scores 1–2), and unclear (score 3). The data for the “beautiful” and “not beautiful” categories were organized, containing 983 and 1028 valid data points, respectively. In this study, SVM was used to develop a model that could quickly classify the materials as either beautiful or not beautiful. SVM is a potent supervised learning model widely applied in classification and regression analysis. It achieves classification tasks by finding the optimal hyperplane to separate data samples of different categories. Its primary advantage lies in its ability to handle high-dimensional data [46,47]. LIBSVM supports multiple kernel functions (such as the linear kernel and polynomial kernel) and multi-class classification tasks [48]. The LIBSVM toolbox in MATLAB was used to train the data: an appropriate penalty parameter C and kernel parameter γ were selected, the SVM model was then trained with the best parameters, and its performance was evaluated on the test set [49].

4.1. Dataset Construction

The participants’ scoring results on the pictures were averaged to obtain the aesthetic evaluation results of each picture in different contexts, which served as the output layer data. The input layer data included the feature data of the stimulus pictures as the aesthetic object, the context data, and the average amplitudes of ERP components elicited by the context and different feature classes in the corresponding channels and time windows, as shown in Table 2.
All data were standardized:

$$X' = \frac{X - \mu}{\sigma}$$

Here, $X'$ is the standardized feature value, $X$ is the original feature value, $\mu$ is the mean of the feature, and $\sigma$ is the standard deviation of the feature.

4.2. Parameter Determination

The Radial Basis Function Kernel (RBF) is chosen for modeling. The RBF kernel function can map the original feature space to a high-dimensional feature space, enabling the data to be linearly separable in the new feature space, thereby addressing the issue of linear inseparability in the original feature space [50,51]. A total of 80% of the data are utilized as the training set, and 20% of the data are used as the test set. The model quality is evaluated through 5-fold cross-validation, and the grid search method is employed to determine the optimal C and γ.
The search range for C and γ is [0.01, 100], and 50 candidate values are generated through linear space sampling, resulting in a total of 25,000 combinations. To evaluate the performance of the proposed model comprehensively, the area under the receiver operating characteristic curve (AUC) is employed to assess the performance of the multi-class SVM model. By using this approach, the classification ability of the model on different categories can be fully understood, and the accuracy and reliability of the evaluation results can be guaranteed. The AUC threshold is set at 0.7, and based on this, the SVM classification model with the highest ACC is sought to ensure that the model possesses strong discriminatory power while maximizing its overall classification accuracy.
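The study used the LIBSVM toolbox in MATLAB; the following scikit-learn sketch illustrates the same workflow under stated simplifications (standardization inside the pipeline rather than beforehand, accuracy as the cross-validation score, and the AUC-threshold screening reduced to reporting both metrics). Variable names and data loading are placeholders:

```python
import numpy as np
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, roc_auc_score

def train_rbf_svm(X, y):
    """Grid-search an RBF-kernel SVM over 50 x 50 linearly spaced (C, gamma)
    values in [0.01, 100] with 5-fold cross-validation, then report accuracy
    and AUC on a held-out 20% test set."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

    param_grid = {"svc__C": np.linspace(0.01, 100, 50),
                  "svc__gamma": np.linspace(0.01, 100, 50)}
    pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
    search = GridSearchCV(pipe, param_grid, cv=5, scoring="accuracy", n_jobs=-1)
    search.fit(X_tr, y_tr)

    best = search.best_estimator_
    acc = accuracy_score(y_te, best.predict(X_te))
    auc = roc_auc_score(y_te, best.predict_proba(X_te)[:, 1])
    return best, acc, auc

# X: rows of image-feature / ERP-amplitude / context values; y: 1 = beautiful, 0 = not beautiful.
```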
It can be seen from Figure 14 and Table 3 that the optimal solution has an average loss of 0.20199 in the cross-validation, an accuracy rate of 0.76866 on the test set, and an AUC of 0.74155. The performance of the model in different folds is relatively stable, and the performance on the training and validation sets is consistent, indicating that the model has suitable generalization ability.
In contrast, by constructing the same SVM model using only image feature data, it was found that the optimal (C, γ) combination was (46.9441, 14.2943), leading to an accuracy of only 0.68657 on the test set. This shows that incorporating EEG data from the subjects and context data greatly enhances the model’s accuracy. Relying solely on image feature data mainly focuses on the commonality of beauty (biological beauty) and neglects individual differences and the influence of the aesthetic context in the evaluation process.

5. Discussion

Aesthetics is influenced by context, and previous studies have also affirmed that in a positive environment, people tend to have more positive aesthetic performance. The research findings of Noguchi et al. [52] demonstrated that the genuine/fake context had a significant impact on aesthetic evaluation, and the study of Grüner et al. [53] also discovered that people could obtain a higher appreciation rate when experiencing real artworks in the museum compared to when enjoying replicas in the laboratory. The results of the behavioral data in this study further confirmed this perspective, and the aesthetic evaluation score in the genuine context was notably higher than that in the fake context.
Studies have discovered that artistic creation and appreciation are associated with the advanced centers of the central nervous system, including the orbitofrontal cortex (OFC), temporal, and parietal lobes [54]. Among them, the prefrontal lobe is related to aesthetic rewards, and the appreciation of abstract artworks leads to the activation of the OFC area associated with rewards and the prefrontal area associated with cognitive classification [55]. One perspective proposes that OFC represents stimulus–reward value and supports the learning and relearning of stimulus–reward associations, while an alternative view implicates OFC in behavioral control following rewarding or punishing feedback. O’Doherty et al. proposed functional heterogeneity within the OFC, suggesting a role for the region in representing stimulus–reward values, signaling changes in reinforcement contingencies, and behavioral control [56]. During the period of anticipating rewards and after receiving them, the activity of OFC neurons will increase in response to reward prediction signals [57]. There will be significant differences in the ERP of the aesthetic response in the prefrontal region to artistic stimuli, and the dislike condition will trigger a negative wave with a larger amplitude [58]. Regarding the difference in negative waves in the prefrontal lobe caused by the judgment of beauty, some researchers suggest that the reason for this difference is that the subjects in the experiment consider the visual stimulus as not beautiful rather than considering it as beautiful [59].
In the present study, ERP data revealed that during the process of aesthetic evaluation, different contexts would give rise to distinct prefrontal negative waves. These waves initiated at approximately 200 ms and persisted until the conclusion of the aesthetic evaluation, with the genuine context activating a negative wave of greater amplitude. This implies that for the same stimulus material, in the genuine context, although the participants assigned higher scores, they might not necessarily perceive the stimulus material as beautiful. This phenomenon could be associated with the aesthetic reward, where the genuine context cue enhanced the anticipation of rewards. For participants naive to art, the abstract artworks did not provide sufficient feedback during the reward-receiving period. Collectively, the aesthetic evaluation in the genuine context in this study triggered a more robust prefrontal negative wave, potentially because genuine works are typically regarded as having higher cognitive and emotional value, whereas fakes are considered relatively deficient in this regard.
The theory of two visual pathways suggests that the primate visual cortex can be separated into dorsal and ventral streams, which originate from the primary visual cortex (V1). The dorsal pathway extends toward the parietal cortex, passing through motion areas of medio-temporal (MT) and median superior temporal (MST). The ventral pathway proceeds through area V4 along the entire temporal cortex, reaching the area of infero-temporal (IT). These ventral and dorsal pathways are supported by parallel retino-thalamo-cortical inputs to V1, known as the Magno (M) and Parvocellular pathways (P) [60,61,62].
The Hierarchical Theory of Visual Perception proposes that the processing of visual information occurs in a sequential manner within the brain, starting from the detection of basic features at a low level and progressing to the analysis of complex shapes and motions at a higher level. Within the visual cortex of the brain, distinct regions (V1 to V5) are accountable for diverse visual tasks. V1 primarily receives input from the retina via the lateral geniculate nucleus (LGN) and is mainly involved in detecting fundamental features such as edges and temporal frequency [63,64]. V2 and V3 cells are largely responsible for integrating and processing higher-order visual information, including aspects of motion and stereoscopic vision [65,66,67,68]. V4 is predominantly associated with the advanced processing of colors, shapes, textures, and other visual features [69,70], while V5 is mainly involved in motion perception and visual motion prediction [71].
In this study, ERP data revealed that the majority of feature-induced ERP data differences were primarily concentrated in the parietal and occipital regions. The ERP components activated by the composition feature, reflecting the overall information of the picture, in the parietal and parieto-occipital regions dissociated at approximately 50 ms. In contrast, the ERP components activated by the Gabor (Mean, Variance, Energy) filter feature, reflecting the texture frequency and direction of the picture, in the central parietal and parieto-occipital regions dissociated at approximately 70 ms. Specifically, Gabor-Mean predominantly activated the left parieto-occipital region, while Gabor-Energy mainly activated the right parieto-occipital region. The ERP components activated by the diagonal GLCM feature, reflecting the texture direction on the diagonal, in the parieto-occipital region dissociated at approximately 70 ms. Additionally, the ERP components activated by the tone feature, reflecting the overall gray level of the picture, and the Gabor-Energy feature in the parieto-occipital region dissociated at 200 ms. Finally, the ERP components activated by the horizontal GLCM feature, reflecting the horizontal direction in the occipital region dissociated at 500 ms. It can be inferred from the ERP data that the processing of features primarily occurred in the visual cortex of the brain (parietal and occipital regions), and the processing sequence followed the Global Precedence Hypothesis, with composition, tone, and global features (Gabor filters) being processed first, followed by local features (GLCM). Previous studies have also demonstrated that specific features (such as spatial frequency, human face, low-level visual information, etc.) can influence the amplitude of P1, reflecting the sensitivity to features in the early stages of visual processing [72,73,74]. The P2 component is involved in feature integration, attention, and preliminary object recognition, among other things. For example, an unattractive color combination can result in differences in the amplitude of the P200 component in the frontal, central, and parietal regions [75]. What is unique is that the LBP feature, as a local feature, reflects the gray level relationship between each pixel and its neighboring pixels (local texture). The ERP data activated in the middle and left frontal lobes began to diverge from 300 ms and persisted until 1000 ms. This suggests that the LBP feature requires a more intricate processing procedure. Sbriscia Fioretti et al. [55] also discovered in the abstract painting experiment that the retention and elimination of texture features would result in differences in aesthetic evaluation, and the original work with texture had a more significant impact. This disparity was reflected in a negative deflection occurring at the frontal and central scalp sites from 260 to 328 ms after stimulus presentation.
Overall, in the process of the participants’ aesthetic evaluation of abstract artworks, context continuously influences the cognitive process of aesthetics and will also affect the final aesthetic evaluation. The process generally follows the Global Precedence Hypothesis, the Hierarchical Theory of Visual Perception, and the Two Visual Pathways Theory. The overall performance is to process global features first, followed by local features.
This study found that compared to the SVM model constructed using only image feature data, the model integrating neuroaesthetic data, computational aesthetic data, and contextual data has better quality and higher accuracy. This indicates that including human and context factors through an interdisciplinary approach that combines neuroaesthetics and advanced machine learning models can better establish an integrated aesthetic evaluation system between humans and Artificial Intelligence (AI), effectively simulating and predicting human aesthetic preferences [76,77,78].
Aesthetics is a significant area of study within psychology and neuroscience. Neuroaesthetics, a relatively new subfield, aims to understand the brain mechanisms underlying aesthetic experiences and related phenomena. However, there is currently an imbalance in the relationship between neuroaesthetics and the disciplines it draws upon for inspiration and guidance. Neuroaesthetics has much to learn from these fields, but at present, it offers relatively little in return [79,80]. In this study, by combining computational aesthetics and neuroaesthetics research methods, the factors influencing aesthetics were explored. Through this interdisciplinary approach, on the one hand, in the study of neuroaesthetics, computational aesthetics methods are utilized to quantify features, broadening the research realm of neuroaesthetics. The feature extraction and classification of pictures employ a more objective quantitative approach, allowing for a clearer comprehension of the physical attributes of different features that impact aesthetics. Additionally, with the advancement of computational aesthetics quantification techniques, an increasing number of complex features can be quantified, providing more avenues for investigating the relationship between features and aesthetics. On the other hand, in the study of computational aesthetics, the research findings of neuroaesthetics are integrated. Building upon the original model that primarily relies on features as input, the data of the aesthetic subject and the aesthetic context are added, which not only enriches the model data but also considers the factors influencing aesthetics more comprehensively, resulting in a higher-quality model. Moreover, it offers a novel perspective for feature selection in modeling. Through the neuroaesthetics research method, the relationship between features and aesthetics can be more precisely determined, avoiding the inclusion of irrelevant features in the aesthetic model.

6. Conclusions

In conclusion, this study integrates computational aesthetics and neuroaesthetics to investigate the cognitive processes underlying aesthetic evaluations of abstract artworks. Our findings reveal that a positive context enhances aesthetic experience, although abstract artworks may not consistently reflect this positivity. ERP data indicate that feature processing during aesthetic evaluation follows a pattern from global to local analysis. Specifically, global features such as composition and tone evoke early ERP components, while detailed textures trigger more complex neural responses. The SVM model, incorporating multimodal data, effectively predicts aesthetic preferences, highlighting the potential of combining computational and neuroaesthetic approaches.
Overall, this research underscores the importance of context in aesthetic evaluation and offers a promising avenue for integrating diverse methodologies to deepen our understanding of aesthetic experience. This interdisciplinary method demonstrates the feasibility of quantitatively analyzing aesthetic features and their neural correlates. Future studies should continue to expand this interdisciplinary approach, exploring both foundational theories and practical applications in aesthetics.

Author Contributions

F.L., W.S., Y.L. and W.X. designed the experiment; F.L. and W.X. recorded the EEG data with a Neuroscan SynAmps2 device and the behavioral data with E-Prime software; F.L. and W.X. analyzed and processed the experimental data using CURRY 8.0 and IBM SPSS Statistics 26; F.L. and W.X. constructed the machine learning models in MATLAB; F.L. and W.X. edited the first draft of the paper; W.S., W.X. and Y.L. revised the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Humanities and Social Science Research Project of the Ministry of Education (No. 20YJA760067), the Social Science Foundation of Fujian Province project (No. FJ2024C131), and the Cooperative Research Project between CRRC Zhuzhou Locomotive and Huaqiao University (2021-0015).

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the School of Medicine, Huaqiao University.

Informed Consent Statement

Informed consent was obtained from all participants involved in the study. Written informed consent has been obtained from the participants to publish this paper.

Data Availability Statement

The datasets supporting the results of this article are included within the article.

Acknowledgments

We would like to thank all experimenters and participants of this study.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Image Counts and Corresponding Parameter Values by Feature Classification (Composition, Tone, GLCM, LBP)

Features | Class | Parameter | Mean | SD
Composition (Blank Space) | I | Mean | 0.769 | 0.066
Composition (Blank Space) | II | Mean | 0.573 | 0.059
Tone (Gray Histogram) | I | Mean | 108.774 | 31.488
Tone (Gray Histogram) | I | Variance | 5060.803 | 2820.122
Tone (Gray Histogram) | I | Skewness | 0.613 | 1.245
Tone (Gray Histogram) | I | Kurtosis | 0.774 | 5.617
Tone (Gray Histogram) | I | Energy | 4909.412 | 3298.131
Tone (Gray Histogram) | II | Mean | 179.912 | 25.290
Tone (Gray Histogram) | II | Variance | 3306.247 | 1735.233
Tone (Gray Histogram) | II | Skewness | −1.676 | 1.167
Tone (Gray Histogram) | II | Kurtosis | 3.239 | 7.356
Tone (Gray Histogram) | II | Energy | 10,016.353 | 13,510.457
Horizontal GLCM (θ = 0°) | I | Contrast | 60,369.94 | 10,198.14
Horizontal GLCM (θ = 0°) | I | Energy | 72.746 | 7.304
Horizontal GLCM (θ = 0°) | I | Entropy | 783.655 | 59.472
Horizontal GLCM (θ = 0°) | II | Contrast | 3131.599 | 272.695
Horizontal GLCM (θ = 0°) | II | Energy | 6.365 | 0.431
Horizontal GLCM (θ = 0°) | II | Entropy | 95.516 | 5.242
Diagonal GLCM (θ = 135°) | I | Contrast | 115,494.1 | 15,748.78
Diagonal GLCM (θ = 135°) | I | Energy | 88.365 | 8.009
Diagonal GLCM (θ = 135°) | I | Entropy | 960.066 | 69.589
Diagonal GLCM (θ = 135°) | II | Contrast | 7635.667 | 744.169
Diagonal GLCM (θ = 135°) | II | Energy | 7.038 | 0.468
Diagonal GLCM (θ = 135°) | II | Entropy | 136.389 | 9.833
LBP | I | Mean | 141.761 | 0.702
LBP | I | Variance | 8982.159 | 23.793
LBP | I | Skewness | −0.203 | 0.010
LBP | I | Kurtosis | −1.530 | 0.005
LBP | I | Energy | 29,133.405 | 200.916
LBP | II | Mean | 170.961 | 1.740
LBP | II | Variance | 8333.661 | 85.958
LBP | II | Skewness | −0.708 | 0.039
LBP | II | Kurtosis | −0.895 | 0.095
LBP | II | Energy | 16,563.702 | 439.355

Appendix B. Image Counts and Corresponding Parameter Values by Feature Classification (Gabor Filter)

Each row lists the wavelength λ and orientation θ of the Gabor filter, followed by the Mean and SD of Gabor-Mean (Class I, then Class II), Gabor-Variance (Class I, then Class II), and Gabor-Energy (Class I, then Class II).
3060.8461.66980.740.87748.9171.1663.3731.441188.9512.346225.6761.584
3π/871.2642.009104.9041.24744.2641.19357.1741.165209.1452.047239.9151.021
3π/463.7541.81893.3281.09642.981.3653.9671.324203.8572.338236.1621.275
33π/871.3661.996104.9181.27044.8711.20857.0281.101209.1312.063239.5660.999
3π/260.2071.64379.8870.92647.0511.20761.671.397189.7942.349226.7061.584
35π/871.1181.984104.7981.26644.1181.22756.4821.106209.6722.102240.1960.987
33π/463.7251.80193.1861.10142.6721.35953.7451.305204.0422.36236.5131.256
37π/871.3241.979104.9471.2644.4141.257.4311.094209.2282.049239.9010.997
607.0410.5272.3280.19919.3780.97536.9131.4818.7030.6022.5090.203
6π/8159.7543.338221.4521.54365.2131.41989.1610.945225.5741.313245.490.453
6π/4175.2363.273230.4791.22357.4811.54183.8681.03233.4411.208248.1620.41
63π/8159.9093.326221.9751.48964.7041.46488.8480.899226.1291.305245.7170.475
6π/26.4480.4831.6780.14316.5780.88635.3351.3317.8770.5521.9030.16
65π/8160.5103.360222.5331.50963.4981.45288.2760.91226.9041.34246.5840.433
63π/4175.7753.306230.9541.23056.4981.48283.4391.048233.8231.22248.8840.353
67π/8160.1373.360221.5751.53165.2021.3988.9470.926225.7171.334245.5580.441
90227.1091.804247.0310.45838.1311.14766.1271.566238.7030.926249.5630.3
9π/813.8130.7086.50.34732.7140.92255.1351.08919.7080.8077.0660.34
9π/4219.3552.305246.050.58137.0981.38367.7931.365241.2590.902250.7530.304
93π/813.6790.7226.0550.34331.9141.03854.1321.16318.9180.8096.8080.372
9π/2228.3271.752248.330.39634.3321.17164.4521.473240.0720.858250.6560.265
95π/812.6600.7125.3910.31429.3880.88952.4211.11818.1940.8135.9010.303
93π/4220.1052.3465246.490.56635.6871.27766.9991.372241.7490.917251.2930.259
97π/813.8870.7396.5810.34233.2520.92155.0931.12719.6830.8587.1240.326
12010.7660.6334.0530.23627.9410.92548.4891.30212.150.6734.0690.229
12π/816.0080.7877.1240.34236.5470.94659.8151.16219.5660.8267.1160.32
12π/4241.5671.385250.4870.34626.421.11848.9291.526246.5090.611251.8990.236
123π/814.9010.7746.420.33234.5640.97157.4211.21318.0970.8226.6680.334
12π/29.8200.5893.2020.20024.5670.93146.2611.20810.970.623.2910.211
125π/814.0850.7705.6740.30832.0730.8656.0611.18117.4120.8195.7260.275
123π/4242.1641.386250.9260.32524.3690.96747.8611.561246.9480.615252.4550.189
127π/815.6480.8087.1370.32836.7240.91259.0761.21619.1290.8587.1140.296
15039.9471.42026.9530.99869.61.16689.1181.11954.7541.40430.2681.034
15π/8226.9671.642244.0750.50245.8021.01674.3871.257232.3321.016246.3760.373
15π/4248.2980.978252.9980.20816.3081.00833.6321.602250.5270.443253.4650.156
153π/8228.5571.579245.440.48742.8011.11171.8191.3234.2010.951247.3530.38
15π/237.5571.41924.2380.95765.4991.30586.4641.16652.3981.36327.6381.024
155π/8229.2811.585245.9790.47441.0091.03771.0661.304234.80.958248.0740.333
153π/4248.64520.968253.2440.19214.3630.86832.8331.622250.7590.437253.8030.131
157π/8227.4721.653244.5780.544.6040.99573.6651.309232.7151.043246.8330.348

Appendix C. Table of Names and Sources of Experimental Stimuli

NArtwork TitleLink
1Konstruktion på svarthttps://www.wikiart.org/en/gosta-adrian-nilsson/konstruktion-p-svart-1930 (accessed on 4 March 2024)
2Suprematismhttps://www.wikiart.org/en/kazimir-malevich/suprematism-1916-2 (accessed on 4 March 2024)
3Composition A XXIhttps://www.wikiart.org/en/laszlo-moholy-nagy/composition-a-xxi-1925 (accessed on 4 March 2024)
4Untitledhttps://www.wikiart.org/en/cesar-domela/untitled-3 (accessed on 4 March 2024)
5Untitled (Graphic Composition)https://www.wikiart.org/en/bruno-munari/untitled-graphic-composition-1951-2 (accessed on 4 March 2024)
6Untitledhttps://www.wikiart.org/en/tony-smith/untitled-1963 (accessed on 4 March 2024)
7Untitledhttps://www.wikiart.org/en/alexander-calder/untitled-1930 (accessed on 4 March 2024)
8Stained glass composition “Woman”https://www.wikiart.org/en/theo-van-doesburg/stained-glass-composition-woman-1917 (accessed on 4 March 2024)
9Rising, Falling, Flyinghttps://www.wikiart.org/en/sophie-taeuber-arp/rising-falling-flying-1934 (accessed on 4 March 2024)
10The stormhttps://www.wikiart.org/en/laszlo-moholy-nagy/the-storm-1922 (accessed on 4 March 2024)
11K VIIhttps://www.wikiart.org/en/laszlo-moholy-nagy/k-vii-1922 (accessed on 4 March 2024)
12Constructionhttps://www.wikiart.org/en/laszlo-moholy-nagy/construction-1923 (accessed on 4 March 2024)
13Composición Maquinistahttps://www.wikiart.org/en/costigliolo/composicion-maquinista-1957 (accessed on 4 March 2024)
14Counter Composition VIIhttps://www.wikiart.org/en/theo-van-doesburg/counter-composition-vii-1924 (accessed on 4 March 2024)
15Collage with Squares Arranged According to the Laws of Chancehttps://www.wikiart.org/en/jean-arp/untitled (accessed on 4 March 2024)
16Building the Revolution. Soviet Art and Architecturehttps://www.wikiart.org/en/lyubov-popova/building-the-revolution-soviet-art-and-architecture (accessed on 4 March 2024)
17Bet Youhttps://www.wikiart.org/en/lyman-kipp/bet-you (accessed on 4 March 2024)
1859-80.75-chttps://www.wikiart.org/en/martin-barre/59-80-75-c-1959 (accessed on 4 March 2024)
19Accent on rosehttps://www.wikiart.org/en/wassily-kandinsky/accent-on-rose-1926 (accessed on 4 March 2024)
20Panneaux de signalisation de chemin de ferhttps://www.wikiart.org/en/jean-hugo/panneaux-de-signalisation-de-chemin-de-fer-1918 (accessed on 4 March 2024)
21Compositionhttps://www.wikiart.org/en/f-lix-del-marle/composition-1947 (accessed on 4 March 2024)
22Estela funerariahttps://www.wikiart.org/en/ramirez-villamizar/estela-funeraria-1998 (accessed on 4 March 2024)
23Proun 8https://www.wikiart.org/en/el-lissitzky/proun-8 (accessed on 4 March 2024)
24Thirteen rectangleshttps://www.wikiart.org/en/wassily-kandinsky/three-rectangles-1930 (accessed on 4 March 2024)
25Kontrasty Mekanofakturowehttps://www.wikiart.org/en/henryk-berlewi/kontrasty-mekanofakturowe-1924 (accessed on 4 March 2024)
26Untitledhttps://www.wikiart.org/en/sophie-taeuber-arp/untitled (accessed on 4 March 2024)
27Striped Picturehttps://www.wikiart.org/en/charlotte-posenenske/striped-picture-1962 (accessed on 4 March 2024)
28Yellow Circlehttps://www.wikiart.org/en/laszlo-moholy-nagy/yellow-circle (accessed on 4 March 2024)
29Composition (Red Forms)https://www.wikiart.org/en/lajos-kassak/composition-red-forms (accessed on 4 March 2024)
30Floating Forms with Whitehttps://www.wikiart.org/en/willi-baumeister/floating-forms-with-white-1938 (accessed on 4 March 2024)
31Compositionhttps://www.wikiart.org/en/janos-mattis-teutsch/composition-1925-4 (accessed on 4 March 2024)
32Suprematism. Two-Dimensional Self Portraithttps://www.wikiart.org/en/kazimir-malevich/suprematism-two-dimensional-self-portrait-1915-1 (accessed on 4 March 2024)
33Compositionhttps://www.wikiart.org/en/sophie-taeuber-arp/composition-1937 (accessed on 4 March 2024)
34Futurehttps://www.wikiart.org/en/giacomo-balla/future-1923 (accessed on 4 March 2024)
35Bluehttps://www.wikiart.org/en/wassily-kandinsky/blue-1922 (accessed on 4 March 2024)
36Untitledhttps://www.wikiart.org/en/charlotte-posenenske/untitled-1962 (accessed on 4 March 2024)
37Compositionhttps://www.wikiart.org/en/f-lix-del-marle/composition-1947-1 (accessed on 4 March 2024)
38Suprematist Paintinghttps://www.wikiart.org/en/kazimir-malevich/suprematist-painting-1917 (accessed on 4 March 2024)
39June 1937 (painting)https://www.wikiart.org/en/ben-nicholson/june-1937-painting-1937 (accessed on 4 March 2024)
40Joy Sparks of the Gods IIhttps://www.wikiart.org/en/hans-hofmann/joy-sparks-of-the-gods-ii-1958 (accessed on 4 March 2024)
41Seated Figure—Abstractedhttps://www.wikiart.org/en/willi-baumeister/seated-figure-abstracted-1926 (accessed on 4 March 2024)
42Suprematic Paintinghttps://www.wikiart.org/en/kazimir-malevich/suprematic-painting-1916 (accessed on 4 March 2024)
43Y = - x2 + bx + c rouge-verthttps://www.wikiart.org/en/georges-vantongerloo/y-x2-bx-c-rouge-vert-1933 (accessed on 4 March 2024)
44Suprematismhttps://www.wikiart.org/en/kazimir-malevich/suprematism-1915-8 (accessed on 4 March 2024)
45Composition XXIIhttps://www.wikiart.org/en/theo-van-doesburg/composition-xxii-1922 (accessed on 4 March 2024)
46Egg Beater No. 4https://www.wikiart.org/en/stuart-davis/egg-beater-no-4-1928 (accessed on 4 March 2024)
47Untitledhttps://www.wikiart.org/en/lyman-kipp/untitled-1970 (accessed on 4 March 2024)
48Compositionhttps://www.wikiart.org/en/janos-mattis-teutsch/composition-1925-3 (accessed on 4 March 2024)
49Prounhttps://www.wikiart.org/en/el-lissitzky/proun (accessed on 4 March 2024)
50Compositionhttps://www.wikiart.org/en/theo-van-doesburg/composition-1918 (accessed on 4 March 2024)
51Composition II, Pink and Bluehttps://www.wikiart.org/en/sandor-bortnyik/composition-ii-pink-and-blue-1921 (accessed on 4 March 2024)
52Suprematismhttps://www.wikiart.org/en/kazimir-malevich/suprematism-1915-7 (accessed on 4 March 2024)
53Suprematismhttps://www.wikiart.org/en/kazimir-malevich/suprematism-1916-1 (accessed on 4 March 2024)
54View of a Townhttps://www.wikiart.org/en/edward-wadsworth/view-of-a-town-1918 (accessed on 4 March 2024)
55Signal serieshttps://www.wikiart.org/en/yuriy-zlotnikov/zlotnikov-1 (accessed on 4 March 2024)
56Suprematismhttps://www.wikiart.org/en/kazimir-malevich/suprematism-1916 (accessed on 4 March 2024)
57Composizionehttps://www.wikiart.org/en/atanasio-soldati/composizione (accessed on 4 March 2024)
58Detaliu Arhitectonichttps://www.wikiart.org/en/gil-nicolescu/gil-nicolescu-detaliu-arhitectonic-1968-1968 (accessed on 4 March 2024)
59UNTITLEDhttps://www.wikiart.org/en/alexander-calder/untitled-1930-0 (accessed on 4 March 2024)
60Mechano-Facturehttps://www.wikiart.org/en/henryk-berlewi/mechano-facture-1953 (accessed on 4 March 2024)
61Compositionhttps://www.wikiart.org/en/f-lix-del-marle/composition-1948 (accessed on 4 March 2024)
62Three Stepped Figureshttps://www.wikiart.org/en/willi-baumeister/three-stepped-figures-1920 (accessed on 4 March 2024)
63Untitledhttps://www.wikiart.org/en/costigliolo/untitled-1953 (accessed on 4 March 2024)
64Untitledhttps://www.wikiart.org/en/john-mclaughlin/untitled-1948 (accessed on 4 March 2024)
65Pictorial Architecturehttps://www.wikiart.org/en/lajos-kassak/pictorial-architecture-1923 (accessed on 4 March 2024)
66Folk Motives (variation)https://www.wikiart.org/en/lajos-kassak/folk-motives-variation-1921 (accessed on 4 March 2024)
67Suprematist Compositionhttps://www.wikiart.org/en/kazimir-malevich/suprematist-composition-1915 (accessed on 4 March 2024)
68Untitledhttps://www.wikiart.org/en/john-mclaughlin/untitled-1 (accessed on 4 March 2024)
69Untitledhttps://www.wikiart.org/en/georges-vantongerloo/untitled-1939 (accessed on 4 March 2024)
70Equilibriumhttps://www.wikiart.org/en/jean-helion/equilibrium-1934 (accessed on 4 March 2024)
71Mechano-Fakturahttps://www.wikiart.org/en/henryk-berlewi/mechano-faktura-1924 (accessed on 4 March 2024)
72Rezolvare Provizorie II (Echilibru)https://www.wikiart.org/zh/gil-nicolescu/gil-nicolescu-rezolvare-provizorie-ii-echilibru-1971-1971 (accessed on 4 March 2024)
73Upwardhttps://www.wikiart.org/en/wassily-kandinsky/upward-1929 (accessed on 4 March 2024)
74Untitled (After Nature: Tree Trunks)https://www.wikiart.org/en/charlotte-posenenske/untitled-after-nature-tree-trunks-1959 (accessed on 4 March 2024)
75Bretagnehttps://www.wikiart.org/en/charlotte-posenenske/bretagne-1960-1 (accessed on 4 March 2024)
76Moving Circleshttps://www.wikiart.org/en/sophie-taeuber-arp/moving-circles-1933 (accessed on 4 March 2024)
77Constructiehttps://www.wikiart.org/en/gil-nicolescu/gil-nicolescu-constructie-1997-1997 (accessed on 4 March 2024)
78Relief #169https://www.wikiart.org/en/cesar-domela/relief-169 (accessed on 4 March 2024)
79Black Relationshiphttps://www.wikiart.org/en/wassily-kandinsky/black-relationship-1924 (accessed on 4 March 2024)
80Proun 23, No.6https://www.wikiart.org/en/el-lissitzky/proun-23-no-6-1919 (accessed on 4 March 2024)
81Blanchttps://www.wikiart.org/en/auguste-herbin/blanc-1947 (accessed on 4 March 2024)
82Sex Symbolhttps://www.wikiart.org/en/jo-baer/sex-symbol-1961 (accessed on 4 March 2024)
83Composition XIIIhttps://www.wikiart.org/en/theo-van-doesburg/composition-xiii-1918-1 (accessed on 4 March 2024)
84Architectonics in Paintinghttps://www.wikiart.org/en/lyubov-popova/architectonics-in-painting (accessed on 4 March 2024)
85Untitled—Sculptural studyhttps://www.wikiart.org/en/lyman-kipp/untitled-sculptural-study (accessed on 4 March 2024)
86Composición futurista https://wikioo.org/es/paintings.php?refarticle=8XXSNL&titlepainting=Futuristic+Composition&artistname=David+Burliuk (accessed on 4 March 2024)
87Composition VIII https://www.wikiart.org/en/wassily-kandinsky/composition-viii-1923 (accessed on 4 March 2024)
88Proun 99https://www.wikiart.org/en/el-lissitzky/proun-99-1924 (accessed on 4 March 2024)
89UNTITLEDhttps://www.wikiart.org/en/alexander-calder/untitled-1930-1 (accessed on 4 March 2024)
90Simultaneous Counter compositionhttps://www.wikiart.org/en/theo-van-doesburg/simultaneous-counter-composition (accessed on 4 March 2024)
91Compositionhttps://www.wikiart.org/en/henryk-berlewi/composition-1934 (accessed on 4 March 2024)
92White Circlehttps://www.wikiart.org/en/alexander-rodchenko/white-circle-1918 (accessed on 4 March 2024)
93Chess Players IIIhttps://www.wikiart.org/en/willi-baumeister/chess-players-iii-1924 (accessed on 4 March 2024)
94Suprematismhttps://www.wikiart.org/en/kazimir-malevich/suprematism-1915-3 (accessed on 4 March 2024)
95Composizionehttps://www.wikiart.org/en/atanasio-soldati/composizione-1951 (accessed on 4 March 2024)
96On the pointshttps://www.wikiart.org/en/wassily-kandinsky/on-the-points-1928 (accessed on 4 March 2024)
97Project for the sculpture Bennetthttps://www.wikiart.org/en/laszlo-moholy-nagy/project-for-the-sculpture-bennett-1946 (accessed on 4 March 2024)
98Composition No. 144https://www.wikiart.org/en/friedrich-vordemberge-gildewart/composition-no-144 (accessed on 4 March 2024)
99Future Planshttps://www.wikiart.org/en/vasyl-yermylov/future-plans (accessed on 4 March 2024)
100Geometriehttps://www.wikiart.org/en/atanasio-soldati/geometrie-1951 (accessed on 4 March 2024)
101Kompozcija. Sėdinti Moterishttps://www.wikiart.org/en/vytautas-kairiukstis/kompozcija-sedinti-moteris (accessed on 4 March 2024)
102Construction AL6https://www.wikiart.org/en/laszlo-moholy-nagy/construction-al6 (accessed on 4 March 2024)
103Geométricohttps://www.wikiart.org/en/costigliolo/geometrico (accessed on 4 March 2024)
104Compositionhttps://www.wikiart.org/en/el-lissitzky/composition (accessed on 4 March 2024)
105Composition XIIIhttps://www.wikiart.org/en/theo-van-doesburg/composition-xiii-1918 (accessed on 4 March 2024)
106Compositionhttps://www.wikiart.org/en/lajos-kassak/composition-1963 (accessed on 4 March 2024)
107Compositionhttps://www.wikiart.org/en/bart-van-der-leck/composition-1918 (accessed on 4 March 2024)
108A 18https://www.wikiart.org/en/laszlo-moholy-nagy/a-18-1927 (accessed on 4 March 2024)
109Nudehttps://www.wikiart.org/en/auguste-herbin/nude-1960 (accessed on 4 March 2024)
110Architectonics in Paintinghttps://www.wikiart.org/en/lyubov-popova/architectonics-in-painting-1 (accessed on 4 March 2024)
111Suprematistic Constructionhttps://www.wikiart.org/en/kazimir-malevich/suprematistic-construction-1915 (accessed on 4 March 2024)
112Compositionhttps://www.wikiart.org/en/marcelle-cahn/composition-2 (accessed on 4 March 2024)
113Several Circleshttps://www.wikiart.org/en/wassily-kandinsky/several-circles-1926 (accessed on 4 March 2024)
114Composition A XIhttps://www.wikiart.org/en/laszlo-moholy-nagy/composition-a-xi-1923 (accessed on 4 March 2024)
115Untitledhttps://www.wikiart.org/en/lyubov-popova/untitled (accessed on 4 March 2024)
116Képarchitektúrahttps://www.wikiart.org/en/lajos-kassak/k-parchitekt-ra-1922 (accessed on 4 March 2024)
117Opus 30-1922 (Factory)https://www.wikiart.org/en/victor-servranckx/opus-30-1922-factory-1922 (accessed on 4 March 2024)
118Il mercante di cuori, bozzetto di scenahttps://www.wikiart.org/en/enrico-prampolini/il-mercante-di-cuori-bozzetto-di-scena-1927 (accessed on 4 March 2024)
119Suprematismhttps://www.wikiart.org/en/kazimir-malevich/suprematism-1916-3 (accessed on 4 March 2024)
120Hearts of the Revolutionaries: Passage of the Planets of the Futurehttps://www.wikiart.org/en/joseph-beuys/hearts-of-the-revolutionaries-passage-of-the-planets-of-the-future (accessed on 4 March 2024)
121Peinture 58: Opus 58https://www.wikiart.org/en/victor-servranckx/peinture-58-opus-58-1923 (accessed on 4 March 2024)
122Emery Paper collagehttps://www.wikiart.org/en/laszlo-moholy-nagy/emery-paper-collage-1930 (accessed on 4 March 2024)
123Compositionhttps://www.wikiart.org/en/laszlo-moholy-nagy/composition-1 (accessed on 4 March 2024)
124Equilibriumhttps://www.wikiart.org/en/lajos-kassak/equilibrium-1926 (accessed on 4 March 2024)
125Les Relais Futureshttps://www.wikiart.org/en/jean-paul-jerome/les-relais-futures-1996 (accessed on 4 March 2024)
126Orangehttps://www.wikiart.org/en/wassily-kandinsky/orange-1923 (accessed on 4 March 2024)
127Rombohttps://www.wikiart.org/en/costigliolo/rombo-1960 (accessed on 4 March 2024)
128Untitled—Laca a La Piroxilinahttps://www.wikiart.org/en/costigliolo/untitled-laca-a-la-piroxilina-1950 (accessed on 4 March 2024)
129Cubist Compositionhttps://www.wikiart.org/en/auguste-herbin/cubist-composition-1913 (accessed on 4 March 2024)
130Untitledhttps://www.wikiart.org/en/marc-vaux/untitled-1964 (accessed on 4 March 2024)
131Untitledhttps://www.wikiart.org/en/el-lissitzky/untitled (accessed on 4 March 2024)
132Proun 4 Bhttps://www.wikiart.org/en/el-lissitzky/proun-4-b (accessed on 4 March 2024)
133Invention (Composition 31)https://www.wikiart.org/en/rudolf-bauer/invention-composition-31-1933 (accessed on 4 March 2024)
134Picturial Architecture Vhttps://www.wikiart.org/en/lajos-kassak/picturial-architecture-v-1923 (accessed on 4 March 2024)
135Small worldshttps://www.wikiart.org/en/wassily-kandinsky/small-worlds-1922 (accessed on 4 March 2024)
136Telephone Picture EM 3https://www.wikiart.org/en/laszlo-moholy-nagy/telephone-picture-em-3-1922 (accessed on 4 March 2024)
137Planar Tension with Redhttps://www.wikiart.org/en/willi-baumeister/planar-tension-with-red-1926 (accessed on 4 March 2024)
138Untitledhttps://www.wikiart.org/en/john-ferren/untitled-1932-1 (accessed on 4 March 2024)
139Chesshttps://www.wikiart.org/en/willi-baumeister/chess-1925 (accessed on 4 March 2024)
140Arithmetic compositionhttps://www.wikiart.org/en/theo-van-doesburg/arithmetic-composition-1929 (accessed on 4 March 2024)
141Composition Z VIIIhttps://www.wikiart.org/en/laszlo-moholy-nagy/composition-z-viii-1924 (accessed on 4 March 2024)
142Circle and Square in Spacehttps://www.wikiart.org/en/henryk-berlewi/circle-and-square-in-space-1923 (accessed on 4 March 2024)
143Composition in Color Ahttps://www.wikiart.org/en/piet-mondrian/composition-in-color-a-1917 (accessed on 4 March 2024)
144Compositionhttps://www.wikiart.org/en/georges-vantongerloo/composition-1918 (accessed on 4 March 2024)
145Suprematismhttps://www.wikiart.org/en/kazimir-malevich/suprematism-4 (accessed on 4 March 2024)
146Painterly Architectonichttps://www.wikiart.org/en/lyubov-popova/painterly-architectonic (accessed on 4 March 2024)
147Proun 1 Chttps://www.wikiart.org/en/el-lissitzky/proun-1-c-1919 (accessed on 4 March 2024)
148Composition of Circles and Overlapping Angleshttps://www.wikiart.org/en/sophie-taeuber-arp/composition-of-circles-and-overlapping-angles-1930 (accessed on 4 March 2024)
149Composition dans un cerclehttps://www.wikiart.org/en/sophie-taeuber-arp/composition-dans-un-cercle-1937 (accessed on 4 March 2024)
150Abstract Compositionhttps://www.wikiart.org/en/edward-wadsworth/abstract-composition-1915 (accessed on 4 March 2024)
151Suprematismhttps://www.wikiart.org/en/kazimir-malevich/suprematism-1915-1 (accessed on 4 March 2024)
152Sculptural Composition ‘Three Russian Revolutions. 1825, 1905 and 1917’. Sketchhttps://www.wikiart.org/en/vasyl-yermylov/sculptural-composition-three-russian-revolutions-1825-1905-and-1917-sketch-1925 (accessed on 4 March 2024)
153Suprematism with Blue Triangle and Black Squarehttps://www.wikiart.org/en/kazimir-malevich/suprematism-with-blue-triangle-and-black-square-1915 (accessed on 4 March 2024)
154Suprematismhttps://www.wikiart.org/en/kazimir-malevich/suprematism-1915-6 (accessed on 4 March 2024)
155Collagehttps://www.wikiart.org/en/ad-reinhardt/collage-1938 (accessed on 4 March 2024)
156Compositionhttps://www.wikiart.org/en/theo-van-doesburg/composition-1917 (accessed on 4 March 2024)
157Untitledhttps://www.wikiart.org/en/ramirez-villamizar/untitled-1958 (accessed on 4 March 2024)
158Sextanthttps://www.wikiart.org/en/marsden-hartley/sextant-1917 (accessed on 4 March 2024)
159Sculptural studyhttps://www.wikiart.org/en/lyman-kipp/sculptural-study-1964 (accessed on 4 March 2024)
160The messenger of autumnhttps://www.wikiart.org/en/paul-klee/the-messenger-of-autumn-1922 (accessed on 4 March 2024)
161Dreiformvariationhttps://www.wikiart.org/en/carl-buchheister/dreiformvariation-1928 (accessed on 4 March 2024)
162Compositionhttps://www.wikiart.org/en/michel-seuphor/composition-1928 (accessed on 4 March 2024)
163Composizionehttps://www.wikiart.org/en/bruno-munari/composizione-1950 (accessed on 4 March 2024)
164Untitledhttps://www.wikiart.org/en/bruno-munari/untitled-1951 (accessed on 4 March 2024)
165Picture T 21https://www.wikiart.org/en/willi-baumeister/picture-t-21-1922 (accessed on 4 March 2024)
166Farfallahttps://www.wikiart.org/en/pettoruti/farfalla (accessed on 4 March 2024)
167Suprematist Paintinghttps://www.wikiart.org/en/kazimir-malevich/suprematist-painting-1916 (accessed on 4 March 2024)
16860-T-18https://www.wikiart.org/en/martin-barre/60-t-18-1960 (accessed on 4 March 2024)
169Komposition in Schwarz und Weißhttps://www.wikiart.org/en/otto-freundlich/komposition-in-schwarz-und-wei-1936 (accessed on 4 March 2024)
170Untitledhttps://www.wikiart.org/en/erich-buchholz/untitled-1920-1 (accessed on 4 March 2024)
171Red Comphttps://www.wikiart.org/en/erich-buchholz/red-comp-1968 (accessed on 4 March 2024)
172Red Rotationhttps://www.wikiart.org/en/victor-servranckx/red-rotation-1922 (accessed on 4 March 2024)
173Komposition rotes Dreieckhttps://www.wikiart.org/en/carl-buchheister/komposition-rotes-dreieck-1934 (accessed on 4 March 2024)
174Untitledhttps://www.wikiart.org/en/james-lee-byars/untitled-1959-1 (accessed on 4 March 2024)
175Composition No. 15https://www.wikiart.org/en/friedrich-vordemberge-gildewart/composition-no-15-1925 (accessed on 4 March 2024)
176Announcerhttps://www.wikiart.org/en/el-lissitzky/announcer-1923 (accessed on 4 March 2024)
177Bild mit schwarzem Keilhttps://www.wikiart.org/en/carl-buchheister/bild-mit-schwarzem-keil-1931 (accessed on 4 March 2024)
178Composition K IVhttps://www.wikiart.org/en/laszlo-moholy-nagy/composition-k-iv-1922 (accessed on 4 March 2024)
179AXL IIhttps://www.wikiart.org/en/laszlo-moholy-nagy/axl-ii-1927 (accessed on 4 March 2024)
180Non-Objective Composition (Suprematism)https://www.wikiart.org/en/olga-rozanova/non-objective-composition-suprematism (accessed on 4 March 2024)
181Opus 14https://www.wikiart.org/en/victor-servranckx/opus-14-1927 (accessed on 4 March 2024)
182Untitledhttps://www.wikiart.org/fr/sandu-darie/untitled-1950-1 (accessed on 4 March 2024)
183Composiciónhttps://www.wikiart.org/en/costigliolo/composicion-1954 (accessed on 4 March 2024)
184Coexistenta Spatiilor Contradictorii IIhttps://www.wikiart.org/en/gil-nicolescu/gil-nicolescu-coexistenta-spatiilor-contradictorii-ii-1971-1971 (accessed on 4 March 2024)
185Untitledhttps://www.wikiart.org/en/tony-smith/untitled-1962 (accessed on 4 March 2024)
186Study for a Paintinghttps://www.wikiart.org/en/ad-reinhardt/study-for-a-painting-1938-1 (accessed on 4 March 2024)
187Am 7 (26)https://www.wikiart.org/en/laszlo-moholy-nagy/am-7-26-1926 (accessed on 4 March 2024)
188Mechano-Facturahttps://www.wikiart.org/en/henryk-berlewi/mechano-factura-1924 (accessed on 4 March 2024)
189Constructivist Compositionhttps://www.wikiart.org/en/lajos-kassak/constructivist-composition-1921 (accessed on 4 March 2024)
190Striped Picturehttps://www.wikiart.org/en/charlotte-posenenske/striped-picture-1965-1 (accessed on 4 March 2024)
191Proun 30https://www.wikiart.org/en/el-lissitzky/proun-30-1920 (accessed on 4 March 2024)
192Untitledhttps://www.wikiart.org/en/gyorgy-kepes/untitled-3 (accessed on 4 March 2024)
193Mechano-Faktura Constructionhttps://www.wikiart.org/en/henryk-berlewi/mechano-faktura-construction-1924 (accessed on 4 March 2024)
194A IIhttps://www.wikiart.org/en/laszlo-moholy-nagy/a-ii-1924 (accessed on 4 March 2024)
195Compositionhttps://www.wikiart.org/en/georges-vantongerloo/composition-1921 (accessed on 4 March 2024)
196Non-objective Compositionhttps://www.wikiart.org/en/olga-rozanova/non-objective-composition-1917 (accessed on 4 March 2024)
197Untitled #5 (2-17-55)https://www.wikiart.org/en/myron-stout/untitled-5-2-17-55-1955 (accessed on 4 March 2024)
198Composition Vhttps://www.wikiart.org/en/michel-seuphor/composition-v-1929 (accessed on 4 March 2024)
199Compositionhttps://www.wikiart.org/en/cesar-domela/composition-1926 (accessed on 4 March 2024)
200Suprematist Paintinghttps://www.wikiart.org/en/kazimir-malevich/suprematist-painting-1916 (accessed on 4 March 2024)
201Suprematism with Eight Red Rectangleshttps://www.wikiart.org/en/kazimir-malevich/suprematism-with-eight-rectangles-1915 (accessed on 4 March 2024)
202Untitledhttps://www.wikiart.org/en/john-ferren/untitled-1932 (accessed on 4 March 2024)
203Non-Objective Composition (Suprematism)https://www.wikiart.org/en/olga-rozanova/non-objective-composition-suprematism-2 (accessed on 4 March 2024)
204Sil Ihttps://www.wikiart.org/en/laszlo-moholy-nagy/sil-i-1933 (accessed on 4 March 2024)
205Sahttps://www.wikiart.org/en/yves-gaucher/sa-1962 (accessed on 4 March 2024)
206O. T. (Croisement de droites, plans)https://www.wikiart.org/en/sophie-taeuber-arp/o-t-croisement-de-droites-plans-1942 (accessed on 4 March 2024)
207Proun 19Dhttps://www.wikiart.org/en/el-lissitzky/proun-19d-1922 (accessed on 4 March 2024)
208Schwarz Rot Goldhttps://www.wikiart.org/en/erich-buchholz/schwarz-rot-gold-1921 (accessed on 4 March 2024)
209Composition No. 4 (Leaving The Factory)https://www.wikiart.org/en/bart-van-der-leck/composition-no-4-leaving-the-factory-1917 (accessed on 4 March 2024)
210Simultaneous Counter Compositionhttps://www.wikiart.org/en/theo-van-doesburg/simultaneous-counter-composition-1930 (accessed on 4 March 2024)
211Z IIhttps://www.wikiart.org/en/laszlo-moholy-nagy/z-ii (accessed on 4 March 2024)
212Black and White Formhttps://www.wikiart.org/en/doug-ohlson/black-and-white-form-1962 (accessed on 4 March 2024)
213Picturial Architecturehttps://www.wikiart.org/en/lajos-kassak/picturial-architecture-1922 (accessed on 4 March 2024)
214Black Lines 1https://www.wikiart.org/en/georgia-o-keeffe/black-lines-1 (accessed on 4 March 2024)
215Compositionhttps://www.wikiart.org/en/f-lix-del-marle/composition-1925 (accessed on 4 March 2024)
216Black Sunhttps://www.wikiart.org/en/alexander-calder/black-sun-1953 (accessed on 4 March 2024)
217Composition Xhttps://www.wikiart.org/fr/theo-van-doesburg/composition-x-1918 (accessed on 4 March 2024)
218Verde Azul y Negrohttps://www.wikiart.org/en/ramirez-villamizar/verde-azul-y-negro-1958 (accessed on 4 March 2024)
219Composição—linhas horizontaishttps://www.wikiart.org/en/lothar-charoux/composi-o-linhas-horizontais-1950 (accessed on 4 March 2024)
220Eclipse lunarhttps://www.wikiart.org/en/ramirez-villamizar/eclipse-lunar-1999 (accessed on 4 March 2024)
221Suprematist Composition: Aeroplane Flyinghttps://www.wikiart.org/en/kazimir-malevich/aeroplane-flying-1915 (accessed on 4 March 2024)
222Untitledhttps://www.wikiart.org/en/balcomb-greene/untitled-1936 (accessed on 4 March 2024)
223Untitled—Laca a La Piroxilinahttps://www.wikiart.org/en/costigliolo/untitled-laca-a-la-piroxilina-1950 (accessed on 4 March 2024)
224The letter scalehttps://www.wikiart.org/en/gosta-adrian-nilsson/the-letter-scale-1923 (accessed on 4 March 2024)
225Proun 43https://www.wikiart.org/en/el-lissitzky/proun-43 (accessed on 4 March 2024)
226Proun Interpenetrating Planeshttps://www.wikiart.org/en/el-lissitzky/proun-interpenetrating-planes (accessed on 4 March 2024)
227332 rhttps://www.wikiart.org/en/carl-buchheister/332-r-1932 (accessed on 4 March 2024)
228Composition XIII (Woman in studio)https://www.wikiart.org/en/theo-van-doesburg/composition-xiii-woman-in-studio-1918 (accessed on 4 March 2024)
229F1https://www.wikiart.org/en/pierre-daura/f1-1930 (accessed on 4 March 2024)
230Counter composition XIIIhttps://www.wikiart.org/en/theo-van-doesburg/counter-composition-xiii-1925 (accessed on 4 March 2024)
231Color Perspectivehttps://www.wikiart.org/en/jacques-villon/color-perspective-1922 (accessed on 4 March 2024)
232Suprematismhttps://www.wikiart.org/en/kazimir-malevich/suprematism-1 (accessed on 4 March 2024)
233Machine with Red Squarehttps://www.wikiart.org/en/willi-baumeister/machine-with-red-square-1926 (accessed on 4 March 2024)
234Fluxhttps://www.wikiart.org/en/gil-nicolescu/gil-nicolescu-flux-1968-1968 (accessed on 4 March 2024)
235Non-Objective Composition (Suprematism)https://www.wikiart.org/en/olga-rozanova/non-objective-composition-suprematism-1 (accessed on 4 March 2024)
236Composition cubistehttps://www.wikiart.org/en/auguste-herbin/composition-cubiste-1917 (accessed on 4 March 2024)
237On White IIhttps://www.wikiart.org/en/wassily-kandinsky/on-white-ii-1923 (accessed on 4 March 2024)
238Vertical and horizontal compositionhttps://www.wikiart.org/en/sophie-taeuber-arp/vertical-and-horizontal-composition-1916 (accessed on 4 March 2024)
239Room (Space Construction)https://www.wikiart.org/en/peter-laszlo-peri/room-space-construction-1921 (accessed on 4 March 2024)
240Suprematismhttps://www.wikiart.org/en/kazimir-malevich/suprematism-1917 (accessed on 4 March 2024)
241Untitledhttps://www.wikiart.org/en/johannes-itten/untitled-1966 (accessed on 4 March 2024)
242Abstract 5https://www.wikiart.org/en/rene-marcil/abstract-5-1965 (accessed on 4 March 2024)
243Proun G7https://www.wikiart.org/en/el-lissitzky/proun-g7-1923 (accessed on 4 March 2024)
244Suprematismhttps://www.wikiart.org/en/kazimir-malevich/suprematism-1915 (accessed on 4 March 2024)
245Suprematismhttps://www.wikiart.org/en/kazimir-malevich/suprematism-1915-5 (accessed on 4 March 2024)
246Pejzaż morski, 2 VIIhttps://www.wikiart.org/en/wladyslaw-strzeminski/pejza-morski-2-vii-1934 (accessed on 4 March 2024)
247Flamelet Picturehttps://www.wikiart.org/en/willi-baumeister/flamelet-picture-1931 (accessed on 4 March 2024)
248Proun 30 Thttps://www.wikiart.org/en/el-lissitzky/proun-30-t (accessed on 4 March 2024)
249Black and Red Tensionhttps://www.wikiart.org/en/balcomb-greene/black-and-red-tension-1935 (accessed on 4 March 2024)

References

  1. Baumgarten, A.G. Aesthetica; Impens Ioannis Christiani Kleyb: 1763. Available online: https://books.google.com.sg/books?id=wMfu3eOBiVEC&pg (accessed on 4 March 2024).
  2. Zeki, S. Inner Vision: An Exploration of Art and the Brain; Oxford University Press: New York, NY, USA, 2000. [Google Scholar]
  3. Leder, H.; Belke, B.; Oeberst, A.; Augustin, D. A model of aesthetic appreciation and aesthetic judgments. Br. J. Psychol. 2004, 95, 489–508. [Google Scholar] [CrossRef] [PubMed]
  4. Leder, H.; Nadal, M. Ten years of a model of aesthetic appreciation and aesthetic judgments: The aesthetic episode—Developments and challenges in empirical aesthetics. Br. J. Psychol. 2014, 105, 443–464. [Google Scholar] [CrossRef] [PubMed]
  5. Guy, M.W.; Reynolds, G.D.; Mosteller, S.M.; Dixon, K.C. The effects of stimulus symmetry on hierarchical processing in infancy. Dev. Psychobiol. 2017, 59, 279–290. [Google Scholar] [CrossRef]
  6. Jacobsen, T.; Höfel, L. Aesthetics Electrified: An Analysis of Descriptive Symmetry and Evaluative Aesthetic Judgment Processes Using Event-Related Brain Potentials. Empir. Stud. Arts 2001, 19, 177–190. [Google Scholar] [CrossRef]
  7. Leder, H.; Tinio, P.P.L.; Brieber, D.; Kröner, T.; Jacobsen, T.; Rosenberg, R. Symmetry Is Not a Universal Law of Beauty. Empir. Stud. Arts 2019, 37, 104–114. [Google Scholar] [CrossRef]
  8. van Paasschen, J.; Bacci, F.; Melcher, D.P.; Brattico, E. The Influence of Art Expertise and Training on Emotion and Preference Ratings for Representational and Abstract Artworks. PLoS ONE 2015, 10, e0134241. [Google Scholar] [CrossRef]
  9. Zeki, S.; Chén, O.Y.; Romaya, J.P. The Biological Basis of Mathematical Beauty. Front. Hum. Neurosci. 2018, 12, 467. [Google Scholar] [CrossRef]
  10. Kirk, U.; Skov, M.; Hulme, O.; Christensen, M.S.; Zeki, S. Modulation of aesthetic value by semantic context: An fMRI study. Neuroimage 2009, 44, 1125–1132. [Google Scholar] [CrossRef] [PubMed]
  11. Estrada-Gonzalez, V.; East, S.; Garbutt, M.; Spehar, B. Viewing Art in Different Contexts. Front. Psychol. 2020, 11, 569. [Google Scholar] [CrossRef] [PubMed]
  12. Birkhoff, G.D. Aesthetic Measure; Harvard University Press: Cambridge, MA, USA, 1933. [Google Scholar]
  13. Liu, Y. Engineering aesthetics and aesthetic ergonomics: Theoretical foundations and a dual-process research methodology. Ergonomics 2003, 46, 1273–1292. [Google Scholar] [CrossRef]
  14. Perc, M. Beauty in artistic expressions through the eyes of networks and physics. J. R. Soc. Interface 2020, 17, 20190686. [Google Scholar] [CrossRef] [PubMed]
  15. Hoenig, F. Defining Computational Aesthetics, Computational Aesthetics in Graphics. Vis. Imaging 2005, 2005, 13–18. [Google Scholar]
  16. Haralick, R.M. Statistical and Structural Approaches to Texture. Proc. IEEE 1979, 67, 786–804. [Google Scholar] [CrossRef]
  17. Falomir, Z.; Museros, L.; Sanz, I.; Gonzalez-Abril, L. Categorizing paintings in art styles based on qualitative color descriptors, quantitative global features and machine learning (QArt-Learn). Expert Syst. Appl. 2018, 97, 83–94. [Google Scholar] [CrossRef]
  18. Sandoval, C.; Pirogova, E.; Lech, M. Two-stage deep learning approach to the classification of fine-art paintings. IEEE Access 2019, 7, 41770–41781. [Google Scholar] [CrossRef]
  19. Zhong, S.; Huang, X.; Xiao, Z. Fine-art painting classification via two-channel dual path networks. Int. J. Mach. Learn. Cybern. 2020, 11, 137–152. [Google Scholar] [CrossRef]
  20. Baraldi, L.; Cornia, M.; Grana, C.; Cucchiara, R. Aligning Text and Document Illustrations: Towards Visually Explainable Digital Humanities. In Proceedings of the 2018 24th International Conference on Pattern Recognition (ICPR), Beijing, China, 20–24 August 2018.
  21. Tan, W.; Wang, J.; Wang, Y.; Lewis, M.; Jarrold, W. CNN Models for Classifying Emotions Evoked by Paintings; Technical Report; SVL Lab, Stanford University: Stanford, CA, USA, 2018. [Google Scholar]
  22. Yanulevskaya, V.; Uijlings, J.; Bruni, E.; Sartori, A.; Zamboni, E.; Bacci, F.; Melcher, D.; Sebe, N. In the eye of the beholder: Employing statistical analysis and eye tracking for analyzing abstract paintings. In Proceedings of the 20th ACM International Conference on Multimedia, Nara, Japan, 29 October–2 November 2012.
  23. Markovic, S. Perceptual, Semantic and Affective Dimensions of Experience of Abstract and Representational Paintings. Psihologija 2011, 44, 191–210. [Google Scholar] [CrossRef]
  24. Navon, D. The forest revisited: More on global precedence. Psychol. Res. 1981, 43, 1–32. [Google Scholar] [CrossRef]
  25. Oliva, A.; Torralba, A. Building the gist of a scene: The role of global image features in recognition. Prog. Brain Res. 2006, 155, 23. [Google Scholar] [PubMed]
  26. Navon, D. Forest before trees: The precedence of global features in visual perception. Cogn. Psychol. 1977, 9, 353–383. [Google Scholar] [CrossRef]
  27. Schütz, A.C.; Braun, D.I.; Gegenfurtner, K.R. Eye movements and perception: A selective review. J. Vis. 2011, 11, 9. [Google Scholar] [CrossRef] [PubMed]
  28. Love, B.C.; Rouder, J.N.; Wisniewski, E.J. A structural account of global and local processing. Cogn. Psychol. 1999, 38, 291–316. [Google Scholar] [CrossRef] [PubMed]
  29. Sharvashidze, N.; Schutz, A.C. Task-Dependent Eye-Movement Patterns in Viewing Art. J. Eye Mov. Res. 2020, 13, 1–17. [Google Scholar] [CrossRef]
  30. Park, S.; Wiliams, L.; Chamberlain, R. Global Saccadic Eye Movements Characterise Artists’ Visual Attention While Drawing. Empir. Stud. Arts 2022, 40, 228–244. [Google Scholar] [CrossRef]
  31. Koide, N.; Kubo, T.; Nishida, S.; Shibata, T.; Ikeda, K. Art expertise reduces influence of visual salience on fixation in viewing abstract-paintings. PLoS ONE 2015, 10, e0117696. [Google Scholar] [CrossRef]
  32. Soxibov, R. Composition and Its Application in Painting. Sci. Innov. 2023, 2, 108–113. [Google Scholar]
  33. Lelievre, P.; Neri, P. A deep-learning framework for human perception of abstract art composition. J. Vis. 2021, 21, 9. [Google Scholar] [CrossRef] [PubMed]
  34. Wang, G.; Shen, J.; Yue, M.; Ma, Y.; Wu, S. A Computational Study of Empty Space Ratios in Chinese Landscape Painting, 618–2011. Leonardo 2022, 55, 43–47. [Google Scholar] [CrossRef]
  35. Fan, Z.; Zhang, K.; Zheng, X.S. Evaluation and Analysis of White Space in Wu Guanzhong’s Chinese Paintings. Leonardo 2019, 52, 111–116. [Google Scholar] [CrossRef]
  36. Mollica, P. Color Theory: An Essential Guide to Color-From Basic Principles to Practical Applications; Walter Foster: Irvine, CA, USA, 2013; p. 17. [Google Scholar]
  37. Zhai, Q.; Luo, M.R.; Liu, X.Y. The impact of illuminance and colour temperature on viewing fine art paintings under LED lighting. Lighting Res. Technol. 2015, 47, 795–809. [Google Scholar] [CrossRef]
  38. Feng, Y.; Wang, Z.; Zhang, M.; Qin, X.; Liu, T. Exploring the Influence of the Illumination and Painting Tone of Art Galleries on Visual Comfort. Photonics 2022, 9, 981. [Google Scholar] [CrossRef]
  39. Haralick, R.M.; Shanmugam, K.; Dinstein, I. Textural Features for Image Classification. IEEE Trans. Syst. Man Cybern. 1973, SMC-3, 610–621. [Google Scholar] [CrossRef]
  40. Ojala, T.; Pietikainen, M.; Maenpaa, T. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 971–987. [Google Scholar] [CrossRef]
  41. Hammouda, K.; Jernigan, E. Texture segmentation using gabor filters. Cent. Intell. Mach 2000, 2, 64–71. [Google Scholar]
  42. Fan, Z.; Li, Y.; Zhang, K.; Yu, J.; Huang, M.L. Measuring and Evaluating the Visual Complexity Of Chinese Ink Paintings. Comput. J. 2022, 65, 1964–1976. [Google Scholar] [CrossRef]
  43. Finkel, R.A.; Bentley, J.L. Quad trees a data structure for retrieval on composite keys. Acta Inform. 1974, 4, 1–9. [Google Scholar] [CrossRef]
  44. Berger, H. Über das elektroenkephalogramm des menschen. Arch. Psychiatr. Nervenkrankh. 1929, 87, 527–570. [Google Scholar] [CrossRef]
  45. Walter, W.G.; Cooper, R.; Aldridge, V.J.; Mccallum, W.C.; Winter, A.L. Contingent Negative Variation: An Electric Sign of Sensori-Motor Association and Expectancy in the Human Brain. Nature 1964, 203, 380–384. [Google Scholar] [CrossRef]
  46. Boser, B.; Guyon, I.; Vapnik, V. A training algorithm for optimal margin classifiers. In Proceedings of the Fifth Annual Workshop on Computational Learning Theory, Pittsburgh, PA, USA, 27–29 July 1992. [Google Scholar]
  47. Wong, K.K. Cybernetical intelligence: Engineering Cybernetics with Machine Intelligence; John Wiley & Sons: Hoboken, NJ, USA, 2023. [Google Scholar]
  48. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297. [Google Scholar] [CrossRef]
  49. Chang, C.; Lin, C. LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol. 2011, 2, 1–27. [Google Scholar] [CrossRef]
  50. Shawe-Taylor, J.; Cristianini, N. Kernel Methods for Pattern Analysis; Cambridge University Press: Cambridge, UK, 2004. [Google Scholar]
  51. Bishop, C.M.; Nasrabadi, N.M. Pattern Recognition and Machine Learning; Springer: Berlin/Heidelberg, Germany, 2006. [Google Scholar]
  52. Noguchi, Y.; Murota, M. Temporal dynamics of neural activity in an integration of visual and contextual information in an esthetic preference task. Neuropsychologia 2013, 51, 1077–1084. [Google Scholar] [CrossRef]
  53. Grüner, S.; Specker, E.; Leder, H. Effects of Context and Genuineness in the Experience of Art. Empir. Stud. Arts 2019, 37, 138–152. [Google Scholar] [CrossRef]
  54. Petcu, E.B. The Rationale for a Redefinition of Visual Art Based on Neuroaesthetic Principles. Leonardo 2018, 51, 59–60. [Google Scholar] [CrossRef]
  55. Sbriscia-Fioretti, B.; Berchio, C.; Freedberg, D.; Gallese, V.; Umiltà, M.A.; Di Russo, F. ERP modulation during observation of abstract paintings by Franz Kline. PLoS ONE 2013, 8, e75241. [Google Scholar] [CrossRef]
  56. O’Doherty, J.; Critchley, H.; Deichmann, R.; Dolan, R.J. Dissociating valence of outcome from behavioral control in human orbital and ventral prefrontal cortices. J. Neurosci. 2003, 23, 7931–7939. [Google Scholar] [CrossRef] [PubMed]
  57. Schultz, W.; Tremblay, L. Relative reward preference in primate orbitofrontal cortex. Nature 1999, 398, 704–708. [Google Scholar]
  58. Thai, C.H. Electrophysiological Measures of Aesthetic Processing. Doctoral Dissertation, Swinburne University of Technology, Melbourne, Australia, 2019. [Google Scholar]
  59. Munar, E.; Nadal, M.; Rosselló, J.; Flexas, A.; Moratti, S.; Maestú, F.; Marty, G.; Cela-Conde, C.J.; Martinez, L.M. Lateral orbitofrontal cortex involvement in initial negative aesthetic impression formation. PLoS ONE 2012, 7, e38152. [Google Scholar] [CrossRef] [PubMed]
  60. Medathati, N.V.K.; Neumann, H.; Masson, G.S.; Kornprobst, P. Bio-inspired computer vision: Towards a synergistic approach of artificial and biological vision. Comput. Vis. Image Underst. 2016, 150, 1–30. [Google Scholar] [CrossRef]
  61. Milner, A.D.; Goodale, M.A. Two visual systems re-viewed. Neuropsychologia 2008, 46, 774–785. [Google Scholar] [CrossRef]
  62. Goodale, M.A.; Milner, A.D. Separate visual pathways for perception and action. Trends Neurosci. 1992, 15, 20–25. [Google Scholar] [CrossRef]
  63. Zhong, H.; Wang, R. Neural mechanism of visual information degradation from retina to V1 area. Cogn. Neurodyn. 2021, 15, 299–313. [Google Scholar] [CrossRef] [PubMed]
  64. Bayram, A.; Karahan, E.; Bilgiç, B.; Ademoglu, A.; Demiralp, T. Achromatic temporal-frequency responses of human lateral geniculate nucleus and primary visual cortex. Vis. Res. 2016, 127, 177–185. [Google Scholar] [CrossRef] [PubMed]
  65. Willmore, B.D.; Prenger, R.J.; Gallant, J.L. Neural representation of natural images in visual area V2. J. Neurosci. 2010, 30, 2102–2114. [Google Scholar] [CrossRef] [PubMed]
  66. Rolls, E.T.; Deco, G.; Huang, C.; Feng, J. Multiple cortical visual streams in humans. Cereb. Cortex 2023, 33, 3319–3349. [Google Scholar] [CrossRef] [PubMed]
  67. Hegde, J.; Van Essen, D.C. Selectivity for Complex Shapes in Primate Visual Area V2. J. Neurosci. 2000, 20, 61. [Google Scholar] [CrossRef] [PubMed]
  68. Silva, A.E.; Thompson, B.; Liu, Z. Motion opponency examined throughout visual cortex with multivariate pattern analysis of fMRI data. Hum. Brain Mapp. 2021, 42, 5–13. [Google Scholar] [CrossRef]
  69. Bannert, M.M.; Bartels, A. Human V4 Activity Patterns Predict Behavioral Performance in Imagery of Object Color. J. Neurosci. 2018, 38, 3657–3668. [Google Scholar] [CrossRef] [PubMed]
  70. Kim, T.; Bair, W.; Pasupathy, A. Neural Coding for Shape and Texture in Macaque Area V4. J. Neurosci. 2019, 39, 4760–4774. [Google Scholar] [CrossRef]
  71. Vetter, P.; Grosbras, M.; Muckli, L. TMS over V5 disrupts motion prediction. Cereb. Cortex 2015, 25, 1052–1059. [Google Scholar] [CrossRef] [PubMed]
  72. Baumgartner, H.M.; Graulty, C.J.; Hillyard, S.A.; Pitts, M.A. Does spatial attention modulate the earliest component of the visual evoked potential? In The Cognitive Neuroscience of Attention; Routledge: London, UK, 2020; pp. 9–24. [Google Scholar]
  73. Conte, S.; Richards, J.E.; Guy, M.W.; Xie, W.; Roberts, J.E. Face-sensitive brain responses in the first year of life. Neuroimage 2020, 211, 116602. [Google Scholar] [CrossRef]
  74. Schindler, S.; Bruchmann, M.; Gathmann, B.; Moeck, R.; Straube, T. Effects of low-level visual information and perceptual load on P1 and N170 responses to emotional expressions. Cortex 2021, 136, 14–27. [Google Scholar] [CrossRef] [PubMed]
  75. Jiang, X. Evaluation of Aesthetic Response to Clothing Color Combination: A Behavioral and Electrophysiological Study. J. Fiber Bioeng. Inform. 2018, 6, 405–414. [Google Scholar] [CrossRef]
  76. Liang, T.; Lau, B.T.; White, D.; Barron, D.; Zhang, W.; Yue, Y.; Ogiela, M. Artificial Aesthetics: Bridging Neuroaesthetics and Machine Learning. In Proceedings of the 2024 8th International Conference on Control Engineering and Artificial Intelligence, Shanghai, China, 26–28 January 2024; ACM: New York, NY, USA, 2024. [Google Scholar]
  77. Li, R.; Zhang, J. Review of computational neuroaesthetics: Bridging the gap between neuroaesthetics and computer science. Brain Inform. 2020, 7, 16. [Google Scholar] [CrossRef] [PubMed]
  78. Botros, C.; Mansour, Y.; Eleraky, A. Architecture Aesthetics Evaluation Methodologies of Humans and Artificial Intelligence. MSA Eng. J. 2023, 2, 450–462. [Google Scholar] [CrossRef]
  79. Skov, M.; Nadal, M. A Farewell to Art: Aesthetics as a Topic in Psychology and Neuroscience. Perspect. Psychol. Sci. 2020, 15, 630–642. [Google Scholar] [CrossRef]
  80. Zeki, S.; Bao, Y.; Poppel, E. Neuroaesthetics: The art, science, and brain triptych. Psychol. J. 2020, 9, 427–428. [Google Scholar] [CrossRef]
Figure 1. The calculation of blank space in Suprematist Composition: Airplane Flying (images processed by authors as fair use from wikiart.org) https://www.wikiart.org/en/kazimir-malevich/aeroplane-flying-1915 (accessed on 4 March 2024).
Figure 2. Kernels of different wavelengths λ and angles θ.
Figure 3. Example images of features.
Figure 4. Illustration of the stimulus paradigm applied.
Figure 5. Grand-average event-related brain potentials and isopotential contour plot (200–1000 ms) for genuine and fake context. N = 12.
Figure 6. Grand-average event-related brain potentials and isopotential contour plot (50–120 ms) for context (genuine, fake) × composition (Class I, Class II). N = 12.
Figure 7. Grand-average event-related brain potentials and isopotential contour plot (200–300 ms) for context (genuine, fake) × tone (Class I, Class II). N = 12.
Figure 8. Grand-average event-related brain potentials and isopotential contour plot (70–130 ms) for context (genuine, fake) × Gabor-Mean (Class I, Class II). N = 12.
Figure 9. Grand-average event-related brain potentials and isopotential contour plot (70–130 ms) for context (genuine, fake) × Gabor-Variance (Class I, Class II). N = 12.
Figure 10. Grand-average event-related brain potentials and isopotential contour plot (70–130 ms and 200–300 ms) for context (genuine, fake) × Gabor-Energy (Class I, Class II). N = 12.
Figure 11. Grand-average event-related brain potentials and isopotential contour plot (500–1000 ms) for context (genuine, fake) × horizontal GLCM (Class I, Class II). N = 12.
Figure 12. Grand-average event-related brain potentials and isopotential contour plot (70–140 ms and 500–1000 ms) for context (genuine, fake) × diagonal GLCM (Class I, Class II). N = 12.
Figure 13. Grand-average event-related brain potentials and isopotential contour plot (300–1000 ms) for context (genuine, fake) × LBP (Class I, Class II). N = 12.
Figure 14. Performance of SVM models with varying C and γ values: (a) the ACC with different C and γ combinations; (b) the AUC with different C and γ combinations. (The closer the color is to red, the higher the value; the closer it is to blue, the lower the value).
Table 1. Image features classification and description.
Features | Class (N) | Description
Composition | I (123), II (126) | Class I has a larger blank area compared to Class II.
Tone | I (105), II (144) | The average brightness of Class I is lower than that of Class II, and the gray level distribution is more discrete.
Gabor-Mean | I (117), II (132) | In the following (λ, θ) combinations: (6, 0), (6, π/2), (9, π/8), (9, 3π/8), (9, 5π/8), (9, 7π/8), (12, 0), (12, π/8), (12, 3π/8), (12, π/2), (12, 5π/8), (12, 7π/8), (15, 0), (15, π/2), the grayscale mean of the filtered images in Class I is higher than that of Class II. This indicates that Class I has a higher overall brightness in these combinations. Conversely, Class II exhibits more high-brightness features in the other combinations.
Gabor-Variance | I (130), II (119) | In all (λ, θ) combinations, Class II has more high-frequency components and greater texture changes after Gabor filter processing, which suggests that it contains more diverse and complex textures.
Gabor-Energy | I (122), II (127) | In the following (λ, θ) combinations: (6, 0), (6, π/2), (9, π/8), (9, 3π/8), (9, 5π/8), (9, 7π/8), (12, 0), (12, π/8), (12, 3π/8), (12, π/2), (12, 5π/8), (12, 7π/8), (15, 0), (15, π/2), the energy gray level of Class I after Gabor filter processing is greater. The features in Class I are more pronounced and intense, likely containing more edges, lines, or texture features in these combinations. Conversely, Class II contains more prominent edges or complex textures in the other combinations.
Horizontal GLCM (θ = 0°) | I (123), II (126) | In the horizontal direction, compared to Class II, the texture features of Class I are more obvious, with high-contrast edges and lines, higher regularity, and more texture patterns.
Diagonal GLCM (θ = 135°) | I (118), II (131) | In the diagonal direction, compared to Class II, the texture features of Class I are more obvious, with high-contrast edges and lines, higher regularity, and more texture patterns.
LBP | I (113), II (136) | Compared to Class II, Class I has weaker texture contrast but richer details, a uniform texture distribution, and more small, intense details.
Table 2. Details of input and output layer data.
Input layer: context and features, together with the average ERP amplitude (μV) over the listed channels and time windows. Output layer: aesthetic evaluation.
Context and Features | Channel | Time Window/ms
Context: Genuine/Fake | FP1, FPZ, FP2 | 200–1000
Composition: Blank Space | PZ, POZ | 50–120
Tone: Gray Histogram | OZ | 200–300
Global Texture: Gabor-Mean | P7, PO3, PO5, PO7 | 70–130
Global Texture: Gabor-Variance | PZ, POZ | 70–130
Global Texture: Gabor-Energy | P2, P4, PZ, PO4, PO6, POZ | 70–130
Local Texture: Horizontal GLCM | OZ | 500–1000
Local Texture: Diagonal GLCM | PO5, PO7 | 70–140
Local Texture: Diagonal GLCM | OZ | 300–1000
Local Texture: LBP | AF3, P5, FP1, FPZ | 300–1000
Table 3. Models with best combinations of C and γ.
Metrics | Values
Best C | 28.5786
Best γ | 14.2943
Train ACC | 0.79801
Test ACC | 0.76866
AUC | 0.74155
Precision | 0.78241, 0.75269
Recall | 0.78605, 0.74866
F1 | 0.78422, 0.75067
