Article

Watershed Brain Regions for Characterizing Brand Equity-Related Mental Processes

Department of Marketing, Faculty of Commerce, University of Marketing and Distribution Sciences, Kobe 651-2188, Japan
Brain Sci. 2021, 11(12), 1619; https://doi.org/10.3390/brainsci11121619
Submission received: 3 November 2021 / Revised: 28 November 2021 / Accepted: 2 December 2021 / Published: 8 December 2021
(This article belongs to the Special Issue Advances in Neuroeconomics)

Abstract

Brand equity is an important intangible asset for enterprises. Among its advantages, products with brand equity can generate more revenue than those without such equity. However, unlike tangible assets, brand equity is difficult for enterprises to manage because it exists within consumers’ minds. Although numerous consumer neuroscience studies over the past two decades have revealed brain regions related to brand equity, the identification of brain regions unique to such equity remains controversial. Therefore, this study identifies the unique brain regions related to brand equity and assesses the mental processes derived from these regions. For this purpose, three analysis methods (i.e., quantitative meta-analysis, chi-square tests, and machine learning) were conducted. The data were collected in accordance with the general procedures of a quantitative meta-analysis. In total, 65 studies (1412 foci) investigating branded objects with brand equity and unbranded objects without brand equity were examined, and the neural systems involved for these two object categories were contrasted. According to the results, the parahippocampal gyrus and the lingual gyrus were brain regions unique to brand equity, and automatic mental processes based on emotional associative memories derived from these regions were the characteristic mental processes that discriminate branded from unbranded objects.

1. Introduction

Generally, branded products with brand equity are traded at premium prices compared with unbranded products [1]. Brand equity is thus one of the most important sources of profit for enterprises and one of their greatest assets. Aaker [2] divided the elements of brand equity into five factors: brand name awareness, brand associations, brand loyalty, perceived brand quality, and other proprietary brand assets, such as patents, trademarks, and channel relationships. Thus, unlike factories and office buildings, brand equity is an asset that resides in consumers’ minds.
Numerous brand equity studies have been conducted over the past two decades to clarify how brand equity is built in consumers’ minds, including empirical studies, theoretical studies, and practical cases. Meanwhile, the number of neuroscience studies seeking to understand brand equity-related mental processes has been increasing. For example, McClure et al. [3] showed that brain activations in response to beverage products with brand equity were observed in the hippocampus (HP) and dorsolateral prefrontal cortex (DLPFC), whereas both the ventral medial prefrontal cortex (VMPFC) and ventral striatum (VS) were activated for products with low brand equity. However, activations of both the VMPFC and VS were observed in several brand equity studies [4,5,6,7,8]. These regions are known as the “neural currency network” [9]. Activations in these regions were also reported in several studies on unbranded objects [10,11,12].
Beyond these regions, activations in the medial prefrontal cortex (MPFC) have been observed in studies on both branded and unbranded objects. For example, the MPFC was activated in a comparison between familiar and unfamiliar automobile brands [13]. In related studies, Schaefer and Rotte [14] confirmed activations in the MPFC when comparing luxury automobile brands with unfamiliar brands, whereas Chen et al. [15] investigated the brain activations associated with brand personality, an element of brand association, which in turn is a component of brand equity. In the latter study, they reported activations in the MPFC, the cingulate cortex, and the caudate. Meanwhile, activations in the MPFC were reported in numerous studies on unbranded objects [16,17,18,19,20,21,22,23,24,25,26,27]. Besides these regions, other activated brain regions have been found in studies on branded objects with brand equity, e.g., the insula [28], the inferior frontal gyrus [29], and the superior frontal gyrus [30,31]. These regions have been reported to be activated either alone or in combination with other regions.
Based on the aforementioned findings, brand equity-related brain regions are highly diversified. Even though brand equity influences consumers’ decision-making, such as purchases, preferences, and attitudes, the watershed brain regions that separate brand equity-related activations from unbranded-related activations remain unknown. Therefore, the purpose of this study is to assess the unique characteristics of the mental processes associated with brand equity by identifying these watershed brain regions through a comparison between the brain regions related to brand equity and those related to unbranded objects without brand equity.

2. Materials and Methods

In order to achieve our research objective, the analyses were conducted in two steps. First, an activation likelihood estimation (ALE) was conducted to statistically clarify the distinct brain regions for branded objects with brand equity and unbranded objects without brand equity. Second, although a statistical conjunction analysis based on the ALE method was attempted to identify overlapping or distinct brain regions between the brand equity-related and the unbranded objects-related brain regions, it could not be executed. Thus, a chi-square test was adopted to characterize the brand equity-related and unbranded-related brain regions. When conducting the chi-square test, all the reported foci were categorized into the optimal number of clusters using the k-means algorithm. To assess the brain regions that can discriminate between brand equity-related and unbranded objects-related brain regions, supervised machine learning was applied. The details regarding these procedures are presented in the following sections.

2.1. Procedures of the ALE Method

A systematic literature review was conducted to select neuroimaging studies on consumers’ decision-making about branded and unbranded objects. The selection was performed using PubMed (https://pubmed.ncbi.nlm.nih.gov) as the primary database. Specifically, the search focused on studies using functional magnetic resonance imaging (fMRI), with specific terms such as “brand”, “consumer”, “fMRI”, “neural”, “choice”, “purchase”, “decision-making”, and “preference”. This search process yielded the following: 10 studies for “brand, fMRI, neural, and choice”; 0 for “brand, fMRI, neural, and purchase”; 11 for “brand, fMRI, neural, and decision-making”; 12 for “brand, fMRI, neural, and preference”; 38 for “consumer, fMRI, neural, and choice”; 12 for “consumer, fMRI, neural, and purchase”; 48 for “consumer, fMRI, neural, and decision-making”; and 26 for “consumer, fMRI, neural, and preference”.
Next, the branding studies in Plassmann’s [32] reference list were added. Based on the information in the titles and abstracts, the studies were selected according to the following criteria: (1) studies in peer-reviewed English-language journals published between January 2000 and March 2021; (2) studies that conducted fMRI scans of healthy participants; (3) studies in which branded objects were used as experimental stimuli, e.g., products, logos, and advertising with brand logos or the equivalent; (4) studies in which unbranded objects were used as experimental stimuli, e.g., products, product packages, and advertisements without brand logos or the equivalent; and (5) studies that reported activations as three-dimensional coordinates in the stereotactic space of Talairach or the Montreal Neurological Institute (MNI).
It should be noted that two studies [33,34] did not directly use branded objects as experimental stimuli. However, they were ultimately included in the branded objects group for quantitative synthesis because their stimuli were regarded as similar to branded ones; these two studies were also included in Plassmann’s reference list [32]. In addition, since Knutson et al. [35] adopted a shopping task as the experimental task, the stimuli used in their study presumably included logos and characteristic package designs. However, their study was included in the unbranded objects group because they controlled the attractiveness of the stimuli and their objective was to assess the influences of brand equity on consumers’ decision-making; in other words, they treated all the experimental stimuli as equivalents. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow diagram (see Figure 1) provides details of this screening process. For the present meta-analysis (see Supplementary Table S1a,b), 26 studies (679 foci) were included in the branded objects group, while 39 studies (733 foci) were included in the unbranded objects group. In addition, as in other meta-analytical neuroimaging studies, all activation foci originally reported in Talairach space were converted to MNI space using a transformation algorithm [36], owing to disparities between the Talairach and MNI spaces [37]. MNI coordinates were adopted in the current study because the ALE map was created in MNI space using the GingerALE software (http://www.brainmap.org/, accessed on 1 April 2021) [38,39]. The ALE method is described in detail below.
The ALE method [40], the most popular quantitative meta-analysis method for neuroimaging data [41], was adopted as follows. First, a modeled activation map was created by applying a three-dimensional Gaussian probability density function to each focus; the same procedure was applied to all foci in the selected studies. With increased convergence of the reported foci across studies, the variance of the Gaussian probability distribution was gradually minimized, so that the contingency of the foci reported in each study was expected to be eliminated. Second, an ALE map was obtained by calculating the union of the modeled activation maps. Finally, to create a more accurate ALE map, it was compared with a randomness map created from a null distribution. Specifically, a thresholded ALE map was obtained by conducting a permutation test, which assessed the difference between these maps at each voxel [40,42].
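To make the ALE computation concrete, the following is a minimal sketch of the two core steps described above (per-study modeled activation maps from Gaussian-smoothed foci, and their union into an ALE map), written in Python with NumPy. The grid size, kernel width, and toy foci are illustrative assumptions and do not reproduce the GingerALE implementation, which additionally scales the kernel by sample size and applies permutation-based thresholding.

```python
import numpy as np

def gaussian_blob(shape, center, sigma):
    """3D Gaussian probability density centered on one focus (voxel coordinates)."""
    grids = np.indices(shape)
    d2 = sum((g - c) ** 2 for g, c in zip(grids, center))
    norm = (2.0 * np.pi * sigma ** 2) ** -1.5
    return norm * np.exp(-d2 / (2.0 * sigma ** 2))

def modeled_activation_map(foci, shape, sigma):
    """Per-study MA map: voxel-wise maximum over that study's foci."""
    ma = np.zeros(shape)
    for focus in foci:
        ma = np.maximum(ma, gaussian_blob(shape, focus, sigma))
    return ma

def ale_map(studies, shape, sigma=2.0):
    """Union of per-study MA maps: ALE = 1 - prod_i(1 - MA_i)."""
    complement = np.ones(shape)
    for foci in studies:
        complement *= 1.0 - modeled_activation_map(foci, shape, sigma)
    return 1.0 - complement

# Toy example: three "studies" with foci given in voxel coordinates.
studies = [[(10, 12, 8), (20, 15, 9)], [(11, 12, 8)], [(30, 5, 5)]]
ale = ale_map(studies, shape=(40, 30, 20))
peak = np.unravel_index(np.argmax(ale), ale.shape)
print(peak)  # lands where the first two studies report neighbouring foci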
Overall, the ALE method was performed using the GingerALE (Version 3.02) tool (http://www.brainmap.org/, accessed on 1 April 2021), and the thresholding analyses used a cluster-level correction for multiple comparisons at p = 0.05 with a cluster-forming threshold of p = 0.0001. The permutation size was set to 1000. Although Eickhoff et al. [43] recommended a cluster-forming threshold of p = 0.001, a more conservative criterion was adopted. In the present study, all coordinates are reported in MNI space. Moreover, all activated brain images were exported as NIfTI files and overlaid onto the canonical anatomical T1 brain template in MNI space using the Mango software (Version 4.1; http://ric.uthscsa.edu/mango/, accessed on 1 April 2021).

2.2. Procedures of the Chi-Square Test

The procedures of the chi-square test were as follows. First, the foci of both the branded and unbranded studies were merged. Second, each focus was flagged according to its source database. Specifically, foci from the branded studies’ database were flagged as “1” (hereafter, the branded flag), since these foci were collected for objects with brand equity, while foci from the unbranded studies’ database were flagged as “0” (hereafter, the unbranded flag), since these foci were collected for unbranded objects. The resulting dataset comprised 1412 rows and four columns (excluding the index column). Each row represented one focus, the first three columns contained the brain coordinates, and the final column held the database source flag indicating whether the focus corresponded to a branded or an unbranded object. These foci, plotted on the Colin27 template, are displayed in Figure 2; the details of the data structure are presented in Figure 3A; and the descriptive statistics are shown in Table 1. Third, in order to categorize each focus, spatially proximate brain coordinates were grouped into clusters using the k-means algorithm (see Figure 3B). The optimal number of clusters was determined through the elbow method, and the k-means algorithm was performed using scikit-learn in Python. After the optimal number of clusters was determined and a cluster ID was assigned to each focus, a chi-square test was conducted on the cluster ID and the flag in order to determine whether each cluster tended toward brand equity-related brain regions, unbranded-related brain regions, or brain regions overlapping both (hereafter called overlapped brain regions).
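A minimal sketch of this clustering-and-testing pipeline, using pandas, scikit-learn, and SciPy, is shown below; the input file name and column names are assumptions for illustration, and the candidate range of k and the final choice of 26 clusters follow the elbow procedure described above.

```python
import pandas as pd
from sklearn.cluster import KMeans
from scipy.stats import chi2_contingency

# foci: 1412 rows with MNI coordinates and a branded (1) / unbranded (0) flag.
foci = pd.read_csv("foci.csv")            # assumed columns: x, y, z, flag
coords = foci[["x", "y", "z"]].values

# Elbow method: inspect the within-cluster sum of squared errors (inertia)
# over a range of k and pick the value where the curve flattens.
sse = {k: KMeans(n_clusters=k, n_init=10, random_state=0).fit(coords).inertia_
       for k in range(2, 31)}

k_opt = 26                                 # chosen from the elbow plot in the study
foci["cluster_id"] = KMeans(n_clusters=k_opt, n_init=10,
                            random_state=0).fit_predict(coords)

# Chi-square test of independence between cluster membership and the flag.
table = pd.crosstab(foci["cluster_id"], foci["flag"])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.3f}, p = {p:.3f}")
```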

2.3. Procedures of Machine Learning

The cluster ID calculated in the previous procedures was used as the feature value, while the last column (the flag) was used as the dependent variable. The data structure for machine learning is shown in Figure 3C. First, feature engineering was conducted to retain values with high relevance to the dependent variable. In this regard, if a cluster ID did not differ significantly in the chi-square test, the foci with that cluster ID were judged to belong to overlapped brain regions, flagged as “2” (see Figure 3D), and eliminated. Second, supervised learning algorithms were applied to identify the discriminative brain regions. The calculations were performed using H2O AutoML (H2O Version 3.32.1.2, https://www.h2o.ai, accessed on 15 July 2021), an open-source framework for machine learning [44]. This framework covers several major machine learning algorithms, such as XGBoost, the Distributed Random Forest (DRF), the Gradient Boosting Machine (GBM), Generalized Linear Models, Deep Neural Networks, and StackedEnsemble. In this study, the DRF, XGBoost, the GBM, and Generalized Linear Models were adopted; the Deep Neural Network and StackedEnsemble algorithms were excluded because the effectiveness of the feature values cannot be calculated with these algorithms.
Third, a random grid search algorithm within H2O AutoML was conducted to tune the hyperparameters of each algorithm, excluding the Random Forest and Extremely Randomized Trees (XRT), which are not supported by the current version of this search algorithm. Each algorithm includes several hyperparameters that cannot be estimated from the provided data, so appropriate values must be supplied before the algorithm can be run. In the random grid search, combinations of candidate values were randomly chosen and assigned to each hyperparameter until a model converged on an optimal value of the performance index, e.g., the mean error rate, the area under the curve (AUC), or F-measures. In this framework, the AUC was set as the performance index, and the maximized AUC was treated as the optimal value. The hyperparameters were tuned automatically, and 5-fold cross-validation was conducted for each model.
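The H2O AutoML configuration described above might be sketched as follows; the file name and column names are assumptions for illustration, and the exact constructor arguments should be verified against the documentation of the H2O version used (3.32.1.2 in this study).

```python
import h2o
from h2o.automl import H2OAutoML

h2o.init()

# Feature: cluster ID (categorical); target: branded (1) vs. unbranded (0) flag.
frame = h2o.import_file("foci_after_feature_engineering.csv")  # assumed file
frame["cluster_id"] = frame["cluster_id"].asfactor()
frame["flag"] = frame["flag"].asfactor()

aml = H2OAutoML(
    include_algos=["DRF", "XGBoost", "GBM", "GLM"],  # algorithms used in the study
    nfolds=5,                                        # 5-fold cross-validation
    sort_metric="AUC",                               # AUC as the performance index
    max_models=20,                                   # illustrative budget
    seed=1,
)
aml.train(x=["cluster_id"], y="flag", training_frame=frame)

print(aml.leaderboard)                      # models ranked by AUC
# Variable importance of the leading model (regression coefficients for GLM).
print(aml.leader.varimp(use_pandas=True))
```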
Finally, the effectiveness (importance) of the feature value was calculated. In the tree-family algorithms (i.e., the DRF, XGBoost, and the GBM), similar to Gini importance, the magnitude of the values calculated by the framework represents each variable’s contribution to the reduction in squared error at each tree node during tree splitting. In the non-tree-family algorithms, regression coefficients were calculated instead. The performance of the algorithms was evaluated with standard indices such as the AUC, the area under the precision-recall curve (AUC-PR), and the logarithmic loss metric (logloss). Both the AUC and AUC-PR are performance indices for binary classification problems. Specifically, the AUC ranges from 0 to 1: an AUC of 1 means perfect classification, an AUC of 0.5 means chance-level performance, and an AUC of 0 means the worst possible performance. The AUC-PR is appropriate for highly imbalanced data; unlike the AUC, it focuses on the true positives, false positives, and false negatives. Similar to the AUC, an AUC-PR of 1 means perfect classification, an AUC-PR around the positive-class proportion (approximately 0.5 for balanced data) corresponds to chance level, and values near 0 indicate poor performance. The logloss can be used for both binary and multiclass classification problems; its value represents how close the predicted probabilities are to the target values of the dependent variable. A logloss of 0 corresponds to a perfect classifier; in other words, the smaller the logloss, the better the classifier’s performance.
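For reference, the binary logloss used here is the standard cross-entropy, written below in LaTeX notation:

$$\mathrm{logloss} = -\frac{1}{N}\sum_{i=1}^{N}\bigl[\,y_i \log p_i + (1 - y_i)\log(1 - p_i)\,\bigr],$$

where $y_i \in \{0, 1\}$ is the flag of focus $i$ and $p_i$ is the predicted probability that focus $i$ is branded; a perfect classifier yields a logloss of 0, and larger values indicate poorer calibration of the predicted probabilities.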

3. Results

3.1. Results of the ALE

In this study, the activation foci associated with the branded- and unbranded-related regions were revealed. For the branded objects, activations significantly converged in five clusters (see Figure 4A and Table 2a), covering the rostral anterior cingulate cortex (rACC; BA32; ventral MPFC [VMPFC]), the medial frontal gyrus (BA10), the parahippocampal gyrus (the entorhinal cortex [BA28], HP), the caudate head (the anterior part of the VS), the posterior cingulate cortex (PCC; the retrosplenial cortex [RSC]; BA29, BA30), and the lingual gyrus (LG).
For the unbranded objects, activations significantly converged in six clusters (see Figure 4B and Table 2b), covering the rostral anterior cingulate cortex (rACC; BA32; VMPFC), the medial frontal gyrus (BA24, BA32), the caudate head (the anterior part of the VS), the caudate body, the insula (BA13), the inferior parietal lobule (IPL; BA40), and the medial frontal gyrus (BA6).
Based on these results, the anterior part of the MPFC (BA10), the parahippocampal gyrus (PHG) region (including the hippocampus), the LG, and the PCC were characteristic of branded objects compared with the unbranded-related brain regions. Conversely, both the insula (BA13) and the IPL (BA40) were unique to unbranded objects compared with the brand equity-related brain regions. Furthermore, the VMPFC and VS appeared to overlap between the brand equity-related brain regions and the unbranded objects-related brain regions.

3.2. Results of the Chi-Square Test

In order to categorize the foci, the k-means algorithm was performed on the initial dataset (see Figure 3A and Table 1). Overall, 26 clusters were determined as the optimal number of clusters by the elbow method; the results of this method are shown in Figure 5 and Supplementary Table S2. The slope of the elbow plot flattened beyond 20 clusters, and the index of the sum of squared error reached 1/10 at the 24th cluster, so the 24th to 26th clusters were chosen as the optimal range (see Supplementary Table S2). Within this range, 26 clusters were adopted as the optimal number because it was the upper bound of the range. The centroid of each cluster is shown in Table 3 and Figure 6. Using the 26 clusters, a chi-square test was conducted with the following hypotheses, where H1 represents the null hypothesis and H2 the alternative hypothesis:
H1: The cluster ID and the flag are independent;
H2: The cluster ID and the flag are not independent.
The results revealed a significant association between the cluster ID and the flag (χ2 (25) = 45.277, p = 0.008). Since H1 was rejected, a residual analysis was conducted to determine the character of each cluster. Based on the results of the residual analysis shown in Table 4, significant differences were observed in Clusters 8, 9, 15, 20, and 22. The branded flag had dominant proportions in Clusters 9, 15, and 20, indicating that the brain regions belonging to these clusters can be associated with brand equity-related mental processes; the centroids of these clusters corresponded to the PHG (Cluster 9) and the LG (Clusters 15 and 20). Conversely, the unbranded flag had dominant proportions in Clusters 8 and 22, indicating that the brain regions belonging to these clusters were significantly involved in unbranded objects-related mental processes; their centroids corresponded to the inferior parietal lobule (IPL; Cluster 8) and the angular gyrus (Cluster 22), respectively. The other clusters (i.e., Clusters 0–7, 10–14, 16–19, 21, and 23–25) did not differ significantly.
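The residual analysis can be sketched as follows using the conventional adjusted standardized residual (Haberman’s formula); the observed-count array is a small fabricated placeholder, not the study’s 26 × 2 table, and the calculation is an illustration rather than the exact implementation used here.

```python
import numpy as np
from scipy.stats import chi2_contingency, norm

# obs: contingency table of observed counts (rows = cluster IDs, columns = flags);
# in the study this is 26 x 2, here a small placeholder for illustration.
obs = np.array([[30.0, 25.0], [12.0, 28.0], [40.0, 41.0]])

chi2, p, dof, expected = chi2_contingency(obs)

n = obs.sum()
row_prop = obs.sum(axis=1, keepdims=True) / n
col_prop = obs.sum(axis=0, keepdims=True) / n

# Adjusted standardized residuals: |value| > 1.96 marks a cell that deviates
# from independence at roughly p < 0.05 (two-sided), flagging clusters dominated
# by branded or unbranded foci.
adj_resid = (obs - expected) / np.sqrt(expected * (1 - row_prop) * (1 - col_prop))
cell_p = 2 * (1 - norm.cdf(np.abs(adj_resid)))
print(adj_resid.round(2))
```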
Finally, the overlapped brain regions were determined by examining the clusters that did not differ significantly. Since the proportions of branded and unbranded flags were nearly equal within these clusters (see Table 4), clusters with a p-value higher than 0.45 were classified as overlapped brain regions. Clusters 1, 2, 6, 7, 10, 11, 13, 14, and 21 met this criterion; the brain regions corresponding to the centroid of each cluster are shown in Table 4. Based on these results, the brain regions in these clusters were involved in the mental processes of both branded and unbranded objects to a greater extent than the other clusters. Although significant differences were not observed in the remaining clusters (i.e., Clusters 0, 3–5, 12, 16–19, and 23–25), it is presumed that these clusters may lean toward either brand equity-related or unbranded-related brain regions, based on the ratio of branded to unbranded flags within each cluster.

3.3. Results of Machine Learning

The descriptive statistics for the feature engineering step are shown in Table 5. The data from which the foci belonging to the overlapped clusters had been eliminated were used for machine learning. In this case, the XRT algorithm had the best performance; however, the XGBoost algorithm was adopted for analysis because the importance of each feature value cannot be calculated with the XRT algorithm. The tuned hyperparameters are listed in Supplementary Table S3, and the performance indices are shown in Table 6. Both the AUC and AUC-PR indicated chance-level performance, and although the logloss was over 0.5, the results demonstrated that the model’s predictability was around chance level. Feature importance is shown in Table 7. Cluster 9 (cluster centroid in the right PHG), Cluster 15 (cluster centroid in the left LG), and Cluster 20 (cluster centroid in the right LG) had stronger influences on the dependent variable than the other clusters. Considering the proportions of the flags in each cluster, Clusters 9, 15, and 20 influenced the branded flags because the branded flag was dominant in these clusters, whereas Cluster 25 influenced the unbranded flags because the unbranded flag was dominant there. In particular, Cluster 9 had the strongest influence on the dependent variable and the highest effectiveness for identifying the branded flags. Thus, the brain regions around the PHG had the strongest influence in determining whether objects have brand equity.

4. Discussion

Although the three assessments (i.e., the ALE method, the statistical hypothesis test, and machine learning) did not converge on characteristic brain regions for the unbranded-related side, this study revealed that the brain regions around the PHG and the left LG were characteristic of brand equity and anatomically close to one another. The right LG passed two of the assessments (the statistical hypothesis test and machine learning). Accordingly, the PHG and LG can be regarded as watershed brain regions for distinguishing the mental processes of branded and unbranded objects. Therefore, if metabolic alterations in these regions are observed in future magnetic resonance spectroscopy (MRS) research, this will imply that the clustered brain regions around the PHG and LG can serve as biomarkers of whether brand equity has been established in consumers’ minds. Specifically, the PHG, which corresponds to the centroid of Cluster 9, is associated with recognition [45], episodic memory [46,47,48], and visual and spatial scene processing and navigation [49,50]. Within recognition, the anterior part of the PHG is engaged in familiarity-based recognition [51,52], whereas the posterior part is engaged in recollection-based recognition [53,54]. During recollection, activations of both the hippocampus and the posterior part of the PHG (or single activations of the hippocampus) were observed in many cases [45]. In addition, the PHG tends to activate in association with various elements, such as memory sources and remembered targets, when episodic memory functions are engaged [46,47,48]. Thus, the episodic memory engaged by the PHG can be thought of as “associative memory”, or what Aminoff et al. [55] described as “contextual association”. As for the LG, it is associated with mental imagery [56], visual and spatial scene processing and navigation [49], episodic memory [57], divergent thinking [58], predictive inference [59], and recognition [60]. These functions with which the LG is associated all require elements of visual processing; for example, when generating predictive inferences or performing divergent thinking, visual images must be internally generated. Moreover, the LG plays a crucial role in language processing, such as the visual recognition of words [61,62] and the semantic processing of words [63,64,65]. According to Zhang et al. [65], the LG is involved in language processing and supramodal organization in patients who are not congenitally blind but lost their sight in their early teens. Musz and Thompson-Schill [66] demonstrated that the LG plays a role as a semantic hub across word modalities. Thus, considering that consumers may recognize a brand as a type of word, the LG is believed to serve as a link connecting the modalities and meanings of a brand. Interestingly, these regions are associated with the default mode network (DMN) [67,68]: the PHG is a core region of the DMN [69], and the LG has functional connections with brain regions constituting the DMN [68]. Given that the DMN is engaged in self-referential processing (e.g., episodic memory, autobiographical memory) [67] and associative memory-based autopilot behavior [70], the mental processing of branded objects can be thought of as automated mental processing based on associative memories, together with effortless decision-making based on these processes.
Meanwhile, as described earlier, the three assessments (i.e., the ALE method, the statistical hypothesis test, and machine learning) did not yield consistent results for unbranded objects, unlike for branded ones. However, the IPL (BA40) was commonly observed as a characteristic brain region in two assessments (the ALE method and the statistical hypothesis test). The IPL is associated with calculation [71,72] and decision-making under uncertainty [73,74,75]. Interestingly, connections between the IPL and the insula have been reported during metacognition under uncertainty [75]: the insula plays a crucial role in monitoring the situation when decisions are made under uncertainty, while the IPL is involved in controlling and managing mental resources for problem solving under uncertainty. The insula was also among the brain regions revealed by the ALE method. In consumer contexts, the insula detects and evaluates social risks in purchase decision-making [23]. Besides these regions, the medial frontal gyrus was revealed by the ALE method, and the machine learning approach demonstrated that the superior frontal gyrus (Cluster 25) influenced the unbranded flags. These brain regions are anatomically close, lying in the dorsal and medial part of the prefrontal cortex (hereafter, the dorsomedial prefrontal cortex [DMPFC]). The DMPFC is associated with action control, conflict monitoring [76,77], and decision uncertainty [78], and it performs these cognitive control-related functions by connecting with the executive control network [78,79]. Additionally, the DMPFC is associated with the DMN and is involved in social cognition through connections with DMN regions [69]; it plays a role in inferring others’ thoughts in complex social relationships [80]. In this way, this region is engaged in organizing and adjusting information to solve problems in complex situations, such as choosing between options with equal value, and in unstable situations [81]. Thus, in the mental processes for unbranded objects, cognitive control and deliberative aspects may dominate in order to handle unknown objects, such as unbranded products and services. In other words, it can be presumed that consumers behave carefully when purchasing unbranded objects.
Cognitive decoding in Neurosynth (https://neurosynth.org/, accessed on 7 September 2021) was also conducted to decode the functions of these clustered brain regions more rigorously. The decoding analysis was performed for the results of both branded and unbranded objects. The regions of interest (ROIs) were determined using the Mango software (Version 4.1; http://ric.uthscsa.edu/mango/, accessed on 1 April 2021). For the branded objects, three ROIs (Clusters 9, 15, and 20) were established, with each ROI shaped as a cube; for the unbranded objects, two ROIs (Clusters 8 and 25) were established. The length, width, and height of each cube were determined in accordance with the standard deviations of the coordinates in each cluster; concretely, each side of the cube was set at 18 mm. The calculation procedure was as follows (a sketch of this calculation is given after this paragraph). First, the standard deviations (1 sigma) of each coordinate (x, y, and z) in each cluster (Clusters 8, 9, 15, 20, and 25) were calculated. Second, the maximum and minimum values of the coordinates in each cluster were determined. For example, in Cluster 9, the maximum value of the x coordinate was obtained by adding 13 (1 sigma of the x coordinate) to 30 (the x coordinate of the centroid of Cluster 9), while the minimum value was obtained by subtracting 13 from 30. The ROI of Cluster 9 therefore ranged from 17 to 43 on the x coordinate, from −16 to 6 on the y coordinate, and from −22 to −1 on the z coordinate; the ROI of Cluster 15 ranged from −24 to −6 on the x coordinate, from −96 to −77 on the y coordinate, and from −11 to 11 on the z coordinate; and the ROI of Cluster 20 ranged from 10 to 31 on the x coordinate, from −94 to −77 on the y coordinate, and from −4 to 15 on the z coordinate. Third, each side of the cube was adjusted in accordance with the ranges calculated in the second step using the Colin27-T1 template in the Mango software, and 18 mm was determined to be appropriate. Finally, the three ROIs for branded objects were united into a single ROI (see Figure 7A), and the two ROIs for unbranded objects were likewise united into a single ROI (see Figure 7C). After the ROIs were determined, they were registered in the Neurovault database (https://neurovault.org, accessed on 1 April 2021) for decoding, and cognitive decoding was then performed through the Neurovault database, which is internally connected with Neurosynth. The results of the decoding are shown in Table 8 and Figure 7B,D. We adopted the top 40 terms, excluding anatomical, disease-related, and experimental task-related terms. The word clouds were created using Python: the higher a term’s correlation value, the larger its font size. Accordingly, the font sizes in the word cloud for branded objects are larger than those for unbranded objects, as the correlation values in the decoded results for branded objects are relatively larger. The colors were randomly allocated to each term. For branded objects, memory- and emotion-related terms were dominant; in particular, emotion-related terms were ranked among the top 10 terms.
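The centroid ± 1 sigma calculation used to build each cubic ROI can be reproduced in a few lines of NumPy, as sketched below; the input array of cluster foci is an assumed placeholder, and the manual adjustment against the Colin27-T1 template in Mango is omitted.

```python
import numpy as np

def roi_bounds(cluster_foci):
    """Per-axis bounds of a cubic ROI: cluster centroid +/- one standard deviation."""
    foci = np.asarray(cluster_foci, dtype=float)   # shape (n_foci, 3): x, y, z in MNI mm
    centroid = foci.mean(axis=0)
    sigma = foci.std(axis=0)
    lower = np.round(centroid - sigma).astype(int)
    upper = np.round(centroid + sigma).astype(int)
    return list(zip(lower.tolist(), upper.tolist()))  # [(x_min, x_max), (y_min, y_max), (z_min, z_max)]

# Worked check against the text: a cluster whose x coordinates have centroid 30
# and standard deviation 13 yields an x range of 30 - 13 = 17 to 30 + 13 = 43,
# matching the reported ROI of Cluster 9.
```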
This indicates that the emotional episodic memories of objects in consumers’ minds play a crucial role in differentiating branded from unbranded objects. In contrast, for unbranded objects, many executive control-related terms (e.g., “competing”, “judgment”, “reasoning”, “switching”, “control network”, “conflict”, “executive control”, “cognitively”, and “monitoring”) were ranked. Besides this category, language-related terms (“fluency”, “verbal fluency”, “lexical decision”) and social cognition-related terms (“pain”, “default network”, “empathy”) were ranked. Although the decoded terms for the unbranded object-related brain regions did not converge into specific categories as clearly as those for the branded object-related regions, executive control-related terms were characteristic of the decoded results for the unbranded object-related brain regions.
Overall, the findings of this study are consistent with previous theoretical and empirical brand equity studies. Specifically, the emotional and positive experiences of consumers influence their attitudes toward brands [82], and vice versa. Similarly, neuroeconomics and neurofinance studies have shown that emotional aspects influence value-based decision-making [83]. These emotional experiences are stored in consumers’ minds along with multimodal sensory information [84]. In addition, the link between emotional episodes and brands helps form brand associations, which are one of the crucial elements of brand equity [2]. Hence, a strong brand association is created by episodic memories that are based on emotional experiences [82,85]. Taken together, this study indicates that the PHG may be involved in the emotional aspects of brand associations, while the LG may function as a semantic hub connecting the various multimodal elements of brand associations. Meanwhile, given that terms related to the executive control network were decoded in the analysis of the IPL and the DMPFC, it is presumed that decisions about unbranded objects may be executed effortfully, based on rational mental processes. Therefore, in the mental processes for branded objects, emotional aspects may be relatively dominant in decision-making, whereas cognitive and deliberative aspects may be relatively dominant in the mental processes for unbranded objects.
The results of this study also provide useful implications for practitioners. First, when launching a new brand, managers should prioritize the creation of emotional brand associations alongside other marketing practices. In this regard, they should carefully monitor emotional brand associations and related scores, such as “familiarity”, in addition to other indices, when tracking brand equity and managing an established brand. Second, when conducting qualitative research, such as in-depth interviews and focus groups, researchers should focus on eliciting emotional episodes about a brand from consumers. In this case, episodes that are visually vivid, spatially concrete, and positively framed can be core factors that strengthen brand associations.
Although the present study provides useful findings for both academics and practitioners, several limitations should be noted. First, the analyses were conducted using data with stimuli from B2C products and services; that is, the data spanned multiple B2C product and service categories. Different results might therefore be obtained from data that focus on a specific product or category, and the inconsistent results across the three assessments for unbranded objects may stem from this heterogeneity. Research on specific products or categories is required in the near future, and controlling the attributes and facets of both branded and unbranded objects will be necessary to overcome the inconsistent results for unbranded objects. Second, the analyses did not consider heterogeneous sample profiles, such as sex, age, occupation, personality, attitudes toward a brand, and brand usage. In marketing, segmented groups of consumers play a crucial role in setting strategy and evaluating outcomes; however, neither demographic nor psychographic factors were considered in the analyses, so different results could be obtained once these factors are taken into account. Finally, regarding the machine learning analysis, the performance of the model might be improved by more precise feature engineering, for example, by adding other variables such as raw coordinate data, task factors, and product categories. Therefore, the results of this study should be interpreted carefully before drawing any conclusions. Although the study has these limitations, it is the first study to comprehensively reveal the watershed brain regions between branded and unbranded objects based on a large body of activation imaging data. Additional work is nevertheless required to identify more precisely the neural mechanism of brand equity and the mental processes involved in it.

5. Conclusions

This study identified the unique brain regions related to brand equity and assessed the mental processes derived from these regions. For this purpose, three analysis methods (i.e., a quantitative meta-analysis, chi-square tests, and machine learning) were conducted. In total, 65 studies (1412 foci) investigating branded objects with brand equity and unbranded objects without brand equity were examined, and the neural systems involved for these two object categories were contrasted. Based on the findings, the brain regions around the PHG and LG were the watershed nodes for distinguishing branded objects with brand equity from unbranded objects without brand equity. This study revealed that both the PHG and LG can be involved in brand associations: the PHG might be engaged in the emotional episodic elements of a brand association, while the LG might play a crucial role as a semantic hub for brand associations via word processing. This study indicates that the mental processes for branded objects may be automatic information processing based on emotional associative memories derived from these regions, while the mental processes related to unbranded objects may be deliberative and cognitive.

Supplementary Materials

The following are available online at https://www.mdpi.com/article/10.3390/brainsci11121619/s1, Table S1(a): Branded studies included in the meta-analysis, Table S1(b): Unbranded studies included in the meta-analysis. Table S2: Convergence of sum of squared error (SSE). Table S3: Results of hyperparameter tuning.

Funding

This work was supported by a JSPS KAKENHI Grant (No. JP20K13633).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The datasets in this study are available upon request to the corresponding author. The statistical and pattern weight maps are available on the Neurovault repository, under collections 11099 (https://neurovault.org/collections/KGRYTGFI/, accessed on 7 September 2021) and 11539 (https://neurovault.org/collections/WZWOAKWH/, accessed on 25 October 2021).

Conflicts of Interest

The author declares that the research was conducted in the absence of any commercial or financial relationships that can be construed as potential conflicts of interest.

References

  1. Farquhar, P.H. Managing Brand Equity. Mark. Res. 1989, 1, 24–33. [Google Scholar]
  2. Aaker, D.A. The Value of Brand Equity. J. Bus. Strategy 1992, 13, 27–32. [Google Scholar] [CrossRef]
  3. McClure, S.M.; Li, J.; Tomlin, D.; Cypert, K.S.; Montague, L.M.; Montague, P.R. Neural Correlates of Behavioral Preference for Culturally Familiar Drinks. Neuron 2004, 44, 379–387. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Erk, S.; Spitzer, M.; Wunderlich, A.P.; Galley, L.; Walter, H. Cultural Objects Modulate Reward Circuitry. NeuroReport 2002, 13, 2499–2503. [Google Scholar] [CrossRef]
  5. Koeneke, S.; Pedroni, A.F.; Dieckmann, A.; Bosch, V.; Jäncke, L. Individual Preferences Modulate Incentive Values: Evidence from Functional MRI. Behav. Brain Funct. 2008, 4, 55. [Google Scholar] [CrossRef] [Green Version]
  6. Murawski, C.; Harris, P.G.; Bode, S.; Domínguez D, J.F.; Egan, G.F. Led into Temptation? Rewarding Brand Logos Bias the Neural Encoding of Incidental Economic Decisions. PLoS ONE 2012, 7, e34155. [Google Scholar] [CrossRef] [Green Version]
  7. Enax, L.; Krapp, V.; Piehl, A.; Weber, B. Effects of Social Sustainability Signaling on Neural Valuation Signals and Taste-Experience of Food Products. Front. Behav. Neurosci. 2015, 9, 247. [Google Scholar] [CrossRef] [Green Version]
  8. Jung, D.; Sul, S.; Lee, M.; Kim, H. Social Observation Increases Functional Segregation Between MPFC Subregions Predicting Prosocial Consumer Decisions. Sci. Rep. 2018, 8, 3368. [Google Scholar] [CrossRef]
  9. Bartra, O.; McGuire, J.T.; Kable, J.W. The Valuation System: A Coordinate-Based Meta-Analysis of BOLD FMRI Experiments Examining Neural Correlates of Subjective Value. Neuroimage 2013, 76, 412–427. [Google Scholar] [CrossRef] [Green Version]
  10. Petit, O.; Merunka, D.; Anton, J.L.; Nazarian, B.; Spence, C.; Cheok, A.D.; Raccah, D.; Oullier, O. Health and Pleasure in Consumers’ Dietary Food Choices: Individual Differences in the Brain’s Value System. PLoS ONE 2016, 11, e0156333. [Google Scholar] [CrossRef]
  11. Motoki, K.; Sugiura, M.; Kawashima, R. Common Neural Value Representations of Hedonic and Utilitarian Products in the Ventral Striatum: An FMRI Study. Sci. Rep. 2019, 9, 15630. [Google Scholar] [CrossRef] [PubMed]
  12. Setton, R.; Fisher, G.; Spreng, R.N. Mind the Gap: Congruence Between Present and Future Motivational States Shapes Prospective Decisions. Neuropsychologia 2019, 132, 107130. [Google Scholar] [CrossRef]
  13. Schaefer, M.; Berens, H.; Heinze, H.J.; Rotte, M. Neural Correlates of Culturally Familiar Brands of Car Manufacturers. Neuroimage 2006, 31, 861–865. [Google Scholar] [CrossRef] [PubMed]
  14. Schaefer, M.; Rotte, M. Thinking on Luxury or Pragmatic Brand Products: Brain Responses to Different Categories of Culturally Based Brands. Brain Res. 2007, 1165, 98–104. [Google Scholar] [CrossRef]
  15. Chen, Y.P.; Nelson, L.D.; Hsu, M. From “Where” to “What”: Distributed Representations of Brand Associations in the Human Brain. J. Mark. Res. 2015, 52, 453–466. [Google Scholar] [CrossRef] [Green Version]
  16. Chib, V.S.; Rangel, A.; Shimojo, S.; O’Doherty, J.P. Evidence for a Common Representation of Decision Values for Dissimilar Goods in Human Ventromedial Prefrontal Cortex. J. Neurosci. 2009, 29, 12315–12320. [Google Scholar] [CrossRef] [Green Version]
  17. Tusche, A.; Bode, S.; Haynes, J.D. Neural Responses to Unattended Products Predict Later Consumer Choices. J. Neurosci. 2010, 30, 8024–8031. [Google Scholar] [CrossRef] [Green Version]
  18. Berns, G.S.; Capra, C.M.; Moore, S.; Noussair, C. Neural Mechanisms of the Influence of Popularity on Adolescent Ratings of Music. Neuroimage 2010, 49, 2687–2696. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  19. Van der Laan, L.N.; De Ridder, D.T.; Viergever, M.A.; Smeets, P.A. Appearance Matters: Neural Correlates of Food Choice and Packaging Aesthetics. PLoS ONE 2012, 7, e41738. [Google Scholar] [CrossRef] [Green Version]
  20. Kang, M.J.; Camerer, C.F. FMRI Evidence of a Hot-Cold Empathy Gap in Hypothetical and Real Aversive Choices. Front. Neurosci. 2013, 7, 104. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  21. Lee, Y.; Chong, M.F.; Liu, J.C.; Libedinsky, C.; Gooley, J.J.; Chen, S.; Wu, T.; Tan, V.; Zhou, M.; Meaney, M.J.; et al. Dietary Disinhibition Modulates Neural Valuation of Food in the Fed and Fasted States. Am. J. Clin. Nutr. 2013, 97, 919–925. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  22. Lighthall, N.R.; Huettel, S.A.; Cabeza, R. Functional Compensation in the Ventromedial Prefrontal Cortex Improves Memory-Dependent Decisions in Older Adults. J. Neurosci. 2014, 34, 15648–15657. [Google Scholar] [CrossRef] [Green Version]
  23. Yokoyama, R.; Nozawa, T.; Sugiura, M.; Yomogida, Y.; Takeuchi, H.; Akimoto, Y.; Shibuya, S.; Kawashima, R. The Neural Bases Underlying Social Risk Perception in Purchase Decisions. NeuroImage 2014, 91, 120–128. [Google Scholar] [CrossRef] [PubMed]
  24. Giuliani, N.R.; Pfeifer, J.H. Age-Related Changes in Reappraisal of Appetitive Cravings During Adolescence. NeuroImage 2015, 108, 173–181. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  25. Stuke, H.; Gutwinski, S.; Wiers, C.E.; Schmidt, T.T.; Gröpper, S.; Parnack, J.; Gawron, C.; Hindi Attar, C.; Spengler, S.; Walter, H.; et al. To Drink or Not to Drink: Harmful Drinking Is Associated with Hyperactivation of Reward Areas Rather than Hypoactivation of Control Areas in Men. J. Psychiatry Neurosci. 2016, 41, E24–E36. [Google Scholar] [CrossRef] [Green Version]
  26. Waskow, S.; Markett, S.; Montag, C.; Weber, B.; Trautner, P.; Kramarz, V.; Reuter, M. Pay What You Want! A Pilot Study on Neural Correlates of Voluntary Payments for Music. Front. Psychol. 2016, 7, 1023. [Google Scholar] [CrossRef] [Green Version]
  27. De Martino, B.; Bobadilla-Suarez, S.; Nouguchi, T.; Sharot, T.; Love, B.C. Social Information Is Integrated into Value and Confidence Judgments According to Its Reliability. J. Neurosci. 2017, 37, 6066–6074. [Google Scholar] [CrossRef]
  28. Reimann, M.; Castaño, R.; Zaichkowsky, J.; Bechara, A. How We Relate to Brands: Psychological and Neurophysiological Insights into Consumer–Brand Relationships. J. Con. Psychol. 2012, 22, 128–142. [Google Scholar] [CrossRef]
  29. Yoon, C.; Gutchess, A.H.; Feinberg, F.; Polk, T.A. A Functional Magnetic Resonance Imaging Study of Neural Dissociations Between Brand and Person Judgments. J. Con. Res. 2006, 33, 31–40. [Google Scholar] [CrossRef]
  30. Deppe, M.; Schwindt, W.; Krämer, J.; Kugel, H.; Plassmann, H.; Kenning, P.; Ringelstein, E.B. Evidence for a Neural Correlate of a Framing Effect: Bias-Specific Activity in the Ventromedial Prefrontal Cortex During Credibility Judgments. Brain Res. Bull. 2005, 67, 413–421. [Google Scholar] [CrossRef]
  31. Kato, J.; Ide, H.; Kabashima, I.; Kadota, H.; Takano, K.; Kansaku, K. Neural Correlates of Attitude Change Following Positive and Negative Advertisements. Front. Behav. Neurosci. 2009, 3, 6. [Google Scholar] [CrossRef] [Green Version]
  32. Plassmann, H.; Ramsøy, T.Z.; Milosavljevic, M. Branding the Brain: A Critical Review and Outlook. J. Con. Psychol. 2012, 22, 18–36. [Google Scholar] [CrossRef]
  33. Klucharev, V.; Smidts, A.; Fernández, G. Brain Mechanisms of Persuasion: How “Expert Power” Modulates Memory and Attitudes. Soc. Cogn. Affect. Neurosci. 2008, 3, 353–366. [Google Scholar] [CrossRef] [Green Version]
  34. Plassmann, H.; O’Doherty, J.; Shiv, B.; Rangel, A. Marketing Actions Can Modulate Neural Representations of Experienced Pleasantness. Proc. Natl. Acad. Sci. USA 2008, 105, 1050–1054. [Google Scholar] [CrossRef] [Green Version]
  35. Knutson, B.; Rick, S.; Wimmer, G.E.; Prelec, D.; Loewenstein, G. Neural Predictors of Purchases. Neuron 2007, 53, 147–156. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  36. Lancaster, J.L.; Tordesillas-Gutiérrez, D.; Martinez, M.; Salinas, F.; Evans, A.; Zilles, K.; Mazziotta, J.C.; Fox, P.T. Bias Between MNI and Talairach Coordinates Analyzed Using the ICBM-152 Brain Template. Hum. Brain Mapp. 2007, 28, 1194–1205. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  37. Brett, M.; Johnsrude, I.S.; Owen, A.M. The Problem of Functional Localization in the Human Brain. Nat. Rev. Neurosci. 2002, 3, 243–249. [Google Scholar] [CrossRef]
  38. Turkeltaub, P.E.; Eden, G.F.; Jones, K.M.; Zeffiro, T.A. Meta-Analysis of the Functional Neuroanatomy of Single-Word Reading: Method and Validation. Neuroimage 2002, 16 Pt 1, 765–780. [Google Scholar] [CrossRef] [PubMed]
  39. Eickhoff, S.B.; Bzdok, D.; Laird, A.R.; Kurth, F.; Fox, P.T. Activation Likelihood Estimation Meta-Analysis Revisited. Neuroimage 2012, 59, 2349–2361. [Google Scholar] [CrossRef] [Green Version]
  40. Eickhoff, S.B.; Laird, A.R.; Grefkes, C.; Wang, L.E.; Zilles, K.; Fox, P.T. Coordinate-Based Activation Likelihood Estimation Meta-Analysis of Neuroimaging Data: A Random-Effects Approach Based on Empirical Estimates of Spatial Uncertainty. Hum. Brain Mapp. 2009, 30, 2907–2926. [Google Scholar] [CrossRef] [Green Version]
  41. Acar, F.; Seurinck, R.; Eickhoff, S.B.; Moerkerke, B. Assessing Robustness Against Potential Publication Bias in Activation Likelihood Estimation (ALE) Meta-Analyses for FMRI. PLoS ONE 2018, 13, e0208177. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  42. Turkeltaub, P.E.; Eickhoff, S.B.; Laird, A.R.; Fox, M.; Wiener, M.; Fox, P. Minimizing Within-Experiment and Within-Group Effects in Activation Likelihood Estimation Meta-Analyses. Hum. Brain Mapp. 2012, 33, 1–13. [Google Scholar] [CrossRef] [Green Version]
  43. Eickhoff, S.B.; Nichols, T.E.; Laird, A.R.; Hoffstaedter, F.; Amunts, K.; Fox, P.T.; Bzdok, D.; Eickhoff, C.R. Behavior, Sensitivity, and Power of Activation Likelihood Estimation Characterized by Massive Empirical Simulation. Neuroimage 2016, 137, 70–85. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  44. LeDell, E.; Poirier, S. H2O AutoML: Scalable Automatic Machine Learning. In Proceedings of the 7th ICML Workshop on Automated Machine Learning, Vienna, Austria, 17–18 July 2020; Available online: https://www.automl.org/wp-content/uploads/2020/07/AutoML_2020_paper_61.pdf (accessed on 15 July 2021).
  45. Diana, R.A.; Yonelinas, A.P.; Ranganath, C. Imaging Recollection and Familiarity in the Medial Temporal Lobe: A Three-Component Model. Trends Cogn. Sci. 2007, 11, 379–386. [Google Scholar] [CrossRef]
  46. Kirwan, C.B.; Stark, C.E. Medial Temporal Lobe Activation During Encoding and Retrieval of Novel Face-Name Pairs. Hippocampus 2004, 14, 919–930. [Google Scholar] [CrossRef] [Green Version]
  47. Düzel, E.; Habib, R.; Rotte, M.; Guderian, S.; Tulving, E.; Heinze, H.J. Human Hippocampal and Parahippocampal Activity During Visual Associative Recognition Memory for Spatial and Nonspatial Stimulus Configurations. J. Neurosci. 2003, 23, 9439–9444. [Google Scholar] [CrossRef] [Green Version]
  48. Ekstrom, A.D.; Bookheimer, S.Y. Spatial and Temporal Episodic Memory Retrieval Recruit Dissociable Functional Networks in the Human Brain. Learn. Mem. 2007, 14, 645–654. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  49. Epstein, R.A. Parahippocampal and Retrosplenial Contributions to Human Spatial Navigation. Trends Cogn. Sci. 2008, 12, 388–396. [Google Scholar] [CrossRef] [Green Version]
  50. Epstein, R.; Kanwisher, N. A Cortical Representation of the Local Visual Environment. Nature 1998, 392, 598–601. [Google Scholar] [CrossRef]
  51. Henson, R.N.; Rugg, M.D.; Shallice, T.; Josephs, O.; Dolan, R.J. Recollection and Familiarity in Recognition Memory: An Event-Related Functional Magnetic Resonance Imaging Study. J. Neurosci. 1999, 19, 3962–3972. [Google Scholar] [CrossRef] [PubMed]
  52. Uncapher, M.R.; Rugg, M.D. Encoding and the Durability of Episodic Memory: A Functional Magnetic Resonance Imaging Study. J. Neurosci. 2005, 25, 7260–7267. [Google Scholar] [CrossRef]
  53. Ranganath, C.; Yonelinas, A.P.; Cohen, M.X.; Dy, C.J.; Tom, S.M.; D’Esposito, M. Dissociable Correlates of Recollection and Familiarity Within the Medial Temporal Lobes. Neuropsychologia 2004, 42, 2–13. [Google Scholar] [CrossRef]
  54. Woodruff, C.C.; Johnson, J.D.; Uncapher, M.R.; Rugg, M.D. Content-Specificity of the Neural Correlates of Recollection. Neuropsychologia 2005, 43, 1022–1032. [Google Scholar] [CrossRef] [PubMed]
  55. Aminoff, E.M.; Kveraga, K.; Bar, M. The Role of the Parahippocampal Cortex in Cognition. Trends Cogn. Sci. 2013, 17, 379–390. [Google Scholar] [CrossRef] [Green Version]
  56. de Gelder, B.; Tamietto, M.; Pegna, A.J.; Van den Stock, J. Visual Imagery Influences Brain Responses to Visual Stimulation in Bilateral Cortical Blindness. Cortex 2015, 72, 15–26. [Google Scholar] [CrossRef] [PubMed]
  57. Burianova, H.; McIntosh, A.R.; Grady, C.L. A Common Functional Brain Network for Autobiographical, Episodic, and Semantic Memory Retrieval. Neuroimage 2010, 49, 865–874. [Google Scholar] [CrossRef]
  58. Zhang, L.; Qiao, L.; Chen, Q.; Yang, W.; Xu, M.; Yao, X.; Qiu, J.; Yang, D. Gray Matter Volume of the Lingual Gyrus Mediates the Relationship Between Inhibition Function and Divergent Thinking. Front. Psychol. 2016, 7, 1532. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  59. Jin, H.; Liu, H.L.; Mo, L.; Fang, S.Y.; Zhang, J.X.; Lin, C.D. Involvement of the Left Inferior Frontal Gyrus in Predictive Inference Making. Int. J. Psychophysiol. 2009, 71, 142–148. [Google Scholar] [CrossRef]
  60. Andreasen, N.C.; O’Leary, D.S.; Arndt, S.; Cizadlo, T.; Hurtig, R.; Rezai, K.; Watkins, G.L.; Ponto, L.B.; Hichwa, R.D. Neural Substrates of Facial Recognition. J. Neuropsychiatry Clin. Neurosci. 1996, 8, 139–146. [Google Scholar] [CrossRef]
  61. Xiao, Z.; Zhang, J.X.; Wang, X.; Wu, R.; Hu, X.; Weng, X.; Tan, L.H. Differential Activity in Left Inferior Frontal Gyrus for Pseudowords and Real Words: An Event-Related fMRI Study on Auditory Lexical Decision. Hum. Brain Mapp. 2005, 25, 212–221. [Google Scholar] [CrossRef]
  62. Vitacco, D.; Brandeis, D.; Pascual-Marqui, R.; Martin, E. Correspondence of Event-Related Potential Tomography and Functional Magnetic Resonance Imaging During Language Processing. Hum. Brain Mapp. 2002, 17, 4–12. [Google Scholar] [CrossRef] [PubMed]
  63. Hinojosa, J.A.; Martín-Loeches, M.; Gómez-Jarabo, G.; Rubia, F.J. Common Basal Extrastriate Areas for the Semantic Processing of Words and Pictures. Clin. Neurophysiol. 2000, 111, 552–560. [Google Scholar] [CrossRef]
  64. Van de Putte, E.; De Baene, W.; Price, C.J.; Duyck, W. “Neural Overlap of L1 and L2 Semantic Representations across Visual and Auditory Modalities: A Decoding Approach”. Neuropsychologia 2018, 113, 68–77. [Google Scholar] [CrossRef]
  65. Zhang, C.; Lee, T.M.C.; Fu, Y.; Ren, C.; Chan, C.C.H.; Tao, Q. Properties of Cross-Modal Occipital Responses in Early Blindness: An ALE Meta-Analysis. NeuroImage Clin. 2019, 24, 102041. [Google Scholar] [CrossRef]
  66. Musz, E.; Thompson-Schill, S.L. Semantic Variability Predicts Neural Variability of Object Concepts. Neuropsychologia 2015, 76, 41–51. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  67. Buckner, R.L.; Andrews-Hanna, J.R.; Schacter, D.L. The Brain’s Default Network: Anatomy, Function, and Relevance to Disease. Ann. N. Y. Acad. Sci. 2008, 1124, 1–38. [Google Scholar] [CrossRef] [Green Version]
  68. Lee, T.W.; Xue, S.W. Functional Connectivity Maps Based on Hippocampal and Thalamic Dynamics May Account for the Default-Mode Network. Eur. J. Neurosci. 2018, 47, 388–398. [Google Scholar] [CrossRef] [PubMed]
  69. Andrews-Hanna, J.R. The Brain’s Default Network and Its Adaptive Role in Internal Mentation. Neuroscientist 2012, 18, 251–270. [Google Scholar] [CrossRef]
  70. Vatansever, D.; Menon, D.K.; Stamatakis, E.A. Default Mode Contributions to Automated Information Processing. Proc. Natl. Acad. Sci. USA 2017, 114, 12821–12826. [Google Scholar] [CrossRef] [Green Version]
  71. Zago, L.; Tzourio-Mazoyer, N. Distinguishing Visuospatial Working Memory and Complex Mental Calculation Areas Within the Parietal Lobes. Neurosci. Lett. 2002, 331, 45–49. [Google Scholar] [CrossRef]
  72. Arsalidou, M.; Pawliw-Levac, M.; Sadeghi, M.; Pascual-Leone, J. Brain Areas Associated with Numbers and Calculations in Children: Meta-Analyses of fMRI Studies. Dev. Cogn. Neurosci. 2018, 30, 239–250. [Google Scholar] [CrossRef] [PubMed]
  73. Vickery, T.J.; Jiang, Y.V. Inferior Parietal Lobule Supports Decision Making Under Uncertainty in Humans. Cereb. Cortex 2009, 19, 916–925. [Google Scholar] [CrossRef] [Green Version]
  74. Gloy, K.; Herrmann, M.; Fehr, T. Decision Making under Uncertainty in a Quasi Realistic Binary Decision Task—An fMRI Study. Brain Cogn. 2020, 140, 105549. [Google Scholar] [CrossRef] [PubMed]
  75. Qiu, L.; Su, J.; Ni, Y.; Bai, Y.; Zhang, X.; Li, X.; Wan, X. The Neural System of Metacognition Accompanying Decision-Making in the Prefrontal Cortex. PLOS Biol. 2018, 16, e2004037. [Google Scholar] [CrossRef]
  76. Rushworth, M.F.S.; Walton, M.E.; Kennerley, S.W.; Bannerman, D.M. Action Sets and Decisions in the Medial Frontal Cortex. Trends Cogn. Sci. 2004, 8, 410–417. [Google Scholar] [CrossRef]
  77. Rushworth, M.F.; Buckley, M.J.; Behrens, T.E.; Walton, M.E.; Bannerman, D.M. Functional Organization of the Medial Frontal Cortex. Curr. Opin. Neurobiol. 2007, 17, 220–227. [Google Scholar] [CrossRef]
  78. Venkatraman, V.; Huettel, S.A. Strategic Control in Decision-Making Under Uncertainty. Eur. J. Neurosci. 2012, 35, 1075–1082. [Google Scholar] [CrossRef]
  79. Menon, V.; Uddin, L.Q. Saliency, Switching, Attention and Control: A Network Model of Insula Function. Brain Struct. Funct. 2010, 214, 655–667. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  80. Li, W.; Mai, X.; Liu, C. The Default Mode Network and Social Understanding of Others: What Do Brain Connectivity Studies Tell Us. Front. Hum. Neurosci. 2014, 8, 74. [Google Scholar] [CrossRef]
  81. Ridderinkhof, K.R.; Ullsperger, M.; Crone, E.A.; Nieuwenhuis, S. The Role of the Medial Frontal Cortex in Cognitive Control. Science 2004, 306, 443–447. [Google Scholar] [CrossRef] [PubMed]
  82. Chang, P.-L.; Chieng, M.-H. Building Consumer–Brand Relationship: A Cross-Cultural Experiential View. Psychol. Mark. 2006, 23, 927–959. [Google Scholar] [CrossRef]
  83. Srivastava, M.; Sharma, G.D.; Srivastava, A.K.; Kumaran, S.S. What’s in the Brain for Us: A Systematic Literature Review of Neuroeconomics and Neurofinance. Qual. Res. Financ. Markets 2020, 12, 413–435. [Google Scholar] [CrossRef]
  84. Supphellen, M. Understanding Core Brand Equity: Guidelines for In-Depth Elicitation of Brand Associations. Int. J. Mark. Res. 2000, 42, 1–14. [Google Scholar] [CrossRef]
85. Aaker, D.A.; Jacobson, R. The Value Relevance of Brand Attitude in High-Technology Markets. J. Mark. Res. 2001, 38, 485–493. [Google Scholar] [CrossRef] [Green Version]
Figure 1. PRISMA flow diagram.
Figure 2. All foci plotted on the Colin27 template. Top left = coronal view; Top right = sagittal view; Lower right = axial view.
Figure 3. Explanations of the data structure. (A) Initial data structure. (B) Data structure after k-means clustering. (C) Data structure for machine learning. (D) Overall explanation of feature engineering.
Figure 4. Results of ALE. (A)-1: sagittal view of brand equity-related brain regions, crosshair = (−4, 42, −16); (A)-2: coronal view of brand equity-related brain regions, crosshair = (18, −4, 16); (A)-3: axial view of brand equity-related brain regions, crosshair = (−18, −74, −4); (B)-1: sagittal view of unbranded object-related brain regions, crosshair = (−6, 40, −14); (B)-2: coronal view of unbranded object-related brain regions, crosshair = (38, −36, 38); (B)-3: axial view of unbranded object-related brain regions, crosshair = (52, −24, 18). Abbreviations: ALE: activation likelihood estimation; ACC: anterior cingulate cortex; VMPFC: ventromedial prefrontal cortex; CDH: caudate head; PHG: parahippocampal gyrus; LG: lingual gyrus; MFG: middle frontal gyrus; IPL: inferior parietal lobule; INS: insula.
Figure 5. Elbow plot.
Figure 6. Results of k-means clustering. Top left = coronal view; Top right = sagittal view; Lower right = axial view.
Figure 7. ROIs of the watershed brain regions and results of the decoding study. (A) The watershed brain regions for distinguishing branded objects with brand equity from unbranded objects without brand equity; red squares represent ROIs, crosshairs = (22, −79, −3); left = sagittal view; top right = coronal view; lower right = axial view. The brain region of the centroid in CL9 is the PHG; the brain regions of the centroids in CL15 and CL20 are the LG. (B) The result of the decoding study for branded objects via the cognitive decoding function in Neurosynth. (C) Clusters influencing the mental processes of unbranded objects; green squares represent ROIs, crosshairs = (−21, 34, 40); left = sagittal view; top right = coronal view; lower right = axial view. The brain region of the centroid in CL8 is the IPL (BA40); the brain region of the centroid in CL25 is the SFG (BA8, DLPFC). (D) The result of the decoding study for unbranded objects via the cognitive decoding function in Neurosynth. Abbreviations: ROI: region of interest; CL: cluster; PHG: parahippocampal gyrus; LG: lingual gyrus; IPL: inferior parietal lobule; SFG: superior frontal gyrus; DLPFC: dorsolateral prefrontal cortex.
Table 1. Descriptive statistics.
Database | Variables | Variable Type | N | Mean | SD | Median | Min | Max
ALL | x | Numerical | 1412 | 0.43 | 32.42 | −2 | −69.17 | 70.58
ALL | y | Numerical | 1412 | −10.88 | 41.99 | −6 | −105.07 | 72.9
ALL | z | Numerical | 1412 | 11.39 | 24.5 | 8 | −52 | 75.36
ALL | Flag | Categorical | 1412 | - | - | - | - | -
Branded | x | Numerical | 679 | 1.74 | 32.04 | 0 | −69.17 | 70.58
Branded | y | Numerical | 679 | −13.18 | 43.3 | −8 | −105.07 | 72.56
Branded | z | Numerical | 679 | 9.57 | 23.7 | 6 | −48 | 75.36
Branded | Flag | Categorical | 679 | - | - | - | - | -
Unbranded | x | Numerical | 733 | −0.78 | 32.74 | −3 | −66 | 69
Unbranded | y | Numerical | 733 | −8.74 | 40.66 | −3 | −105 | 72.9
Unbranded | z | Numerical | 733 | 13.08 | 25.12 | 11 | −52 | 74
Unbranded | Flag | Categorical | 733 | - | - | - | - | -
Table 2. Results of ALE.
(a) Brand Equity-Related Brain Regions
Cluster # | Side | Brain Region | BA | x (MNI) | y (MNI) | z (MNI) | ALE Value | Cluster Size (mm3)
1 | L | ACC (rostral region/VMPFC) | 32 | −4 | 42 | −16 | 0.046 | 6368
  | L | ACC (MPFC) | 32 | −4 | 44 | 8 | 0.030 |
  | R | ACC (rostral region/MPFC) | 32 | 10 | 50 | −6 | 0.027 |
  | L | Medial Frontal Gyrus (MPFC) | 10 | −10 | 52 | 10 | 0.023 |
  | L | Medial Frontal Gyrus (MPFC) | 10 | 0 | 58 | 6 | 0.021 |
2 | R | PHG (entorhinal cortex) | 28 | 18 | −4 | −16 | 0.036 | 2216
  | R | PHG (hippocampus) | | 30 | −18 | −18 | 0.020 |
3 | L | Caudate Head (VS) | | −6 | 12 | −4 | 0.034 | 1936
4 | R | PCC (retrosplenial region) | 30 | 6 | −52 | 16 | 0.026 | 1064
  | L | PCC (retrosplenial region) | 30 | −6 | −58 | 12 | 0.020 |
  | L | PCC (retrosplenial region) | 29 | −4 | −50 | 14 | 0.019 |
5 | L | Lingual Gyrus | 18 | −18 | −74 | −4 | 0.033 | 1032
  | L | Lingual Gyrus | 18 | −6 | −78 | −2 | 0.019 |
(b) Unbranded Objects-Related Brain Regions
Cluster # | Side | Brain Region | BA | x (MNI) | y (MNI) | z (MNI) | ALE Value | Cluster Size (mm3)
1 | L | ACC (rostral region/VMPFC) | 32 | −6 | 40 | −14 | 0.0392 | 3912
  | L | ACC (MPFC) | 24 | −2 | 36 | 4 | 0.0299 |
  | R | ACC (MPFC) | 32 | 4 | 44 | 2 | 0.0205 |
2 | L | Caudate Head (VS) | | −10 | 10 | 0 | 0.0448 | 3624
3 | R | Inferior Parietal Lobule | 40 | 38 | −36 | 38 | 0.0576 | 2560
4 | R | Caudate Head (VS) | | 12 | 14 | −8 | 0.0300 | 1608
  | R | Caudate Body | | 10 | 14 | 6 | 0.0254 |
5 | R | Insula | 13 | 52 | −24 | 18 | 0.0286 | 1024
6 | L | Medial Frontal Gyrus | 6 | −4 | 20 | 44 | 0.0258 | 1000
BA: Brodmann Area; MNI: Montreal Neurological Institute; ALE: activation likelihood estimation; L: left; R: right; ACC: anterior cingulate cortex; VMPFC: ventromedial prefrontal cortex; MPFC: medial prefrontal cortex; PHG: parahippocampal gyrus; VS: ventral striatum; PCC: posterior cingulate cortex.
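Table 2 reports the kind of coordinate-based ALE output produced by meta-analysis software such as GingerALE. As a rough, open-source illustration of the same technique (not the author's actual pipeline), the following NiMARE sketch assumes the branded-object foci are available in a hypothetical Sleuth/GingerALE text file; the file name and correction settings are assumptions.

```python
# Rough illustration of an ALE analysis like the one behind Table 2, using
# NiMARE; the input file name is hypothetical and the paper's own pipeline
# (e.g., GingerALE settings) may differ.
from nimare.io import convert_sleuth_to_dataset
from nimare.meta.cbma.ale import ALE
from nimare.correct import FWECorrector

dset = convert_sleuth_to_dataset("branded_foci_sleuth.txt")  # foci in Sleuth text format
ale = ALE()
results = ale.fit(dset)                                      # uncorrected ALE maps

# Cluster-level FWE correction via Monte Carlo permutation.
corrector = FWECorrector(method="montecarlo", n_iters=10000)
corrected = corrector.transform(results)
print(sorted(corrected.maps))                                # names of the available statistical maps
```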
Table 3. Centroids in each cluster.
Cluster_ID | x (MNI) | y (MNI) | z (MNI) | L/R | Brain Regions
cl_0 | 15 | −59 | −36 | R | Pyramis
cl_1 | −1 | 44 | −10 | L | Medial Frontal Gyrus (VMPFC)
cl_2 | −40 | 9 | 35 | L | Middle Frontal Gyrus (BA6)
cl_3 | 0 | −54 | 11 | I | Posterior Cingulate
cl_4 | 48 | 8 | 24 | R | Inferior Frontal Gyrus
cl_5 | −2 | 10 | 3 | L | Lateral Ventricle
cl_6 | −38 | −57 | −12 | L | Fusiform Gyrus (BA37)
cl_7 | −47 | −4 | −4 | L | Superior Temporal Gyrus (BA22)
cl_8 | 48 | −32 | 32 | R | Inferior Parietal Lobule (BA40)
cl_9 | 30 | −5 | −11 | R | PHG
cl_10 | 24 | 37 | 36 | R | Superior Frontal Gyrus
cl_11 | −43 | −60 | 29 | L | Middle Temporal Gyrus
cl_12 | −39 | 31 | 2 | L | Inferior Frontal Gyrus
cl_13 | 41 | −48 | −8 | R | Sub-Gyral
cl_14 | −25 | 6 | −16 | L | Subcallosal Gyrus
cl_15 | −15 | −87 | 0 | L | Lingual Gyrus (BA17)
cl_16 | 4 | −57 | 47 | R | Precuneus (BA7)
cl_17 | 41 | 34 | 0 | R | Inferior Frontal Gyrus
cl_18 | −54 | −25 | 27 | L | Inferior Parietal Lobule
cl_19 | −12 | −23 | −9 | L | Midbrain
cl_20 | 20 | −85 | 5 | R | Lingual Gyrus (BA17)
cl_21 | 5 | −8 | 40 | R | Middle Cingulate Gyrus (BA24)
cl_22 | 40 | −64 | 30 | R | Angular Gyrus
cl_23 | −5 | 53 | 16 | L | Medial Frontal Gyrus (BA9)
cl_24 | −31 | −33 | 57 | L | Postcentral Gyrus
cl_25 | −13 | 26 | 46 | L | Superior Frontal Gyrus (BA8)
BA: Brodmann Area; MNI: Montreal Neurological Institute; L: left; R: right; VMPFC: ventromedial prefrontal cortex; PHG: parahippocampal gyrus.
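Figure 5, Figure 6, and Table 3 summarize a k-means step over all reported foci. A minimal sketch of that step is shown below (not the author's original script; the foci array is a random placeholder), producing an elbow curve and the 26 centroids (cl_0 to cl_25) that Table 3 labels anatomically.

```python
# Minimal sketch: cluster all reported foci (x, y, z in MNI space) with
# k-means and inspect the elbow curve; `foci` is placeholder data only.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
foci = rng.uniform(low=[-70, -105, -52], high=[70, 73, 75], size=(1412, 3))  # placeholder coordinates

# Elbow-plot data (cf. Figure 5): within-cluster sum of squares for each k.
inertias = {k: KMeans(n_clusters=k, n_init=10, random_state=0).fit(foci).inertia_
            for k in range(2, 41)}

# Final solution with 26 clusters, as in Table 3.
km = KMeans(n_clusters=26, n_init=10, random_state=0).fit(foci)
labels = km.labels_                 # cluster assignment for each focus
centroids = km.cluster_centers_     # (26, 3) centroid coordinates to be labeled anatomically
```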
Table 4. Results of chi-square test.
Cluster_ID | Branded (n) | Unbranded (n) | Adjusted Residual (Branded) | Adjusted Residual (Unbranded) | p-Value (Branded) | p-Value (Unbranded)
cl_0 | 17 | 25 | −1.0023 | 1.0023 | 0.3162 | 0.3162
cl_1 | 37 | 46 | −0.6596 | 0.6596 | 0.5095 | 0.5095
cl_2 | 19 | 25 | −0.6617 | 0.6617 | 0.5081 | 0.5081
cl_3 | 29 | 23 | 1.1296 | −1.1296 | 0.2586 | 0.2586
cl_4 | 29 | 22 | 1.2775 | −1.2775 | 0.2014 | 0.2014
cl_5 | 36 | 52 | −1.3919 | 1.3919 | 0.164 | 0.164
cl_6 | 29 | 31 | 0.0389 | −0.0389 | 0.969 | 0.969
cl_7 | 23 | 26 | −0.1639 | 0.1639 | 0.8698 | 0.8698
cl_8 | 29 | 50 | −2.0834 | 2.0834 | 0.0372 ** | 0.0372 **
cl_9 | 54 | 29 | 3.1900 | −3.1900 | 0.0014 *** | 0.0014 ***
cl_10 | 20 | 25 | −0.4972 | 0.4972 | 0.6191 | 0.6191
cl_11 | 21 | 22 | 0.0999 | −0.0999 | 0.9204 | 0.9204
cl_12 | 26 | 37 | −1.1081 | 1.1081 | 0.2678 | 0.2678
cl_13 | 28 | 25 | 0.7044 | −0.7044 | 0.4812 | 0.4812
cl_14 | 19 | 23 | −0.3753 | 0.3753 | 0.7075 | 0.7075
cl_15 | 34 | 15 | 3.0373 | −3.0373 | 0.0024 *** | 0.0024 ***
cl_16 | 19 | 30 | −1.3279 | 1.3279 | 0.1842 | 0.1842
cl_17 | 28 | 39 | −1.0570 | 1.0570 | 0.2905 | 0.2905
cl_18 | 14 | 24 | −1.4065 | 1.4065 | 0.1596 | 0.1596
cl_19 | 24 | 16 | 1.5297 | −1.5297 | 0.1261 | 0.1261
cl_20 | 29 | 7 | 3.9497 | −3.9497 | 0.0001 *** | 0.0001 ***
cl_21 | 21 | 27 | −0.6120 | 0.6120 | 0.5405 | 0.5405
cl_22 | 12 | 24 | −1.7949 | 1.7949 | 0.0727 * | 0.0727 *
cl_23 | 43 | 34 | 1.4010 | −1.4010 | 0.1612 | 0.1612
cl_24 | 10 | 18 | −1.3236 | 1.3236 | 0.1856 | 0.1856
cl_25 | 29 | 38 | −0.8064 | 0.8064 | 0.42 | 0.42
* p < 0.1, ** p < 0.05, *** p < 0.005. Gray-shaded areas in the original table are statistically significant.
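The adjusted residuals and p-values in Table 4 follow from the standard contingency-table formulas. The short sketch below, using only the cl_9 row of Table 4 and the column totals from Table 1 (all other names are illustrative, not the author's code), reproduces the reported values.

```python
# Sketch of the adjusted (standardized) residual computation behind Table 4,
# using the cl_9 row as a worked example.
import numpy as np
from scipy.stats import norm

branded_total, unbranded_total = 679, 733     # column totals (Table 1)
n = branded_total + unbranded_total           # 1412 foci in total
observed = np.array([54, 29])                 # cl_9 row (Table 4): branded, unbranded counts
row_total = observed.sum()

col_totals = np.array([branded_total, unbranded_total])
expected = row_total * col_totals / n
adj_resid = (observed - expected) / np.sqrt(
    expected * (1 - row_total / n) * (1 - col_totals / n)
)
p_values = 2 * norm.sf(np.abs(adj_resid))     # two-sided p-values from the residuals

print(adj_resid)  # approx. [ 3.19, -3.19], matching the cl_9 row
print(p_values)   # approx. [0.0014, 0.0014]
```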
Table 5. Descriptive statistics (after feature engineering).
Database | Variables | Variable Type | N | Mean | SD | Median | Min | Max
ALL | x | Numerical | 945 | 6.85 | 30.89 | 4 | −68.67 | 70.58
ALL | y | Numerical | 945 | −12.9 | 42.89 | −12 | −105.07 | 72.56
ALL | z | Numerical | 945 | 13.28 | 24.15 | 12 | −52 | 75.36
ALL | Flag | Categorical | 945 | - | - | - | - | -
Branded | x | Numerical | 462 | 7.89 | 29.7 | 6 | −68.67 | 70.58
Branded | y | Numerical | 462 | −15.05 | 44.53 | −13 | −105.07 | 72.56
Branded | z | Numerical | 462 | 10.84 | 22.96 | 8.26 | −48 | 75.36
Branded | Flag | Categorical | 462 | - | - | - | - | -
Unbranded | x | Numerical | 483 | 5.85 | 31.99 | 2 | −66 | 69
Unbranded | y | Numerical | 483 | −10.84 | 41.2 | −10 | −105 | 66
Unbranded | z | Numerical | 483 | 15.61 | 25.04 | 15 | −52 | 74
Unbranded | Flag | Categorical | 483 | - | - | - | - | -
Table 6. Performance indices.
Rank | Model_ID | AUC | Logloss | AUC-PR
1 | XRT_1_AutoML_20210907_131623 | 0.5841 | 0.6804 | 0.5426
2 | XGBoost_1_AutoML_20210907_131623 | 0.5826 | 0.6794 | 0.5449
3 | GBM_2_AutoML_20210907_131623 | 0.5810 | 0.6813 | 0.5390
4 | XGBoost_3_AutoML_20210907_131623 | 0.5796 | 0.6833 | 0.5375
5 | GBM_4_AutoML_20210907_131623 | 0.5782 | 0.6814 | 0.5361
6 | DRF_1_AutoML_20210907_131623 | 0.5774 | 0.6827 | 0.5378
7 | XGBoost_grid__1_AutoML_20210907_131623_model_1 | 0.5770 | 0.6829 | 0.5383
8 | XGBoost_grid__1_AutoML_20210907_131623_model_6 | 0.5759 | 0.6810 | 0.5333
9 | GBM_grid__1_AutoML_20210907_131623_model_6 | 0.5757 | 0.6809 | 0.5359
10 | GBM_5_AutoML_20210907_131623 | 0.5757 | 0.6803 | 0.5343
AUC: area under the curve; logloss: logarithmic loss metric; AUC-PR: area under the precision-recall curve.
Table 7. Feature importance.
Feature Values | Cluster Centered Brain Regions | Relative_Importance | Scaled_Importance
Cluster_id.cl_9 | PHG | 29.023 | 1.000
Cluster_id.cl_15 | Lingual Gyrus (BA17) | 20.785 | 0.716
Cluster_id.cl_20 | Lingual Gyrus (BA17) | 19.454 | 0.670
Cluster_id.cl_25 | Superior Frontal Gyrus (BA8) | 15.762 | 0.543
Cluster_id.cl_16 | Precuneus (BA7) | 13.721 | 0.473
Cluster_id.cl_19 | Midbrain | 12.747 | 0.439
Cluster_id.cl_5 | Lateral Ventricle | 11.915 | 0.411
Cluster_id.cl_17 | Inferior Frontal Gyrus | 11.477 | 0.396
Cluster_id.cl_4 | Inferior Frontal Gyrus | 11.266 | 0.388
Cluster_id.cl_3 | PCC | 10.938 | 0.377
Feature values were sorted by importance values; BA: Brodmann Area; PHG: parahippocampal gyrus; PCC: posterior cingulate cortex.
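The model identifiers in Table 6 follow H2O AutoML's naming convention, and the relative/scaled importance columns in Table 7 match the format H2O reports for tree-based models. A minimal sketch of how such a leaderboard and importance table could be generated with H2O's Python API is given below; the file name and column names are placeholders, not the author's actual code.

```python
# Minimal sketch (not the author's original script) of producing an AutoML
# leaderboard like Table 6 and variable importances like Table 7 with H2O.
import h2o
from h2o.automl import H2OAutoML

h2o.init()
frame = h2o.import_file("foci_features.csv")        # hypothetical engineered dataset
frame["Flag"] = frame["Flag"].asfactor()             # branded vs. unbranded label
predictors = [c for c in frame.columns if c != "Flag"]

aml = H2OAutoML(max_runtime_secs=600, seed=1, sort_metric="AUC")
aml.train(x=predictors, y="Flag", training_frame=frame)

# Leaderboard with AUC, logloss, and AUC-PR (cf. Table 6).
print(aml.leaderboard.head(rows=10))

# Relative and scaled variable importances of the leading tree-based model
# (cf. Table 7); H2O expands the categorical cluster feature into indicator
# columns named like "Cluster_id.cl_9".
print(aml.leader.varimp(use_pandas=True).head(10))
```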
Table 8. Results of the decoding study by Neurosynth. Terms are sorted in descending order of correlation.
Cognitive Terms (Branded Objects) | Correlation | Cognitive Terms (Unbranded Objects) | Correlation
Fearful | 0.129 | Recognition memory | 0.079
Unpleasant | 0.124 | Belief | 0.068
Negative | 0.116 | Tactile | 0.051
Affective | 0.109 | Pain | 0.043
Emotional | 0.101 | Fluency | 0.036
Happy | 0.098 | Demands | 0.035
Aversive | 0.098 | Verbal fluency | 0.033
Emotions | 0.093 | Competing | 0.03
Angry | 0.09 | Judgment | 0.03
Fear | 0.088 | Reasoning | 0.029
Emotionally | 0.08 | Default network | 0.025
Disgust | 0.078 | Sensations | 0.023
Positive negative | 0.077 | Painful | 0.022
Arousal | 0.071 | Interference | 0.02
Anxiety | 0.068 | Switching | 0.02
Expression | 0.065 | Integrative | 0.019
Affect | 0.06 | Relational | 0.019
Pleasant | 0.054 | Empathy | 0.019
Valence | 0.052 | Risky | 0.019
Threatening | 0.049 | Control network | 0.019
Emotional valence | 0.049 | Conflict | 0.017
Emotion regulation | 0.047 | Multisensory | 0.017
Mood | 0.045 | Speakers | 0.017
Emotional information | 0.041 | Consciousness | 0.016
Memories | 0.038 | Lexical decision | 0.016
Sighted | 0.036 | Semantic | 0.016
Learning | 0.029 | Cognitively | 0.015
Negative emotional | 0.029 | Executive control | 0.014
Modality | 0.027 | Concrete | 0.013
Intense | 0.024 | Referential | 0.013
Signaling | 0.023 | Word | 0.013
Salient | 0.022 | Demand | 0.012
Mental imagery | 0.021 | Retrieval | 0.011
Gain | 0.021 | Words | 0.011
Sad | 0.019 | Imagine | 0.011
Episodic memory | 0.018 | Learned | 0.011
Encoding | 0.016 | Monitoring | 0.011
Intensity | 0.013 | Expectancy | 0.011
Arithmetic | 0.013 | Judgments | 0.011
Anger | 0.012 | Target detection | 0.009
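Neurosynth's cognitive decoding essentially ranks terms by the spatial correlation between the submitted ROI map and each term's meta-analytic map, which is what the correlation values in Table 8 report. The sketch below illustrates that ranking principle with placeholder NumPy arrays; it is not the Neurosynth implementation itself, and the term names and map contents are assumptions for demonstration only.

```python
# Sketch of the ranking principle behind Table 8: correlate an ROI map with
# each term's meta-analytic map and sort terms by that correlation.
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 50000
roi_map = (rng.random(n_voxels) > 0.995).astype(float)        # placeholder ROI mask (sparse 0/1)
term_maps = {term: rng.normal(size=n_voxels)                   # placeholder meta-analytic z-maps
             for term in ["fearful", "unpleasant", "recognition memory"]}

def decode(roi, maps):
    """Return (term, Pearson r) pairs sorted by correlation with the ROI map."""
    ranked = {t: np.corrcoef(roi, m)[0, 1] for t, m in maps.items()}
    return sorted(ranked.items(), key=lambda kv: kv[1], reverse=True)

for term, r in decode(roi_map, term_maps):
    print(f"{term}: {r:.3f}")
```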
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
