Article

Decision Aids in Online Review Portals: An Empirical Study Investigating Their Effectiveness in the Sensemaking Process of Online Information Consumers

by Amal Ponathil 1,*, Anand Gramopadhye 2 and Kapil Chalil Madathil 1,2
1 Glenn Department of Civil Engineering, Clemson University, Clemson, SC 29634, USA
2 Department of Industrial Engineering, Clemson University, Clemson, SC 29634, USA
* Author to whom correspondence should be addressed.
Multimodal Technol. Interact. 2020, 4(2), 32; https://doi.org/10.3390/mti4020032
Submission received: 18 March 2020 / Revised: 15 June 2020 / Accepted: 17 June 2020 / Published: 23 June 2020

Abstract:
There is increasing concern about the trustworthiness of online reviews, as there is no editorial process to verify their authenticity. This study investigated the decision-making process of online consumers when reacting to a review, with the reputation score of the reviewer and the number of previous reviews incorporated along with anonymous and non-anonymous reviews. We recruited 200 participants for a 3 × 2 × 2 × 2 × 2 mixed experimental study, with the independent variables being the reaction to a review of a restaurant (3 levels), the reputation score (2 levels), the number of previous reviews (2 levels), the valence of the reviews (2 levels) and the level of anonymity (2 levels). Five dependent variables were analyzed: the level of trust, the likelihood of going to the restaurant, a binary choice of whether to go to the restaurant, confidence in the decision and the NASA-TLX workload. This study found that the reputation scores complemented the reaction to a review, improving trust in the information and confidence in the decision made. The findings suggest that incorporating a user rating scale such as the reputation score of a user deters people from writing false or biased reviews and helps improve review accuracy. Although no significant effect of the level of anonymity was found in this study, additional personal information about the users writing the review, such as photos or links to other social media profiles, may make a significant difference in the decision-making process.

1. Introduction

Because of the Internet, online review portals in the form of electronic word-of-mouth (eWOM) have become a key source for consumers to obtain detailed information from people sharing their past experiences [1]. eWOM is defined as “any positive or negative statement made by potential, actual, or former customers about a product or company made available to a large audience of both people and institutions via the Internet” [2]. People share this information in the form of blogs (e.g., tumblr.com), reviews on consumer review websites (e.g., yelp.com, Google Reviews), e-commerce websites (e.g., amazon.com, alibaba.com), or official product websites (e.g., nike.com, marriot.com, earlestreetkitchenandbar.com). In addition to consulting with friends and relatives, consumers today rely on eWOM for valuable information about products, especially in the hospitality, healthcare, e-commerce and tourism industries [3,4,5], meaning their decision making has become influenced by this eWOM information [6]. These reviews are important because they offer consumers an avenue for judging a product’s quality and value before buying it. For this reason, consumers tend to use eWOM to obtain information that reduces their level of uncertainty about a product [7]. According to Bilgihan, Peng and Kandampully [8], eWOM influences more than $10 billion in purchases every year, with 81 percent of people using the Internet to obtain advice from their connections on social networking sites and 74 percent indicating these opinions influenced their decisions.
However, these online reviews are considered an imperfect source since the information posted is generally not subject to an editorial process for verification [9]. More importantly, these reviews are difficult to judge since they may be biased towards a product or a service such as a restaurant [10], and content containing opinion spam, inappropriate or fake reviews written to sound authentic, can easily deceive the information consumer. This opinion spam can range from posting negative reviews about competitors to damage their reputations to posting positive reviews to offset negative ones, as in the case of the Belkin product incident on Amazon, where business development representatives offered to pay people to write positive reviews about the company’s products [11]. Another common issue with eWOM is that people try to remain anonymous, either by choosing not to include their personal information or by providing fake identities with their reviews. This creates doubt in the minds of readers regarding the reliability of the information [12].
As this analysis suggests, trustworthiness is a critical factor in the online review system. A trustworthy review is one “that is perceived by the reader as the honest, sincere, truthful, and non-commercial opinion of a customer who has experienced a product or a service” [13]. This trust relationship leaves a person vulnerable and dependent on the one who is trusted, resulting in giving up some degree of control or power [14,15]. Trust involves being able to predict the behavior, integrity, honesty and moral character of the other person. In face-to-face interactions, people can see a number of cues such as the body language and facial expressions of the person. However, in an online environment, these cues about personal identity are limited, depending only on pictorial or textual information and, thus, affecting the level of trust [14]. In addition to being limited, these cues in the virtual environment may not be credible [16].
To help consumers judge the trustworthiness and credibility of the information and make more informed decisions, many websites such as yelp.com and tripadvisor.com have established supplementary decision aid systems that provide additional cues, for example, a rating score of the product, a linked Facebook profile of the reviewer, a personal profile of the reviewer on the product website, photos uploaded by the reviewer, the number of previous reviews written by the reviewer and user votes in the form of a thumbs up or down on a review of the product (yelp.com, tripadvisor.com). eBay has moved one step further to include one of the simplest and best-known online reputation scoring systems, providing a reputation score for the product seller [17]. On this site, after each transaction, the buyer has the option to leave comments about the seller in addition to providing a positive, negative or neutral rating based on their experience. The seller receiving the rating gains +1 point for a positive rating, 0 for a neutral rating or −1 for a negative rating, which is then added to their feedback score [18].
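The eBay-style feedback score described above amounts to a simple running tally per seller. The sketch below illustrates the mechanism; the function and variable names are ours, not eBay's actual implementation.

```python
# Illustrative sketch of an eBay-style seller feedback score:
# each transaction rating contributes +1 (positive), 0 (neutral),
# or -1 (negative) to the seller's cumulative score.
RATING_POINTS = {"positive": 1, "neutral": 0, "negative": -1}

def feedback_score(ratings):
    """Sum the per-transaction points for a list of rating labels."""
    return sum(RATING_POINTS[r] for r in ratings)

# Three positives offset by one negative and one neutral rating:
score = feedback_score(["positive", "positive", "positive", "neutral", "negative"])
print(score)  # 2
```

Because each rating contributes at most one point, the score grows with transaction history, which is why it doubles as a cue for both reputation and experience.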
Different cues in the virtual environment affecting trust and source credibility, which subsequently influence decision making, have been extensively researched in the domain of hospitality. For example, Sparks and Browning [19] investigated the effect of cues such as review valence, defined as the positive or negative orientation of information about a situation; review framing strategy; review target; and the presence of a consumer-generated numerical rating on the user’s choice and perception of trust [20]. They found a higher level of trust in positively framed reviews, which subsequently increased consumers’ booking intentions. They also observed that easy-to-evaluate information such as star ratings for products played a large role in users’ purchase decisions. Similarly, de Matos and Rossi [21] found the valence of eWOM to play a vital part in influencing consumer responses such as trust and satisfaction. In a study focused on the perceived credibility of reviews, Xie, Miao, Kuo and Lee [22] found that the presence of personal identifying information of a reviewer had a positive effect on the credibility of a review. Further, these researchers observed that the user’s intention to book a hotel was influenced by negative eWOM irrespective of their initial impression, an influence that intensified when the personal identifying information of the reviewer was available. Similarly, Chih, Hsu and Ortiz found that interpersonal determinants, such as tie strength and homophily, and informational determinants, such as source trustworthiness and customer endorsements, had a significant positive effect on perceived eWOM credibility [23]. Kim, Kandampully and Bilgihan found a similar relationship between homophily and tie strength and source credibility, which in turn influenced users’ attitudes towards eWOM [24].
In addition, Ismagilova, Dwivedi, and Slade found that as the frustration level of the user increased, they perceived the helpfulness of the eWOM to be negative [25].
In general, a number of studies have found attribute performance, i.e., the user’s actual experience with the product, to be an important predictor of eWOM behaviors [26,27]. In a study examining the restaurant experiences that trigger positive eWOM, Jeong and Jang [26] found that choosing to share a positive experience is related to the food price, quality, satisfaction and a good atmosphere, whereas according to Boo and Kim [28], consumers with previous eWOM experience, i.e., those who have previously checked online reviews or blogs and prefer to use e-mail as a communication tool, tend to write negative reviews when they have had an unsatisfactory experience. Wang, Li and Liu also found that eWOM information quality affected consumers’ use behavior [29], while Qahri-Saremi and Montazemi found that in addition to a user’s prior experience with online services, their motivation to elaborate on eWOM influenced the negativity bias associated with it [30]. Vermeulen and Seegers [31] found positive eWOM to have stronger effects on improved awareness and consideration for lesser-known hotels. However, Doh and Hwang [32] found that the credibility of eWOM is damaged in the long run if all of it is positive. Similarly, the responses of hotel managers to negative eWOM reduced users’ purchase intentions, as users perceived the responses to be commercially oriented and, thus, less credible [33,34]. Additionally, a user’s previous eWOM experience drove their frequency of and intention towards engaging in eWOM [35]. Similarly, Nam, Baker, Ahmad and Goo found that dissatisfaction with previous experiences led users not to trust the eWOM on an online review website [36], while Liu, Jiang and Zhou found that the experience of the eWOM poster had a positive effect on a user’s trust in the information [37].
In addition, the number of eWOM messages and the platform on which they are posted were also seen to influence consumer decision making, with an increase in the number resulting in a higher probability of the user being willing to buy the product [38,39]. Similarly, Wang and Li found that the more complete the eWOM, the more satisfied the user was with the website [40]. A study by Maslowska, Malthouse, and Viswanathan [41] investigating the effects of price and exposure to a review, in addition to review valence and volume, found the effect of valence on purchase probability to be strongest when the number of reviews was high, consumers read the reviews and the product was expensive. Mauri and Minazzi [33] observed a similar effect for review valence.
Previous research, as explained above, has extensively focused on perceived credibility, trust and consumer behavior, along with the subsequent decision made based on eWOM. However, there is limited research investigating the decision-making process of online consumers when provided with eWOM and other decision aids such as the reputation score of a user, reactions to a review and the number of previous reviews. To address this knowledge gap, especially in the hospitality industry, this study focuses on the decision-making process of consumers in a restaurant evaluation system. Klein’s Data-Frame Theory of Sensemaking was used to characterize the human behavior involved in interpreting data from online review portals [42].

Data-Frame Theory of Sensemaking

The sensemaking process, which is initiated in response to an inadequate understanding of a situation, consists of developing meanings, arranging events into a framework and then questioning the initial perception. Asking questions about the prior perception of a problem or situation increases our understanding of the perceived information, followed by further attempts to obtain and integrate additional information, thus leading to a fuller understanding of the situation. The sensemaking process is the underlying mechanism for the naturalistic decision-making process, explaining how users make sense of the information in a real-world setting [42,43,44,45]. The ultimate goal of sensemaking is to develop an understanding that includes adequate information about the current state of the situation to support informed decision making [46]. Sensemaking is, thus, the process of creating situation awareness in uncertain situations [47,48].
The macrocognitive model proposed by Klein et al. [43] provides an understanding of the cognitive phenomena found in real-world scenarios. This framework consists of 6 elements: planning, problem detection, sensemaking, adaptation, coordination and naturalistic decision making. Sensemaking, which is a key function in this model, is based on the data frame theory of knowledge representation proposed by Minsky [49], who suggested that when people identify a new situation requiring a substantial change in their current viewpoints, they select a structure from memory, called a frame, which is then adapted to fit the new context.
According to Klein, Phillips, Rall and Peluso [45], humans try to make sense of a situation by starting from an explanatory framework, which organizes relationships as causal, spatial, temporal or functional. Specifically, a frame facilitates defining the elements in the scenario and identifying their significance within a context. An important characteristic of this model is the closed loop process introduced through the data frame theory, which suggests that data are used to identify this frame, which, in turn, determines what data are considered next as shown in the top of Figure 1 [42].
According to this model, sensemaking includes the seven activities of mapping the data to the frame, elaborating a frame, questioning a frame, preserving a frame, comparing frames, reframing, and constructing or finding a frame, any one of which can be the starting point for the process. As this analysis of the data-frame model suggests, sensemaking is a complex cognitive activity triggered by a need to find more information and involving finding data based on an initial framework, organizing information into representations, and refining and modifying these representations based on the new information.
In this study, we incorporated the reputation score (rating system) of a user instead of a restaurant to help ensure the authenticity of user comments and to deter consistently false or biased reviews. In addition, we examined the effectiveness of decision aids such as reactions to a review and the number of previous reviews, along with the valence of the reviews and the anonymity of the user posting them. There has been no previous experimental study examining these variables, and we believe our study will help users better understand the authenticity of the information and subsequently make an informed decision. Specifically, this work attempts to answer the following research questions (RQs):
  • RQ1. What is the effect of decision aids and level of anonymity on the level of trust in the reviews?
  • RQ2. What is the effect of decision aids and level of anonymity on the likelihood rating (choosing or rejecting the restaurant) based on the reviews?
  • RQ3. What is the effect of decision aids and level of anonymity on the level of confidence in the decision based on the reviews?

2. Methods

2.1. Participants

A priori power analysis was conducted to calculate the sample size for a medium population effect size (f = 0.25) at a significance level of 0.05 and power of 0.95. This analysis suggested a sample size of 144 participants. However, to be conservative and to divide the participants equally among the four combinations of the two between-subjects variables, we recruited a total of 200 participants with a mean age of 36.8 years (SD = 9.54). These participants, recruited from Amazon Mechanical Turk, a crowdsourcing marketplace, were Mechanical Turk Master Workers, i.e., workers who have consistently completed Human Intelligence Tasks (HITs) and provided high-quality results, as indicated by requester approval rates. The Master Worker qualification is granted based on statistical models that analyze all workers using requester- and marketplace-provided data points [50]. Each participant completed one randomly assigned condition. The demographic details of the participants are provided in Table 1:
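As a rough illustration of such an a priori power analysis, statsmodels can solve for the sample size of a one-way ANOVA at f = 0.25, α = 0.05 and power = 0.95. The paper's figure of 144 reflects the full mixed design (with repeated measures), which this simplified between-subjects sketch does not reproduce; it only shows the general procedure.

```python
from math import ceil
from statsmodels.stats.power import FTestAnovaPower

# Solve for total N in a one-way ANOVA with 4 between-subjects groups,
# medium effect size f = 0.25, alpha = 0.05, desired power = 0.95.
# (Simplified: ignores the within-subject factors of the actual design.)
n_total = FTestAnovaPower().solve_power(
    effect_size=0.25, alpha=0.05, power=0.95, k_groups=4
)
print(ceil(n_total))  # total sample size across the 4 groups
```

Repeated-measures designs generally require fewer participants than this between-subjects approximation, which is consistent with the study's smaller target of 144.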

2.2. Apparatus

The study was created and the data collected using the Qualtrics Research Suite, and the participants accessed and completed it through Amazon Mechanical Turk. It was divided into three sections: an initial pre-test demographic questionnaire; a set of 12 single restaurant reviews and related questions, including whether to go to the restaurant, the level of trust in the information, the likelihood of going to the restaurant and the level of confidence in the decision made; and the NASA-TLX workload assessment questionnaire [51]. Since past research has shown that individuals frequently use yelp.com, a crowdsourced review platform, for information-seeking purposes [52], we downloaded the reviews related to the food and the service of the restaurant from this website.
The reviews were selected based on an analysis of their emotional tone using Linguistic Inquiry and Word Count (LIWC) text analysis software [53]. The emotional tone scores range from 0 to 100, with a score near 100 suggesting a positive tone, i.e., a supportive review, and a score near 0 suggesting a negative tone, i.e., a non-supportive review. In this study, we consistently chose reviews with emotional tone scores of 99 for the supporting reviews and 1 for the non-supporting ones. In addition, to determine the valence of the experimental stimuli, a pilot study was conducted with 10 participants, where each participant was provided with a review and asked to categorize it as supporting or non-supporting. An interrater reliability analysis using the Fleiss’ kappa statistic was conducted to determine the consistency of the scoring among the raters. The raters were in complete agreement on the valence of the review stimuli used for the study, κ = 1.0, 95% CI [0.94, 1.06], p < 0.001. Additionally, a manipulation check was conducted on all levels of the independent variables manipulated in the study.
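The Fleiss' kappa computation used for the pilot ratings can be reproduced with statsmodels. The twelve items and ten raters below are toy data constructed to show complete agreement, not the study's actual pilot responses.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Toy data: 12 review stimuli, each rated by 10 raters as
# 0 = non-supporting or 1 = supporting, with perfect agreement per item.
ratings = np.array([[v] * 10 for v in [1, 0] * 6])  # shape: (12 items, 10 raters)

# Convert the item x rater matrix into per-item category counts,
# then compute Fleiss' kappa on the count table.
table, _ = aggregate_raters(ratings)
kappa = fleiss_kappa(table)
print(kappa)  # 1.0 under complete agreement
```

With every rater agreeing on every item, observed agreement is 1 and kappa reaches its maximum of 1.0, matching the κ = 1.0 reported for the pilot.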

2.3. Experimental Design

The study used a mixed experimental design with the within-subject variables being the reaction to a review, the reputation score and the number of previous reviews:
  • Reaction to a Review: examined at three levels: no reaction, thumbs up and thumbs down.
  • Reputation Score: examined at two levels: high reputation (5 star) and low reputation (1 star).
  • Number of Previous Reviews: examined at two levels: a large number of previous reviews (~1000) and a small number of previous reviews (~10).
The between-subjects variables were valence of the reviews and the level of anonymity:
  • Valence of Reviews: examined at two levels: supporting reviews and non-supporting reviews.
  • Level of Anonymity: examined at two levels: completely anonymous reviews (no personal information was provided) and non-anonymous reviews (personal information such as name, age and location were provided).
Examples of the reviews used in this study can be seen in Figure 2, Figure 3 and Figure 4:
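The crossing of these factors can be enumerated directly: the three within-subject variables yield 3 × 2 × 2 = 12 review stimuli per participant, and the two between-subjects variables yield 2 × 2 = 4 groups. This is an illustrative sketch of the design, not the study's stimulus-generation code.

```python
from itertools import product

# Within-subject factors (each participant sees every combination).
reactions = ["no reaction", "thumbs up", "thumbs down"]
reputations = ["high (5 star)", "low (1 star)"]
prev_reviews = ["large (~1000)", "small (~10)"]

# Between-subjects factors (each participant is assigned one combination).
valences = ["supporting", "non-supporting"]
anonymity = ["anonymous", "non-anonymous"]

within_conditions = list(product(reactions, reputations, prev_reviews))
between_groups = list(product(valences, anonymity))

print(len(within_conditions))  # 12 reviews seen by each participant
print(len(between_groups))     # 4 between-subjects groups
```

The 12 within-subject combinations correspond to the 12 restaurant reviews each participant evaluated, and the 4 groups match the random assignment described in the procedure.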

2.4. Dependent Variables

Five variables were analyzed to determine the effect of the independent variables:
  • Response to the choice question about whether to go to the restaurant (measured on a binary scale as yes/no)
  • Level of trust in the information (measured on a 7-point Likert scale with 1 being the lowest and 7 being the highest)
  • Likelihood of going to the restaurant (measured on a 7-point Likert scale ranging from extremely unlikely to extremely likely)
  • Confidence level in the decision (measured on a 7-point Likert scale with 1 being the lowest and 7 being the highest)
  • NASA-TLX workload (measured on a scale from 0 to 100)
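The NASA-TLX workload score on a 0–100 scale is commonly computed as the mean of six subscale ratings; the paper does not state whether the raw (unweighted) or weighted variant was used, so the raw version is sketched here with made-up subscale values.

```python
# Raw ("unweighted") NASA-TLX: the mean of the six subscale ratings,
# each on a 0-100 scale. The example values below are illustrative only.
SUBSCALES = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

def raw_tlx(ratings):
    """Overall workload as the mean of the six subscale ratings."""
    assert set(ratings) == set(SUBSCALES)
    return sum(ratings.values()) / len(SUBSCALES)

example = {"mental": 60, "physical": 10, "temporal": 40,
           "performance": 30, "effort": 50, "frustration": 20}
print(raw_tlx(example))  # 35.0
```

The weighted variant instead multiplies each subscale by a per-participant importance weight obtained from pairwise comparisons before averaging.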

2.5. Procedure

On the day of the study, participants first read the informed consent form and agreed to participate. They were then randomly assigned to one of the four study conditions shown in Figure 5, after which they completed a pre-test questionnaire asking for demographic information as well as information regarding their experiences using the Internet and social networks. Next, they read a review of a restaurant. To minimize order effects, the restaurant reviews were presented in a randomized order. Each review was followed by a set of questions including one about whether to go to the restaurant, the level of trust in the information, the likelihood of going to the restaurant and the level of confidence in the decision made. After the participants completed all 12 randomly assigned restaurant reviews, they were asked to complete the NASA-TLX workload assessment. Upon completing the experimental study, the participants received their monetary gift.

2.6. Hypotheses

To study the effects of decision aids and the level of anonymity on the decision-making process, the following research hypotheses were developed:
Hypothesis 1 (H1).
The number of previous reviews moderates the relationship between the reputation score and the level of trust. Specifically, the level of trust for a review increases as the number of previous reviews and the reputation score increase.
Hypothesis 2 (H2).
The relationship between the likelihood of going to a restaurant and the reputation score is moderated by the number of previous reviews. Specifically, the likelihood to visit a restaurant increases as the number of previous reviews and the reputation score increase.
Hypothesis 3 (H3).
The reputation score moderates the relationship between the reaction to a review and the level of trust. Specifically, the participants will have an increased level of trust in the review as the reputation score increases and the reaction to a review changes from people disapproving it (thumbs down) to approving it (thumbs up).
Hypothesis 4 (H4).
The relationship between the likelihood of going to a restaurant and the reaction to a review is moderated by the reputation score. Specifically, the likelihood of going to a restaurant increases as the reputation score increases and the reaction to a review changes from people disapproving it (thumbs down) to approving it (thumbs up).
Hypothesis 5 (H5).
The participants will have an increased level of trust when they view a non-anonymous review compared to an anonymous review.
Hypothesis 6 (H6).
The participants will be more likely to visit the restaurant when they view a non-anonymous review compared to an anonymous review.

3. Results

IBM SPSS Statistics 24 was used to analyze the data. An LSD adjustment was applied to the four-way interactions, three-way interactions, simple three-way interactions, simple two-way interactions and simple main effects, with statistical significance being evaluated at the p < 0.05 level. All simple pairwise comparisons were evaluated at an alpha level of 0.05.
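Fisher's LSD amounts to running all pairwise comparisons without a multiplicity adjustment, but only after a significant omnibus test. A minimal sketch with scipy, using made-up trust ratings for the three reaction conditions (not the study's data):

```python
from itertools import combinations
from scipy import stats

# Hypothetical trust ratings per reaction condition (illustrative only).
groups = {
    "thumbs up":   [5.8, 6.1, 5.9, 6.3, 6.0],
    "no reaction": [5.7, 5.9, 6.0, 5.8, 6.1],
    "thumbs down": [4.6, 4.9, 4.7, 5.0, 4.8],
}

# Omnibus one-way ANOVA first ...
f_stat, p_omnibus = stats.f_oneway(*groups.values())

# ... then, if significant, unadjusted pairwise t-tests at alpha = 0.05
# (the LSD procedure: the significant omnibus test is the only gatekeeper).
if p_omnibus < 0.05:
    for a, b in combinations(groups, 2):
        t, p = stats.ttest_ind(groups[a], groups[b])
        print(f"{a} vs {b}: p = {p:.4f}")
```

This mirrors the reporting pattern in the following subsections: an omnibus effect first, then simple pairwise mean differences evaluated at α = 0.05.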

3.1. Trust

A five-way mixed ANOVA was conducted to determine the effects of the reaction to a review, the reputation score, the number of previous reviews, the valence of the reviews and the level of anonymity on trust. The four-way and five-way interactions were not statistically significant. However, there was a statistically significant three-way interaction among reaction to a review, reputation score and valence of the review, F (1.899, 372.282) = 7.47, p = 0.001, ε = 0.950. Within this three-way interaction, we found a statistically significant simple two-way interaction between reaction to the review and reputation score for supporting reviews, but not for non-supporting reviews. Subsequently, a simple main effect and post-hoc analysis, reported below, were conducted to compare the significant mean differences. Figure 6 is a graphical representation of the three-way interactions, and Table 2 provides the resulting mean values of level of trust.
We found a statistically significant simple main effect of reaction to a review with a high reputation score and supporting review, F (1.622, 159.002) = 80.81, p < 0.001, ε = 0.811. The participants trusted the reviews with a thumbs-up reaction more than the thumbs-down reaction, with a significant mean difference of 1.19, 95% CI (0.94, 1.43), p < 0.001 and the no reaction more than the thumbs-down reactions, with a significant mean difference of 1.16, 95% CI (0.94, 1.38), p < 0.001.
We also found a statistically significant simple main effect of reaction to a review with a low reputation score and a supporting review, F (1.746, 171.067) = 8.39, p = 0.001, ε = 0.873. The participants again trusted the reviews with a thumbs-up reaction more than the no reaction, with a significant mean difference of 0.39, 95% CI (0.15, 0.63), p = 0.002; and the thumbs-up reaction more than the thumbs-down reaction, with a significant mean difference of 0.52, 95% CI (0.21, 0.82), p = 0.001.
Overall, these results suggest that the users had a higher level of trust when the reputation score of an author of a review was high compared to a low reputation score. A supporting review with a high reputation and a thumbs up or no reaction resulted in a higher level of trust compared to a thumbs-down reaction. However, with a high reputation, the level of trust was similar between reviews with a thumbs-up reaction and no reaction. For a low reputation score and a supporting review, the level of trust was significantly higher for a thumbs-up reaction than for no reaction and a thumbs-down reaction. This result suggests that with a low reputation score, additional cues in the form of reactions to a review increase trust among the users. When the reviews are non-supporting, there seems to be no difference in the trust among the users irrespective of having a reaction to a review or a reputation score.

3.2. Likelihood

A five-way mixed ANOVA was conducted to determine the effects of reaction to a review, reputation score, number of previous reviews, valence of the reviews and level of anonymity on the likelihood of going to the restaurant. We found a statistically significant four-way interaction among reaction to a review, reputation score, number of previous reviews and valence of the reviews, F (2, 392) = 9.52, p < 0.001. Subsequently, we found a simple three-way interaction between number of previous reviews, reputation score and reaction to the review for both supporting and non-supporting reviews. On further analyzing the simple two-way interactions, significant effects were found between reputation score and reaction to the review with a high number of previous reviews but not for a low number of reviews. Finally, a simple main effect and post-hoc analysis, reported below, were conducted to compare the significant mean differences. Figure 7 is a graphical representation of the four-way interaction, and Table 3 provides the mean values of the likelihood scores.
We found a statistically significant simple main effect of reaction to a review with a large number of previous reviews and a high reputation score for supporting reviews, F (1.510, 147.957) = 66.35, p < 0.001, ε = 0.755. The participants were more likely to go to the restaurant when they viewed a thumbs-up reaction than a thumbs-down reaction, with a significant mean difference of 1.33, 95% CI (1.02, 1.64), p < 0.001, and a no reaction than a thumbs-down reaction, with a significant mean difference of 1.41, 95% CI (1.10, 1.73), p < 0.001.
We also found a statistically significant simple main effect of reaction to a review with a large number of previous reviews and a high reputation score for non-supporting reviews, F (1.876, 187.642) = 4.15, p = 0.019, ε = 0.938. The participants were more likely to go to the restaurant when they viewed a thumbs-down reaction than a no reaction, with a significant mean difference of 0.39, 95% CI (0.12, 0.65), p = 0.005.
The results showed a statistically significant simple main effect of reaction to a review with a large number of previous reviews and a low reputation score for supporting reviews, F (1.676, 164.247) = 42.52, p < 0.001, ε = 0.838. The participants were more likely to go to the restaurant when they viewed the thumbs-up reaction than the no reaction, with a significant mean difference of 0.86, 95% CI (0.54, 1.18), p < 0.001; the thumbs-up reaction than the thumbs-down reaction, with a significant mean difference of 1.38, 95% CI (1.04, 1.73), p < 0.001; and the no reaction than the thumbs-down reaction, with a significant mean difference of 0.53, 95% CI (0.30, 0.75), p < 0.001.
The results also showed a statistically significant simple main effect of reaction to a review with a large number of previous reviews and a low reputation score for non-supporting reviews, F (1.837, 183.709) = 25.83, p < 0.001, ε = 0.919. The participants were more likely to go to the restaurant when they viewed a thumbs-down reaction than a no reaction, with a significant mean difference of 0.55, 95% CI (0.31, 0.78), p < 0.001; a thumbs-down reaction than a thumbs-up reaction, with a significant mean difference of 0.87, 95% CI (0.60, 1.15), p < 0.001; and a no reaction than a thumbs-up reaction, with a significant mean difference of 0.33, 95% CI (0.11, 0.54), p = 0.003.
Overall, these results suggest that the users were more likely to go to the restaurant when they saw a supporting review with a high reputation, a large number of previous reviews and a thumbs-up reaction or no reaction than a thumbs-down reaction. As expected, the users were less likely to go to the restaurant after reading a supporting review with a low reputation score compared to a high reputation score. When reading these reviews with low reputation scores, a thumbs-up reaction acted as an additional cue for the users compared to no reaction. Thus, the users reported a higher likelihood score and higher trust in this information than in reviews with no reaction or a thumbs-down reaction.
A non-supporting review with a thumbs-down reaction indicated that the previous users disagreed with the review and its assessment of the restaurant’s poor quality, while a thumbs-up reaction suggested that the previous users agreed with the review and the restaurant’s poor quality. Hence, users were more likely to go to the restaurant when a non-supporting review had a thumbs-down reaction compared to no reaction or a thumbs-up reaction.

3.3. Probability of Choosing Whether to Go to the Restaurant

A multilevel binomial logistic regression was conducted to predict the probability of choosing whether to go to the restaurant. The no category (not to go to the restaurant) was selected as the initial reference category. We found a statistically significant three-way interaction among reaction to a review, reputation score and valence of review, F (2, 2363) = 3.68, p = 0.025. Within this three-way interaction, we found a statistically significant simple two-way interaction between reaction to the review and reputation score for both supporting and non-supporting reviews. Subsequently, simple main effect and post-hoc analysis was conducted to compare the significant mean differences. All the simple main effect analyses were significant except the reaction to a review for non-supporting reviews with a high reputation. Figure 8 is a graphical representation of the three-way interaction, and Table 4 provides the mean probability values.
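A simplified, fixed-effects version of this analysis can be sketched with statsmodels. The simulated data below are illustrative, and the sketch omits the multilevel structure (random effects for repeated measures per participant) that the actual analysis accounted for.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate one binary go/no-go choice per trial (illustrative data only).
rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "reaction": rng.choice(["none", "up", "down"], n),
    "reputation": rng.choice(["high", "low"], n),
    "valence": rng.choice(["supporting", "non-supporting"], n),
})
# Choices are more likely "go" (1) for supporting reviews.
p = np.where(df["valence"] == "supporting", 0.7, 0.3)
df["go"] = rng.binomial(1, p)

# Fixed-effects logit with the three-way interaction of interest
# (reaction x reputation x valence), reference category = "no go".
result = smf.logit("go ~ C(reaction) * C(reputation) * C(valence)", data=df).fit(disp=False)
print(len(result.params))  # 12 coefficients: 3 x 2 x 2 design cells
```

From the fitted model, predicted probabilities per design cell can then be compared pairwise, analogous to the simple main effect contrasts reported below.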
The results showed a statistically significant simple main effect of reaction to a review with high reputation scores for supporting reviews, F (2, 2363) = 19.96, p < 0.001. The participants had a higher probability of going to the restaurant when they viewed the review with no reaction than the thumbs-down reaction, with a significant mean difference of 0.25, 95% CI (0.18, 0.33), p < 0.001 and the thumbs-up reaction than the thumbs-down reaction, with a significant mean difference of 0.25, 95% CI (0.17, 0.32), p < 0.001.
We found a statistically significant simple main effect of reaction to a review with low reputation scores for supporting reviews, F (2, 2363) = 49.08, p < 0.001. The participants had a higher probability of going to the restaurant when they viewed the thumbs-up reaction than the no reaction, with a significant mean difference of 0.34, 95% CI (0.24, 0.44), p < 0.001; the thumbs-up reaction than the thumbs-down reaction, with a significant mean difference of 0.48, 95% CI (0.38, 0.57), p < 0.001; and no reaction than the thumbs-down reaction, with a significant mean difference of 0.14, 95% CI (0.06, 0.22), p = 0.001.
We also found a statistically significant simple main effect of reaction to a review with low reputation scores for non-supporting reviews, F (2, 2363) = 8.95, p < 0.001, but not for high reputation ones, F (2, 2363) = 1.51, p = 0.221. The participants had a higher probability of going to the restaurant when they viewed the thumbs-down reaction than no reaction, with a significant mean difference of 0.16, 95% CI (0.08, 0.23), p < 0.001 and the thumbs-down reaction than the thumbs-up reaction, with a significant mean difference of 0.17, 95% CI (0.09, 0.24), p < 0.001.
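Each pairwise comparison above is reported as a mean difference with a 95% confidence interval. As a small worked example, assuming a normal approximation (critical value of about 1.96, reasonable at these degrees of freedom), the standard error implied by a reported interval can be recovered from its half-width, and significance can be read off from whether the interval excludes zero:

```python
def implied_standard_error(lower, upper, critical=1.96):
    """Standard error implied by a symmetric CI: half-width / critical value."""
    return (upper - lower) / (2 * critical)

def excludes_zero(lower, upper):
    """A mean difference is significant at the CI's level iff the CI excludes 0."""
    return lower > 0 or upper < 0

# Reported comparison from the text: mean difference 0.25, 95% CI (0.18, 0.33)
print(round(implied_standard_error(0.18, 0.33), 3))  # ~0.038
print(excludes_zero(0.18, 0.33))                     # True: the difference is significant
```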
Similar to the trend found for the likelihood of going to the restaurant, users reading a supporting review with a high reputation score and a thumbs-up reaction or no reaction showed a higher probability of going to the restaurant than with a thumbs-down reaction, whereas for supporting reviews with a low reputation score, a thumbs-up reaction indicated a higher probability than no reaction or a thumbs-down reaction. As explained previously, a non-supporting review with a thumbs-down reaction indicated that the previous users disagreed with the review and its assessment of the restaurant's poor quality. Hence, users had a higher probability of going to the restaurant when such a review had a thumbs-down reaction than when it had no reaction or a thumbs-up reaction.

3.4. Confidence Level

A five-way mixed ANOVA was conducted to determine the effects of reaction to a review, reputation score, number of previous reviews, valence of the reviews and level of anonymity on the confidence level. The four-way and five-way interactions were not statistically significant. However, we found a statistically significant three-way interaction among reaction to a review, reputation score and valence of the reviews, F (1.775, 347.803) = 9.65, p < 0.001, ε = 0.887. Within this three-way interaction, we found a statistically significant simple two-way interaction between the reaction to the review and the reputation score for both supporting and non-supporting reviews. Additional simple main effect and post-hoc analyses were conducted to compare the significant mean differences, as reported below. The simple main effects for both supporting and non-supporting reviews with low reputation scores were not significant. Figure 9 is a graphical representation of the three-way interaction, and Table 5 provides the mean values of confidence level.
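The non-integer degrees of freedom reported above come from a sphericity correction: the reported ε (here 0.887, presumably a Greenhouse-Geisser estimate since ε < 1) scales both degrees of freedom of the repeated-measures F-test. A minimal sketch, with the uncorrected denominator df of 392 assumed for illustration:

```python
def corrected_df(df_num, df_den, epsilon):
    """Sphericity correction: scale both F-test degrees of freedom by epsilon."""
    return df_num * epsilon, df_den * epsilon

# Three reaction levels give an uncorrected numerator df of 2; the denominator
# df of 392 is assumed here for illustration, with epsilon = 0.887 as reported.
num_df, den_df = corrected_df(2, 392, 0.887)
print(round(num_df, 3), round(den_df, 3))
```

This yields approximately (1.774, 347.704), matching the reported F (1.775, 347.803) up to rounding of ε.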
We found a statistically significant simple main effect of reaction to a review with a high reputation score and supporting review, F (1.597, 156.526) = 58.15, p < 0.001, ε = 0.799. The participants were more confident in their decision when they viewed the thumbs-up reaction than the thumbs-down reaction, with a significant mean difference of 0.99, 95% CI (0.75, 1.23), p < 0.001, and the no reaction than the thumbs-down reaction, with a significant mean difference of 0.95, 95% CI (0.73, 1.17), p < 0.001.
We also found a statistically significant simple main effect of reaction to a review with a high reputation score and non-supporting review, F (1.783, 178.348) = 19.25, p < 0.001, ε = 0.892. The participants were more confident in their decision when they viewed the thumbs-up reaction than the no reaction, with a significant mean difference of 0.22, 95% CI (0.03, 0.41), p = 0.022; the thumbs-up reaction than the thumbs-down reaction, with a significant mean difference of 0.67, 95% CI (0.42, 0.92), p < 0.001; and the no reaction than the thumbs-down reaction, with a significant mean difference of 0.45, 95% CI (0.24, 0.66), p < 0.001.
Overall, these results suggest that the users were more confident after reading a supporting review with a thumbs-up reaction or no reaction compared to a thumbs-down reaction. However, when the review was non-supporting, the thumbs-up reaction acted as an additional cue which made the user more confident than no reaction. Although a thumbs-up reaction for a non-supporting review resulted in the users choosing not to go to the restaurant, they were more confident in their decisions. On the other hand, a thumbs-down reaction for a non-supporting review led to a higher likelihood of a user going to the restaurant but with reduced confidence in the decision.

3.5. Non-Significant Results

There was no statistically significant interaction or main effects of the independent variables on the NASA-TLX workload indicators.

4. Discussion

This study examined the effects of the decision aids of reputation score, reaction to a review and number of previous reviews on the decision-making process of users reading an online consumer review as well as the effect of the anonymity of the user posting the reviews and the valence of the reviews. This study applied Klein’s data-frame theory of sensemaking to investigate how the users interpreted this information [42,44]. According to this theory, the initial stimuli act as the anchors for the initial understanding of the situation, forming what is referred to as the initial frame. Based on the literature on reviews and decision aids, we consider the initial frame in the user’s mental model to be formed by the reviews of the restaurant [54]. Subsequent cues shaped the way the reader further developed this initial frame and made sense of the information. The data-frame theory suggests that consumers elaborate their frame by extending their understanding, i.e., by seeking additional data to confirm their initial frames when the information is straightforward and no surprises or inconsistencies are observed [45]. In this study, when the participants were presented with a supporting review with a thumbs-up reaction and a high reputation score, they recognized that the additional cues (decision aids) supported their initial frames. The confirming cues explain their high scores on the likelihood and probability of going to the restaurant, their trust in the information and their confidence in their decisions. A similar sensemaking pattern was observed for supporting reviews with no reaction and a high reputation score, the high reputation score serving as data confirming the users’ initial frames.
According to the data-frame theory, when users are presented with information that contradicts their expectancy, they doubt and question their initial frames [45]. In this study, when the participants were shown supporting reviews with thumbs-up reactions but low reputation scores, they appeared to realize that the latter cue did not support their initial frames, beginning to question whether their previous understanding (based on the initial frame) was correct. According to the data-frame theory, this examination of the accuracy of the frame leads to either preserving it by explaining the inconsistency or seeking a new frame by finding new anchors [45]. Based on the results of this study, the participants explained away the inconsistency in making the decision to go to the restaurant, as they placed significant emphasis on the supporting review and the thumbs-up reaction. However, because of the inconsistency, they had only an average level of trust and confidence in their final decisions. A similar sensemaking pattern of questioning the frame and then preserving it was seen when the supporting reviews were accompanied by thumbs-down reactions and a high reputation score.
When the participants were presented with supporting reviews with thumbs-down reactions and low reputation scores, the data elements were not consistent with their initial frames. Since these cues contradicted the initial frame, the participants replaced it with a new one, i.e., not to go to the restaurant, as confirmed by their low likelihood and probability scores. The mean level of trust measured across the condition was also low, indicating that the participants did not trust this contradicting information. A similar sensemaking pattern of questioning and replacing the initial frame was observed when the participants read a supporting review with a low reputation score and no reaction.
When the participants were presented with a non-supporting review, the initial frame formed was not to go to the restaurant. On seeing a thumbs-up reaction and a high reputation score with this non-supporting review, the participants may not have expected this seeming contradiction in the data elements. However, upon further consideration, they recognized that these additional cues supported the review's claim that the quality of the restaurant was poor, leading them to conclude that the restaurant was not a good choice, a conclusion supported by the low likelihood and probability scores and the relatively high levels of trust and confidence in their decisions. A similar sensemaking pattern was seen for a non-supporting review with a thumbs-up reaction and a low reputation score, albeit with a lower level of confidence since the reputation of the reviewer was low.
When the participants were presented with a non-supporting review with a thumbs-down reaction and a high reputation score, the contradicting information led them to question their initial frames. The reviewer had a high reputation, yet the review had been downvoted, indicating that others disagreed with the negative review. While the participants could have replaced their initial frames and decided to go to the restaurant, the data show that they preserved them, perhaps because of the negative valence of the review. The participants also had little trust in the information due to such contradictions, perhaps also explaining their unwillingness to replace their initial frames. When the participants were shown non-supporting reviews with thumbs-down reactions and low reputation scores, they realized that the cues and their initial frames did not agree, these additional cues causing them to rethink their initial mental models. A thumbs-down reaction to a non-supporting review indicates that previous users disagreed with the negative review, suggesting that the restaurant is not as bad as claimed. While the participants could have developed a new frame to go to the restaurant based on these data elements, the results of this study show that they preserved their initial frames, perhaps because the additional cues did not convince them to replace their initial mental models.
Participants reading a non-supporting review with no reaction and a high reputation score chose to trust the review and confidently decided not to go to the restaurant by elaborating their initial frame with the cues obtained from the reputation score. A similar sensemaking pattern was observed for a non-supporting review with no reaction and a low reputation score as participants decided not to go to the restaurant. However, they were not confident in their decision due to the lack of cues to support their decision and the low reputation score.
The findings from this study indicate that supporting reviews result in a higher likelihood and probability of going to the restaurant compared to non-supporting reviews, results supported by prior studies on the impact of eWOM on the likelihood of completing a scenario [19]. As explained by Fiske [55], people tend to pay more attention to negative information than positive, believing that it suggests a cautious approach. The results from this study agree with this analysis because when the participants read a non-supporting review, they did not replace their initial mental model (i.e., not to go to the restaurant), choosing to be cautious. In addition, like Sparks and Browning [19], this study found that heuristics, like reputation scores as cues, function as indicators for informed and efficient decision making.
This study also extended the previous work of Ponathil, Agnisarman, Khasawneh, Narasimha, and Madathil [56], which examined the effectiveness of reaction to a review alone on the trust, confidence and likelihood scores and found no significant differences between thumbs-up or thumbs-down reactions and no reaction, suggesting that reactions alone do not make a significant contribution to the decision-making process. In contrast, when the reputation score was included as an additional cue in the present study, supporting reviews with a low reputation score produced a significant difference in the dependent variables between a thumbs-up reaction and no reaction, and non-supporting reviews produced a significant difference between a thumbs-down reaction and no reaction. This demonstrates the contribution of the reputation score in conjunction with the reaction to a review in the decision-making process.
We expected that the non-anonymous reviews would enhance the sensemaking process by providing additional cues for decision making [57]. However, we found that the trust in the information, the likelihood and probability of going to the restaurant, and the confidence in the decision were not significantly different between non-anonymous and anonymous reviews. One potential reason may be that the user's personal information included in the non-anonymous review was not sufficient to increase trust in the system; additional data elements beyond name, age and location, such as a photo of the user and/or links to social media accounts, may enhance the sensemaking process. Another potential reason could be that the participants predominantly focused on data elements like the reaction to a review and the reputation score of a user while making a decision; if so, providing other data would not add insight into the primary reason for reading the review, i.e., deciding whether to go to the restaurant.

5. Conclusions and Limitations

This study evaluated the effect of cues in the form of decision aids, as well as the anonymity of the user and the valence of reviews, on the trust in the information, the likelihood and probability of going to the restaurant, and the level of confidence in the decision. As expected, users presented with a supporting review with a thumbs-up reaction and a high reputation score scored the highest on the dependent variables, while supporting reviews with a thumbs-down reaction and a low reputation score had the lowest. Similarly, when the participants read a non-supporting review with a thumbs-down reaction, they decided to go to the restaurant, although they were not confident in their decisions. Like Liu and Park [58], we found that in the online environment, where consumers have limited resources for making an educated decision about a product, information about a reviewer's historical data in the form of decision aids like the reputation score improved the trust in and usefulness of a review. These findings can be implemented in online review portals so that users can decide which information to trust based on the cues. Further, incorporating this practice would perhaps deter people from writing false or biased reviews and improve their accuracy, thus helping users differentiate between the companies that rely on biased reviews to draw customers and those that do not. Future studies could explore this aspect.
This research contributes to the literature by outlining specific aspects of eWOM that consumers consider in making a decision. Our expectation that the participants would report significantly greater trust in the information, likelihood and probability of going to the restaurant, and confidence in their decision when presented with non-anonymous reviews rather than anonymous ones was not met in this study. Future studies could include supplementary personal information in addition to the factors considered here to aid the sensemaking process. Furthermore, since the study was conducted remotely, we could not include a post-test think-aloud session to collect qualitative feedback, a limitation of the study. Future studies could collect these data to help enhance our understanding of the reasoning behind the decisions made as well as provide user feedback on ways to improve trust in the system. We also could not answer any doubts or questions the participants had about the study as a result of it being remote; conducting an in-person study in the future would address this limitation. In addition, future studies could incorporate multiple reviews in a user's decision making instead of a single review, another limitation of this study. Finally, along with multiple reviews, multiple restaurants and different information architecture designs could be explored to determine the most efficient and effective system of information portrayal.

Author Contributions

Conceptualization, A.P., A.G. and K.C.M.; methodology, A.P. and K.C.M.; software, A.P.; formal analysis, A.P. and K.C.M.; data curation, A.P. and K.C.M.; writing—original draft preparation, A.P.; writing—review and editing, A.P., A.G. and K.C.M.; visualization, A.P.; supervision, A.G. and K.C.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gretzel, U.; Yoo, K.H. Use and Impact of Online Travel Reviews. In Information and Communication Technologies in Tourism 2008; Springer: Vienna, Austria, 2008; pp. 35–46. [Google Scholar]
  2. Hennig-Thurau, T.; Gwinner, K.P.; Walsh, G.; Gremler, D.D. Electronic Word-of-Mouth via Consumer-Opinion Platforms: What Motivates Consumers to Articulate Themselves on the Internet? J. Interact. Mark. 2004, 18, 38–52. [Google Scholar] [CrossRef]
  3. Litvin, S.W.; Goldsmith, R.E.; Pan, B. Electronic Word-of-Mouth in Hospitality and Tourism Management. Tour. Manag. 2008, 29, 458–468. [Google Scholar] [CrossRef]
  4. Pantelidis, I.S. Electronic Meal Experience: A Content Analysis of Online Restaurant Comments. Cornell Hosp. Q. 2010, 51, 483–491. [Google Scholar] [CrossRef]
  5. Agnisarman, S.; Ponathil, A.; Lopes, S.; Chalil Madathil, K. An Investigation of Consumer’s Choice of a Healthcare Facility When User-Generated Anecdotal Information Is Integrated into Healthcare Public Reports. Int. J. Ind. Ergon. 2018, 66, 206–220. [Google Scholar] [CrossRef]
  6. Goldenberg, J.; Libai, B.; Muller, E. Talk of the Network: A Complex Systems Look at the Underlying Process of Word-of-Mouth. Mark. Lett. 2001, 12, 211–223. [Google Scholar] [CrossRef]
  7. Ye, Q.; Law, R.; Gu, B.; Chen, W. The Influence of User-Generated Content on Traveler Behavior: An Empirical Investigation on the Effects of E-Word-of-Mouth to Hotel Online Bookings. Comput. Hum. Behav. 2011, 27, 634–639. [Google Scholar] [CrossRef]
  8. Bilgihan, A.; Peng, C.; Kandampully, J. Generation Y’s Dining Information Seeking and Sharing Behavior on Social Networking Sites: An Exploratory Study. Int. J. Contemp. Hosp. Manag. 2014, 26, 349–366. [Google Scholar] [CrossRef]
  9. Johnson, T.J.; Kaye, B.K. Webelievability: A Path Model Examining How Convenience and Reliance Predict Online Credibility. J. Mass Commun. Q. 2002, 79, 619–642. [Google Scholar] [CrossRef]
  10. Houser, D.; Wooders, J. Reputation in Auctions: Theory, and Evidence from eBay. J. Econ. Manag. Strategy 2006, 15, 353–369. [Google Scholar] [CrossRef] [Green Version]
  11. Belkin Caught Paying For Positive Reviews. Available online: https://consumerist.com/2009/01/19/belkin-caught-paying-for-positive-reviews/ (accessed on 10 June 2020).
  12. Rains, S.A.; Scott, C.R. To Identify or Not to Identify: A Theoretical Model of Receiver Responses to Anonymous Communication. Commun. Theory 2007, 17, 61–91. [Google Scholar] [CrossRef]
  13. Filieri, R. What Makes an Online Consumer Review Trustworthy? Ann. Touris. Res. 2016, 58, 46–64. [Google Scholar] [CrossRef] [Green Version]
  14. Tanis, M.; Postmes, T. A Social Identity Approach to Trust: Interpersonal Perception, Group Membership and Trusting Behaviour. Eur. J. Soc. Psychol. 2005, 35, 413–424. [Google Scholar] [CrossRef] [Green Version]
  15. Fetchenhauer, D.; Dunning, D. Do People Trust Too Much or Too Little? J. Econ. Psychol. 2009, 30, 263–276. [Google Scholar] [CrossRef]
  16. Kusumasondjaja, S.; Shanka, T.; Marchegiani, C. Credibility of Online Reviews and Initial Trust: The Roles of Reviewer’s Identity and Review Valence. J. Vacat. Mark. 2012, 18, 185–195. [Google Scholar] [CrossRef]
  17. Resnick, P.; Zeckhauser, R. Trust among Strangers in Internet Transactions: Empirical Analysis of eBay’s Reputation System. In The Economics of the Internet and E-commerce; Emerald Group Publishing Limited: Bentley, UK, 2002; pp. 127–157. [Google Scholar]
  18. Feedback Scores, Stars, and Your Reputation. Available online: http://pages.ebay.com/help/feedback/scores-reputation.html (accessed on 10 June 2020).
  19. Sparks, B.A.; Browning, V. The Impact of Online Reviews on Hotel Booking Intentions and Perception of Trust. Tour. Manag. 2011, 32, 1310–1323. [Google Scholar] [CrossRef] [Green Version]
  20. Frijda, N.H. The Emotions; Cambridge University Press: Cambridge, UK, 1986. [Google Scholar]
  21. De Matos, C.A.; Rossi, C.A.V. Word-of-Mouth Communications in Marketing: A Meta-Analytic Review of the Antecedents and Moderators. J. Acad. Mark. Sci. 2008, 36, 578–596. [Google Scholar] [CrossRef]
  22. Xie, H.J.; Miao, L.; Kuo, P.-J.; Lee, B.-Y. Consumers’ Responses to Ambivalent Online Hotel Reviews: The Role of Perceived Source Credibility and Pre-Decisional Disposition. Int. J. Hosp. Manag. 2011, 30, 178–183. [Google Scholar]
  23. Chih, W.H.; Hsu, L.C.; Ortiz, J. The Antecedents and Consequences of the Perceived Positive eWOM Review Credibility. Ind. Manag. Data Syst. 2020. [Google Scholar] [CrossRef]
  24. Kim, S.; Kandampully, J.; Bilgihan, A. The Influence of eWOM Communications: An Application of Online Social Network Framework. Comput. Hum. Behav. 2018, 80, 243–254. [Google Scholar] [CrossRef]
  25. Ismagilova, E.; Dwivedi, Y.K.; Slade, E. Perceived Helpfulness of eWOM: Emotions, Fairness and Rationality. J. Retail. Consum. Serv. 2020, 53. [Google Scholar] [CrossRef] [Green Version]
  26. Jeong, E.; Jang, S. Restaurant Experiences Triggering Positive Electronic Word-of-Mouth (eWOM) Motivations. Int. J. Hosp. Manag. 2011, 30, 356–366. [Google Scholar] [CrossRef]
  27. Zhang, Z.; Zhang, Z.; Law, R. Positive and Negative Word of Mouth about Restaurants: Exploring the Asymmetric Impact of the Performance of Attributes. Asia Pac. J. Tour. Res. 2014, 19, 162–180. [Google Scholar] [CrossRef]
  28. Boo, S.; Kim, J. Comparison of Negative eWOM Intention: An Exploratory Study. J. Qual. Assur. Hosp. Tour. 2013, 14, 24–48. [Google Scholar] [CrossRef]
  29. Wang, P.; Li, H.; Liu, Y. Disentangling the Factors Driving Electronic Word-of-Mouth Use through a Configurational Approach. Internet Res. 2020, 30, 925–943. [Google Scholar] [CrossRef] [Green Version]
  30. Qahri-Saremi, H.; Montazemi, A.R. Negativity Bias in the Effects of EWoM Reviews: An Elaboration Likelihood Perspective in Online Service Adoption Context. In Proceedings of the European Conference on Information Systems, Marrakech, Morocco, 15–17 June 2020. [Google Scholar]
  31. Vermeulen, I.E.; Seegers, D. Tried and Tested: The Impact of Online Hotel Reviews on Consumer Consideration. Tour. Manag. 2009, 30, 123–127. [Google Scholar] [CrossRef]
  32. Doh, S.-J.; Hwang, J.-S. How Consumers Evaluate eWOM (electronic Word-of-Mouth) Messages. Cyberpsychol. Behav. 2009, 12, 193–197. [Google Scholar] [CrossRef]
  33. Mauri, A.G.; Minazzi, R. Web Reviews Influence on Expectations and Purchasing Intentions of Hotel Potential Customers. Int. J. Hosp. Manag. 2013, 34, 99–107. [Google Scholar] [CrossRef]
  34. Law, R.; Buhalis, D.; Cobanoglu, C. Progress on Information and Communication Technologies in Hospitality and Tourism. Int. J. Contemp. Hosp. Manag. 2014, 26, 727–750. [Google Scholar] [CrossRef]
  35. Yen, C.L.A.; Tang, C.H.H. The Effects of Hotel Attribute Performance on Electronic Word-of-Mouth (eWOM) Behaviors. Int. J. Hosp. Manag. 2019, 76, 9–18. [Google Scholar] [CrossRef]
  36. Nam, K.; Baker, J.; Ahmad, N.; Goo, J. Dissatisfaction, Disconfirmation, and Distrust: An Empirical Examination of Value Co-Destruction through Negative Electronic Word-of-Mouth (eWOM). Inf. Syst. Front. 2018, 1–18. [Google Scholar] [CrossRef]
  37. Liu, Y.; Jiang, D.; Zhou, G. The Effect of eWOM on Tourist Purchase Intentions: The Mediating Effect of Trust. In Proceedings of the International Conference on Education, Management, and Computer, Shenyang, China, 12–14 May 2019. [Google Scholar]
  38. Nieto-García, M.; Muñoz-Gallego, P.A.; González-Benito, Ó. Tourists’ Willingness to Pay for an Accommodation: The Effect of eWOM and Internal Reference Price. Int. J. Hosp. Manag. 2017, 62, 67–77. [Google Scholar]
  39. Ladhari, R.; Michaud, M. eWOM Effects on Hotel Booking Intentions, Attitudes, Trust, and Website Perceptions. Int. J. Hosp. Manag. 2015, 46, 36–45. [Google Scholar] [CrossRef]
  40. Wang, P.; Li, H. Disentangling the Factors Driving User Satisfaction with Travel Review Websites: Content, Social or Hedonic Gratifications. In Proceedings of the Twenty-Third Pacific Asia Conference on Information Systems, Xi’an, China, 8–12 July 2019. [Google Scholar]
  41. Maslowska, E.; Malthouse, E.C.; Viswanathan, V. Do Customer Reviews Drive Purchase Decisions? The Moderating Roles of Review Exposure and Price. Decis. Support Syst. 2017, 98, 1–9. [Google Scholar] [CrossRef]
  42. Klein, G.; Moon, B.; Hoffman, R. Making Sense of Sensemaking 2: A Macrocognitive Model. IEEE Intell. Syst. 2006, 21, 88–92. [Google Scholar] [CrossRef]
  43. Klein, G.; Ross, K.G.; Moon, B.M.; Klein, D.E.; Hoffman, R.R.; Hollnagel, E. Macrocognition. IEEE Intell. Syst. 2003, 18, 81–85. [Google Scholar] [CrossRef]
  44. Klein, G.; Moon, B.; Hoffman, R. Making Sense of Sensemaking 1: Alternative Perspectives. IEEE Intell. Syst. 2006, 21, 70–73. [Google Scholar] [CrossRef]
  45. Klein, G.; Phillips, J.K.; Rall, E.L.; Peluso, D.A. A data-frame theory of sensemaking. In Expertise Out of Context: Proceedings of the Sixth International Conference on Naturalistic Decision Making; Lawrence Erlbaum: New York, NY, USA, 2007; pp. 113–155. [Google Scholar]
  46. Battles, J.B.; Dixon, N.M.; Borotkanics, R.J.; Rabin-Fastmen, B.; Kaplan, H.S. Sensemaking of patient safety risks and hazards. Health Serv. Res. 2006, 41, 1555–1575. [Google Scholar] [CrossRef] [Green Version]
  47. Adams, M.J.; Tenney, Y.J.; Pew, R.W. Situation Awareness and the Cognitive Management of Complex Systems. Hum. Factors 1995, 37, 85–104. [Google Scholar] [CrossRef]
  48. Endsley, M.R. Toward a Theory of Situation Awareness in Dynamic Systems. Hum. Factors 1995, 37, 32–64. [Google Scholar] [CrossRef]
  49. Minsky, M. A Framework for Representing Knowledge. In the Psychology of Computer Vision, 1st ed.; Winston, P.H., Horn, B., Minsky, M., Shirai, Y., Waltz, D., Eds.; McGraw-Hill: New York, NY, USA, 1975; pp. 211–277. [Google Scholar]
  50. Amazon Mechanical Turk. Available online: https://www.mturk.com/worker/help (accessed on 19 June 2020).
  51. Hart, S.G.; Staveland, L.E. Development of NASA-TLX (Task Load Index): Results of Empirical and Theoretical Research. In Advances in Psychology; Hancock, P.A., Meshkati, N., Eds.; Elsevier: Amsterdam, The Netherlands, 1988; pp. 139–183. [Google Scholar]
  52. Hicks, A.; Comp, S.; Horovitz, J.; Hovarter, M.; Miki, M.; Bevan, J.L. Why people use Yelp.com: An exploration of uses and gratifications. Comput. Hum. Behav. 2012, 28, 2274–2279. [Google Scholar] [CrossRef]
  53. Pennebaker, J.W.; Francis, M.E.; Booth, R.J. Linguistic Inquiry and Word Count: LIWC 2001; Mahway, Lawrence Erlbaum Associates: Austin, TX, USA, 2001; Volume 71. [Google Scholar]
  54. Khasawneh, A.; Ponathil, A.; Firat Ozkan, N.; Chalil Madathil, K. How Should I Choose My Dentist? A Preliminary Study Investigating the Effectiveness of Decision Aids on Healthcare Online Review Portals. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Philadelphia, PA, USA, 1–5 October 2018; SAGE Publications Sage: Los Angeles, CA, USA, 2018; pp. 1694–1698. [Google Scholar]
  55. Fiske, S.T. Social cognition and social perception. Annu. Rev. Psychol. 1993, 44, 155–194. [Google Scholar] [CrossRef] [PubMed]
  56. Ponathil, A.; Agnisarman, S.; Khasawneh, A.; Narasimha, S.; Madathil, K.C. An Empirical Study Investigating the Effectiveness of Decision Aids in Supporting the Sensemaking Process on Anonymous Social Media. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Austin, TX, USA, 9–13 October 2017. [Google Scholar]
  57. Thielmann, I.; Heck, D.W.; Hilbig, B.E. Anonymity and incentives: An investigation of techniques to reduce socially desirable responding in the Trust Game. Judgm. Decis. Mak. 2016, 11, 527. [Google Scholar]
  58. Liu, Z.; Park, S. What makes a useful online review? Implication for travel product websites. Tour. Manag. 2015, 47, 140–151. [Google Scholar] [CrossRef] [Green Version]
Figure 1. The data-frame theory of sensemaking (adapted from Klein et al. [45]).
Figure 2. Non-anonymous and non-supporting review of a restaurant with thumbs up, high reputation and high number of previous reviews.
Figure 3. Anonymous and supporting user review of a restaurant with no reaction, high reputation and high number of previous reviews.
Figure 4. Non-anonymous and non-supporting user review of a restaurant with thumbs down, low reputation and small number of previous reviews.
Figure 5. Flow chart outlining study procedure.
Figure 6. Effect of reaction to a review, reputation score and valence of reviews on level of trust.
Figure 7. Effect of reaction to a review, reputation score, number of previous reviews and valence of reviews on likelihood score.
Figure 8. Effect of reaction to a review, reputation score and valence of reviews on mean probability.
Figure 9. Effect of reaction to a review, reputation score and valence of reviews on confidence level.
Table 1. Demographic information.
| Variable (N = 200)                                               | Number | %    |
|------------------------------------------------------------------|--------|------|
| Gender                                                           |        |      |
| Male                                                             | 107    | 53.5 |
| Female                                                           | 93     | 46.5 |
| Education                                                        |        |      |
| High School                                                      | 26     | 13.0 |
| Some college                                                     | 49     | 24.5 |
| Associate degree                                                 | 26     | 13.0 |
| Bachelor’s degree                                                | 85     | 42.5 |
| Graduate degree                                                  | 14     | 7.0  |
| Use of smart phone                                               |        |      |
| Yes                                                              | 197    | 98.5 |
| No                                                               | 3      | 1.5  |
| Use of social networking sites like Facebook, LinkedIn or Twitter |        |      |
| Yes                                                              | 197    | 98.5 |
| No                                                               | 3      | 1.5  |
| Frequency of social network site visit                           |        |      |
| Less often than 1 day a week                                     | 4      | 2.0  |
| 1 to 2 days a week                                               | 17     | 8.5  |
| 3 to 5 days a week                                               | 24     | 12.0 |
| About once a day                                                 | 53     | 26.5 |
| Several times a day                                              | 99     | 49.5 |
| Missing                                                          | 3      | 1.5  |
Table 2. Mean values of level of trust.
| Valence of the Review | Reputation Score | Reaction to a Review | Mean | SD   |
|-----------------------|------------------|----------------------|------|------|
| Supporting            | High             | Thumbs up            | 5.68 | 1.18 |
|                       |                  | No reaction          | 5.66 | 1.11 |
|                       |                  | Thumbs down          | 4.50 | 1.46 |
|                       | Low              | Thumbs up            | 3.70 | 1.41 |
|                       |                  | No reaction          | 3.31 | 1.52 |
|                       |                  | Thumbs down          | 3.18 | 1.62 |
Table 3. Mean values of the likelihood scores.
| Valence of the Review | Number of Previous Reviews | Reputation Score | Reaction to a Review | Mean | SD   |
|-----------------------|----------------------------|------------------|----------------------|------|------|
| Supporting            | High                       | High             | Thumbs up            | 5.95 | 0.92 |
|                       |                            |                  | No reaction          | 6.03 | 0.84 |
|                       |                            |                  | Thumbs down          | 4.62 | 1.53 |
|                       |                            | Low              | Thumbs up            | 4.01 | 1.47 |
|                       |                            |                  | No reaction          | 3.15 | 1.37 |
|                       |                            |                  | Thumbs down          | 2.63 | 1.33 |
| Non-supporting        | High                       | High             | Thumbs up            | 2.12 | 1.37 |
|                       |                            |                  | No reaction          | 2.04 | 1.32 |
|                       |                            |                  | Thumbs down          | 2.43 | 1.26 |
|                       |                            | Low              | Thumbs up            | 2.43 | 1.27 |
|                       |                            |                  | No reaction          | 2.75 | 1.31 |
|                       |                            |                  | Thumbs down          | 3.30 | 1.44 |
Table 4. Mean probability of going to the restaurant.
Valence of Reviews: Supporting

| Reputation | No Reaction | Thumbs-Up Reaction | Thumbs-Down Reaction |
|------------|-------------|--------------------|----------------------|
| High       | 0.99        | 0.98               | 0.73                 |
| Low        | 0.25        | 0.59               | 0.11                 |

Valence of Reviews: Non-Supporting

| Reputation | No Reaction | Thumbs-Up Reaction | Thumbs-Down Reaction |
|------------|-------------|--------------------|----------------------|
| High       | 0.04        | 0.05               | 0.08                 |
| Low        | 0.10        | 0.09               | 0.25                 |
Table 5. Mean values of confidence level.
| Valence of the Review | Reputation Score | Reaction to a Review | Mean | SD   |
|-----------------------|------------------|----------------------|------|------|
| Supporting            | High             | Thumbs up            | 5.76 | 1.00 |
|                       |                  | No reaction          | 5.72 | 0.95 |
|                       |                  | Thumbs down          | 4.76 | 1.26 |
| Non-supporting        | High             | Thumbs up            | 5.69 | 1.25 |
|                       |                  | No reaction          | 5.47 | 1.25 |
|                       |                  | Thumbs down          | 5.02 | 1.40 |

Share and Cite

MDPI and ACS Style

Ponathil, A.; Gramopadhye, A.; Chalil Madathil, K. Decision Aids in Online Review Portals: An Empirical Study Investigating Their Effectiveness in the Sensemaking Process of Online Information Consumers. Multimodal Technol. Interact. 2020, 4, 32. https://doi.org/10.3390/mti4020032

