Review

Examining Different Factors in Web-Based Patients’ Decision-Making Process: Systematic Review on Digital Platforms for Clinical Decision Support System

1 Department of Computing Engineering, Gachon University, Seoul 13120, Korea
2 Department of Physics, Charles E. Schmidt College of Science, Florida Atlantic University, Boca Raton, FL 33431-0991, USA
3 Department of Management Sciences, Shaheed Zulfikar Ali Bhutto Institute of Science and Technology, Islamabad 44320, Pakistan
4 Department of Unmanned Vehicle Engineering, Sejong University, Seoul 05006, Korea
* Author to whom correspondence should be addressed.
Int. J. Environ. Res. Public Health 2021, 18(21), 11226; https://doi.org/10.3390/ijerph182111226
Submission received: 16 September 2021 / Revised: 18 October 2021 / Accepted: 21 October 2021 / Published: 26 October 2021
(This article belongs to the Special Issue Health Data: Tools for Decision-Making)

Abstract

(1) Background: The emergence of physician rating websites (PRWs) has raised researchers’ interest in the online healthcare field, particularly in how users draw on the information available on PRWs (online physician reviews and provider information) in their decision-making process. The aim of this study is to systematically review the early scientific literature on digital healthcare platforms, summarize key findings and study features, identify deficiencies in the literature, and suggest digital solutions for future research. (2) Methods: A systematic literature search of key databases was conducted for articles published between 2010 and 2020, and 52 peer-reviewed papers were identified that focused on PRWs, the different signals offered as PRW features, and the findings of these studies. The study features and main findings are reported in tables and figures. (3) Results: The review of the 52 papers identified 22 articles on online reputation, 15 on service popularity, 16 on linguistic features, 15 on doctor–patient concordance, 7 on offline reputation, and 11 on trustworthiness signals. Of the 52 studies, 75% used quantitative techniques, 12% employed qualitative techniques, and 13% were mixed-methods investigations. Most studies (44/52) retrieved large datasets using machine learning techniques. The studies were mostly conducted in China (38), the United States (9), and Europe (3). The majority of signals were positively related to clinical outcomes. Few studies used conventional surveys of patient treatment experience (5, 9.61%) or panel data (9, 17%). Overall, these studies found a high degree of correlation between these signals and clinical outcomes. (4) Conclusions: PRWs contain valuable signals that provide insights into service quality and patients’ treatment choices, yet they have not been extensively used for evaluating the quality of care.
This study offers implications for researchers to consider digital solutions such as advanced machine learning and data mining techniques to test hypotheses regarding a variety of signals on PRWs for clinical decision-making.

1. Introduction

The interaction between technological advancement and social change resulted in the development of physician rating websites (PRWs), a novelty made possible by the emergence and rapid growth of the internet [1]. PRWs offer a unique source of information about healthcare service quality from the patients’ viewpoint: patients can rate the quality of the service they experienced when interacting with a physician. Patients may post ratings or comments on their experience with a physician, or read the assessments of peer patients before choosing a physician [2]. Physicians perceive PRWs as important because patients’ perceptions of healthcare quality are made publicly available, which significantly enhances the relevance of patient satisfaction for creating positive word of mouth (WOM). At the same time, reviews posted by patients on PRWs provide recommendations for strengthening and improving overall satisfaction with physicians’ quality of care [3].
Recently, there has been growing interest in PRW usage, which has become a part of everyday life for many people; the Internet has enabled the massive growth of PRWs. PRWs are organized in a manner similar to other rating sites (for instance, for tourism, hotels, or restaurants). Rating sites for search goods (products) and experience goods (hotels and restaurants) have long been popular, but such internet-based rating platforms are fairly new in the medical domain. A variety of investigations have been performed using PRWs in different countries, such as RateMDs [4], Healthgrades [5], and Vitals [6] in the U.S.; Haodf [7] in China; Jameda [8] and Weisse Liste [3] in Germany; and Iwantgreatcare [9] in the U.K. The culture of reviewing in healthcare developed in parallel with a shift in the patient–physician relationship: the conventional bond between doctor and patient has given way to a patient-centered approach, and as a result, patients play a more authoritative role in their health decision-making [10].
PRWs offer several benefits to patients. They provide valuable information, help patients search for physicians with high technical skills, and assist them in selecting the most suitable physician. PRWs can also boost treatment quality and foster trustworthy relationships between doctors and patients [11]. Patients depend on PRWs more frequently when the information they seek is specific to their requirements. PRWs also have drawbacks: physicians fear that PRWs promote negative feedback [7]. However, a study of online physician reviews (OPRs) found that these reviews were overwhelmingly positive. Moreover, questions have been raised about the representativeness and scientific validity of such reviews, particularly by healthcare providers and health organizations [12]. PRWs are effective when patients search for structural details (e.g., service availability, operating hours, and office location) rather than process or outcome aspects. PRWs are capable of providing information on outcome measures, but in that respect they can cause confusion and pose risks to the individual being evaluated [13].
In our context, healthcare providers may not serve consumers’ best interests and may benefit from information asymmetry. For example, providers who run their own clinics or hospitals may refer patients to those facilities for unnecessary care and benefit financially from doing so. To reduce this information asymmetry, PRWs offer patients different signals that aid their choice of a specific physician. In healthcare, the signaling mechanism is important for the following reasons. First, by delivering informative signals, a good doctor can credibly convey medical service quality to patients [14]. Second, on PRWs, patients can put pressure on their physicians by obtaining a second opinion. In the online healthcare context, physicians send signals about their service quality to patients; upon receiving this information, patients may revise their assessment of a physician’s service quality and change their choice of physician [14]. When information is asymmetric, doctors need to send meaningful signals that efficiently and reliably inform patients who are otherwise under-informed about a particular doctor’s healthcare quality. The selection of appropriate signals is vital to a PRW’s success, because different signals express different types of information and eventually lead to different outcomes.
The growing importance of PRWs in patients’ decision-making process has resulted in an increasing number of research studies on PRWs [15,16]. Some researchers have argued for treating PRWs more systematically [1,17]. Others have measured and reviewed public perceptions and use of PRWs in evaluating their quality [2,18]. Researchers have found that research on the usage of PRWs remains limited [10,15]. To the best of our knowledge, there is no systematic review of the different signaling mechanisms (online and offline signals) generated by the market and by sellers on PRWs. Therefore, we performed a systematic analysis to summarize the study features, research designs, analytical methods, and main findings of current PRW studies. We describe the different PRW research patterns, identify shortcomings in the literature, and make recommendations for future research.

2. Methods

The current research consists of a systematic and detailed analysis of the literature on the different signals related to PRWs in healthcare. We adopted the recommendations of Hong, Liang, Radcliff, Wigfall, and Street [15]. A systematic literature analysis has a threefold objective: planning the review, carrying it out, and reporting the critical findings. This leads to an in-depth understanding based on a theoretical review of existing research. We began designing the systematic review in May 2019. The search for publications was performed from November 2019 onward, with several rounds of refinement and improvement.

2.1. Planning the Systematic Review

The review protocol was developed based on the recommendations of the Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols statement [19]. The systematic review also adheres to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [20]. The protocol is under review for registration with the International Prospective Register of Systematic Reviews.
We designed the review by formulating research questions aligned with our research goals and then specified the search strategy, search strings, inclusion/exclusion criteria, and the reporting of findings. A brief explanation of each is presented below.

2.2. Research Questions

This paper addresses the following questions:
RQ1. How can the interaction of online and offline signal transmission on PRWs provide benefits regarding patients’ choice for a health consultation?
RQ2. What are the reviewed studies’ dynamics and analytical approaches involved in patients’ decision-making process?

2.3. Search Strategy and Criteria

To carry out our research, we followed the guidelines provided by the Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) [19]. The search strategy was used to identify all available studies relevant to the research questions. We retrieved relevant studies published between 2010 and 2020 from major databases, digital libraries, and published proceedings, including PubMed, EMBASE, Google Scholar, Scopus, Web of Science (Clarivate Analytics), Science Direct, Emerald, Taylor & Francis, Springer, Sage, ACM, Wiley, and IEEE, in January 2021 (refer to Table 1). Keywords from previously published studies were used as search terms, including health rating platforms, physician rating websites, review sites, online reviews, online physician reviews, online ratings, patient online reviews, healthcare quality, e-health, and digital health. The Boolean operator AND was applied between keywords, and matching keywords were searched for in paper titles and abstracts across the different databases. Finally, we applied HistCite (a software program that simplifies bibliometric analysis and visualization for researchers) to the collected literature.
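To make the keyword combination concrete, the short sketch below assembles pairwise AND queries from a subset of the review’s keyword list. The pairing logic and phrase quoting are illustrative assumptions, not the authors’ actual database syntax:

```python
# Sketch: assembling Boolean search strings from the review's keyword list.
# The keywords come from the paper; the pairing scheme is illustrative only.
from itertools import combinations

keywords = [
    "physician rating websites", "online physician reviews",
    "patient online reviews", "healthcare quality", "digital health",
]

def boolean_queries(terms):
    """Join every pair of keywords with AND, quoting multi-word phrases."""
    quoted = [f'"{t}"' for t in terms]
    return [f"{a} AND {b}" for a, b in combinations(quoted, 2)]

queries = boolean_queries(keywords)
print(len(queries))   # C(5, 2) = 10 pairwise queries
print(queries[0])     # "physician rating websites" AND "online physician reviews"
```

In practice, each database has its own field syntax (e.g., title/abstract filters), so such generated strings would be adapted per database.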
Figure 1 shows the initial search phase using the above keywords, which retrieved 1281 articles. After screening titles and abstracts and removing 10 duplicates identified through the snowballing technique, 88 articles remained. These were reviewed against the inclusion/exclusion and quality evaluation criteria (see Section 2.4 and Section 2.5), and 45 publications satisfied the criteria. Reference searches of these 45 papers added 7 more articles through cross-referencing. In total, we identified 52 articles to include in the review (see Table 2). Articles reporting results based on the same data source, research design, and research questions were counted as one study.

2.4. Inclusion and Exclusion Criteria

The inclusion and exclusion criteria for the retrieved studies are listed in Table 3. The quality criteria used by other researchers were adopted in this systematic analysis [21,22,23,24]. One of the essential criteria tested for the inclusion and exclusion of studies is methodological quality. The quality of each published paper was evaluated using the following indicators.
  • First, is the selected paper indexed in the Web of Science or Scopus database?
  • Second, is the study aim/objective clear?
  • Third, is the research context dealt with well?
  • Finally, are the research findings sufficient for our research purpose?

2.5. Quality Assessment

The evaluation of research quality can be used to guide the interpretation of the synthesis [25]. The quality criteria employed by others were used for this systematic analysis [22,23]. The quality of each included study was assessed against the requirements outlined in Section 2.4, as shown in Table 4. With the first criterion (C1), we assessed whether the researchers clearly stated the study aim/objective; the majority of studies (88%) responded favorably to this question. With criterion (C2), we assessed whether the research context was adequately addressed and articulated; overall, 92% of the studies responded favorably. The final criterion (C3) assessed whether the research findings were sufficient for our research purpose. For the heuristic quality grades, two reviewers (AS and WM) examined and analyzed all of the studies, and a third independent expert (RN) resolved conflicts between the two independent reviewers. The quality scores of the 52 included papers, as evaluated by the 3 reviewers, are shown in Table 5.

2.6. Data Extraction and Synthesis

At this point in the analysis, the selected papers were synthesized and classified according to the various PRW characteristics that affect patients’ decision-making process. To address the research questions, the relevant data from the 52 papers were collected, analyzed, and summarized. Two co-authors of this study performed content analysis by reviewing all articles included in the review [23]. Both co-authors developed a form for recording thoughts, concepts, contributions, and findings for each of the 52 studies; this form supported the subsequent higher-order analysis. The following data were extracted from each publication: signals used on the PRWs, time and place of the study, signal transmission across different disease specialties, number of reviews on different PRWs, study design and technological roadmap adopted, and key findings. Inter-rater agreement between the two researchers, calculated using Cohen’s kappa, was 0.83, indicating good agreement. The researchers then listed the main findings from each article and discussed their differences until agreement was reached.
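The inter-rater agreement statistic reported above can be computed from first principles. The sketch below implements Cohen’s kappa in plain Python on invented include/exclude decisions for ten hypothetical papers (the rater labels are illustrative, not the review’s actual data):

```python
# Sketch: Cohen's kappa for two raters' inclusion decisions.
# The review itself reports kappa = 0.83; the data below is made up.
from collections import Counter

def cohen_kappa(r1, r2):
    """Kappa = (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(r1)
    observed = sum(a == b for a, b in zip(r1, r2)) / n
    c1, c2 = Counter(r1), Counter(r2)
    expected = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical include/exclude calls on 10 candidate papers
rater_a = ["in", "in", "out", "in", "out", "in", "in", "out", "in", "in"]
rater_b = ["in", "in", "out", "in", "in", "in", "in", "out", "in", "out"]
print(round(cohen_kappa(rater_a, rater_b), 2))  # 0.52 for this toy data
```

Values above roughly 0.8, like the 0.83 reported, are conventionally read as strong agreement.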

3. Findings

This section presents the findings of our review in the context of our research questions.

3.1. Overview of Publications

As indicated in the previous section, we identified 52 articles, of which around 16% (8) were published in conference proceedings and 84% (44) in journals. The distribution of the included articles, published from 2010 to 2020, is shown in Figure 2. Figure 2 also indicates that 75% used quantitative techniques, 12% employed qualitative techniques, and 13% were mixed-methods investigations.

3.2. Evaluation Criteria and Statistical Analysis of the Signaling Mechanism

The signaling mechanisms include online and offline signals produced by different online and offline sources. Online signals are generally produced by both marketers and sellers, whereas sellers are responsible for generating offline signals. The effectiveness of these signals on patients’ choice is evaluated by the adjusted R2, standardized coefficients (β), and significance value.
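As an illustration of the evaluation statistics just named, the sketch below computes R2, adjusted R2, and the standardized coefficient for a one-predictor regression from first principles, on invented data; the reviewed studies would use full statistical packages and multi-predictor models:

```python
# Sketch: R^2, adjusted R^2, and standardized beta for simple OLS,
# computed by hand on made-up signal/outcome data.
import math

def simple_ols_stats(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    syy = sum((b - my) ** 2 for b in y)
    slope = sxy / sxx
    r2 = sxy ** 2 / (sxx * syy)                  # R^2 for one predictor
    adj_r2 = 1 - (1 - r2) * (n - 1) / (n - 2)    # adjusted for k = 1 predictor
    beta_std = slope * math.sqrt(sxx / syy)      # standardized coefficient
    return r2, adj_r2, beta_std

# Hypothetical signal (e.g., online rating) vs. outcome (e.g., appointments)
x = [3.1, 3.8, 4.0, 4.4, 4.7, 4.9]
y = [10, 14, 13, 18, 21, 24]
r2, adj_r2, beta = simple_ols_stats(x, y)
print(round(r2, 3), round(adj_r2, 3), round(beta, 3))
```

With a single predictor, the standardized beta equals the Pearson correlation, so beta squared reproduces R2; adjusted R2 penalizes for the number of predictors.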

3.3. RQ1. How Can the Interaction of Online and Offline Signal Transmission on PRWs Provide Benefits Regarding Patients’ Choice for a Health Consultation?

Information asymmetry is most likely to exist between sellers and consumers in online markets [26]. It may allow sellers to unduly influence consumers’ purchase behavior, misrepresent their own performance, and engage in fraudulent activities. These concerns apply especially to credence goods, such as online healthcare services. As discussed earlier, the services offered in the online healthcare market are credence goods: doctors know more than patients about both their service quality and the patients’ health status [27]. Consequently, information asymmetry is even more severe in the online healthcare environment.

3.3.1. Physician’s Online Reputation

The physician’s online reputation refers to patients’ perceptions regarding the physician’s online assessment after each interaction [28]. To select a competent physician, health consumers take suggestions from their family members and friends to obtain WOM information. Online reputation is a part of this eWOM information. PRWs allow patients to review the abundant information about various physicians and then use this eWOM information for their health consultation; the information contains a physician’s online reputation for medical services. Out of 52 studies, 22 (22/52, 42%) reported online reputation as a signal of a physician’s healthcare quality [5,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48]. The existing WOM literature validates eWOM as a valuable and efficient channel for disseminating information regarding providers’ reputation to consumers.
In this context, Yang, Guo, Wu, and Ju [47] indicated that online reputation in the form of ratings and experience from others could be used to reduce information asymmetry, which further helps in patients’ decision-making process. Furthermore, Li, Tang, Yen David, and Liu [38] investigated the positive impact of online reputation on the number of physician bookings.
Hence, the above arguments suggest that ratings from peer consumers influence a patient’s choice; healthcare providers could design some programs in encouraging users to rate their doctors and write OPRs in order to fulfill their needs, evaluate healthcare quality, and enhance users’ collaborations in PRWs.

3.3.2. Physician’s Online Effort

On PRWs, physicians conduct online activities that convey quality information in the form of benevolent actions. These activities constitute the physician’s online effort, defined as “the amount of energy ‘spent’ by a doctor on an act per unit of time” [38]. A total of 14 (27%) studies examined physicians’ online effort [14,28,31,32,38,49,50,51,52,53,54,55,56,57]. For example, Liang et al. [58] found that the online efforts and reputations of physicians have a significant impact on the number of new patients. Li, Tang, Jiang, Yen, and Liu [49] and Li, Tang, Yen David, and Liu [38] reported that knowledge contributions, as a form of online effort, are significantly related to patients’ choice of a specific physician.

3.3.3. Service Popularity

Popularity is the degree to which people are familiar with a product, assessed from the perspective of quantity. On PRWs, physicians use multiple channels to signal their popularity [45]. The use of online healthcare services can reveal a physician’s service popularity and decrease the perceived risk of low-quality services. For instance, Gao et al. [59] found that service popularity is an important factor in shaping patients’ choices. In our systematic review, 15 (29%) studies reported physician popularity as an indicator in patient decision-making [31,38,45,50,51,53,59,60,61,62,63,64,65,66,67]. Thus, the role of e-health in measuring the popularity of provider services has grown in recent years.

3.3.4. Linguistic Signaling

PRWs contain a wealth of linguistic signals and are a valuable resource for people seeking health information and social support. Information quality, as a linguistic signal, refers to a message’s persuasive strength, commonly measured in terms of relevance, timeliness, accuracy, and comprehensiveness [52]. Reviews posted by different users vary in length, accuracy, comprehensiveness, tone, domain, and even logic. In an online environment, users therefore perceive information about a particular activity in light of their own expectations and requirements. Most of the literature employs “argument quality” or “information quality” to measure information quality as a predictor of users’ behavior in the healthcare domain. In this literature review, we found 17 (32.69%) articles examining the impact of linguistic signals on patients’ behavior [52,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82].

3.3.5. Doctor–Patient Concordance Signals

D–P concordance refers to the agreement between a patient and his/her physician regarding the diagnosis and treatment of a condition [83]. Prior studies discussed the concept of D–P concordance, and Banerjee and Sanyal [84] found that strong D–P concordance (agreement) signals the physician’s better trust, which in turn leads to patient satisfaction. According to Audrain-Pontevia et al. [85], patient empowerment and patient commitment are significant indicators of D–P concordance and patient compliance. These findings highlighted the improvement in D–P concordance. In the comprehensive analysis of the previous literature, we identified 15 (29%) articles related to D–P interaction, concordance, and their association with patients’ behavior [29,73,83,84,85,86,87,88,89,90,91,92,93,94,95].

3.3.6. Physician’s Offline Reputation

Most sellers launch online platforms to extend their traditional offline channels and survive in a competitive market. Through such channel extension, users gradually shift from single-channel to multi-channel use.
In our research context, patients consume the commodity (healthcare) provided by physicians, so the offline reputation of an individual physician is essential to a patient deciding on that physician’s healthcare quality. Offline reputation refers to the physician’s medical status (his/her title), assigned by the government according to the physician’s competency and ability. A physician with a high reputation transmits a credibility signal to the receiver. For instance, Liu, Guo, Wu, and Wu [29] found a positive impact of a physician’s offline reputation on patients’ choice. In the patient’s health consultation decision, signals such as status affect a provider’s reputation. In this literature review, seven (13%) studies focused on the effect of physicians’ offline reputation on patients’ decision-making process [28,38,53,96,97,98,99].

3.3.7. Physician Trustworthiness Signals

PRWs are a means for physicians to signal their trustworthiness. Trustworthiness is defined as the provider’s ability (credibility) and willingness to provide support and advice in the consumer’s best interest [100]. In other words, trust comprises beliefs and expectations about an exchange partner’s credibility [5]. Researchers have argued that a patient’s trust in a physician is important for the relatively unfamiliar e-health relationship between physician and patient, and have used medical board data and other web sources to measure credibility. They found that, when ratings were linked with state medical board data, board-certified doctors, highly experienced doctors, and doctors who graduated from higher-ranked schools had superior ratings [5]. It has also been found that a greater number of unsatisfactory ratings was associated with a history of malpractice claims or medical board actions and sanctions identified on the doctor’s page [101]. Moreover, the more awards a physician had received, the more credible s/he was perceived to be [5]. In this review, we found 11 (21%) studies that reported on physician credibility and its impact on patients’ decision-making [5,12,40,44,78,100,102,103,104,105,106].

3.4. RQ2. What Are the Reviewed Studies’ Dynamics and Analytical Approaches Involved in Patients’ Decision-Making Process?

Descriptive statistics were performed to capture the data collection procedure, variables, the country where the research was conducted, research methods, and findings.

3.4.1. Time and Place of the Study

Although PRWs have been available for over two decades, the earliest research on signal transmission on PRWs was published in 2010 [107], and the majority of studies (61/63, 96.8%) were conducted after 2015. As shown in Figure 3, most of the studies were conducted in China (38) [7,15,28,30,31,32,33,34,37,38,40,44,45,46,47,48,49,50,51,52,53,54,56,60,61,63,65,73,81,104,106,108,109,110,111,112,113], followed by the U.S. (9) [4,5,14,34,35,67,78,99,114], Germany (2) [3,115], the U.K. (1) [9], Korea (1) [36], and others (1) [116].

3.4.2. Physician Rating Websites

The majority of studies (41/52, 78.84%) considered a single PRW for data analysis using different signals. Several studies used multiple PRWs (6/52, 11.53%), and the rest performed primary data analysis using survey questionnaires and similar instruments (5/52, 9.62%). The PRWs used in these investigations differed across countries. In China, a large number of signal transmissions accrued on the most popular specialized PRWs, Haodf and Guahao. RateMDs, HealthGrades, and Vitals [4,5,6,117] were the most popular and most frequently used rating platforms in the U.S., whereas Jameda [3,115] and Iwantgreatcare (a subsidiary of the National Health Service Choices website) [9] were the most popular PRWs in Germany and the U.K., respectively (see Figure 4).

3.4.3. Signal Transmission across Different Contexts and Disease Specialties

Of the 52 studies, one (1/52, 1.92%) examined signals in a hospital context, including hospitals, clinics, emergency departments, and nursing homes [36], and 51 (51/52, 98.08%) focused on physicians across disease specialties. Of the 51 studies that reported different physician signals, nine (9/51, 17.65%) involved different types of physicians (general practitioners and specialists), one (1/51, 1.96%) reported on dentists [8], and the remaining 41 (41/51, 80.39%) focused on disease specialists, including cardiologists, oncologists, neurologists, orthopedists, pulmonologists, ENTs, endocrinologists, dermatologists, urologists, and Ob/Gyns. Of these 41 studies on disease specialists, 34 (34/41, 82.93%) covered multiple specialties.

3.4.4. Number of Reviews by PRWs

The number of physicians reviewed in these studies ranged from 512 to 178,740, and the number of OPRs analyzed ranged from 3000 to 1,274,255. The number of OPRs involved in the analyses has increased dramatically over the past 10 years. The highest number of reviews was found on Haodf with 56,334 reviews, followed by Guahao with 28,298, RateMDs with 24,233, Healthgrades with 19,233, Vitals with 15,465, Yelp with 11,675, Jameda with 9876, and Iwantgreatcare with 7656. Overall, 347/351 (98.86%) board-certified physicians had been reviewed on at least one of the eight websites involved.

3.4.5. Study Design and Technological Roadmap Adopted

Several articles (5/52, 9.61%) were descriptive, reporting only frequency analyses, including the average number of ratings per physician as a proxy of online reputation, the proportion of physicians that had been reviewed online, and the average rating score of OPRs. Research that covered all kinds of disease specialists usually collected OPRs directly from PRWs without a preselected list of doctors.
A considerable number of articles (39/52, 75%) were quantitative. Articles that focused on healthcare organizations and different specialties identified the provider’s specialty from the perspective of disease mortality from a state disease control and prevention website. In contrast, studies that focused on all types of specialties retrieved data directly from PRWs without considering a list of specialties.
Six (12%) articles were based purely on qualitative analysis of OPRs, using different computational methods to retrieve major themes from patients’ comments.
A total of seven (13%) articles also analyzed unstructured comments of OPRs along with quantitative analysis using advanced text mining and artificial intelligence techniques such as natural language processing (NLP) and sentic computing models [5,35,52,71,95]. Approximately five (9.61%) articles describing quantitative studies used traditional survey methods to predict patients’ behavior (i.e., clinical outcomes, healthcare quality, and patient satisfaction) [16,73,85,115,118].
The unstructured comments of OPRs were analyzed in a total of 14 (26.92%) articles. Earlier research used traditional qualitative content analysis to retrieve the key themes in these OPRs [7,37]. More advanced techniques such as NLP have been used in recent articles [4,33,34,35,36,37,112,114]. For example, topic models such as Latent Dirichlet Allocation (LDA) surface the different themes in online reviews by linking each review to one or more latent topics; topic modeling has been used extensively to identify themes in unstructured data across domains, particularly healthcare. In addition, one article used PRWs from two different countries (China and the United States) for analysis [34].
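As a deliberately simplified stand-in for the topic-modeling step described above (real studies apply LDA via libraries such as scikit-learn or gensim), the sketch below approximates theme extraction by term frequency after stop-word removal, applied to invented review snippets:

```python
# Sketch: naive theme extraction from review text. This is NOT LDA;
# it only illustrates the preprocessing + term-counting idea on toy data.
import re
from collections import Counter

STOPWORDS = {"the", "was", "and", "my", "very", "a", "to", "me", "is", "of"}

reviews = [
    "The doctor attitude was kind and the diagnosis was clear",
    "Parking was difficult but the treatment cost was reasonable",
    "Clear diagnosis, kind attitude, short wait",
]

def top_terms(docs, k=3):
    """Return the k most frequent non-stopword terms across all documents."""
    counts = Counter()
    for doc in docs:
        counts.update(w for w in re.findall(r"[a-z]+", doc.lower())
                      if w not in STOPWORDS)
    return [term for term, _ in counts.most_common(k)]

print(top_terms(reviews))
```

A real LDA pipeline would instead learn per-topic word distributions and per-review topic mixtures, yielding themes such as “doctor attitude” or “treatment cost” as clusters rather than single terms.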
Finally, a few articles used secondary panel data to conduct analysis (9/52, 17%). Appendix A lists the number of articles focusing on e-health and also provides details about the core components of the PRWs regarding patients’ decision-making toward the physician’s quality of service and explained variable(s).

3.4.6. An Overview of the Findings of Online Physician Reviews

Most patients commented favorably on their physician’s reputation and claimed that they would recommend their physician to their circle of friends and family. Of the 52 articles, 22 (22/52, 42.30%) reported average rating scores of OPRs ranging from 2.81 to 4.62 on a 5-point scale, with a median score of 4 and a mean of 3.95. The articles that analyzed patients’ unstructured comments found that these comments covered different aspects of healthcare, including doctor value, treatment/operational process, doctor attitude, convenient hospital location, disease diagnosis, patient visit process, medical ethics (relational conduct), medical examination, physician knowledge and confidence, parking availability, treatment cost, and physician skill in pain control.

3.4.7. Relationship between Signal Transmission and Clinical Outcomes

Appendix A also includes summaries of the relationship between different signaling mechanisms and patients’ choice as clinical outcomes. The majority of the 52 articles on the relationships between signaling mechanisms and patients’ choice reported a positive relationship. For instance, Li, Tang, Yen David, and Liu [38] found a positive relationship between online reputation and patients’ choice of physician selection. In a similar vein, Shah, Yan, Shah, Shah, and Mamirkulova [5] reported a positive association between a physician’s online or offline reputation and patients’ decision-making process for health consultations.

4. Discussion

Patients’ decision-making process using different signals on PRWs has increasingly gained attention from various stakeholders in the healthcare industry. This literature review aimed to identify different signaling mechanisms and to investigate their impact on patients’ choice of a particular doctor. The reviewed literature demonstrates several advantages of studying different signals that originate from various sources, such as senders (physicians) and receivers (patients). To the best of our knowledge, this is the first systematic analysis of research on different signaling mechanisms on PRWs. The 52 articles included in this review represent a decade of peer-reviewed publications on PRWs from six countries; their research designs and main findings have been summarized.
Research on search and experience goods has received considerable attention from the academic community. Researchers have shown that online signals, offline signals such as eWOM, and seller reputation significantly affect consumers’ choices [119,120]. A major limitation of this literature is that the numerous signals used to investigate customer purchase decisions are examined from a search and experience goods perspective. Credence goods such as healthcare services differ from these goods: their quality is difficult for patients to measure, even after they have utilized the services. Therefore, a significant gap exists, as patients’ decision-making using different signals on PRWs has not received substantial research attention in the online healthcare environment.
Our comprehensive analysis of the 52 reviewed articles on PRWs showed that the physicians being reviewed constituted a small percentage of the total healthcare workforce. Overall, the relationships between reputation signals and patients’ choice were positive. Only a few articles compared the associations between different signals on PRWs and patients’ preference for a specific physician. These articles showed that online and offline signals were strongly associated with the “patient experience” as measured by conventional surveys and by quantitative, qualitative, and mixed-methods investigations.
The existing PRW literature indicates a fairly new but rapidly growing field. In comparison with the exponential growth in PRW usage, the number of published articles was limited. Therefore, we offer the following suggestions for possible extensions of PRW research in the future.
First, since the context of the study is the online healthcare industry, services received through offline channels are quite different from online channels. Patients search for physicians and disease information online and book appointments through online channels. They visit offline hospitals to seek treatment for the disease and pay the cost of the medical services. Researchers need to develop new techniques for online information consumption to evaluate the provider’s offline service quality through online information channels.
Second, previous research has mainly investigated system quality and the quality of the technology used [52] rather than the outcome of a provider’s service. To better understand online service quality, further investigation is required, especially for purely online services such as online healthcare consultations, which require minimal physical interaction. In these settings, research should focus on the different online and offline signals generated by the market and by sellers and summarize the main antecedents influencing patients’ treatment decisions.
Third, the patient’s clinical decision-making process is an important issue in healthcare and has gained prominence with the growth of social media websites. In the current literature review, we assume that physicians deliver the product (healthcare) to their patients. Under this assumption, when evaluating service quality, a physician’s organizational status (i.e., offline reputation and credibility) and online status (i.e., online reputation, service popularity, information quality, social influence, D–P interaction, etc.) signal the physician’s clinical quality and play a significant role in the patient’s choice. Furthermore, the physician’s offline status differs from offline brand status: the former reflects the physician’s standing in the offline hospital, whereas the latter is considered an attribute of the product. In addition, regarding consultation decisions in an online environment, the product for sale is the healthcare provider. Existing studies in the field of e-health have discussed the various signals sent by patients [14]; nevertheless, because the field is fairly recent, the theoretical distinction between these signals has received little attention and receiver perception has largely been ignored, which calls for further research.
Fourth, in previous e-market research, there is a clear gap regarding the classification of signals. Scant research has explored the different types of online and offline signals (reputation signals, trustworthiness signals, service-related signals, linguistic signals, and D–P interaction signals) within a single study. Because existing studies were conducted under specific conditions and reported their outcomes independently, the effects of these signals cannot be compared directly; future research should therefore address all of them in one study.
Fifth, examining patient behavior requires articles with a systematic design, longitudinal character, and broad samples. Because of the availability of broad and heterogeneous online data on rating sites, PRW studies face challenges in the collection and processing of data. New web-scraping methods have made it possible to retrieve vast volumes of OPR data efficiently, and advanced computational methods such as machine learning, deep learning, artificial intelligence, and other NLP techniques can accelerate large-scale OPR analysis. Such tools can help researchers analyze large quantities of patient reviews in near real time to investigate previous patients’ opinions about service quality.
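As a concrete illustration of the scraping step, the snippet below parses a hypothetical PRW page fragment with Beautiful Soup. The HTML structure, class names, and fields are invented for illustration and do not correspond to any real rating site.

```python
# Hypothetical sketch of the web-scraping step: the markup below is
# invented; real PRW pages would need site-specific selectors.
from bs4 import BeautifulSoup

html = """
<div class="review"><span class="rating">5</span>
  <p class="text">Caring doctor, clear explanations.</p></div>
<div class="review"><span class="rating">2</span>
  <p class="text">Very long wait, rushed consultation.</p></div>
"""

soup = BeautifulSoup(html, "html.parser")

# Turn each review block into a structured record (rating + free text).
reviews = [
    {
        "rating": int(div.select_one(".rating").text),
        "text": div.select_one(".text").text.strip(),
    }
    for div in soup.select("div.review")
]

avg = sum(r["rating"] for r in reviews) / len(reviews)
print(f"Scraped {len(reviews)} reviews, mean rating {avg:.1f}")
```

Records extracted this way (numeric ratings plus unstructured comments) are exactly the input that the regression and topic-modeling analyses summarized in Appendix A operate on.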
Sixth, most current studies concentrate on specialists in high-tier cities, selected for the highest internet usage, the highest number of physicians with an active board license, and the largest population in the U.S. [5,9,67]. To avoid sampling bias in patients’ behavior, more studies are needed on other disciplines of healthcare providers and on practices in low-tier cities. There is also little evidence documenting different signals on PRWs for other care settings, such as nursing homes, public health facilities, and centers for drug treatment.
Finally, in the extant literature, patients’ opinions have been identified explicitly using different machine learning and text mining techniques. In everyday life, however, opinions are often expressed implicitly, depending on the domain and context. We therefore conclude that further work using powerful, intelligent systems is needed to analyze implicitly expressed opinions when examining patients’ behavior.

4.1. Implications of the Study

The growing body of PRW-related literature shows its increasing importance in patients’ decision-making process, which offers implications for physicians, patients, PRW developers, and policymakers in both policy and practice.
In particular, health providers should not underestimate the significance of different signals on PRWs. Instead, they should consider PRWs’ importance for their “digital branding” and remain conscious of the various signals they transmit to prominent PRWs [121]. Physicians may use different signals to evaluate patient satisfaction and determine patient needs. Furthermore, customized and friendly responses to OPRs will strengthen positive D–P interaction.
From a consumer viewpoint, patients should realize that only a small number of doctors have been rated online and that a physician’s average rating score may not be an adequate basis for choosing a doctor, considering consumers’ propensity to review exceptionally favorable or negative encounters [14]. With increasing knowledge about healthcare, we expect that a “market guide” will assist patients in understanding OPRs and making more knowledgeable choices [122].
For PRW developers, as OPRs are often unstructured and the identity of the reviewer cannot be verified, developers could assume additional social obligations by introducing interface components that permit identity authentication, deleting offensive or abusive remarks, and helping patients use PRWs, thereby preventing fake information or misinformation on PRWs [123]; this is particularly important in light of the recent COVID-19 crisis [124].
Policymakers still face a major challenge as to whether OPRs can be utilized as an indicator of healthcare quality; policymakers and healthcare providers should recognize and acknowledge their growing importance for patients. OPRs reflect immediate feedback from patients’ clinical interactions, their evaluations, and what they really value. Some of the patient experience signals identified by OPR analysis can be applied to support or supplement current healthcare quality measures and to quickly identify perception deficiencies, along with service improvements or other practical quality measures, when necessary [125]. Recognizing the increased weight of OPRs in consumer health behavior and the potential of using OPRs to improve the quality of healthcare, we call on key stakeholders, including patients, carers, physicians, PRW designers, policymakers, and healthcare service researchers, to engage in discussions and joint efforts to build a positive D–P relationship.
Possible biases must be borne in mind when evaluating the findings of this study. First, this analysis concentrated on published studies focused on PRWs, so the PRW-related results represent only those published studies and not the entire context of PRWs; given the large and increasing number of papers written on PRWs, only a small fraction have been researched and reviewed. Second, only a few patients post online reviews and ratings, and these patients are most likely younger females who live in metropolitan areas and spend more time online. Thus, current OPRs carry a possible bias. Such biases are not design flaws in performing a systematic review, but they call for caution when interpreting study results.
The fact that users interpret and benefit from the signals available on PRWs when making health decisions helps reduce patients’ otherwise lengthy search time and can improve their healthcare knowledge [126]. Users faced with meaningful signals spent roughly five fewer minutes searching for health information online. Researchers and medical practitioners using different signals should be mindful that the structure of signal transmission can be as critical as its content in evaluating its impact on clinical decision support systems [127].
Moreover, signals may also carry risks in preference-sensitive decisions. The influence of signaling bias is becoming increasingly evident, as signals can supersede decisions relating to risk. Signals are extensively used in patient clinical decision support [128], and they may serve as decision support for other patients even when not expressly declared as such [127]. Decision support includes evidence-based resources intended to help patients make specific healthcare decisions in a value-driven manner [129]. Signals can reduce the effectiveness of decision support by presenting asymmetrical data or by overriding decision-making information through signaler characteristics [130]. For instance, a study by Drewniak, Glässel, Hodel, and Biller-Andorno [127] indicated that clinical decision support systems were more likely to portray patients who were satisfied with the outcome of their treatment decision. This highlights the importance of different signaling mechanisms on PRWs and their impact on patients’ treatment decisions.

4.2. Limitations

This study has certain limitations. First, given the growing literature in this research field, some published studies may not have been included in the review. Second, the keywords used for the publication search might not have captured all studies on PRWs, and more literature on improving the usage of PRWs is likely to appear in the future. Finally, since our research was limited to the English-language literature, publications written in other languages were not included; future research might include them as well.

5. Conclusions

The current peer-reviewed literature on the use of PRWs by health consumers and professionals suggests that PRWs are viewed as valuable knowledge platforms when searching for health information online. This review found that the online and offline signals generated by the market and by sellers tended to be positive and that these signals have a significant, positive impact on patients’ decision-making and on clinical decision support systems. The findings of this systematic literature review provide insights to guide patients, medical practitioners, and policymakers in helping patients make more informed decisions and in promoting the use of PRWs to enhance the quality of healthcare. They also call for future research using large sample sizes and longitudinal study designs.

Author Contributions

Conceptualization, K.L.; methodology, R.A.N.; software, A.M.S.; validation, A.M.S. and K.L.; formal analysis, A.M.S.; investigation, W.M.; resources, A.M.S.; data curation, A.M.S.; writing—original draft preparation, A.M.S.; writing—review and editing, A.M.S.; visualization, K.L.; supervision, W.M.; project administration, K.L.; funding acquisition, K.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2021-2017-0-01630) and the work (No.2020-0-01907, Development of Smart Signage Technology for Automatic Classification of Untact Examination and Patient Status Based on AI) supervised by the IITP (Institute for Information & communications Technology Promotion).

Data Availability Statement

Data available upon request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Summary of e-health literature from 2010 to 2020 on physician rating websites and patient choice regarding health consultation.
Reference | Data Collection | Variables | Research Method | Findings
Wu [52] | Web scraping | Individual literacy; social support; information quality; service quality | SEM-PLS | Information quality is significantly associated with patient satisfaction.
Jin, Yan, Li and Li [71] | Web scraping | Information quality | Text mining; logistic regression | Higher information quality leads to information adoption.
Storino, Castillo-Angeles, Watkins, Vargas, Mancias, Bullock, Demirjian, Moser, and Kent [116] | Collection of 50 pancreatic cancer websites | Information usefulness | Assessment by expert panel | Online information on pancreatic cancer lacks effective information about alternative therapy.
Zhang, Guo, Lai, and Yi [82] | Web crawling | Interpersonal unfairness; informational unfairness | Logistic regression | Informational unfairness contributes to the development of a quality relationship.
Li, Tang, Yen David, and Liu [38] | Web crawling | Online reputation; offline status; online self-representation | Regression analysis | Online reputation, offline status, and online self-representation are positively related to a physician’s online order volume.
Yang, Guo, Wu, and Ju [47] | Web crawling | Patient-generated information; system-generated information | Regression analysis | Positive system- and patient-generated information positively affects patients’ decisions.
Shah, Yan, Shah, Shah, and Mamirkulova [5] | Web crawling | eWOM; physician trustworthiness | OLS | eWOM and trustworthiness significantly influence a physician’s economic returns.
Cao, Liu, Zhu, Hu, and Chen [30] | Web crawling | Service quality; eWOM | OLS | A physician’s service quality and eWOM affect patients’ selection decisions.
Li, Tang, Jiang, Yen, and Liu [49] | Web crawling | Online knowledge contribution; online reputation | Regression analysis | Knowledge contribution and reputation are positively related to a physician’s online income.
Deng, Hong, Zhang, Evans, and Chen [32] | Web crawling | Online effort; online reputation | OLS | Patients prefer to choose online physicians with greater effort and reputation.
Wu and Lu [45] | Web crawling | Online reputation; offline reputation; online services | OLS | The impact of online popularity on hospitals’ online booking services is positive.
Yang and Zhang [48] | Web crawling | Free feedback; paid feedback | OLS; GLS | Paid feedback has a greater effect on patient choice than free feedback.
Liu, Guo, Wu, and Wu [28] | Web crawling | Offline reputation; online reputation | HLM | The physician’s and the hospital’s reputations have a positive effect on the number of appointments.
Chen, Yan, and Zhang [108] | Web crawling | Physician’s login behavior | Regression analysis | Physician login behavior positively influences patient choice.
Han, Qu, and Zhang [104] | Questionnaire | Neighbor-recommended physician; trust in OPRs; review valence | Partial correlation analysis | Patients with high-risk diseases are more likely than patients with low-risk diseases to select a neighbor-recommended physician with positive reviews.
Luo, Chen, Wu, and Li [109] | Web crawling | Physician’s colleague multi-channel access (SI); ratings | OLS regression | SI and patients’ ratings significantly and positively affect multi-channel access in an OHC.
Wu and Lu [44] | Web crawling | Colleague reputation; physician reputation | Fractional logistic regression | The focal physician’s reputation and that of the physician’s colleagues significantly affect patients’ odds of posting their treatment experience.
Guo, Guo, Fang, and Vogel [60] | Web crawling | Status capital; decisional capital | OLS | The social and economic returns of doctors in OHCs are positively associated with their status and decisional capital.
Shan, Wang, Luan, and Tang [106] | Eye-tracking experiment; surveys | Cognitive trust; affective trust | PLS-SEM | Cognitive trust and affective trust both positively affect patients’ choice of physician.
Yang, Diao, and Kiang [110] | Web crawling | Ability; reputation; benevolence | Logarithmic linear regression | A physician’s trustworthiness (ability, reputation, and benevolence) has a positive impact on the physician’s sales.
Chen, Rai, and Guo [31] | Web crawling | Credibility; benevolence | Regression analysis | Review volume significantly moderates the impact of credibility on popularity, while valence and variance significantly moderate the influence of benevolence on online popularity and price premium.
Guo, Guo, Zhang, and Vogel [61] | Web crawling | Weak ties; strong ties | Smart-PLS | D–P tie strength has a significant effect on doctors’ returns in the online healthcare context.
Wu and Lu [65] | Web crawling | Pricing and quantity of online services | OLS | Service quantity positively impacts patient satisfaction.
Li, Zhang, Ma, and Liu [50] | Web crawling | Online reputation; online self-representation | Regression analysis | A market served by many doctors with strong online reputations or high levels of self-representation will be less concentrated.
Zhang, Guo, and Wu [54] | Web crawling | Quantity of online contributions (popularity); quality of online contributions (reputation) | Regression analysis | The quantity of online free service contributions is positively related to a doctor’s private benefits.
Wu and Deng [53] | Web crawling | Specification; credibility; coordination | OLS regression | A physician’s specification, credibility, and coordination are positively related to order quantity.
Liu, Guo, Wu, and Vogel [51] | Web crawling | Online doctor efforts | OLS and hierarchical regression | The breadth and depth of online doctor efforts are associated with doctor reputation and popularity.
Yang, Guo, and Wu [46] | Web crawling | Response speed; interaction frequency | Regression analysis | A physician’s response speed and interaction frequency significantly affect patient satisfaction.
Lu and Wu [63] | Web crawling | Overall review rating; number of reviews | OLS regression | A rise in ratings and in the number of reviews increases the number of outpatient visits.
Zhao, Li, and Wu [56] | Web crawling | External WOM; peer influence; internal WOM; appointment quantity | Three-stage least squares | The number of doctors’ votes, followers, and reviews (external WOM), peer influence, internal WOM, and appointment quantity significantly influence patients’ doctor choice.
Liu, Zhou, and Wu [111] | Web crawling | Title; satisfaction; review volume; service attitude; technical level; clarity of explanation; ethics | Negative binomial regression | All variables except the service process significantly influence the online appointment services received by the focal doctor.
Lu and Wu [40] | Web crawling | Technical quality; functional quality | OLS | Patients make appointments with physicians offering technical and functional quality.
Lu and Zhang [73] | Questionnaire | Physician–patient communication; information quality; decision-making preference; physician–patient concordance | Smart-PLS | Physician–patient communication in OHCs positively impacts patient compliance, mediated by the perceived quality of internet health information, decision-making preference, and physician–patient concordance.
Emmert, Meier, Pisch, and Sander [115] | Questionnaire | – | Interviews | –
Laugesen, Hassanein, and Yuan [78] | Questionnaire | Perceived information asymmetry; patient–physician concordance | Smart-PLS | Perceived information asymmetry and patient–physician concordance significantly influence patient compliance for health consultation.
Chen, Guo, Wu, and Ju [95] | Web crawling | Patient activeness; informational support; emotional support; patient satisfaction | Text mining; regression analysis | Patient activeness has a significant impact on informational and emotional support, which in turn significantly influence patient satisfaction.
Yang, Du, He, and Qiao [29] | Web crawling | Reputation; monetary reward; D–P interaction | Regression analysis | Reputation, monetary reward, and D–P interaction significantly influence physician contributions to OHCs.
Chen, Baird, and Straub [81] | Web crawling | Affective signals; informational signals; informational support; social support | Text mining; regression analysis | Affective and informational signals significantly influence informational and social support from OHCs.
Khurana, Qiu, and Kumar [14] | Web crawling | Doctor response to patient questions | Regression analysis | Doctor responses significantly influence user perception of the medical services offered.
Greenwood, Agarwal, Agarwal, and Gopal [99] | Web crawling | Individual expertise; organizational expertise; adoption of new practices | Econometric analysis | Individual and organizational expertise significantly influence physician behavior.
Shah, Yan, Khan, and Shah [67] | Web crawling | Review-related signals; service-related signals | Text mining; regression analysis | Review-related and service-related signals significantly influence patients’ behavior.
Hong, Liang, Radcliff, Wigfall, and Street [15] | Systematic review | – | Content analysis | –
Hao and Zhang [33] | Web crawling | – | Topic modeling | –
Hao, Zhang, Wang, and Gao [34] | Web crawling | – | Topic modeling | –
Li, Liu, Li, Liu, and Liu [37] | Web crawling | – | Content analysis; topic modeling | –
James, Villacis Calderon, and Cook [35] | Web crawling | – | Topic modeling | –
Jung, Hur, Jung, and Kim [36] | Web crawling | – | Topic modeling | –
Wallace, Paul, Sarkar, Trikalinos, and Dredze [4] | Web crawling | – | Topic modeling | –
Zhang, Deng, Hong, Evans, Ma, and Zhang [7] | Web crawling | – | Content analysis | –
Bidmon, Elshiewy, Terlutter, and Boztug [3] | Survey | – | Regression analysis | –
Pang and Liu [112] | Web crawling | – | Topic modeling; qualitative analysis | –
Noteboom and Al-Ramahi [114] | Web crawling | – | Topic modeling | –

References

  1. Schulz, P.J.; Rothenfluh, F. Influence of health literacy on effects of patient rating websites: Survey study using a hypothetical situation and fictitious doctors. J. Med. Internet Res. 2020, 22, e14134. [Google Scholar] [CrossRef] [PubMed]
  2. Hanauer, D.A.; Zheng, K.; Singer, D.C.; Gebremariam, A.; Davis, M.M. Public Awareness, Perception, and Use of Online Physician Rating Sites. JAMA 2014, 311, 734–735. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Bidmon, S.; Elshiewy, O.; Terlutter, R.; Boztug, Y. What patients value in physicians: Analyzing drivers of patient satisfaction using physician-rating website data. J. Med. Internet Res. 2020, 22, e13830. [Google Scholar] [CrossRef]
  4. Wallace, B.C.; Paul, M.J.; Sarkar, U.; Trikalinos, T.A.; Dredze, M. A large-scale quantitative analysis of latent factors and sentiment in online doctor reviews. J. Am. Med. Inform. Assoc. 2014, 21, 1098–1103. [Google Scholar] [CrossRef]
  5. Shah, A.M.; Yan, X.; Shah, S.A.A.; Shah, S.J.; Mamirkulova, G. Exploring the impact of online information signals in leveraging the economic returns of physicians. J. Biomed. Inform. 2019, 98, 103272. [Google Scholar] [CrossRef] [PubMed]
  6. Kordzadeh, N. An Empirical Examination of Factors Influencing the Intention to Use Physician Rating Websites. In Proceedings of the 52nd Hawaii International Conference on System Sciences, Association for Information Systems, Maui, HI, USA, 8–11 January 2019; pp. 4346–4354. [Google Scholar]
  7. Zhang, W.; Deng, Z.; Hong, Z.; Evans, R.; Ma, J.; Zhang, H. Unhappy patients are not alike: Content analysis of the negative comments from china’s good doctor website. J. Med. Internet Res. 2018, 20, e35. [Google Scholar] [CrossRef] [PubMed]
  8. Emmert, M.; Halling, F.; Meier, F. Evaluations of dentists on a german physician rating website: An analysis of the ratings. J. Med. Internet Res. 2015, 17, e15. [Google Scholar] [CrossRef] [Green Version]
  9. Shah, A.M.; Yan, X.; Tariq, S.; Khan, S. Listening to the patient voice: Using a sentic computing model to evaluate physicians’ healthcare service quality for strategic planning in hospitals. Qual. Quant. 2020, 55, 173–201. [Google Scholar] [CrossRef]
  10. Rothenfluh, F.; Schulz, P.J. Physician rating websites: What aspects are important to identify a good doctor, and are patients capable of assessing them? A mixed-methods approach including physicians’ and health care consumers’ perspectives. J. Med. Internet Res. 2017, 19, e127. [Google Scholar] [CrossRef]
  11. Lin, Y.; Hong, Y.A.; Henson, B.S.; Stevenson, R.D.; Hong, S.; Lyu, T.; Liang, C. Assessing patient experience and healthcare quality of dental care using patient online reviews in the United States: Mixed methods study. J. Med. Internet Res. 2020, 22, e18652. [Google Scholar] [CrossRef]
  12. Rothenfluh, F.; Germeni, E.; Schulz, P.J. Consumer decision-making based on review websites: Are there differences between choosing a hotel and choosing a physician? J. Med. Internet Res. 2016, 18, e129. [Google Scholar] [CrossRef] [Green Version]
  13. Terlutter, R.; Bidmon, S.; Röttl, J. Who uses physician-rating websites? Differences in sociodemographic variables, psychographic variables, and health status of users and nonusers of physician-rating websites. J. Med. Internet Res. 2014, 16, e97. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  14. Khurana, S.; Qiu, L.; Kumar, S. When a doctor knows, it shows: An empirical analysis of doctors’ responses in a Q&A forum of an online healthcare portal. Inf. Syst. Res. 2019, 30, 872–891. [Google Scholar] [CrossRef]
  15. Hong, Y.A.; Liang, C.; Radcliff, T.A.; Wigfall, L.T.; Street, R.L. What do patients say about doctors online? A systematic review of studies on patient online reviews. J. Med. Internet Res. 2019, 21, e12521. [Google Scholar] [CrossRef] [Green Version]
  16. Han, X.; Li, B.; Zhang, T.; Qu, J. Factors Associated with the actual behavior and intention of rating physicians on physician rating websites: Cross-sectional study. J. Med. Internet Res. 2020, 22, e14417. [Google Scholar] [CrossRef] [PubMed]
  17. Schlesinger, M.; Grob, R.; Shaller, D.; Martino, S.C.; Parker, A.M.; Finucane, M.L.; Cerully, J.L.; Rybowski, L. Taking patients’ narratives about clinicians from anecdote to science. N. Engl. J. Med. 2015, 373, 675–679. [Google Scholar] [CrossRef] [PubMed]
  18. Lagu, T.; Metayer, K.; Moran, M.; Ortiz, L.; Priya, A.; Goff, S.L.; Lindenauer, P.K. Website characteristics and physician reviews on commercial physician-rating websites. JAMA 2017, 317, 766–768. [Google Scholar] [CrossRef] [PubMed]
  19. Moher, D.; Shamseer, L.; Clarke, M.; Ghersi, D.; Liberati, A.; Petticrew, M.; Shekelle, P.; Stewart, L.A.; Group, P.-P. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Syst. Rev. 2015, 4, 1. [Google Scholar] [CrossRef] [Green Version]
20. Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D.G.; The PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. PLoS Med. 2009, 6, e1000097.
21. Inayat, I.; Salim, S.S.; Marczak, S.; Daneva, M.; Shamshirband, S. A systematic literature review on agile requirements engineering practices and challenges. Comput. Hum. Behav. 2015, 51, 915–929.
22. Pacheco, C.; Garcia, I. A systematic literature review of stakeholder identification methods in requirements elicitation. J. Syst. Softw. 2012, 85, 2171–2181.
23. Qazi, A.; Raj Ram, G.; Hardaker, G.; Standing, C. A systematic literature review on opinion types and sentiment analysis techniques: Tasks and challenges. Internet Res. 2017, 27, 608–630.
24. Bokolo Anthony, J. Use of telemedicine and virtual care for remote treatment in response to the COVID-19 pandemic. J. Med. Syst. 2020, 44, 132.
25. Keele, S. Guidelines for Performing Systematic Literature Reviews in Software Engineering; Ver. 2.3 EBSE Technical Report; EBSE: Goyang-si, Korea, 2007.
26. Akerlof, G.A. The market for “lemons”: Quality uncertainty and the market mechanism. Q. J. Econ. 1970, 84, 488–500.
27. Kerschbamer, R.; Sutter, M. The economics of credence goods—A survey of recent lab and field experiments. CESifo Econ. Stud. 2017, 63, 1–23.
28. Liu, X.; Guo, X.; Wu, H.; Wu, T. The impact of individual and organizational reputation on physicians’ appointments online. Int. J. Electron. Commer. 2016, 20, 551–577.
29. Yang, H.; Du, H.S.; He, W.; Qiao, H. Understanding the motivators affecting doctors’ contributions in online healthcare communities: Professional status as a moderator. Behav. Inf. Technol. 2021, 40, 146–160.
30. Cao, X.; Liu, Y.; Zhu, Z.; Hu, J.; Chen, X. Online selection of a physician by patients: Empirical study from elaboration likelihood perspective. Comput. Hum. Behav. 2017, 73, 403–412.
31. Chen, L.; Rai, A.; Guo, X. Physicians’ Online Popularity and Price Premiums for Online Health Consultations: A Combined Signaling Theory and Online Feedback Mechanisms Explanation. In Proceedings of the Thirty-Sixth International Conference on Information Systems (ICIS 15), Association for Information Systems, Fort Worth, TX, USA, 13–16 December 2015; pp. 2105–2115.
32. Deng, Z.; Hong, Z.; Zhang, W.; Evans, R.; Chen, Y. The effect of online effort and reputation of physicians on patients’ choice: 3-wave data analysis of China’s Good Doctor website. J. Med. Internet Res. 2019, 21, e10170.
33. Hao, H.; Zhang, K. The voice of Chinese health consumers: A text mining approach to web-based physician reviews. J. Med. Internet Res. 2016, 18, e108.
34. Hao, H.; Zhang, K.; Wang, W.; Gao, G. A tale of two countries: International comparison of online doctor reviews between China and the United States. Int. J. Med. Inform. 2017, 99, 37–44.
35. James, T.L.; Villacis Calderon, E.D.; Cook, D.F. Exploring patient perceptions of healthcare service quality through analysis of unstructured feedback. Expert Syst. Appl. 2017, 71, 479–492.
36. Jung, Y.; Hur, C.; Jung, D.; Kim, M. Identifying key hospital service quality factors in online health communities. J. Med. Internet Res. 2015, 17, e90.
37. Li, J.; Liu, M.; Li, X.; Liu, X.; Liu, J. Developing embedded taxonomy and mining patients’ interests from web-based physician reviews: Mixed-methods approach. J. Med. Internet Res. 2018, 20, e254.
38. Li, J.; Tang, J.; Yen, D.C.; Liu, X. Disease risk and its moderating effect on the e-consultation market offline and online signals. Inf. Technol. People 2019, 32, 1065–1084.
39. Liu, N.; Finkelstein, S.R.; Kruk, M.E.; Rosenthal, D. When waiting to see a doctor is less irritating: Understanding patient preferences and choice behavior in appointment scheduling. Manag. Sci. 2018, 64, 1975–1996.
40. Lu, N.; Wu, H. Exploring the impact of word-of-mouth about physicians’ service quality on patient choice based on online health communities. BMC Med. Inform. Decis. Mak. 2016, 16, 151.
41. Lu, S.F.; Rui, H. Can we trust online physician ratings? Evidence from cardiac surgeons in Florida. Manag. Sci. 2018, 64, 2557–2573.
42. Luca, M.; Zervas, G. Fake it till you make it: Reputation, competition, and Yelp review fraud. Manag. Sci. 2016, 62, 3412–3427.
43. Segal, J.; Sacopulos, M.; Sheets, V.; Thurston, I.; Brooks, K.; Puccia, R. Online doctor reviews: Do they track surgeon volume, a proxy for quality of care? J. Med. Internet Res. 2012, 14, e50.
44. Wu, H.; Lu, N. How your colleagues’ reputation impact your patients’ odds of posting experiences: Evidence from an online health community. Electron. Commer. Res. Appl. 2016, 16, 7–17.
45. Wu, H.; Lu, N. Online written consultation, telephone consultation and offline appointment: An examination of the channel effect in online health communities. Int. J. Med. Inform. 2017, 107, 107–119.
46. Yang, H.; Guo, X.; Wu, T. Exploring the influence of the online physician service delivery process on patient satisfaction. Decis. Support Syst. 2015, 78, 113–121.
47. Yang, H.; Guo, X.; Wu, T.; Ju, X. Exploring the effects of patient-generated and system-generated information on patients’ online search, evaluation and decision. Electron. Commer. Res. Appl. 2015, 14, 192–203.
48. Yang, H.; Zhang, X. Investigating the effect of paid and free feedback about physicians’ telemedicine services on patients’ and physicians’ behaviors: Panel data analysis. J. Med. Internet Res. 2019, 21, e12156.
49. Li, J.; Tang, J.; Jiang, L.; Yen, D.C.; Liu, X. Economic success of physicians in the online consultation market: A signaling theory perspective. Int. J. Electron. Commer. 2019, 23, 244–271.
50. Li, J.; Zhang, Y.; Ma, L.; Liu, X. The impact of the internet on health consultation market concentration: An econometric analysis of secondary data. J. Med. Internet Res. 2016, 18, e276.
51. Liu, X.; Guo, X.; Wu, H.; Vogel, D. Doctor’s Effort Influence on Online Reputation and Popularity. In Proceedings of the 2nd International Conference on Smart Health, Beijing, China, 10–11 July 2014; Zheng, X., Zeng, D., Chen, H., Zhang, Y., Xing, C., Neill, D.B., Eds.; Smart Health; Springer: Berlin/Heidelberg, Germany, 2014; pp. 111–126.
52. Wu, B. Patient continued use of online health care communities: Web mining of patient-doctor communication. J. Med. Internet Res. 2018, 20, e126.
53. Wu, H.; Deng, Z. Knowledge collaboration among physicians in online health communities: A transactive memory perspective. Int. J. Inf. Manag. 2019, 49, 13–33.
54. Zhang, M.; Guo, X.; Wu, T. Impact of free contributions on private benefits in online healthcare communities. Int. J. Electron. Commer. 2019, 23, 492–523.
55. Zhang, X.; Liu, S.; Deng, Z.; Chen, X. Knowledge sharing motivations in online health communities: A comparative study of health professionals and normal users. Comput. Hum. Behav. 2017, 75, 797–810.
56. Zhao, Y.; Li, S.; Wu, J. Exploring the Factors Influencing Patient Usage Behavior Based on Online Health Communities. In Proceedings of the 6th International Conference for Smart Health, Wuhan, China, 1–3 July 2018; Chen, H., Fang, Q., Zeng, D., Wu, J., Eds.; Smart Health; Springer: Berlin/Heidelberg, Germany, 2018; pp. 70–76.
57. Zhang, X.; Guo, F.; Xu, T.; Li, Y. What motivates physicians to share free health information on online health platforms? Inform. Process. Manag. 2020, 57, 102166.
58. Liang, Q.; Luo, F.J.; Wu, Y.Z. The impact of doctor’s efforts and reputation on the number of new patients in online health community. Chin. J. Health Policy 2017, 10, 63–71.
59. Gao, G.; Greenwood, B.N.; Agarwal, R.; McCullough, J.S. Vocal minority and silent majority: How do online ratings reflect population perceptions of quality. MIS Q. 2015, 39, 565–590.
60. Guo, S.; Guo, X.; Fang, Y.; Vogel, D. How doctors gain social and economic returns in online health-care communities: A professional capital perspective. J. Manag. Inf. Syst. 2017, 34, 487–519.
61. Guo, S.; Guo, X.; Zhang, X.; Vogel, D. Doctor–patient relationship strength’s impact in an online healthcare community. Inf. Technol. Dev. 2018, 24, 279–300.
62. Holwerda, N.; Sanderman, R.; Pool, G.; Hinnen, C.; Langendijk, J.A.; Bemelman, W.A.; Hagedoorn, M.; Sprangers, M.A.G. Do patients trust their physician? The role of attachment style in the patient-physician relationship within one year after a cancer diagnosis. Acta Oncol. 2013, 52, 110–117.
63. Lu, W.; Wu, H. How online reviews and services affect physician outpatient visits: Content analysis of evidence from two online health care communities. JMIR Med. Inform. 2019, 7, e16185.
64. Roettl, J.; Bidmon, S.; Terlutter, R. What predicts patients’ willingness to undergo online treatment and pay for online treatment? Results from a web-based survey to investigate the changing patient-physician relationship. J. Med. Internet Res. 2016, 18, e32.
65. Wu, H.; Lu, N. Service provision, pricing, and patient satisfaction in online health communities. Int. J. Med. Inform. 2018, 110, 77–89.
66. Tucker, C.; Zhang, J. How does popularity information affect choices? A field experiment. Manag. Sci. 2011, 57, 828–842.
67. Shah, A.; Yan, X.; Khan, S.; Shah, J. Exploring the Impact of Review and Service-Related Signals on Online Physician Review Helpfulness: A Multi-Methods Approach. In Proceedings of the Twenty-Fourth Pacific Asia Conference on Information Systems, Association for Information Systems, Dubai, United Arab Emirates, 22–24 June 2020; pp. 1–14.
68. Angst, C.M.; Agarwal, R. Adoption of electronic health records in the presence of privacy concerns: The elaboration likelihood model and individual persuasion. MIS Q. 2009, 33, 339–370.
69. Batini, C.; Scannapieco, M. Data and Information Quality; Springer: Cham, Switzerland, 2016; p. 500.
70. Beck, F.; Richard, J.-B.; Nguyen-Thanh, V.; Montagni, I.; Parizot, I.; Renahy, E. Use of the internet as a health information resource among French young adults: Results from a nationally representative survey. J. Med. Internet Res. 2014, 16, e128.
71. Jin, J.; Yan, X.; Li, Y.; Li, Y. How users adopt healthcare information: An empirical study of an online Q&A community. Int. J. Med. Inform. 2016, 86, 91–103.
72. Keselman, A.; Arnott Smith, C.; Murcko, A.C.; Kaufman, D.R. Evaluating the quality of health information in a changing digital ecosystem. J. Med. Internet Res. 2019, 21, e11129.
73. Lu, X.; Zhang, R. Impact of physician-patient communication in online health communities on patient compliance: Cross-sectional questionnaire study. J. Med. Internet Res. 2019, 21, e12891.
74. Memon, M.; Ginsberg, L.; Simunovic, N.; Ristevski, B.; Bhandari, M.; Kleinlugtenbelt, Y.V. Quality of web-based information for the 10 most common fractures. Interact. J. Med. Res. 2016, 5, e19.
75. Sun, Y.; Zhang, Y.; Gwizdka, J.; Trace, C.B. Consumer evaluation of the quality of online health information: Systematic literature review of relevant criteria and indicators. J. Med. Internet Res. 2019, 21, e12522.
76. Yoon, T.J. Quality information disclosure and patient reallocation in the healthcare industry: Evidence from cardiac surgery report cards. Mark. Sci. 2019, 39, 636–662.
77. Wang, G.; Li, J.; Hopp, W.J.; Fazzalari, F.L.; Bolling, S.F. Using patient-specific quality information to unlock hidden healthcare capabilities. Manuf. Serv. Oper. Manag. 2019, 21, 582–601.
78. Laugesen, J.; Hassanein, K.; Yuan, Y. The impact of internet health information on patient compliance: A research model and an empirical study. J. Med. Internet Res. 2015, 17, e143.
79. Tong, Y.; Tan, C.-H.; Teo, H.-H. Direct and indirect information system use: A multimethod exploration of social power antecedents in healthcare. Inf. Syst. Res. 2017, 28, 690–710.
80. Diviani, N.; van den Putte, B.; Giani, S.; van Weert, J.C. Low health literacy and evaluation of online health information: A systematic review of the literature. J. Med. Internet Res. 2015, 17, e112.
81. Chen, L.; Baird, A.; Straub, D. A linguistic signaling model of social support exchange in online health communities. Decis. Support Syst. 2020, 130, 113233.
82. Zhang, X.; Guo, X.; Lai, K.-H.; Yi, W. How does online interactional unfairness matter for patient–doctor relationship quality in online health consultation? The contingencies of professional seniority and disease severity. Eur. J. Inf. Syst. 2019, 28, 336–354.
83. Cousin, G.; Schmid Mast, M.; Roter, D.L.; Hall, J.A. Concordance between physician communication style and patient attitudes predicts patient satisfaction. Patient Educ. Couns. 2012, 87, 193–197.
84. Banerjee, A.; Sanyal, D. Dynamics of doctor-patient relationship: A cross-sectional study on concordance, trust, and patient enablement. J. Fam. Community Med. 2012, 19, 12–19.
85. Audrain-Pontevia, A.-F.; Menvielle, L.; Ertz, M. Effects of three antecedents of patient compliance for users of peer-to-peer online health communities: Cross-sectional study. J. Med. Internet Res. 2019, 21, e14006.
86. Chandwani, K.D.; Zhao, F.; Morrow, G.R.; Deshields, T.L.; Minasian, L.M.; Manola, J.; Fisch, M.J. Lack of patient-clinician concordance in cancer patients: Its relation with patient variables. J. Pain Symptom Manag. 2017, 53, 988–998.
87. Gross, K.; Schindler, C.; Grize, L.; Späth, A.; Schwind, B.; Zemp, E. Patient–physician concordance and discordance in gynecology: Do physicians identify patients’ reasons for visit and do patients understand physicians’ actions? Patient Educ. Couns. 2013, 92, 45–52.
88. Kee, J.W.Y.; Khoo, H.S.; Lim, I.; Koh, M.Y.H. Communication skills in patient-doctor interactions: Learning from patient complaints. Health Prof. Educ. 2018, 4, 97–106.
89. Spencer, K.L. Transforming patient compliance research in an era of biomedicalization. J. Health Soc. Behav. 2018, 59, 170–184.
90. Tan, S.S.-L.; Goonawardene, N. Internet health information seeking and the patient-physician relationship: A systematic review. J. Med. Internet Res. 2017, 19, e9.
91. Thornton, R.L.J.; Powe, N.R.; Roter, D.; Cooper, L.A. Patient–physician social concordance, medical visit communication and patients’ perceptions of health care quality. Patient Educ. Couns. 2011, 85, e201–e208.
92. Zanini, C.; Sarzi-Puttini, P.; Atzeni, F.; Di Franco, M.; Rubinelli, S. Doctors’ insights into the patient perspective: A qualitative study in the field of chronic pain. Biomed. Res. Int. 2014, 2014, 514230.
93. Schubart, J.R.; Toran, L.; Whitehead, M.; Levi, B.H.; Green, M.J. Informed decision making in advance care planning: Concordance of patient self-reported diagnosis with physician diagnosis. Support. Care Cancer 2013, 21, 637–641.
94. Shin, D.W.; Kim, S.Y.; Cho, J.; Sanson-Fisher, R.W.; Guallar, E.; Chai, G.Y.; Kim, H.-S.; Park, B.R.; Park, E.-C.; Park, J.-H. Discordance in perceived needs between patients and physicians in oncology practice: A nationwide survey in Korea. J. Clin. Oncol. 2011, 29, 4424–4429.
95. Chen, S.; Guo, X.; Wu, T.; Ju, X. Exploring the online doctor-patient interaction on patient satisfaction based on text mining and empirical analysis. Inform. Process. Manag. 2020, 57, 102253.
96. Zhou, J.; Zuo, M.; Ye, C. Understanding the factors influencing health professionals’ online voluntary behaviors: Evidence from YiXinLi, a Chinese online health community for mental health. Int. J. Med. Inform. 2019, 130, 103939.
97. Hampshire, K.; Hamill, H.; Mariwah, S.; Mwanga, J.; Amoako-Sakyi, D. The application of signalling theory to health-related trust problems: The example of herbal clinics in Ghana and Tanzania. Soc. Sci. Med. 2017, 188, 109–118.
98. Li, J.; Wu, H.; Deng, Z.; Lu, N.; Evans, R.; Xia, C. How professional capital and team heterogeneity affect the demands of online team-based medical service. BMC Med. Inform. Decis. Mak. 2019, 19, 119.
99. Greenwood, B.N.; Agarwal, R.; Agarwal, R.; Gopal, A. The role of individual and organizational expertise in the adoption of new practices. Organ. Sci. 2019, 30, 191–213.
100. Özer, Ö.; Subramanian, U.; Wang, Y. Information sharing, advice provision, or delegation: What leads to higher trust and trustworthiness? Manag. Sci. 2018, 64, 474–493.
101. Gao, G.G.; McCullough, J.S.; Agarwal, R.; Jha, A.K. A changing landscape of physician quality reporting: Analysis of patients’ online ratings of their physicians over a 5-year period. J. Med. Internet Res. 2012, 14, e38.
102. Lankton, N.K.; McKnight, D.H.; Wright, R.T.; Thatcher, J.B. Research note—Using expectation disconfirmation theory and polynomial modeling to understand trust in technology. Inf. Syst. Res. 2016, 27, 197–213.
103. Yi, M.Y.; Yoon, J.J.; Davis, J.M.; Lee, T. Untangling the antecedents of initial trust in Web-based health information: The roles of argument quality, source expertise, and user perceptions of information quality and risk. Decis. Support Syst. 2013, 55, 284–295.
104. Han, X.; Qu, J.; Zhang, T. Exploring the impact of review valence, disease risk, and trust on patient choice based on online physician reviews. Telemat. Inform. 2019, 45, 101276.
105. Xiao, N.; Sharman, R.; Rao, H.R.; Upadhyaya, S. Factors influencing online health information search: An empirical analysis of a national cancer-related survey. Decis. Support Syst. 2014, 57, 417–427.
106. Shan, W.; Wang, Y.; Luan, J.; Tang, P. The influence of physician information on patients’ choice of physician in mHealth services using China’s Chunyu Doctor app: Eye-tracking and questionnaire study. JMIR Mhealth Uhealth 2019, 7, e15544.
107. Reimann, S.; Strech, D. The representation of patient experience and satisfaction in physician rating sites. A criteria-based analysis of English- and German-language sites. BMC Health Serv. Res. 2010, 10, 332.
108. Chen, Q.; Yan, X.; Zhang, T. The Impact of Physician’s Login Behavior on Patients’ Search and Decision in OHCs. In Proceedings of the Seventh International Conference for Smart Health, Shenzhen, China, 1–2 July 2019; Chen, H., Zeng, D., Yan, X., Xing, C., Eds.; Smart Health; Springer: Berlin/Heidelberg, Germany, 2019; pp. 155–169.
109. Luo, P.; Chen, K.; Wu, C.; Li, Y. Exploring the social influence of multichannel access in an online health community. J. Assoc. Inf. Sci. Technol. 2018, 69, 98–109.
110. Yang, M.; Diao, Y.; Kiang, M.Y. Physicians’ Sales of Expert Service Online: The Role of Physician Trust and Price. In Proceedings of the Twenty-Second Pacific Asia Conference on Information Systems (PACIS 18), Association for Information Systems, Yokohama, Japan, 26–30 June 2018; pp. 1–14.
111. Liu, G.; Zhou, L.; Wu, J. What Affects Patients’ Online Decisions: An Empirical Study of Online Appointment Service Based on Text Mining. In Proceedings of the 6th International Conference for Smart Health, Wuhan, China, 1–3 July 2018; Chen, H., Fang, Q., Zeng, D., Wu, J., Eds.; Smart Health; Springer: Berlin/Heidelberg, Germany, 2018; pp. 204–210.
112. Pang, P.C.-I.; Liu, L. Why Do Consumers Review Doctors Online? Topic Modeling Analysis of Positive and Negative Reviews on an Online Health Community in China. In Proceedings of the 53rd Hawaii International Conference on System Sciences, Association for Information Systems, Maui, HI, USA, 7–10 January 2020; pp. 705–714.
113. Chen, Q.; Yan, X.; Zhang, T. Converting visitors of physicians’ personal websites to customers in online health communities: Longitudinal study. J. Med. Internet Res. 2020, 22, e20623.
114. Noteboom, C.; Al-Ramahi, M. What Are the Gaps in Mobile Patient Portal? Mining Users Feedback Using Topic Modeling. In Proceedings of the 51st Hawaii International Conference on System Sciences, Association for Information Systems, Hilton Waikoloa Village, HI, USA, 3–6 January 2018; pp. 564–573.
115. Emmert, M.; Meier, F.; Pisch, F.; Sander, U. Physician choice making and characteristics associated with using physician-rating websites: Cross-sectional study. J. Med. Internet Res. 2013, 15, e187.
116. Storino, A.; Castillo-Angeles, M.; Watkins, A.A.; Vargas, C.; Mancias, J.D.; Bullock, A.; Demirjian, A.; Moser, A.J.; Kent, T.S. Assessing the accuracy and readability of online health information for patients with pancreatic cancer. JAMA Surg. 2016, 151, 831–837.
117. Shah, A.M.; Yan, X.; Shah, S.A.A.; Mamirkulova, G. Mining patient opinion to evaluate the service quality in healthcare: A deep-learning approach. J. Ambient Intell. Humaniz. Comput. 2020, 11, 2925–2942.
118. Greaves, F.; Pape, U.J.; King, D.; Darzi, A.; Majeed, A.; Wachter, R.M.; Millett, C. Associations between internet-based patient ratings and conventional surveys of patient experience in the English NHS: An observational study. BMJ Qual. Saf. 2012, 21, 600–605.
119. Nieto-García, M.; Muñoz-Gallego, P.A.; González-Benito, Ó. Tourists’ willingness to pay for an accommodation: The effect of eWOM and internal reference price. Int. J. Hosp. Manag. 2017, 62, 67–77.
120. Narwal, P.; Nayak, J.K. How consumers form product quality perceptions in absence of fixed posted prices: Interaction of product cues with seller reputation and third-party reviews. J. Retail. Consum. Serv. 2020, 52, 101924.
121. Wald, J.T.; Timimi, F.K.; Kotsenas, A.L. Managing physicians’ medical brand. In Mayo Clinic Proceedings; Elsevier: Amsterdam, The Netherlands, 2017; Volume 92, pp. 685–686.
122. Guo, L.; Jin, B.; Yao, C.; Yang, H.; Huang, D.; Wang, F. Which doctor to trust: A recommender system for identifying the right doctors. J. Med. Internet Res. 2016, 18, e186.
123. Strech, D. Ethical principles for physician rating sites. J. Med. Internet Res. 2011, 13, e113.
124. Liu, Q.; Zheng, Z.; Zheng, J.; Chen, Q.; Liu, G.; Chen, S.; Chu, B.; Zhu, H.; Akinwunmi, B.; Huang, J.; et al. Health communication through news media during the early stage of the COVID-19 outbreak in China: Digital topic modeling approach. J. Med. Internet Res. 2020, 22, e19118.
125. Greaves, F.; Ramirez-Cano, D.; Millett, C.; Darzi, A.; Donaldson, L. Harnessing the cloud of patient experience: Using social media to detect poor quality healthcare. BMJ Qual. Saf. 2013, 22, 251–255.
126. Shaffer, V.A.; Hulsey, L.; Zikmund-Fisher, B.J. The effects of process-focused versus experience-focused narratives in a breast cancer treatment decision task. Patient Educ. Couns. 2013, 93, 255–264.
127. Drewniak, D.; Glässel, A.; Hodel, M.; Biller-Andorno, N. Risks and benefits of web-based patient narratives: Systematic review. J. Med. Internet Res. 2020, 22, e15772.
128. Afzal, M.; Ali, S.I.; Ali, R.; Hussain, M.; Ali, T.; Khan, W.A.; Amin, M.B.; Kang, B.H.; Lee, S. Personalization of wellness recommendations using contextual interpretation. Expert Syst. Appl. 2018, 96, 506–521.
129. Krishnamurthy, M.; Marcinek, P.; Malik, K.M.; Afzal, M. Representing social network patient data as evidence-based knowledge to support decision making in disease progression for comorbidities. IEEE Access 2018, 6, 12951–12965.
130. Stacey, D.; Légaré, F.; Col, N.F.; Bennett, C.L.; Barry, M.J.; Eden, K.B.; Holmes-Rovner, M.; Llewellyn-Thomas, H.; Lyddiatt, A.; Thomson, R.; et al. Decision aids for people facing health treatment or screening decisions. Cochrane Database Syst. Rev. 2014, 28, CD001431.
Figure 1. Stages of the literature search process.
Figure 2. Categorization of studies.
Figure 3. Country-wise location of studies.
Figure 4. Physician rating websites used in studies.
Table 1. Search sources.
Criteria | Details
Electronic databases | PubMed, EMBASE, Google Scholar, Scopus, Web of Science (Clarivate Analytics), Science Direct, Emerald, Taylor & Francis, Springer, Sage, ACM, Wiley, IEEE
Searched items | Journals and conference proceedings
Keywords used | Health rating platforms, physician rating websites, review sites, online reviews, online physician reviews, online ratings, patient online reviews, healthcare quality, e-health, digital health
Search applied on | Full text, to locate publications that fell within the scope of our search and to ensure that we did not overlook those that did not include our search keywords in their titles or abstracts
Language | English
Study period | January 2010–December 2020
Table 2. The number of articles that were filtered based on search terms.
Database | Retrieved | Included
PubMed | 75 | 11
Science Direct | 156 | 10
Emerald | 189 | 2
Taylor & Francis | 195 | 8
Springer | 178 | 7
Sage | 165 | 5
ACM | 158 | 4
Wiley | 149 | 3
IEEE | 16 | 2
Total | 1281 | 52
Table 3. Inclusion and exclusion criteria.
Inclusion criteria:
- Studies that focused on PRWs.
- Studies that reported different signaling mechanisms in healthcare.
- Studies that analyzed patients’ choice or the patient decision-making process.
- Studies that analyzed patients’ opinions as online physician reviews.
- Studies with clear aims/objectives.
- Studies that addressed and described the research context properly.
- Studies whose findings were in line with our research purpose.
- Studies that were peer-reviewed and written in English.
- Studies that were qualitative, quantitative, or mixed-methods in nature.

Exclusion criteria:
- Studies that were not written in English.
- Papers other than journal articles or conference proceedings.
- Duplicate/similar studies (only the most comprehensive and current version was retained).
- Studies without any practical, theoretical, or statistical evidence.
Table 4. Criteria for evaluating the quality of studies.
Criteria | Response Score | Score Obtained
Is the study aim/objective clear? | Yes = 1 / moderately = 0.5 / no = 0 | 31 studies (88%)
Is the research context dealt with well? | Yes = 1 / moderately = 0.5 / no = 0 | 21 studies (92%)
Based on the research findings, what percentage is the quality rate acceptance? | >80% = 1 / under 20% = 0 / between = 0.5 |
Table 5. Quality scores of included papers.
Quality Score | Poor (<26%) | Fair (26–45%) | Good (46–65%) | Very Good (66–85%) | Excellent | Total
Number of articles | 3 | 1 | 11 | 17 | 20 | 52
Percentage of articles | 5.76 | 1.92 | 21.15 | 32.69 | 38.46 | 100
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Shah, A.M.; Muhammad, W.; Lee, K.; Naqvi, R.A. Examining Different Factors in Web-Based Patients’ Decision-Making Process: Systematic Review on Digital Platforms for Clinical Decision Support System. Int. J. Environ. Res. Public Health 2021, 18, 11226. https://doi.org/10.3390/ijerph182111226
