3.1. Sample and Data Collection
The primary data collection involved a two-pronged approach specifically designed to capture trust dynamics in digital transformation and sustainability contexts:
Digital Transformation-Focused Online Survey: The online survey was conducted from March to September 2024, targeting users of major Chinese social media platforms where digital transformation and sustainability content is frequently shared: Xiaohongshu, Sina Weibo, and WeChat Moments. A stratified sampling method was employed to ensure representation across key demographic groups with varying levels of exposure to digital transformation initiatives. The survey instrument, developed from the theoretical framework and the existing literature on expert credibility in digital innovation contexts, comprised 16 core items measuring key constructs related to trust in digital transformation and sustainability experts. Of the 862 questionnaires collected, 850 were deemed valid after excluding responses with implausibly short or long completion times (<3 min or >30 min) and those exhibiting clear response patterns, yielding a valid response rate of 98.6%. All measurement instruments were carefully designed and validated for this study; detailed measurement scales, including the Expert Credibility Scale, Trust Intention Scale, and other key instruments, are provided in Appendix A.
The sample demographics demonstrate good diversity in terms of digital literacy and sustainability awareness, aligning reasonably well with the user profiles of the targeted social media platforms. The gender distribution was 57.6% female (n = 490) and 42.4% male (n = 360). The age distribution covered different generations with varying digital adoption patterns, with 28.4% (n = 241) aged 18–25 (digital natives), 42.3% (n = 360) aged 26–35 (early digital adopters), 19.6% (n = 167) aged 36–45 (digital migrants), and 9.7% (n = 82) aged 46 and above (later digital adopters). Educational attainment was diverse and captured varying levels of potential exposure to digital transformation and sustainability education: 60.0% (n = 510) held bachelor’s degrees, 21.9% (n = 186) associate degrees, 10.9% (n = 93) postgraduate degrees, and 7.2% (n = 61) high school diplomas or below. Occupationally, 53.3% (n = 453) were corporate employees (potentially experiencing workplace digital transformation), 25.2% (n = 214) students (learning about digital innovation), 11.6% (n = 99) professionals (potentially implementing digital transformation), and 9.9% (n = 84) freelancers (navigating the gig economy enabled by digital platforms).
Digital Innovation Platform Quasi-Experiment: A quasi-experimental component leveraged Weibo’s updated personal certification system (Orange V/Gold V mechanism). This certification upgrade system, designed to improve the visibility and credibility of expert content related to digital transformation and sustainability initiatives, served as a natural intervention in the digital information ecosystem. The certification system operates based on objective quantitative metrics: Orange V certification requires ≥300,000 monthly reads and ≥100 loyal fans, while Gold V certification requires ≥10 million monthly reads and ≥1000 loyal fans, with automatic system upgrades based on user performance data. The study sample was divided into two groups: an experimental group (users who actively engaged with the new digital expert verification feature) and a control group (users who did not engage with the feature). Expert trust levels specifically regarding digital transformation and sustainability experts were measured for both groups at two time points: pre-intervention (April 2024) and post-intervention (June 2024). This design allows for a difference-in-differences (DID) analysis to assess the causal impact of the social media certification upgrade on trust in digital transformation and sustainability expertise.
Secondary Data with Digital Transformation Context (CGSS): To enhance external validity and provide a broader societal context for digital transformation attitudes, the study incorporates data from the Chinese General Social Survey (CGSS) 2021. The CGSS is a nationally representative, comprehensive, and continuous academic survey in China, highly regarded in policy-making and social research. Seven relevant questions from the CGSS (2021) dataset (valid sample size = 8148) were selected, covering socio-demographic attributes (gender, age, education), digital media usage (digital information sources and frequency), and social trust attitudes (general and institutional trust, including trust in technology experts and sustainability advocates). This secondary data allows for cross-validation of findings about expert trust in digital contexts and helps control for potential sample selection bias in digital transformation perceptions.
3.2. Research Measurement
This study employed a multidimensional measurement approach to operationalize the key constructs related to digital transformation expertise and assess their relationships with trust outcomes. The dependent variable, expert trust in digital transformation and sustainability contexts, was measured using a comprehensive instrument specifically developed for this study to capture the multifaceted nature of trust in these specialized domains. This instrument comprised three distinct, yet interrelated, dimensions particularly relevant to digital transformation: Digital Innovation Trust (DIT), reflecting overall confidence in digital transformation research and development; Digital Transformation Expert Trust (DTET), assessing trust in specific groups of experts within digital innovation and sustainability domains; and Digital Sustainability Advice Trust (DSAT), measuring the willingness to accept and act upon specific digital transformation recommendations or sustainability advice provided by experts. Each dimension was assessed using an 11-point rating scale, ranging from 0 (representing complete distrust in digital transformation expertise) to 10 (indicating complete trust in digital transformation expertise). A composite expert trust score was calculated by averaging the scores across the three dimensions, assigning equal weight to each digital transformation dimension.
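As an illustration of this scoring step, the composite can be computed as an equally weighted mean of the three dimension scores; the sketch below assumes a data frame named survey with hypothetical column names dit, dtet, and dsat for the three 0–10 dimension scores.

```r
# Equally weighted composite of the three 0-10 trust dimensions
# (data frame `survey` and columns dit, dtet, dsat are hypothetical names)
survey$trust_composite <- rowMeans(survey[, c("dit", "dtet", "dsat")], na.rm = TRUE)
range(survey$trust_composite, na.rm = TRUE)  # composite remains on the 0-10 scale
```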
Prior to the main data collection, the instrument underwent rigorous pilot testing (n1 = 30, n2 = 50) to ensure its reliability and validity for measuring trust in digital transformation contexts. Detailed reliability and validity assessment information is provided in Appendix C. Results from the pilot tests demonstrated strong internal consistency (Cronbach's α = 0.89), composite reliability (CR = 0.92), and average variance extracted (AVE = 0.76). In the main survey (n = 850), the overall mean expert trust score for digital transformation experts was 6.82 (SD = 1.56), with mean scores for the individual dimensions as follows: DIT (M = 7.12, SD = 1.48), DTET (M = 6.75, SD = 1.62), and DSAT (M = 6.59, SD = 1.58). Confirmatory Factor Analysis (CFA) further supported the three-dimensional structure of the digital transformation trust instrument, yielding satisfactory fit indices (χ² = 127.34, df = 24, p < 0.001, CFI = 0.96, RMSEA = 0.048).
The independent variables, representing key antecedents of expert trust in digital transformation contexts, were measured using multi-item scales. Professional Competence in digital transformation, adapted for digital innovation contexts, captured perceptions of experts’ digital skills, technical knowledge, innovation qualifications, and capabilities in sustainable technology development. This construct was assessed through six items reflecting professional expertise in digital technologies, practical experience implementing digital transformation, academic reputation in sustainability innovation, and professional qualifications in digital fields. Participants rated each item on a five-point Likert scale (1 = strongly disagree, 5 = strongly agree), with the scale demonstrating good internal consistency (α = 0.87, M = 3.92, SD = 0.86).
Perceived Expert Integrity in digital contexts, focused on digital ethics, measured the perceived honesty and objectivity of digital transformation experts. It was assessed using three items evaluating experts’ objectivity in digital innovation matters and their overall trustworthiness when recommending sustainable technology solutions, also employing a five-point Likert scale (1 = strongly disagree, 5 = strongly agree). This scale exhibited good reliability (α = 0.85, M = 3.78, SD = 0.92).
Perceived Expert Benevolence in digital transformation, contextualized for sustainability goals, captured the extent to which digital transformation experts were perceived as acting for societal benefit beyond commercial interests. This construct was measured with three items reflecting perceived warmth and alignment with public sustainability values, using a five-point Likert scale (1 = strongly disagree, 5 = strongly agree) and demonstrating good internal consistency (α = 0.83, M = 3.65, SD = 0.94).
Expert Openness in digital innovation, adapted for digital contexts, reflected digital transformation experts’ willingness to engage with the public about technological change and sustainability initiatives. This dimension was assessed using three items measuring social media engagement about digital innovations, willingness to acknowledge different opinions on digital transformation, and transparency in public communication activities about digital transformation, again utilizing a five-point Likert scale (1 = very poor, 5 = very good). The scale showed strong reliability (α = 0.86, M = 3.71, SD = 0.89).
Two key variables related to digital media usage were also included. Digital Information Quantity assessed the perceived prevalence of expert information about digital transformation and sustainability across various social media channels. This was measured using three self-developed items that gauged participants’ agreement (1 = strongly disagree, 5 = strongly agree) with statements concerning the frequency of encountering digital transformation expert information on social media, the volume of expert content on sustainable development, and the exposure frequency of expert information on social platforms. This scale demonstrated good reliability (α = 0.82, M = 3.54, SD = 0.96).
Digital Information Quality was measured using a four-item scale, tailored to the Chinese digital transformation context. This scale assessed various aspects of digital information quality, including the high quality of expert content encountered, the accuracy and evidence-based nature of expert information, the persuasiveness of expert content, and the professional presentation of expert information. Participants rated each item on a five-point Likert scale (1 = strongly disagree, 5 = strongly agree), and the scale showed strong internal consistency (α = 0.88, M = 3.69, SD = 0.87).
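To illustrate how the internal consistency of such multi-item scales can be checked, the sketch below uses the psych package; the item names iq1–iq4 for the Digital Information Quality scale are hypothetical placeholders.

```r
library(psych)

# Cronbach's alpha for the four Digital Information Quality items (hypothetical names)
iq_items <- survey[, c("iq1", "iq2", "iq3", "iq4")]
psych::alpha(iq_items)  # reports raw and standardized alpha plus item-level statistics
```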
Finally, the study incorporated several control variables to account for potential confounding influences on digital transformation trust. These included demographic variables such as age (reflecting potential digital generation gaps), gender, and education level (influencing digital literacy), as well as measures of general social trust (obtained from the CGSS data) and digital media literacy (measured using a separate scale). These control variables were included in the regression models to isolate the specific effects of the independent variables on expert trust in digital transformation contexts.
3.4. Data Analysis Methods
This research utilized a series of statistical techniques, implemented in R (version 4.1.0), to analyze the data and address the research questions. Statistical significance was consistently set at p < 0.05. Prior to conducting the main analyses, rigorous data screening and assumption checks were performed, including assessments of missing data, outliers, normality, linearity, multicollinearity, and homoscedasticity. Appropriate transformations and data handling techniques were applied where necessary.
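A minimal R sketch of these screening checks is given below; the data frame survey and its column names are hypothetical, and the actual screening covered all study variables.

```r
# Data screening and assumption checks (illustrative sketch)
colSums(is.na(survey))                     # missing values per variable

z <- scale(survey$trust_composite)         # univariate outliers: |z| > 3
which(abs(z) > 3)

qqnorm(survey$trust_composite)             # normality of the composite trust score
qqline(survey$trust_composite)

m0 <- lm(trust_composite ~ competence + integrity + benevolence + openness +
           info_quantity + info_quality, data = survey)
car::vif(m0)                               # multicollinearity (assumes the car package)
plot(m0, which = c(1, 3))                  # linearity and homoscedasticity diagnostics
```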
3.4.1. Descriptive Statistics and Bivariate Correlations
The analysis began with descriptive statistics (means, standard deviations, frequencies, percentages) to summarize the characteristics of the sample and the distributions of all key variables (expert trust dimensions, expert cognition dimensions, media usage variables, and control variables). This provided an initial overview of the data. Subsequently, Pearson product-moment correlation coefficients (r) were calculated to examine the bivariate relationships between all continuous variables, providing a preliminary assessment of the associations between expert trust, its potential antecedents, and media usage patterns. The formula for Pearson's r is:

\[ r = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2}\,\sqrt{\sum_{i=1}^{n}(y_i - \bar{y})^2}} \]

where r is the Pearson correlation coefficient, x_i and y_i are the values of variables X and Y for individual i, and x̄ and ȳ are the sample means of X and Y, respectively.
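For illustration, this step can be carried out in R roughly as follows; the variable names are hypothetical placeholders for the measured constructs.

```r
# Descriptive statistics and Pearson correlations (illustrative sketch)
vars <- survey[, c("trust_composite", "competence", "integrity", "benevolence",
                   "openness", "info_quantity", "info_quality")]

summary(vars)                                   # means, medians, ranges
sapply(vars, sd, na.rm = TRUE)                  # standard deviations

cor(vars, use = "pairwise.complete.obs")        # Pearson correlation matrix

# Significance test for a single pair of variables
cor.test(survey$trust_composite, survey$info_quality, method = "pearson")
```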
3.4.2. Measurement Model Validation (CFA)
Confirmatory Factor Analysis (CFA) was employed to rigorously assess the measurement model, specifically the construct validity of the multi-item scales used to measure expert trust and its antecedents. CFA tests the a priori hypothesized factor structure, evaluating how well the observed data fit the theoretical model. Detailed confirmatory factor analysis results are provided in
Appendix B. The CFA model is represented as:

\[ x = \Lambda \xi + \delta \]

where x is the vector of observed variables (the scale items); Λ is the matrix of factor loadings, capturing the relationship between each observed variable and its corresponding latent factor (e.g., expert credibility, trust intention, Digital Information Quantity, Professional Competence); ξ is the vector of latent factors; and δ is the vector of error terms (measurement error for each observed variable).
Model fit was assessed using multiple indices, including the chi-square statistic (χ2), degrees of freedom (df), the comparative fit index (CFI), the Tucker–Lewis index (TLI), the root mean square error of approximation (RMSEA), and the standardized root mean square residual (SRMR). Generally accepted cut-off values for good model fit were CFI and TLI ≥ 0.95, RMSEA ≤ 0.06, and SRMR ≤ 0.08. Additionally, convergent validity was assessed by examining the average variance extracted (AVE) for each latent construct (AVE ≥ 0.50 is desirable), and discriminant validity was assessed by comparing the square root of the AVE for each construct with the correlations between that construct and other constructs.
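As a sketch of how this model can be specified and its fit indices obtained, the lavaan syntax below estimates the three-dimensional trust measurement model; the item names (dit1–dsat3) are hypothetical placeholders for the Appendix A items.

```r
library(lavaan)

# Three-factor CFA for the trust instrument (hypothetical item names)
cfa_model <- '
  DIT  =~ dit1  + dit2  + dit3
  DTET =~ dtet1 + dtet2 + dtet3
  DSAT =~ dsat1 + dsat2 + dsat3
'

fit <- cfa(cfa_model, data = survey, estimator = "ML")

# Fit indices compared against the cut-offs reported above
fitMeasures(fit, c("chisq", "df", "cfi", "tli", "rmsea", "srmr"))

standardizedSolution(fit)      # standardized loadings
semTools::reliability(fit)     # AVE and composite reliability (assumes semTools)
```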
3.4.3. Multiple Regression Analysis
Multiple linear regression was the primary method for examining the relationships between the independent variables (expert cognition dimensions and media usage variables) and the dependent variable (the composite expert trust score). The general regression equation is:

\[ Y_i = \beta_0 + \beta_1 X_{1i} + \beta_2 X_{2i} + \cdots + \beta_k X_{ki} + \varepsilon_i \]

where Y_i is the expert trust score for individual i; β_0 is the intercept (the predicted value of Y when all X variables equal 0); β_1, …, β_k are the unstandardized regression coefficients, each representing the change in Y associated with a one-unit change in the corresponding independent variable, holding all other independent variables constant; X_1i, …, X_ki are the values of the independent variables for individual i, namely Professional Competence, Perceived Expert Integrity, Perceived Expert Benevolence, Expert Openness, Digital Information Quantity, Digital Information Quality, and the control variables (age, gender, education, general social trust, and media literacy); and ε_i is the error term for individual i, capturing the variance in Y left unexplained by the model.
Separate regression models were estimated to test the main effects of the expert cognition dimensions and the media usage variables. Hierarchical regression was then used to test for potential interaction effects (moderation), introducing interaction terms (e.g., the product of Professional Competence and Information Quality) to assess whether the relationships between specific independent variables and expert trust were contingent on the levels of other variables.
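A compact R sketch of the main-effects and hierarchical moderation models is shown below; the predictor and data frame names are hypothetical, and in practice the interacting predictors would be mean-centred before forming the product term.

```r
# Step 1: main effects of expert cognition, media usage, and controls
m1 <- lm(trust_composite ~ competence + integrity + benevolence + openness +
           info_quantity + info_quality +
           age + gender + education + social_trust + media_literacy,
         data = survey)

# Step 2: hierarchical step adding an illustrative interaction term
m2 <- update(m1, . ~ . + competence:info_quality)

summary(m2)     # coefficients and significance tests
anova(m1, m2)   # incremental F-test for the moderation step
```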
3.4.4. Mediation Analysis
To test the hypothesis that exposure to expert information mediates the relationship between social media use and expert trust, a mediation analysis was conducted using the bootstrapping method. This approach is robust to violations of normality assumptions and provides more accurate confidence intervals for the indirect effect. The mediation model is represented by the following equations:
Equation for the mediator (M, exposure to expert information):

\[ M_i = a_0 + a X_i + e_{1i} \]

Equation for the dependent variable (Y, expert trust):

\[ Y_i = b_0 + b M_i + c' X_i + e_{2i} \]

where Y_i is the expert trust score for individual i, X_i is social media use for individual i, and M_i is exposure to expert information for individual i; a is the path coefficient for the effect of X on M; b is the path coefficient for the effect of M on Y, controlling for X; c' is the path coefficient for the direct effect of X on Y, controlling for M; a_0 and b_0 are the intercepts; and e_1i and e_2i are the error terms.
The indirect effect (mediation effect), representing the extent to which social media use influences expert trust through exposure to expert information, is calculated as the product of a and b (a × b). Bootstrapping (with 5000 resamples) was used to estimate the 95% confidence intervals for the indirect effect. If the confidence interval does not include zero, the mediation effect is considered statistically significant.
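For illustration, the bootstrapped indirect effect can be estimated with the mediation package as sketched below; the column names sm_use (X), expert_exposure (M), and trust_composite (Y) are hypothetical.

```r
library(mediation)   # bootstrapped mediation analysis

model_m <- lm(expert_exposure ~ sm_use, data = survey)                    # a path
model_y <- lm(trust_composite ~ expert_exposure + sm_use, data = survey)  # b and c' paths

med <- mediate(model_m, model_y,
               treat = "sm_use", mediator = "expert_exposure",
               boot = TRUE, sims = 5000)   # 5000 bootstrap resamples

summary(med)   # ACME = a x b (indirect effect), ADE = c', total effect, 95% CIs
```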
3.4.5. Difference-in-Differences (DID) Analysis
To evaluate the causal impact of the social media platform’s certification upgrade on expert trust, a difference-in-differences (DID) analysis was employed. This quasi-experimental design compares the changes in expert trust over time between a treatment group (users who actively engaged with the new certification system) and a control group (users who did not engage with the feature). The DID model is:
\[ Y_{it} = \beta_0 + \beta_1 \mathrm{Treat}_i + \beta_2 \mathrm{Post}_t + \beta_3 (\mathrm{Treat}_i \times \mathrm{Post}_t) + \gamma X_{it} + \varepsilon_{it} \]

where Y_it is the expert trust score for individual i at time t; Treat_i is a dummy variable (1 = treatment group, 0 = control group); Post_t is a dummy variable (1 = post-intervention period, 0 = pre-intervention period); and Treat_i × Post_t is the interaction term, whose coefficient β_3 is the DID estimator, i.e., the average treatment effect on the treated (ATT). This is the key coefficient of interest, capturing the difference in the change in expert trust between the treatment and control groups; a statistically significant and positive β_3 would indicate that the certification upgrade had a positive causal impact on expert trust. X_it is a vector of time-varying and time-invariant control variables (e.g., demographics, baseline expert trust, pre-intervention media usage), and ε_it is the error term.
The DID analysis was crucial for establishing a causal link between the platform’s certification system and changes in expert trust. The parallel trends assumption, fundamental to the DID design, was carefully assessed. Graphical inspection of pre-intervention trends in expert trust for the treatment and control groups provided visual evidence, and a formal statistical test was conducted by regressing pre-intervention expert trust levels on the treatment-group indicator and time dummies. A non-significant coefficient on their interaction term in this pre-intervention regression would lend support to the parallel trends assumption. Robust standard errors were used in all DID estimations to account for potential serial correlation within individuals over time.
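A compact R sketch of the DID estimation on the two-wave panel is given below; the long-format data frame panel and its columns id, treat, post, and trust are hypothetical.

```r
library(sandwich)   # cluster-robust variance estimators
library(lmtest)     # coeftest()

# DID regression: the coefficient on treat:post is the ATT (beta_3)
did <- lm(trust ~ treat * post + age + gender + education + media_use_pre,
          data = panel)

# Robust standard errors clustered by individual
coeftest(did, vcov = vcovCL(did, cluster = ~ id))
```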
In summary, this study employed a comprehensive analytical strategy to thoroughly investigate the complex interplay between social media use, expert cognition, and expert trust. (1) Descriptive statistics and bivariate correlations provided a foundational understanding of the sample and the relationships among key variables. (2) Confirmatory Factor Analysis (CFA) established the psychometric properties of the measurement instruments, ensuring that the constructs were measured reliably and validly. (3) Multiple regression analysis allowed for the examination of the direct effects of expert cognition and media usage on expert trust, while controlling for potential confounding factors. (4) Mediation analysis provided insights into the underlying mechanisms, testing whether exposure to expert information mediated the relationship between social media use and trust. (5) Finally, Difference-in-Differences (DID) analysis provided a rigorous, quasi-experimental assessment of the causal impact of the specific social media platform certification system on expert trust. By integrating these five complementary analytical approaches, the study aimed to provide robust and nuanced findings, contributing to a deeper understanding of expert trust in the evolving digital landscape. The use of both cross-sectional survey data and longitudinal quasi-experimental data, combined with secondary data validation, strengthens the internal and external validity of the conclusions.