Article

Corporate Sustainability Communication as ‘Fake News’: Firms’ Greenwashing on Twitter

by Divinus Oppong-Tawiah 1,* and Jane Webster 2

1 Schulich School of Business, York University, 111 Ian MacDonald Blvd., Toronto, ON M3J 1P3, Canada
2 Smith School of Business, Queen’s University, 143 Union St., Kingston, ON K7L 3N6, Canada
* Author to whom correspondence should be addressed.
Sustainability 2023, 15(8), 6683; https://doi.org/10.3390/su15086683
Submission received: 16 October 2022 / Revised: 13 March 2023 / Accepted: 2 April 2023 / Published: 14 April 2023
(This article belongs to the Special Issue Sustainable Development and Organizational Performance)

Abstract:
Fake news on social media has engulfed the world of politics in recent years and is now posing the same threat in other areas, such as corporate social responsibility communications. This study examines this phenomenon in the context of firms’ deceptive communications concerning environmental sustainability, usually referred to as greenwashing. We first develop and validate a new method for automatically identifying greenwashing, using linguistic cues in a sample of tweets from a diverse set of firms in two highly polluting industries. We then examine the relationship between greenwashing and financial market performance for the firms in our sample. Prior research has identified these issues as some of the most important gaps in the extant literature. By addressing them, we make several important contributions to corporate sustainability research and practice, as well as introducing notable improvements to automatic greenwashing detection methods.

1. Introduction

Firms are increasingly adopting social media to broadcast corporate news and interact with their stakeholders [1], for example, by making environmental claims as part of their corporate social responsibility (CSR) communications [2]. When these environmental communications mislead, this is called greenwashing [3]. Greenwashing represents a form of organizationally generated fake news [4] involving exaggerated, selective, or deceptive communications about firms’ environmental performance [5,6,7,8].
Many firms have taken to social media to communicate with their stakeholders on sustainability, and Twitter has emerged as one of the most popular media for this [1,9]. The interactive and open nature of social media provides additional opportunities for stakeholders to scrutinize and hold firms accountable for their sustainability performance [9,10]. However, if stakeholders cannot reliably detect greenwashing [11,12], these additional opportunities may provide little real value to them and may even pose a significant threat to the reputations of firms that are wrongly accused of greenwashing. Managers may thus find themselves in an “iron cage” in which they hesitate to disclose all or part of their environmental performance for fear of unfounded accusations or undue punishment [3,7].
Greenwashing raises skepticism among stakeholders concerning whether environmental sustainability represents a genuine priority for firms [6]. In a 2019 industry survey, executives claimed that sustainability is now aligned and integrated with their strategic goals (64% of respondents), key performance indicators (54% of respondents), and products and services (49% of respondents) [13]. However, multiple studies show that greenwashing is a widespread phenomenon in such claims [14,15,16]. In a series of studies, TerraChoice [17], for example, found that 98% of green claims in 1000 North American products misled consumers. The prevalence of firms—particularly those with large environmental footprints—engaging in greenwashing raises serious concerns for firms and their stakeholders: it can erode consumer trust, firm performance, and market value [18,19,20].
Consumers react better to firms that act on environmental issues [18], which puts pressure on firms to present themselves as sustainable entities despite not being “green” [21]. However, green consumers often hold anti-corporate bias and skepticism toward green marketing [22,23,24] and suffer ethical harm when firms engage in greenwashing [25,26]. Consequently, firms can be exposed to potentially ruinous financial liabilities [19]. For example, Volkswagen’s emissions greenwashing scandal in 2015 resulted in a $14.7 billion fine, the largest penalty imposed on an automobile business under US law [27], an estimated loss of $34.5 billion in profit and brand reputation, and a spillover effect of up to a 14% drop in share prices across the auto industry [28].
The adverse market reaction should be worrying, as nearly 90% of investors view a sustainability strategy as essential for firms to remain competitive [20]. Thus, discerning genuine from greenwashed sustainability actions has high-stakes consequences. Generally, however, humans rarely outperform chance in deception detection [29,30]. This becomes more of an issue in areas such as environmental sustainability, which are complex, politicized, and open to contentious debates and rival claims [31,32,33], making it difficult for non-specialist stakeholders to analyze and verify firms’ claims against their actions and performance [11,12]. Even for experts, it is challenging to identify greenwashing: more research on developing methods to measure organizations’ greenwashing is needed [5,7,34]. Therefore, an important question for firms and their stakeholders is: “How can stakeholders distinguish greenwashing from authentic corporate environmental communication on social media?” (RQ1).
In addition to detection, greenwashing research must also examine how it relates to important outcomes [35]. This represents a key concern because of the significant environmental footprints of businesses [36]. However, few studies have pondered the question “does greenwashing pay off?” (e.g., [37,38]). These studies provide some evidence that greenwashing negatively affects firms’ performance, yet their scope has been limited to a single industry (e.g., [38]) or has excluded boundary conditions (e.g., [37]). Consequently, a second question guiding this research is “What is the relationship between a firm’s greenwashing on social media and its financial market performance?” (RQ2).
We address these research questions by bringing together the literature on CSR communications (e.g., [39]), greenwashing (e.g., [7]), fake news (e.g., [40,41]), and deception detection (e.g., [42,43]) to theorize greenwashing detection. We then examine Twitter messages posted by large and small firms in two industries with significant environmental footprints, namely, the oil and gas and automotive industries. We develop a linguistic-based measure of organizational greenwashing and show that it is significantly associated with financial market performance.
Our results withstand multiple validation and robustness tests. As such, this research makes several theoretical, methodological, and empirical contributions. In terms of theorizing, our conceptualization of greenwashing as fake news offers a high-level categorization for classifying existing greenwashing forms and for thinking about new practices that might emerge in the future. Further, our work complements and extends previous greenwashing theories, including disclosure and decoupling. For our greenwashing detection approach, we developed a new automatic deviation-based linguistic style method that extends previous approaches in multiple ways: it avoids the need for ground truth (authenticity) data, it goes beyond binary (greenwashed or not) classifications, it is explainable based on theoretical justifications, and it can be used to examine greenwashing over time and in other contexts. In terms of empirical contributions, we respond to calls for more CSR research on deceptive communications (e.g., [44]), especially in situations in which information asymmetry exists between stakeholders and organizations [45]. Our study also adds important new findings to the growing evidence on the effect of greenwashing on firm outcomes [37,38,46,47]. In sum, our work addresses an urgent need to identify greenwashing and measure its effects, which constitutes “the single most important” gap in the greenwashing literature ([7], p. 243), while also contributing to fake news detection more generally [48].
The rest of the paper is organized as follows. First, we provide an overview of the extant literature that constitutes the theoretical and empirical foundation for our study, and we further explicate our research questions. Then, we explain our method and data, present our analyses, and discuss the results. We conclude by discussing methodological, empirical, theoretical, and practical implications.

2. Theoretical Background

Our work integrates and builds on related theoretical arguments from multiple disciplines, including CSR communications (e.g., [39]), greenwashing (e.g., [7]), fake news [40,41], and deception detection [42,43]. CSR communications emphasize the significant role played by social media in enabling firms and stakeholders to co-create brands [1,39,49], but as the greenwashing literature shows (e.g., [7,9]), increased use of social media by firms to brand themselves as environmentally sustainable exposes them to broader and more diverse greenwashing scrutiny. Based on recent studies theorizing how social media amplify the potential for various types of fake news and the difficulties involved in their detection [40,41], we conceptualize three forms of greenwashing as fake news in social media—greenwashing as disinformation, misinformation, and malinformation—and draw on linguistic techniques from the deception-detection literature (e.g., [42,43]) to develop the profiles of truthful and deceptive green communications. Finally, we draw on empirical findings to explore the potential relationship between greenwashing on social media and firm performance.

2.1. Greenwashing by Organizations

The term greenwashing was first used by Jay Westerveld in 1986 to describe a practice in the hospitality industry that was designed to reduce costs but was presented as a pro-environmental practice [50]. In the management literature, early writers, such as Polonsky et al. (1997), equated greenwashing with marketing hype, while Laufer ([51], p. 253) pointed to an emerging literature that “describes corporate ‘greenwashing,’ ‘bluewashing,’ and other forms of disinformation from organizations seeking to repair public reputations and further shape public image”. Scholarly work on greenwashing has expanded considerably since then, and scholars’ and practitioners’ views on the topic have evolved over time, including on such foundational questions as what constitutes greenwashing [7]. Organizations generally make green claims as part of their CSR communications [2], for which they increasingly rely on social media [1,39,49], risking exposure to greater greenwashing scrutiny [7].
The literature classifies firms’ greenwashing activities into two main types: those that concern their products or services and those that involve their organizational policies and practices [7,52]. Product/service greenwashing usually refers to misleading or deceptive communications (e.g., advertising) about the sustainability of a specific product or service offered by a firm, whereas greenwashing involving organizational policies and practices has been primarily viewed as incomplete or selective disclosure of such information for the purpose of misleading firms’ stakeholders [12,53]. For instance, firms’ greenwashing on Twitter may be “message[s] published and propagated through [social] media, carrying false information regardless of the means and motives behind it” ([54], p. 4). Thus, Lyon and Montgomery ([7], p. 223) define greenwashing as “communication that misleads people into forming overly positive beliefs about an organization’s environmental practices or products”. This definition does not assume intent; that is, “greenwashing need not be deliberate” ([7], p. 225). For example, an organizational communication may include nature-evoking elements that unintentionally induce false perceptions of sustainability [5].
Greenwashing theories often fall into one of two main categories: disclosure or decoupling. Disclosure models suggest that managers are hesitant to disclose all or part of their environmental performance because audiences are generally skeptical and prone to accuse firms of greenwashing (e.g., [3,8,53]). From this perspective, greenwashing “lies in the eye of the beholder and [is] not an inherent aspect of a given communication” ([7], p. 228). Decoupling models hold that firms take symbolic environmental actions to deflect stakeholder attention from a lack of concrete environmental performance (e.g., [11,21,37,55]). In this case, studies often explore greenwashing as a gap between corporate sustainability reports and corporate environmental actions. For example, Mateo-Marquez et al. [55] compare organizations’ communications through the Carbon Disclosure Project with their actual greenhouse gas emissions.
Whatever the theoretical approach, it is difficult to identify actual organizational greenwashing [5]. This is because neither organizations’ underlying intent nor the authenticity of their communications is apparent. For example, to determine authenticity, information such as third-party accusations or objective data is required [56]. Most of the empirical literature focuses on detecting product-level greenwashing by either manipulating it (e.g., [3]) or surveying consumer perceptions (e.g., [57]). For detecting organizational-level greenwashing, researchers might manipulate greenwashing through a fictional company’s characteristics (e.g., [12]), make the assumption that the truth will become public (e.g., [53]), rely on third-party accusations (e.g., when an information intermediary actively collects information from firms and evaluates it [58]), measure deviation between actual and reported emissions (e.g., [6]), or make a manual comparison between what was communicated by the organization and the ground truth (e.g., interviewing employees to determine what was actually implemented on the ground [11]). Most of these assessments, however, are based on sustainability reports and longer-term data that take time to produce; this complicates greenwashing detection by giving firms more time to hide it or confound it with other organizational activities. Further, these methods generally do not present evidence that their measures are assessing greenwashing, nor do they provide validation data to support their measures.
In sum, firms may greenwash in one of two ways: they may take symbolic environmental actions to deflect stakeholder attention from a lack of concrete environmental performance (decoupling model), or they may choose not to disclose all or part of their environmental performance to avoid stakeholder accusations (disclosure model). As firms increasingly use social media for green communications, they expose themselves to greater stakeholder scrutiny, but this also creates new opportunities for third parties to use text-based social media communications to independently verify green claims. Moreover, the organizational greenwashing literature is at a stage where an overarching framework could help streamline the proliferation of different ways in which firms are thought to engage in greenwashing. Based on the greenwashing theories of selective disclosure and decoupling, many forms of greenwashing practices have been proposed, such as cheap talk, symbolic action, false hopes, broken promises, incomplete comparisons, fuzzy reporting, false claims, deceptive manipulation, information selection, and attention diversion (e.g., [3,5,7,56]). However, there has been little conceptual clarity around whether these forms are mutually exclusive and collectively exhaustive of organizational greenwashing. Instead, the field needs an overarching conceptual framework that accounts for the interplay between the intent and authenticity of environmental communications as a basis for categorizing organizational greenwashing. To do so, we draw on the fake news and deception-detection literature to address the conceptualization, categorization, and detection challenges in organizational greenwashing.

2.2. Greenwashing as Fake News: Conceptualization

Greenwashing by organizations can occur through false messages; that is, fake news (e.g., [48,59]) or information with compromised authenticity [60]. Zhou and Zafarani [48] propose two types of fake news sourced from (i) news outlets (a narrower type) and (ii) messages (a broader type). Whereas the narrower type concerns journalistic content in public news outlets, the broader type includes “articles, claims, statements, speeches, and posts, among other types of information, related to public figures and organizations” ([48], p. 4). Greenwashing falls in this latter category, with the former category outside the scope of this paper.
Like greenwashing, varying labels describe fake news, such as satire news [61], deceptive news [62], false news [63], clickbait [64], and rumor [65]. Nevertheless, researchers have categorized fake news along two higher-level dimensions: intent (to deceive or not) and authenticity (false or true) [41,59,66,67], creating four sub-categories. The first sub-category occurs with no intent to deceive and represents true communications, resulting in information rather than fake news. The other three sub-categories represent fake news: (i) disinformation, i.e., information that is false and disseminated with malicious intent (intent to deceive and false), (ii) malinformation, i.e., information that is based on reality or partially true but which is created, produced, or distributed with intent to cause harm (intent to deceive but partially true), and (iii) misinformation, i.e., information that is false but believed to be true by the disseminator (no intent to deceive, but false). Unlike traditional media, social media amplify the potential for these types of fake news by making content creation and publication easy, while their authenticity and any malicious intent behind them are difficult to establish [40,41]. The past decades have witnessed a sharp increase in firms’ claims of environmentally friendly products, policies, and practices, many of which represent these categories of fake news (see Supplementary Section S1).
Examining each of the fake news categories in turn, greenwashing can fall under disinformation when the manipulation involves bald-faced lying, known as ‘active greenwashing deception’ [3]. Examples include when BP lied in 2010 that an estimated 5000 barrels of oil leaked daily from its Macondo well instead of the actual approximately 100,000 barrels per day [68], and when Daimler reportedly admitted that Mercedes-Benz cars and vans sold in the U.S. were programmed to cheat on emissions tests [27]. Moreover, the authenticity of such greenwashing claims is often unknown. Stakeholders only realize that a firm’s sustainability practices may be at odds with their sustainability communications after environmental incidents (e.g., BP’s Gulf oil spill) or regulatory sanctions (e.g., fines for emission scandals) are publicly exposed. In this regard, greenwashing becomes more difficult to detect than other types of disinformation, because in practice, there are no fact-checking databases for calibrating the veracity of such false claims.
On the other hand, greenwashing could fall under malinformation when the manipulation is more subtle: publishing information that is not necessarily or completely a lie in a way that intends to deceive, akin to ‘information-selection greenwashing deception’ [3]. Examples include half-truths, such as when publicly traded firms use selective disclosure of green practices to mask their true environmental performance [8,53], or when fossil fuel companies, such as BP, use discourse to build and maintain a hegemony in the climate change debate so as to deflect stakeholder pressure without harming their core extractive businesses [69]. Thus, instead of outright disinformation, firms may adopt sophisticated discourse strategies to conceal deceptive intent within their seemingly genuine sustainability actions. This allows firms to circumvent the increased scrutiny afforded by social media [9] rather than forgoing greenwashing altogether [10], making detection more difficult. In that regard, greenwashing may be more challenging to detect than other types of malinformation because the blending of reality and fiction is much more subtle. For example, minimal environmental actions may be cloaked in sustainability terms and buzzwords in tweets, which might create the impression of climate awareness, engagement, or leadership.
Finally, greenwashing as misinformation could occur when firms make honest mistakes or lack complete information during communications about their sustainability practices. One might consider misinformation as akin to a “mistake” or “unintentional greenwashing deception”, following the logic in [3]. Thus, deception still occurs, irrespective of the firm’s benign intention. Misinformation can arise when firms attempt to be sustainable in their supply chains [4]: a firm may contract with a vendor that markets itself as engaging in green practices and later find that the vendor concealed material information. The firm will suffer negative blowback if the vendor’s green credentials are exposed as misleading or outright false, despite a genuine intention to pursue green production. As with other types of fake news, it may be easier to verify the falseness of the claim than the intent.
To detect these types of greenwashing using traditional methods, we would need to establish the authenticity of the information (i.e., the authenticity problem) and the intention behind its dissemination (i.e., the intention problem). In other areas, domain experts and fact-checking platforms exist for manually analyzing information authenticity (e.g., PolitiFact, Snopes, TruthOrFiction, HoaxSlayer, etc.), but these are not available for assessing organizations’ communications. As described earlier, one method of manually determining the authenticity of greenwashing could be by interviewing employees, but they might not reveal deceptive communications. Therefore, manual methods are time-consuming at best and in many cases, cannot address the authenticity problem. Determining intent can be even more difficult [48]. However, as we demonstrate, the literature on deception helps shed light on this issue. Specifically, each type of fake news can be considered deception, whether intended or not. For example, misinformation, although unintended by the organization, can still result in unintentional greenwashing deception [3].
Taken together, the foregoing suggests that greenwashing can escape reliable detection by stakeholders due to difficulty in manually verifying facts and/or determining intent in green claims. To overcome these challenges, researchers are turning to automatic detection methods, as outlined next.

2.3. Detecting Greenwashing

As described, manual methods of detecting greenwashing are time-consuming and can only partially address the authenticity and intention problems. Automatic detection methods, which fall into four broad categories (knowledge-, propagation-, source-, and linguistic style-based approaches [48]), show promise but are mostly data-driven. Consequently, these methods offer little theoretical foundation for classifying messages as greenwashing. The style-based approach, however, stands out for employing cues drawn from linguistic theories along with machine learning models to classify deceptive communication (e.g., [70,71]). Therefore, to classify greenwashing, we developed a deviation-based linguistic style method that extends the style-based approach.
To help identify greenwashing with our linguistic approach, we drew on both the content of organizations’ Twitter messages (that is, whether they related to environmental sustainability) as well as on their linguistic features. For the latter, we relied on the literature on linguistic cues; that is, the “lexical and syntactic features of language that are independent of content” ([42], p. 605), such as sentence length or word count. This literature represents an interdisciplinary body of research encompassing such areas as communications, psychology, computer science, and information systems (e.g., [72,73,74,75,76]). We relied on recent meta-analyses and comprehensive empirical papers (e.g., [42,43,74]) to identify the most consistent and salient indicators of deception.
Overall, a meta-analysis of computer-supported deception detection found that this body of work has produced mixed but promising results: even though effect sizes are small, a few cues consistently predict deception but are contingent on several boundary conditions [43]. Similarly, another study concluded that context determines which aspects of deception theory are relevant and applicable, and which dimensions and indicators are important or need adjustment [74]. More recently, a review study complemented by a series of experiments on boundary conditions also pointed to the potential role of context in mixed findings [42]. In sum, while many linguistic cues have been previously studied (see [43]), they sometimes show inconsistent results and are limited to specific modes of communication (i.e., verbal, hand-written, or typed-text deception). Therefore, we examine seven deception cues that are most relevant to greenwashing in tweets (i.e., a typed-text deception) and are consistently supported in the literature: quantity, specificity, complexity, diversity, hedging/uncertainty, affect, and vividness/dominance.
Quantity represents one of the foundational maxims of cooperative discourse [77] and hence is often studied in deception detection. Quantity refers to the length of text, which may be measured at different levels of granularity, from a single morpheme to the entire body of text [42]. Generally, it is expected that deceptive accounts will have shorter lengths compared to truthful ones, either because deceivers strategically opt for reticence to limit the risk of providing incriminating information [72,78] or because the added cognitive load associated with fabrication limits their cognitive abilities [42]. Although deception can also be hidden in longer messages, there is overall meta-analytic support for “word count” and “sentence count” as significant predictors of deception [43].
Specificity refers to the level of detail and precision in contextual information presented in a text and is closely related to quantity [42]. It is expected that truthful accounts will be more specific than deceptive ones because the former draw from experienced events embedded within a rich network of perceptual details and contextual and semantic information recorded in memory [79,80]. In contrast, deceitful accounts suffer from a paucity of specific details related, for example, to place and time, to the five senses, and to numbers, because they rely on imaginary memories. Consequently, “descriptive” and “spatio-temporal” words, as well as “generalizing words” (that deceivers use to distance themselves from specific event details), have consistently reflected specificity in prior studies [43]. For example, greenwashing firms discussing their environmental commitments use more “inclusive” (i.e., generalizing) language compared to those that implement sustainable policies and practices [11].
Complexity refers to the ease with which a text can be comprehended [74] and appears at two levels: lexical and syntactic. Lexical complexity refers to word-level comprehensibility and is assessed with the use of single versus polysyllabic words. In contrast, syntactic complexity refers to sentence-level comprehensibility and is measured by differences in syntactic markers, such as punctuation marks and conjunctions, reflecting simple versus multiple-phrase or multi-clause sentence structures. Deception is thought to be more cognitively demanding than truth-telling, depleting a liar’s cognitive resources and thereby resulting in less lexical and syntactic complexity [81]. Yet another perspective views deceivers as capable of using obfuscation—a writing style involving complex verbiage and language structures—to deceive, although it is less clear whether obfuscation arises out of malicious intent or lapses in individual writing ability [82]. For example, in the context of environmental reporting, one study assumed that firms use obfuscation to “mislead by misrepresenting or concealing unfavorable facts” ([58], p. 1209). However, using complex writing alone to classify deception can be problematic, because on its own, text that is easy to understand presents no cues about the extent of falsity and/or authenticity of what is being claimed; that is, liars may use simple language to escape detection [82]. In support of this, another study found that different firms cover similar content in their reports, but firms that practice what they preach use more complex styles of language than firms that decouple their actions from their statements [11]. Therefore, in line with the linguistic deception literature, we consider low complexity as one of several linguistic cues that collectively help to detect deception. The established indicators of complexity here include words representing cognitive processes and insights, and average sentence length [43].
Diversity refers to the degree of uniqueness or repetition in text [42]. It draws from a similar theoretical rationale as complexity, i.e., deception strains cognitive resources and marks deceivers apart from truth tellers. However, unlike complexity, diversity strictly focuses on variation in the language used [30], with deceivers thought to use less diverse language and more repetition of vocabulary and phrases [74]. The ratios of content words (content-word diversity) and unique words (type-token ratio) to the total word count consistently predict deception across studies [42,43].
Hedging/uncertainty refers to the presence of “vagueness, evasiveness, or ambiguity” in a text’s meaning ([42], p. 609). By using uncertain, indecisive, and noncommittal language, deceivers attempt to avoid definitive and verifiable responses in order to evade detection (e.g., [42,83,84]). Truth tellers, on the other hand, may use more certain language to convey confidence [85], although deceivers may also pepper their accounts with certainty words when trying to be persuasive and assertive [42]. Hedging/uncertainty and specificity appear similar because deceivers attempt to be vague with both cues, but they are conceptually and empirically distinct. That is, specificity refers to the level of detail and precision in contextual information presented in text, as reflected in the use of more descriptive, space (geographical), and time (temporal) words by truth tellers, but hedging/uncertainty refers to scenarios where facts are omitted or embellished in indirect, vague framing, as reflected in the use of tentative words, impersonal pronouns, and modal verbs, among others. Thus, whereas the level of specificity helps detect deception when the speaker draws on few contextual details, the level of uncertainty/hedging helps detect deception because the speaker attempts to equivocate to avoid risk of exposure, irrespective of contextual details.
Affect refers to the amount of positive- versus negative-valence emotional words in a message [42]. It materializes in the form of uncontrolled emotional experiences, such as moral guilt, anxiety, and fear of detection, that may or may not translate into emotional language during deceptive episodes [42,72,73]. For example, negative-emotion words, such as sad and bad, may reflect guilt in deceivers [76,86], but deceivers may also strategically use more positive-emotion words, such as happy, pretty, good, luck, and joy, when attempting to be more pleasant, friendly, and persuasive [87]. Considering that firms want their communications to sound pleasant, the level of positive affect shows potential for assessing deception.
Vividness or dominance is the extent to which words convey intensity and power to dominate conversations. Deceivers may attempt to evade detection by using dominant, expressive, vivid, or forceful language in their communications [42]. However, some deceivers may also strategically use non-dominant or passive voice to evade detection [43]. Thus, in contrast to truth-tellers, who are expected to remain neutral on vividness, deceivers have been shown to substantially deviate from the average level of dominance expected in language use. Vividness/dominance has been consistently assessed with expressivity (or emotiveness in some writings) and active language in prior studies (e.g., [42,88]).
Taken together, assessing these seven linguistic cues (quantity, specificity, complexity, diversity, hedging/uncertainty, affect, and vividness/dominance) in organizations’ Twitter messages relating to environmental sustainability provides us with a means to help distinguish less-greenwashed from more-greenwashed communications, addressing our first research question.

2.4. Relating Greenwashing to Organizational Financial Market Performance

One of the most important questions about fake news, such as greenwashing, concerns its outcomes [35], because without knowing its outcomes, there may be little incentive to identify and counter it. Prior studies have generally examined two types of greenwashing outcomes: consumer perceptions and firms’ performance outcomes. The research examining the former usually concerns product-level greenwashing (e.g., [4,57,89,90]) and therefore is outside this study’s scope. However, the findings from this stream generally show that greenwashing harms consumers’ trust in products and influences their purchase intentions [7]. Fewer studies have investigated the second type of outcome, firms’ performance, but the evidence to date has been mixed. Walker and Wan [37] found that, in highly polluting industries, financial performance (measured in terms of Return on Assets) is negatively associated with greenwashing, because of increased stakeholder scrutiny, heightened demands to mitigate harmful operations, and an abundance of publicly available data on such firms’ environmental performance. They highlighted the need for future research to examine the effects of boundary conditions, such as mediators and moderators, on this relationship. Further, Wu and Shen [38] found that greenwashing had no effect on several financial performance ratios salient to the banking industry. Testa et al. [47] found that publicly traded firms in multiple countries and industries did not significantly benefit from greenwashing in their operational performance, nor did they suffer significant punishment in market value. In contrast, Gatti et al. [3] asked M-Turk participants to think about their intentions to invest and found that they reacted even more negatively to greenwashing than to environmental misconduct. Most recently, Li et al. [46] found positive effects of unperceived greenwashing on the financial performance of Chinese firms and suggested that the sampled firms were able to avoid external accusations, minimizing the impact on their financial operations.
This research highlights the effect of scrutiny or exposure of environmental practices and incidents on the relationship between greenwashing and firms’ market performance. For example, firms with lower environmental performance are less likely to engage in selective disclosure, particularly when headquartered in countries with more environmental NGOs and higher levels of liberty and civil rights [8], suggesting that they are mindful of the increased probability of being exposed under heightened scrutiny. Overall, these studies provide initial evidence that greenwashing has some material impact on firms’ financial market performance, particularly when acts of greenwashing are detected and exposed. From a firm’s perspective, stakeholders’ negative reactions to greenwashing, as manifested in negative investor reactions [20], would mean that greenwashing is not a good strategy to deal with stakeholders’ environmental demands. From the stakeholders’ perspective, greenwashing may not only have financial significance (e.g., decreased shareholder value) but may also have ramifications for stakeholders’ long-term well-being because of the significant environmental footprints of businesses [36,91]. Therefore, RQ2 deals with the effect of greenwashing on financial market performance, and in line with the role of scrutiny as a potentially important moderator, we further examine how the level of environmental exposure moderates this relationship.

3. Methods

To examine our research questions, we utilized secondary data for firms in two industries (oil and gas and automotive industries). We addressed RQ1 by focusing on developing and validating an automatic profile-deviation-based measure for detecting greenwashing. We examined RQ2 by modeling and testing the association between our measure of greenwashing and financial market performance.

3.1. Data

Our initial sample began with 80 firms, the top and bottom 20 firms on the Forbes Global 2000 list of publicly traded firms for both the oil and gas and automotive industries. The choice of this diverse sample was in line with prior research that has established that greenwashing practices and their outcomes vary by firms’ characteristics and across industries [8,9]. We restricted the period of analysis between 2012 (when many of the firms in our dataset first created their Twitter accounts) and 2019 (to avoid the potentially confounding effects of the COVID-19 pandemic on firms’ tweeting behaviors, e.g., Rickett, 2020; Romero, 2020). Our final sample of 57 firms represented those firms from the initial 80 who had active Twitter accounts for the period of study.
Our dataset from 2012 to 2019 included all tweets posted on the firms’ Twitter handles (422,664 tweets in total). We collected Twitter data by building a Python wrapper around the public Twitter API to crawl historic tweets in firm handles over the observation period. In addition, we collected firms’ financial market, control variable, and environmental exposure data for the same period. Consistent with prior studies, we gathered these variables from separate sources: firms’ financial market and control variable data from Bloomberg terminals [92,93,94] and environmental data from Sustainalytics reports [95,96,97]. Because our measures come from these different sources, common method bias should not be a concern in our regression models.
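As an illustration of this collection step, the sketch below pages through a firm handle’s timeline via the Twitter API v2 user-timeline endpoint. The endpoint parameters, bearer-token handling, and function names are assumptions made for illustration; the wrapper used in the study may have relied on a different API version or library.

```python
# Minimal sketch of a tweet crawler of the kind described above, written against
# the Twitter API v2 user-timeline endpoint. Parameter names and token handling
# are illustrative assumptions, not the study's original wrapper.
import os
import requests

BEARER = os.environ["TWITTER_BEARER_TOKEN"]  # assumed to be set in the environment

def fetch_user_tweets(user_id: str, max_pages: int = 10) -> list[dict]:
    """Page through a firm handle's timeline and return raw tweet objects."""
    url = f"https://api.twitter.com/2/users/{user_id}/tweets"
    headers = {"Authorization": f"Bearer {BEARER}"}
    params = {"max_results": 100, "tweet.fields": "created_at,public_metrics"}
    tweets, next_token = [], None
    for _ in range(max_pages):
        if next_token:
            params["pagination_token"] = next_token
        resp = requests.get(url, headers=headers, params=params, timeout=30)
        resp.raise_for_status()
        payload = resp.json()
        tweets.extend(payload.get("data", []))
        next_token = payload.get("meta", {}).get("next_token")
        if not next_token:
            break
    return tweets
```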

3.2. Analyses

We followed a multi-step process to conduct our analyses. First, we identified firms’ Twitter messages related to the environment and then compared these messages to deceptive linguistic cues identified in the literature. Next, we created ideal profiles for greenwashed and truthful tweets based on these cues and classified our sample of tweets using these profiles, validating our classification using multiple methods. Finally, we related our greenwashing measure to financial market performance. That is, we devised a multi-stage process involving the following steps: (1) selecting tweets related to environmental sustainability, (2) linguistic analysis, (3) profiling greenwashing, (4) validating our greenwashing detection method, and (5) relating our greenwashing measure to financial market performance. Appendices A and B describe these steps in more detail.

3.2.1. Selecting Tweets Related to Environmental Sustainability

The first step was to select tweets related to the environment. To achieve this, we created a custom dictionary of sustainability terms for the linguistic analysis software LIWC (described in the next section) and compared our sample of tweets to this dictionary in order to identify environmental tweets to analyze for the potential presence of greenwashing. We followed procedures established in prior studies for developing and validating dictionaries [98,99].
We compiled our initial list of environmental terms by examining sources from both industry and academia. Specifically, to collect practitioners’ terminology, we examined environmental indicators described in the MSCI ESG KLD dataset’s manual [100], available through the WRDS database, and extracted all the terms related to environmental sustainability. MSCI is a leading provider of business data, including environmental, social, and governance (ESG) data for investors, which are also widely used in sustainability research [101]. Next, to collect academic terminology, we examined the list of top journals in environmental studies, ranked by Impact Factor in Web of Science, similar to [39]. We extracted all the keywords relevant to environmental sustainability from this list, starting at the top-rated journal, until the keywords reached saturation. We obtained 1369 terms in total. We then asked two judges (a professor and a PhD student undertaking research in sustainability) to rate these keywords on their relevance to environmental sustainability. We omitted words that were scored below 4 on a 7-point Likert scale by both judges. This resulted in 1238 green terms in our sustainability dictionary.
We selected tweets for further analysis if they contained at least one term from our dictionary. In other words, a tweet must contain at least one sustainability term to be considered “green enough” for potential greenwashing. This resulted in 82,286 green tweets (out of 422,664 tweets) for the 57 firms from 2012 to 2019.
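To make the selection rule concrete, the following minimal sketch keeps a tweet if it contains at least one term from the sustainability dictionary. In the study, this step was implemented through a custom LIWC dictionary; the simple tokenizer and the excerpt of dictionary terms below are illustrative assumptions.

```python
# Minimal sketch of the green-tweet selection rule: keep a tweet if it contains
# at least one term from the validated sustainability dictionary. The study used
# a custom LIWC dictionary; this token matching is an illustrative simplification.
import re

def is_green_tweet(text: str, green_terms: set[str]) -> bool:
    tokens = set(re.findall(r"[a-z']+", text.lower()))
    return not tokens.isdisjoint(green_terms)

green_terms = {"emissions", "renewable", "sustainability", "recycling"}  # excerpt only
tweets = ["We cut fleet emissions by 20% this year.", "New model launch next week!"]
green_tweets = [t for t in tweets if is_green_tweet(t, green_terms)]  # keeps the first
```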

3.2.2. Linguistic Analysis

The next step was to conduct our linguistic analysis using automatic detection tools. For the reasons described next, we developed a profile-deviation linguistic method that extends existing approaches. First, other automatic methods need ground-truth data, and the accuracy of these data dramatically affects methods built on machine learning (e.g., [67]), deep learning (e.g., [102]), or network optimization (e.g., [54]) frameworks. Our method addressed this problem by avoiding the need to train an algorithm, and hence the need for ground-truth data altogether. By doing so, our method eliminated problems such as credibility (e.g., due to contentious coding of ground truth), scale (e.g., due to manual labor), and data limitations (e.g., due to human error or incomplete or outdated data), which, taken together, can render solutions less accurate, less timely, less effective, or even impossible (see online Supplementary Section S2 for comparisons with other automatic methods).
Second, other linguistic style-based detection methods have faced challenges in identifying and validating a set of features that can track deception across multiple contexts [103]. As explained earlier, our method addressed this problem by selecting cues based on consistent findings reported across different research domains, in both field and experimental settings [42,43], to establish theory-driven ideal profiles of truth and deception.
Finally, current methods approach detection as a binary classification problem, often because truthful and deceptive messages are pre-labelled as such (e.g., used in style-based automatic methods [70,71], assessed manually [11], assumed in economic models [53], or experimentally manipulated [42]). Binary classification is unable to capture non-traditional fake news, such as malinformation, where information is not entirely false but has “false claims in some parts of the news content” ([48], p. 31). Our profile-deviation method addresses this problem by classifying content within a “deception spectrum” ranging from ideal truth (not greenwashing) to ideal deception (greenwashing), with multiple levels in between.
To conduct our deviation-based linguistic analysis, we studied those theoretical cues related to RQ1 described earlier; that is, quantity, specificity, complexity, diversity, hedging/uncertainty, affect, and vividness/dominance (see Table 1). We used LIWC software [104] for this analysis because it has been applied in prior studies of deception (e.g., [74,105,106,107]) and has considerable overlap with other well-known deception-detection software, such as Agent99Analyzer [108]. We prepared tweets for analysis by following guidelines in the LIWC 2015 manual: for example, by separating URLs and tagging e-mail addresses, and removing noise (e.g., white spaces). Finally, we parsed tweets through LIWC to compute linguistic indicators.
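Because Table 1 is not reproduced here, the following sketch gives an illustrative mapping from the seven cues to candidate indicators. The LIWC-style category names (e.g., WC, tentat, posemo) are assumed LIWC 2015 labels, and the diversity and vividness entries refer to the custom measures described below; this should be read as an approximation rather than the study’s exact indicator set.

```python
# Illustrative mapping of the seven deception cues to indicator names.
# LIWC-style labels (WC, WPS, Sixltr, tentat, posemo, ...) are assumed LIWC 2015
# categories; the diversity and vividness indicators are the custom measures
# described in the text. This is not a reproduction of the paper's Table 1.
CUE_INDICATORS = {
    "quantity":    ["WC", "WPS"],                              # word / sentence counts
    "specificity": ["space", "time", "number"],                # spatio-temporal detail
    "complexity":  ["Sixltr", "cogproc", "insight", "WPS"],    # lexical/syntactic load
    "diversity":   ["content_word_ratio", "type_token_ratio"], # custom ratios
    "hedging":     ["tentat", "ipron"],                        # tentative, impersonal
    "affect":      ["posemo"],                                 # positive-emotion words
    "vividness":   ["dal_activation", "dal_imagery"],          # Whissell DAL scores
}

# Cues for which high raw scores suggest deception; their indicators are
# reverse-scored so that higher always means "more truthful".
DECEPTIVE_CUES = {"hedging", "affect", "vividness"}
```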
Using LIWC, we selected all available deception indicators to represent the seven cues described earlier. However, LIWC does not include indicators for diversity and vividness, so we created our own indicators for these. Diversity indicators were computed with a Python function that calculates content-word and type-token ratios directly from the parsed tweets. Vividness scores were obtained by parsing tweets through Whissell’s Dictionary of Affect in Language [88,109]. For a fair comparison, we scaled down the indicator scores for tweets from 2018 and 2019 to account for Twitter’s increase in character limits from 140 to 280 at the end of 2017 [110]; this was necessary to minimize the effect of increased word count on several linguistic cues. Next, we ensured that all indicators were in the same direction (a higher score suggested truth); that is, we reversed any indicators for which high scores suggested deception. With this approach, an overall high raw score suggested truth, while an overall low raw score suggested deception. Finally, because we created composite scores from the indicators of each cue, we normalized the indicators to minimize the undue influence of measurement scales.
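A minimal sketch of this preparation step appears below: it computes the two diversity ratios and applies the direction alignment and normalization described above. The stop-word list and the min-max normalization are illustrative assumptions rather than the study’s exact implementation.

```python
# Sketch of the diversity indicators and score preparation described above.
# The stop-word list and min-max normalization are illustrative assumptions.
import numpy as np

FUNCTION_WORDS = {"the", "a", "an", "and", "or", "but", "of", "to", "in", "is", "are"}

def diversity_ratios(tokens: list[str]) -> tuple[float, float]:
    """Return (content-word ratio, type-token ratio) for a tokenized tweet."""
    if not tokens:
        return 0.0, 0.0
    content_words = [t for t in tokens if t.lower() not in FUNCTION_WORDS]
    return len(content_words) / len(tokens), len(set(tokens)) / len(tokens)

def prepare_indicator(scores: np.ndarray, reverse: bool = False) -> np.ndarray:
    """Align direction (higher = more truthful) and min-max normalize to [0, 1]."""
    if reverse:                              # high raw score suggested deception
        scores = scores.max() - scores
    rng = scores.max() - scores.min()
    return (scores - scores.min()) / rng if rng > 0 else np.zeros_like(scores)
```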

3.2.3. Profiling Greenwashing

In the third step, we assessed greenwashing patterns using profile deviation. That is, to detect greenwashing in organizational tweets using the linguistic indicators discussed earlier, we primarily drew on Venkatraman’s seminal work on fit as profile deviation [111]. He notes that “Although this perspective can be adopted in a variety of research situations, it is particularly useful for testing the effects of environment-strategy coalignment … because deviations in strategy from [an] ideal profile should be negatively related to performance” (p. 434). This has been demonstrated in many areas, including technology-business alignment (e.g., [112]), knowledge management (e.g., [113]), and software project risk management (e.g., [114]). We believe that this analytical method is appropriate in the greenwashing context for several reasons. First, greenwashing captures a type of strategic action about an organization’s environmental sustainability practices that can impact the organization’s financial market performance. As we show, the negative outcomes we found for RQ2 were consistent with this expectation. Second, greenwashing lends itself to measurement by the profile-deviation method because linguistic cues offer multiple deception dimensions for forming configurations, rather than bivariate examinations, to describe a holistic and synergistic profile, the core tenet of fit as profile deviation [111].
Fit as profile deviation is “the degree of adherence to an externally specified profile” ([111], p. 433). In the current context, profile deviation is suitable if an ideal truthful (non-greenwashing) profile can be specified across the linguistic dimensions for detecting deception. For any tweet parsed through our sustainability dictionary, a high degree of fit or adherence to such a multidimensional profile indicated strong coalignment with a pattern of truthful communication. Conversely, deviation from the profile implied a low level of coalignment between the green tweet and truthful communication, resulting in a high likelihood of deceptive green communication (i.e., greenwashing). Three analytical issues are regularly raised with respect to the profile-deviation perspective of fit: (a) operational difficulties related to developing the ideal profile to calibrate variation in firm tweet profiles, (b) whether and how to add differential weights for the multiple dimensions, and (c) the classification power of the profile-deviation test [115,116,117,118]. We addressed the first issue concerning operationalization by specifying a theoretically ideal profile based on prior work in linguistic deception detection—that is, the ideal non-greenwashing profile had the highest raw scores on all seven linguistic cues previously described, while the ideal greenwashing profile had the lowest raw scores on all cues. For the second issue regarding weights, we relied on the number of indicators of a cue to serve as implicit weights in the composite score: this is because meta-analytic evidence associates the most important cues with a larger number of empirically validated and theoretically meaningful indicators (see [43]). Therefore, we measured profile deviation as the Euclidean distance between the composite score of an observation and the ideal profile score and used decile splits to specify a range of low-to-high greenwashing patterns. Consequently, we expected less-greenwashed tweets (with higher composite scores) to have lower deviation (thus lower Euclidean distances) from the ideal non-greenwashing profile, and vice versa for more-greenwashed tweets. For the third issue pertaining to the power of our detection method, we used multiple independent techniques to validate the classification accuracy of the profile-deviation method as part of our robustness tests. We further describe how each method validates our results in online Supplementary Section S4.
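The scoring step can be sketched as follows: each tweet’s normalized cue scores are compared with the ideal non-greenwashing profile, the Euclidean distance from that profile serves as the greenwashing score, and decile splits yield the low-to-high greenwashing patterns. The use of pandas, the column names, and the construction of the ideal profile as the maximum observed score on each cue are assumptions for illustration.

```python
# Sketch of the profile-deviation scoring: Euclidean distance from the ideal
# non-greenwashing profile, followed by a decile split. Column names and the
# ideal profile as the per-cue maximum are illustrative assumptions.
import numpy as np
import pandas as pd

CUES = ["quantity", "specificity", "complexity", "diversity",
        "hedging", "affect", "vividness"]

def greenwashing_scores(cue_scores: pd.DataFrame) -> pd.DataFrame:
    """cue_scores: one row per green tweet, normalized so higher = more truthful."""
    ideal = cue_scores[CUES].max()                   # ideal non-greenwashing profile
    deviations = cue_scores[CUES] - ideal
    out = cue_scores.copy()
    out["greenwashing"] = np.sqrt((deviations ** 2).sum(axis=1))  # Euclidean distance
    out["decile"] = pd.qcut(out["greenwashing"], 10, labels=False) + 1  # 1 = least greenwashed
    return out
```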
In sum, our profile-deviation method allowed us to symbolically express truth and deception using linguistic cues in green tweets. LIWC provided scores for the indicators of the linguistic cues presented in Table 1. The theoretical rationale for selecting these cues and their indicators in the context of organizational greenwashing on social media is detailed above. From a theoretical standpoint, the symbolic meaning of each LIWC indicator score is described by the valence on the deception scale (column 3 of Table 1)—a high LIWC score indicates truth or deception based on prior research. From a methodological standpoint, we implemented the symbolic meaning with a profile-deviation scale [111] by defining an ideal non-greenwashed tweet profile and an ideal greenwashed tweet profile. We show in Supplementary Section S3 that multiple ways of defining greenwashing patterns based on this approach correlate highly with each other, demonstrating the robustness of both our theory and our method for symbolically expressing truth and deception in the context of organizational greenwashing on social media.

3.2.4. Relating Greenwashing to Financial Market Performance

Following prior studies that used LIWC-based measures to study greenwashing and other organizational phenomena (e.g., [11,119,120,121]), we empirically examined the relationship between our measure of greenwashing and financial market performance. We measured our financial market outcome with Share Price, the average daily price at which a firm’s stock was traded. We measured our moderator, environmental exposure, with ESG Controversies, the number of significant controversial events reported in a firm’s environmental, governance, social, and operational incidents (compiled from Sustainalytics reports). Based on prior greenwashing studies [37,38,122], we included the following control variables: three industry controls (Industry Type, i.e., Oil/Gas or Auto; Firm Size, i.e., top or bottom 20 on Forbes Global 2000 by market capitalization; and Region, i.e., the location of the firm’s headquarters, with main operations in or outside of North America) and five financial controls (Gross Income, Return on Assets, Operating Income, Adjusted Profit, and Adjusted Revenue).
We specified linear regression equations for Share Price using natural log transformations for Share Price, the financial controls, and our moderator variable. We estimated the Share Price models with panel regression techniques to address potential endogeneity issues. For this daily share price panel, the Breusch–Pagan Lagrange Multiplier test [123] rejected the null hypothesis of no panel effect (i.e., no entity variation) across firms (p(χ²) < 0.00). Next, the Durbin–Wu–Hausman test [124] failed to reject the null hypothesis that the unique errors are uncorrelated with the regressors, suggesting that the random effects (RE) estimator fit the data better than fixed effects. The RE panel included 29,271 firm-day observations from 1 January 2012 to 31 December 2019 (after removing missing values), with an unbalanced panel resulting from firms not tweeting daily.
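A minimal sketch of this estimation set-up is shown below, assuming a pandas DataFrame indexed by (firm, date) and the linearmodels package; the variable names stand in for the controls listed above, and categorical controls are assumed to be pre-coded as 0/1 dummies.

```python
# Minimal sketch of the random-effects panel estimation for Share Price,
# assuming a DataFrame with a (firm, date) MultiIndex and the linearmodels
# package. Variable names are placeholders for the controls described above.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from linearmodels.panel import RandomEffects

def fit_share_price_model(df: pd.DataFrame):
    """df: one row per firm-day; categorical controls pre-coded as 0/1 dummies."""
    df = df.copy()
    df["ln_price"] = np.log(df["share_price"])
    regressors = ["greenwashing", "ln_esg_controversies",
                  "gw_x_esg",                      # greenwashing x ESG controversies
                  "industry_type", "firm_size", "region",
                  "ln_gross_income", "roa", "ln_operating_income",
                  "ln_adj_profit", "ln_adj_revenue"]
    exog = sm.add_constant(df[regressors])
    return RandomEffects(df["ln_price"], exog).fit()

# Example usage, assuming `panel_df` has been assembled as described:
# results = fit_share_price_model(panel_df)
# print(results.summary)
```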

4. Results

To help distinguish between greenwashing and authentic corporate environmental communications (RQ1), we examined whether greenwashing profile patterns corresponded with greenwashing scores. Specifically, Table 2 presents average greenwashing scores per quantile range (i.e., the relationship between quantile ranges and the average greenwashing scores of tweets in the quantiles). The results show clear separation between quantiles in the expected direction. That is, recalling that an ideal non-greenwashed tweet has a Euclidean distance (i.e., greenwashing score) of zero, we observed that the average greenwashing score was lowest for tweets in the first quantile and largest for tweets in the tenth quantile, while consistently increasing across quantiles (i.e., average greenwashing increases as we move away from the ideal non-greenwashing profile). The quantile range measure correlated well with several dichotomous splits of the Euclidean distance (see the online Supplementary Section S3) and had the advantage that it avoided making absolute claims about whether an organizational tweet was greenwashed, and instead showed degrees of variation in organizational greenwashing. We present illustrative examples of randomly drawn sample tweets from our data along with their greenwashing scores in Table 2.
To further examine the relationship between linguistic cues in tweets and the resulting greenwashing profile, that is, the separation of tweets by linguistic cues, we plotted the profile of quantile mean scores on the linguistic cues in Figure 1. As expected, Quantiles 1 to 3, representing the non-greenwashing side of the range of quantiles, obtained relatively high scores on the four “truthful” cues (quantity, specificity, complexity, and diversity) and the lowest mean scores on the “deceptive” cues (hedging, affect, and vividness). These three profiles also showed good separation on mean cue scores. The trend was reversed for Quantiles 4 to 10, which obtained relatively low scores on truthful cues and higher scores on deceptive cues. While separation between mean scores was smaller in this case, the profiles remained distinct and did not overlap. In effect, the increase in average greenwashing scores, coupled with reasonably distinct quantile profiles, demonstrated that linguistic cues were able to distinguish more-deceptive from less-deceptive green tweets, addressing RQ1.
For RQ2, concerning the relationship between greenwashing and financial market performance, random effects model results are presented in Table 3 (see online Supplementary Section S6 for descriptive statistics and correlations between key variables). In Model 1, we regressed Share Price on the industry and financial control variables and found that large firm size (top 20 by market capitalization), higher return on assets, operating income, adjusted profits, and adjusted revenues were all associated with an increase in daily share price, as expected. Higher gross income was associated with a lower share price. Overall, these results suggest that investors prefer to base decisions on tax-adjusted rather than gross profits. In Model 2, we introduced our greenwashing variable and found a significant negative main effect: a unit increase in greenwashing was associated with a 0.47% decline in daily Share Price. In Model 3, we introduced the moderator variable and found a significant main effect: a unit increase in ESG controversies was associated with a 0.05% decline in daily share price. In Model 4, we included the interaction between greenwashing and ESG controversies and found it to be positive and significant: the marginal effect of greenwashing on daily share price increased to 0.24% for a unit increase in ESG controversies. To further explore this interaction, we plotted the marginal effects and observed that the interaction effect was non-linear (see Figure 2). In sum, greenwashing was associated with lower market performance for firms with low environmental controversies. As described next, we also conducted several supplementary analyses to examine the robustness of our results.
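For readers who want the interpretation of these coefficients made explicit, the following is the standard marginal-effect derivation for a log-linear model with an interaction term; it is a generic illustration rather than the exact estimated specification, which also includes the controls and random firm effects described above.

```latex
% Log-linear share price model with a greenwashing x ESG-controversies interaction
% (generic form; controls and the random firm effect are collected in x_{it} and u_i).
\ln(\mathit{SharePrice}_{it}) = \beta_0 + \beta_1\,\mathit{GW}_{it}
    + \beta_2 \ln(\mathit{ESG}_{it}) + \beta_3\,\mathit{GW}_{it}\,\ln(\mathit{ESG}_{it})
    + \gamma' x_{it} + u_i + \varepsilon_{it}

% Marginal (approximate percentage) effect of a unit increase in greenwashing:
\frac{\partial \ln(\mathit{SharePrice}_{it})}{\partial \mathit{GW}_{it}}
    = \beta_1 + \beta_3 \ln(\mathit{ESG}_{it})
    \;\Rightarrow\; \%\Delta \mathit{SharePrice} \approx 100\,(\beta_1 + \beta_3 \ln \mathit{ESG}_{it})
```

With a negative β1 and a positive β3, the negative effect of greenwashing weakens as ESG controversies rise, consistent with the non-linear pattern plotted in Figure 2.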

5. Robustness Tests

5.1. Robustness Tests for RQ1: Validating our Greenwashing Detection Method

To validate our greenwashing measure, we performed four types of tests: methodological, industry comparisons, public perceptions, and external opinions. We provide a detailed description of each test and its results in online Supplementary Section S4, with a summary as follows.
For the methodological validation, we employed clustering as an unsupervised machine learning approach to test the predictive power of the linguistic cues (a sketch of this check appears after this paragraph). The rationale was that an alternative greenwashing classification method using the same cues should yield similar greenwashing scores to the profile-deviation method. The second validation compared the accuracy of our approach in distinguishing firms’ greenwashing in less green industries (oil/gas and auto) from that in greener industries (the environmental management industry). The rationale was that, unlike environmental management firms, which are perceived as environmentally proactive (e.g., [125]), oil/gas and auto firms face growing institutional pressures to adopt more green practices and thus have stronger motivations to greenwash their communications. For our third validation test, we examined the extent to which our greenwashing measure correlated with public perceptions of firms’ green tweets. Based on prior studies that linked social media behaviors, such as likes, favorites, retweets, and votes, to consumer perceptions of greenwashing (e.g., [126]), we expected tweets perceived as greenwashed to correlate negatively with retweets, favorites, and mentions on Twitter. Finally, acknowledging that general Twitter users are not adept at spotting greenwashing, our fourth validation test focused on the extent to which relatively knowledgeable specialists would agree with our greenwashing score for a firm’s tweet. The rationale was that if a firm’s green communication were deceptive and our greenwashing measure adequately labeled it as such, we could expect a moderate-to-high correlation between our greenwashing measure and the likelihood of the tweet being tagged as deceptive by those who specialize in calling out corporate greenwashing.
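The following sketch illustrates, under assumed file and column names, how such a clustering check could be run: tweets are clustered on their standardized linguistic cues alone, and the resulting clusters are compared against the profile-deviation scores.

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical per-tweet cue scores plus the profile-deviation greenwashing score.
tweets = pd.read_csv("tweet_cues.csv")
cues = ["quantity", "specificity", "complexity", "diversity",
        "hedging", "affect", "vividness"]

# Cluster tweets on standardized linguistic cues alone.
X = StandardScaler().fit_transform(tweets[cues])
tweets["cluster"] = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# If the two methods agree, mean profile-deviation scores should differ clearly
# between the clusters, in the expected direction.
print(tweets.groupby("cluster")["greenwashing_score"].mean())
```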
Taken together, our four tests demonstrated that: (a) our profile-deviation classification can be replicated with an alternative method (clustering); (b) our greenwashing measure distinguishes potentially greener industries from potentially less green industries; (c) our measure relates as expected to public sentiment; and (d) tweets that score high on our greenwashing measure are more likely to be tagged as greenwashed by specialists. In sum, while no single validation method on its own may be sufficient, the collective results from the four independent validation methods provide strong evidence for the validity of our greenwashing measure.

5.2. Robustness Tests for RQ2: Relating Greenwashing to Financial Market Performance

We conducted several supplementary analyses to examine the robustness of our empirical results for the relationship between greenwashing and market performance. We obtained similar results when: (1) using the greenwashing measure derived from clustering, suggesting that the greenwashing effect is robust to alternative measurement; (2) using a weekly (rather than daily) estimation panel (see the sketch after this paragraph), suggesting that the greenwashing effect is also stable over the week; and (3) examining the individual effects of the linguistic cues used in our greenwashing measure, suggesting that each cue contributes to a parsimonious composite greenwashing score. Details on these robustness tests are included in online Supplementary Section S7.
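For instance, the weekly panel in test (2) could be constructed along the lines of the following sketch, where the file and column names are hypothetical.

```python
import pandas as pd

# Hypothetical daily firm panel with a datetime 'date' column.
panel = pd.read_csv("firm_day_panel.csv", parse_dates=["date"])

# Collapse to firm-week averages before re-estimating the same model.
weekly = (panel.set_index("date")
               .groupby("firm_id")
               .resample("W")
               .mean(numeric_only=True)
               .reset_index())
```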

6. Discussion and Implications

Our study addresses organizational communications in the context of the environment, a critical concern for businesses and societies. This concern stems from the politicized and complex nature of environmental issues [31,127,128], as well as from the vested interests of some of the largest firms in highly polluting industries in maintaining and even expanding their environmentally damaging operations. These firms and their stakeholders use a range of strategies to promote and defend their positions and interests, such as engaging in wars of control over information and communication landscapes [33,129]. The rise of social media in recent years as an important conduit of organizational communications has opened a new arena for firms to communicate environmental sustainability messages by broadcasting news-like messages [1].
Our study makes valuable contributions by addressing important open questions concerning such communications, specifically how to detect greenwashing originating from firms and how greenwashing relates to firms’ financial market performance. We brought together literature from multiple areas to theorize greenwashing detection, responding to calls for more interdisciplinary research in this area (e.g., [3,7]). We then developed and validated an effective method of detecting greenwashing based on some of the most important linguistic cues of deception identified in prior research (RQ1). Using this measure in a random effects model of share price, we found that greenwashing is directly associated with lower financial market performance (RQ2). Further, this relationship is moderated by the exposure of firms’ environmental controversies. Next, we discuss several implications of our work for research and practice, as well as avenues for future research.

6.1. Theoretical Implications

Taken together, our research contributes to theorizing around organizational greenwashing in three ways: reconceptualizing greenwashing forms, complementing disclosure models, and extending decoupling models of greenwashing. First, prior work has made helpful progress toward identifying many forms of organizational greenwashing practices. However, this progress has not come with conceptual clarity about how these forms may be mutually exclusive and/or collectively exhaustive of organizational greenwashing. In contrast, our conceptualization of greenwashing into three categories of fake news offers a high-level categorization that allows researchers to classify existing greenwashing forms, unravel underexplored ones, and think about new practices that might emerge in the future. In particular, while most existing forms of greenwashing fall under disinformation and misinformation, malinformation greenwashing remains underexplored and merits future research.
Second, our research complements disclosure models of greenwashing by qualifying the argument that greenwashing is not apparent in communications [3,7]. Rather, it appears that, at least in certain communications, such as social media, greenwashing can be detectable. This is in line with growing evidence that greenwashed text may exhibit certain linguistic characteristics that are known to be associated with deceptive communications (e.g., [58]). Therefore, the notion underlying traditional disclosure models that "greenwashing requires visibility and an accusation from a third party" ([3], p. 231) needs to be reassessed to account for instances where a priori accusations or expositions may not be available or even necessary. In qualifying this fundamental assumption, our work thus calls for revisions to disclosure models.
Third, our research extends decoupling models of greenwashing, in which firms take symbolic environmental actions that do not match their actual environmental performance [11]. With our approach, neither symbolic environmental actions nor actual actions need to be observed. Instead, we find that green claims on social media can be scrutinized by examining their linguistic expression in text. Further, firms’ actual sustainability actions are often not known, and even when actions and commitments are discernible, they generally represent longer-term phenomena. However, given the frequency of social media communications and the exposure to a much wider online audience, a firm may be forced to act more quickly on its public communications, thereby shortening the distance between communication and action (where decoupling is thought to occur). Because the firm must react faster to head off any potential storm on social media, there is less time for it to risk greenwashing. That is, intermittent social media communications from the organization can say a lot about the firm’s green behavior or mindset at any given time. Therefore, stakeholders do not need to wait for corporate sustainability reports to infer potential greenwashing. This may also lie at the center of critiques that decoupling models are static; that is, they offer little insight about the "extent of greenwashing and whether it is increasing over time" ([7], p. 228). Greenwashing is likely to be dynamic in a frequent-communication regime such as social media. For example, when facing systemic shocks (industry-level) or critical environmental incidents (firm-level), there can be a spike in green communications, resulting in a temporary increase in all or some of the three types of greenwashing. Thus, decoupling models should be revised to account for faster coupling between green communications and (in)action by firms.

6.2. Methodological Implications

To address our first research question concerning the identification of greenwashing on social media, we devised a unique automatic detection method: a linguistic approach based on profile deviation. We first reviewed the literature on linguistic cues of deception to determine the cues with the most potential to identify greenwashing. We then collected firms’ Twitter messages related to the environment by comparing them against a dictionary that we created from the practice and research literature. Next, we compared the linguistic features of these messages, extracted with the LIWC text mining software [104], to ideal profiles of deceptive and truthful messages found in the literature, and validated our classification with multiple methods.
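The core scoring step can be illustrated with the following minimal sketch; the input file, cue columns, and ideal-profile values are hypothetical placeholders rather than our actual implementation.

```python
import numpy as np
import pandas as pd

# Hypothetical per-tweet linguistic cue scores (e.g., from LIWC and dictionaries).
tweets = pd.read_csv("tweet_cues.csv")
cues = ["quantity", "specificity", "complexity", "diversity",
        "hedging", "affect", "vividness"]

# Standardize cues so that they contribute on comparable scales.
z = (tweets[cues] - tweets[cues].mean()) / tweets[cues].std()

# Hypothetical ideal non-greenwashing profile: high on truthful cues,
# low on deceptive cues (placeholder values, not the study's profile).
ideal = pd.Series({"quantity": 1.0, "specificity": 1.0, "complexity": 1.0,
                   "diversity": 1.0, "hedging": -1.0, "affect": -1.0,
                   "vividness": -1.0})

# Greenwashing score = Euclidean distance of each tweet from the ideal profile.
tweets["greenwashing_score"] = np.sqrt(((z - ideal) ** 2).sum(axis=1))
```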
Our approach to detecting greenwashing contributes several extensions to current greenwashing, fake news, and deception-detection methods. First, whereas previous approaches have relied on manually collected ground truth (data with known authenticity) to create experimental treatment conditions or to train automatic learning algorithms, our profile-deviation method does not require these data. To the best of our knowledge, this approach is novel in the greenwashing, fake news, and deception-detection literature, and offers a pathway for overcoming both economic and practical limitations of manual data collection in existing detection models. Second, rather than a binary (deception or not) classification, we introduce a new multi-label categorical classification suited to detecting "non-traditional" fake news such as greenwashing, addressing a major challenge faced by existing methods [48]. That is, existing methods are more effective at classifying explicit false claims, such as disinformation, but are less effective when deception is not so explicit, as with misinformation and malinformation. For these types of fake news, it is important to consider the extent of falsity and/or authenticity in making judgements about the likelihood of deception. Our method addresses this challenge by inferring the likelihood of deception in green communication over a range of possibilities, rather than declaring a tweet deceptive or not. This innovation will also be germane to studying greenwashing dynamics, particularly in response to an urgent call for methods to track the magnitude and direction of change in greenwashing over time [7]. Third, whereas the deception literature offers many linguistic cues, we selected the most theoretically reliable and empirically established cues to help automatically detect greenwashing in tweets. In so doing, our method is more "explainable" ([48], p. 32) and can be replicated in different contexts. This adds to limited research on model interpretability, that is, using theories or domain knowledge for more explainable automatic deception detection (e.g., [130]).
Nevertheless, our approach to detecting greenwashing could be extended in several ways. For example, it may be possible to improve its accuracy by combining non-linguistic cues with linguistic ones. Depending on the context and the genre of messages under study, such non-linguistic cues could include the content of the messages, their metadata, and audio and visual cues, such as the tone and body language of the source. Moreover, researchers could use more sophisticated unsupervised learning techniques to detect deception in such types of data (e.g., [131]) and develop relevant measures to validate such classifications. In the future, it would also be useful to categorize detected greenwashing into one of the three types of fake news (mis-, mal-, or disinformation); however, other data would be required, such as firms’ responses (e.g., admission, apology, protest, honest mistake, evasiveness). Given the current state of the art and available tools, we see no easy way to collect this information without invoking additional manual processes, with all the problems that accompany them.
Another future possibility is to use crowdsourcing with experts to label a sample of tweets as greenwashing or authentic, which could then be used to further validate our method. Such labeled data could additionally be used to develop supervised machine learning models to automate the detection of deception [131,132] (see the illustrative sketch following this paragraph). However, the success of such an endeavor would require establishing a clear set of criteria for the expert crowds to identify greenwashing in tweets, which is not a straightforward task. Further, future research could make cross-platform comparisons of deception detection. For example, the character limit imposed by some platforms, such as Twitter, may constrain quantity-oriented linguistic cues in communications, compared to other platforms such as Facebook or LinkedIn. Additionally, firms may adopt different language on employee-oriented platforms, such as LinkedIn, compared to customer-oriented ones, such as Facebook. Finally, our approach focused on text-based false messages in social media, a growing non-traditional source of fake news [48]. Future research may investigate how to extend our detection method to traditional sources of fake news, such as journalistic news media.
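As an illustration of that supervised extension, the sketch below trains a simple classifier on expert-labeled tweets using the same linguistic cues; the data file, label column, and model choice are hypothetical assumptions, not a tested pipeline.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Hypothetical expert-labeled tweets: linguistic cue columns plus a 0/1
# 'greenwashed' label supplied by the expert crowd.
labeled = pd.read_csv("expert_labeled_tweets.csv")
cues = ["quantity", "specificity", "complexity", "diversity",
        "hedging", "affect", "vividness"]

X_train, X_test, y_train, y_test = train_test_split(
    labeled[cues], labeled["greenwashed"], test_size=0.3, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```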

6.3. Empirical Implications

To address our second research question, we regressed daily stock prices on our greenwashing measure. Our empirical findings shed light on the growing body of research examining greenwashing outcomes. From a firm’s perspective, prior studies have examined the outcomes of organizational greenwashing with two types of metrics (stakeholders’ reactions and firm performance), but a clear picture of where firms should direct their mitigation strategies remains elusive. While prior results for financial performance have been mixed [37,38,46,47], our cross-industry, multi-country exploratory work suggests that firms should pay closer attention to the effects of greenwashing on financial markets, in line with investor expectations [20]. This also raises several questions for future research. For example, is the degree of separation between greenwashing and financial market performance too wide to detect a relationship? If so, are privately held firms (whose stocks are not publicly traded) more likely to escape the ramifications of greenwashing? Additionally, does the negative relation between perceived greenwashing and retweets on Twitter, found in our validation analyses, suggest a potential spillover effect from market performance outcomes to stakeholders’ perceptions in product-level greenwashing outcomes (e.g., [4,57,89,90])? Answers to these questions would deepen firms’ understanding of the implications of perceived and tangible greenwashing in their communications and help managers formulate effective remedial policies.

6.4. Implications for Policy and Practice

Our research also offers valuable insights for both policy and practice. From a practical perspective, our method provides a reliable and relatively easy-to-use tool for investors, consultants, auditors, activists, media, and other practitioners to detect potential acts or episodes of greenwashing by focal firms. Given that this has proved a difficult task for non-specialist stakeholders [11], such a tool should be of value to these groups. For managers, those with "clean hands" can now contemplate the possibility that their environmental communications will be assessed more objectively. This represents an important first step toward freeing managers from an iron cage of disclosure in which they have little incentive to showcase their environmental actions for fear of unfounded accusations or undue punishment.
From a policy standpoint, studies have emphasized a shift in focus from maximizing user engagement on social media platforms to addressing fake news interventions by increasing information quality via self- or government regulations [62]. Social media firms have responded by adopting several policies to debunk or warn against fake news [133]. Our detection approach expands the policy options and implementation capabilities for greenwashing interventions on social media platforms, particularly with respect to greenwashing types that are most difficult to detect (such as malinformation). Our technique would allow platforms to automatically score and tag firms’ green tweets to help specialist stakeholders distinguish greenwashing from genuine environmental actions.
Ultimately, evidence is growing that we are running out of time to meet internationally agreed environmental sustainability goals and to avert large-scale and potentially catastrophic changes in our natural environment. Under these circumstances, if greenwashing—particularly by highly polluting firms—goes unnoticed, it can be immensely damaging by luring stakeholders into inaction. Our work represents one step in preventing such damage, and we hope that our method is improved, expanded, and utilized by researchers, practitioners, and activists concerned about our shared future.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/su15086683/s1. References [134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160] are cited in Supplementary Materials.

Author Contributions

Conceptualization, D.O.-T. and J.W.; Methodology, D.O.-T.; Validation, J.W.; Formal analysis, D.O.-T.; Investigation, D.O.-T. and J.W.; Data curation, D.O.-T.; Writing—original draft, D.O.-T.; Writing—review & editing, J.W.; Project administration, J.W.; Funding acquisition, J.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by a Social Sciences and Humanities Research Council of Canada grant, number 435-2013-0716, to Jane Webster.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data not subject to privacy restrictions will be available on request from the corresponding author after publication of related projects.

Acknowledgments

We thank Zilan Ouyang, Akosua Oppong-Tawiah, and Indrani Karmakar for research assistance and Ali Khan for comments on earlier versions of this paper.

Conflicts of Interest

The authors have no relevant financial or non-financial interest to disclose.

References

  1. Etter, M. Broadcasting, reacting, engaging—Three strategies for CSR communication in Twitter. J. Commun. Manag. 2014, 18, 322–342. [Google Scholar] [CrossRef]
  2. Orazi, D.C.; Chan, E.Y. “They did not walk the green talk!:” How information specificity influences consumer evaluations of disconfirmed environmental claims. J. Bus. Ethics 2020, 163, 107–123. [Google Scholar] [CrossRef] [Green Version]
  3. Gatti, L.; Pizzetti, M.; Seele, P. Green lies and their effect on intention to invest. J. Bus. Res. 2021, 127, 228–240. [Google Scholar] [CrossRef]
  4. Szabo, S.; Webster, J. Perceived greenwashing: The effects of green marketing on environmental and product perceptions. J. Bus. Ethics 2021, 171, 719–739. [Google Scholar] [CrossRef]
  5. de Freitas Netto, S.V.; Sobral, M.F.F.; Ribeiro, A.R.B.; Soares, G.R.D.L. Concepts and forms of greenwashing: A systematic review. Environ. Sci. Eur. 2020, 32, 1–12. [Google Scholar] [CrossRef] [Green Version]
  6. Kim, E.-H.; Lyon, T.P. Greenwash vs. brownwash: Exaggeration and undue modesty in corporate sustainability disclosure. Organ. Sci. 2015, 26, 705–723. [Google Scholar] [CrossRef] [Green Version]
  7. Lyon, T.P.; Montgomery, A.W. The means and end of greenwash. Organ. Environ. 2015, 28, 223–249. [Google Scholar] [CrossRef]
  8. Marquis, C.; Toffel, M.W.; Zhou, Y. Scrutiny, norms, and selective disclosure: A global study of greenwashing. Organ. Sci. 2016, 27, 483–504. [Google Scholar] [CrossRef] [Green Version]
  9. Lyon, T.P.; Montgomery, A.W. Tweetjacked: The impact of social media on corporate greenwash. J. Bus. Ethics 2013, 118, 747–757. [Google Scholar] [CrossRef]
  10. Bowen, F.; Aragon-Correa, J.A. Greenwashing in corporate environmentalism research and practice: The importance of what we say and do. Organ. Environ. 2014, 27, 107–112. [Google Scholar] [CrossRef] [Green Version]
  11. Crilly, D.; Hansen, M.; Zollo, M. The grammar of decoupling: A cognitive-linguistic perspective on firms’ sustainability claims and stakeholders’ interpretation. Acad. Manag. J. 2016, 59, 705–729. [Google Scholar] [CrossRef]
  12. Torelli, R.; Balluchi, F.; Lazzini, A. Greenwashing and environmental communication: Effects on stakeholders’ perceptions. Bus. Strategy Environ. 2020, 29, 407–421. [Google Scholar] [CrossRef] [Green Version]
  13. Business for Social Responsibility. The State of Sustainable Business in 2019|Reports|BSR; Business for Social Responsibility: San Francisco, CA, USA, 2019; Available online: https://www.bsr.org/en/our-insights/report-view/the-state-of-sustainable-business-in-2019 (accessed on 15 May 2020).
  14. Atkinson, L.; Kim, Y. “I drink it anyway and I know I shouldn’t”: Understanding green consumers’ positive evaluations of norm-violating non-green products and misleading green advertising. Environ. Commun. 2015, 9, 37–57. [Google Scholar] [CrossRef]
  15. Baum, L.M. It’s not easy being green … or is it? A content analysis of environmental claims in magazine advertisements from the United States and United Kingdom. Environ. Commun. 2012, 6, 423–440. [Google Scholar] [CrossRef]
  16. De Jong, M.D.T.; Harkink, K.M.; Barth, S. Making green stuff? Effects of corporate greenwashing on consumers. J. Bus. Tech. Commun. 2018, 32, 77–112. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  17. TerraChoice. The Sins of Greenwashing: Home and Family Edition 2010; TerraChoice: Ottawa, ON, Canada, 2010; p. 22. Available online: https://www.map-testing.com/assets/files/2009-04-xx-The_Seven_Sins_of_Greenwashing_low_res.pdf (accessed on 10 May 2020).
  18. Carlson, L.; Grove, S.J.; Kangun, N. A content analysis of environmental advertising claims: A matrix method approach. J. Advert. 1993, 22, 27–39. [Google Scholar] [CrossRef]
  19. Sun, Z.; Zhang, W. Do government regulations prevent greenwashing? An evolutionary game analysis of heterogeneous enterprises. J. Clean. Prod. 2019, 231, 1489–1502. [Google Scholar] [CrossRef]
  20. Unruh, G.; Kiron, D.; Kruschwitz, N.; Reeves, M.; Rubel, H.; Meyer Zum Felde, A. Investing for a sustainable future: Investors care more about sustainability than many executives believe. MIT Sloan Manag. Rev. 2016, 57, 1–29. [Google Scholar]
  21. Siano, A.; Vollero, A.; Conte, F.; Amabile, S. “More than words”: Expanding the taxonomy of greenwashing after the Volkswagen scandal. J. Bus. Res. 2017, 71, 27–37. [Google Scholar] [CrossRef]
  22. Darke, P.R.; Ritchie, R.J.B. The defensive consumer: Advertising deception, defensive processing, and distrust. J. Mark. Res. 2007, 44, 114–127. [Google Scholar] [CrossRef]
  23. Pomering, A.; Johnson, L.W. Advertising corporate social responsibility initiatives to communicate corporate image: Inhibiting scepticism to enhance persuasion. Corp. Commun. Int. J. 2009, 14, 420–439. [Google Scholar] [CrossRef]
  24. Zinkhan, G.M.; Carlson, L. Green advertising and the reluctant consumer. J. Advert. 1995, 24, 1–6. [Google Scholar] [CrossRef]
  25. Davis, J.J. Ethics and environmental marketing. J. Bus. Ethics 1992, 11, 81–87. [Google Scholar] [CrossRef]
  26. Nyilasy, G.; Gangadharbatla, H.; Paladino, A. Perceived greenwashing: The interactive effects of green advertising and corporate environmental performance on consumer reactions. J. Bus. Ethics 2014, 125, 693–707. [Google Scholar] [CrossRef]
  27. Ewing, J. Daimler to Settle U.S. Emissions Charges for $2.2 Billion. The New York Times. 13 August 2020. Available online: https://www.nytimes.com/2020/08/13/business/daimler-emissions-settlement-us.html (accessed on 21 December 2020).
  28. Trefis Team. The Domino Effect of Volkswagen’s Emissions Scandal. Forbes. 28 September 2015. Available online: https://www.forbes.com/sites/greatspeculations/2015/09/28/the-domino-effect-of-volkswagens-emissions-scandal/ (accessed on 20 December 2020).
  29. George, J.F.; Carlson, J.R.; Valacich, J.S. Media selection as a strategic component of communication. MIS Q. 2013, 37, 1233–1251. [Google Scholar] [CrossRef]
  30. Zhou, L.; Burgoon, J.K.; Twitchell, D.P.; Qin, T.; Nunamaker, J.F., Jr. A comparison of classification methods for predicting deception in computer-mediated communication. J. Manag. Inf. Syst. 2004, 20, 139–166. [Google Scholar] [CrossRef]
  31. Feldman, L.; Hart, P.S. Climate change as a polarizing cue: Framing effects on public support for low-carbon energy policies. Glob. Environ. Change 2018, 51, 54–66. [Google Scholar] [CrossRef]
  32. Lefsrud, L.M.; Meyer, R.E. Science or science fiction? Professionals’ discursive construction of climate change. Organ. Stud. 2012, 33, 1477–1506. [Google Scholar] [CrossRef] [Green Version]
  33. Wittneben, B.B.F.; Okereke, C.; Banerjee, S.B.; Levy, D.L. Climate change and the emergence of new organizational landscapes. Organ. Stud. 2012, 33, 1431–1450. [Google Scholar] [CrossRef] [Green Version]
  34. Nemes, N.; Scanlan, S.J.; Smith, P.; Smith, T.; Aronczyk, M.; Hill, S.; Lewis, S.L.; Montgomery, A.W.; Tubiello, F.N.; Stabinsky, D. An Integrated Framework to Assess Greenwashing. Sustainability 2022, 14, 4431. [Google Scholar] [CrossRef]
  35. Bernard, J.-G.; Dennis, A.; Galletta, D.; Khan, A.; Webster, J. The Tangled Web: Studying Online Fake News. In Proceedings of the ICIS 2019, Munich, Germany, 15–18 December 2019; Available online: https://aisel.aisnet.org/icis2019/panels/panels/4 (accessed on 1 October 2022).
  36. Heede, R. Tracing anthropogenic carbon dioxide and methane emissions to fossil fuel and cement producers, 1854–2010. Clim. Change 2014, 122, 229–241. [Google Scholar] [CrossRef] [Green Version]
  37. Walker, K.; Wan, F. The harm of symbolic actions and green-washing: Corporate actions and communications on environmental performance and their financial implications. J. Bus. Ethics 2012, 109, 227–242. [Google Scholar] [CrossRef] [Green Version]
  38. Wu, M.-W.; Shen, C.-H. Corporate social responsibility in the banking industry: Motives and financial performance. J. Bank. Financ. 2013, 37, 3529–3547. [Google Scholar] [CrossRef]
  39. Okazaki, S.; Plangger, K.; West, D.; Menéndez, H.D. Exploring digital corporate social responsibility communications on Twitter. J. Bus. Res. 2020, 117, 675–682. [Google Scholar] [CrossRef]
  40. Khan, A.; Brohman, K.; Addas, S. The anatomy of ‘fake news’: Studying false messages as digital objects. J. Inf. Technol. 2022, 37, 122–143. [Google Scholar] [CrossRef]
  41. Wardle, C. The need for smarter definitions and practical, timely empirical research on information disorder. Digit. J. 2018, 6, 951–963. [Google Scholar] [CrossRef]
  42. Burgoon, J.K. Predicting veracity from linguistic indicators. J. Lang. Soc. Psychol. 2018, 37, 603–631. [Google Scholar] [CrossRef] [Green Version]
  43. Hauch, V.; Blandón-Gitlin, I.; Masip, J.; Sporer, S.L. Are computers effective lie detectors? A meta-analysis of linguistic cues to deception. Personal. Soc. Psychol. Rev. 2015, 19, 307–342. [Google Scholar] [CrossRef]
  44. Yang, J.; Basile, K. Communicating corporate social responsibility: External stakeholder involvement, productivity and firm performance. J. Bus. Ethics 2022, 178, 501–517. [Google Scholar] [CrossRef]
  45. Aguilera, R.V.; Waldman, D.A.; Siegel, D.S. Responsibility and organization science: Integrating micro and macro perspectives. Organ. Sci. 2022, 33, 483–494. [Google Scholar] [CrossRef]
  46. Li, W.; Li, W.; Seppänen, V.; Koivumäki, T. Effects of greenwashing on financial performance: Moderation through local environmental regulation and media coverage. Bus. Strategy Environ. 2023, 32, 820–841. [Google Scholar] [CrossRef]
  47. Testa, F.; Miroshnychenko, I.; Barontini, R.; Frey, M. Does it pay to be a greenwasher or a brownwasher? Bus. Strategy Environ. 2018, 27, 1104–1116. [Google Scholar] [CrossRef]
  48. Zhou, X.; Zafarani, R. A survey of fake news: Fundamental theories, detection methods, and opportunities. ACM Comput. Surv. 2020, 53, 1–40. [Google Scholar] [CrossRef]
  49. Burton, S.; Soboleva, A.; Daellenbach, K.; Basil, D.Z.; Beckman, T.; Deshpande, S. Helping those who help us: Co-branded and co-created Twitter promotion in CSR partnerships. J. Brand Manag. 2017, 24, 322–333. [Google Scholar] [CrossRef] [Green Version]
  50. Becker-Olsen, K.; Potucek, S. Greenwashing. In Encyclopedia of Corporate Social Responsibility; Idowu, S.O., Capaldi, N., Zu, L., Gupta, A.D., Eds.; Springer: Berlin, Germany, 2013; pp. 1318–1323. [Google Scholar] [CrossRef]
  51. Laufer, W.S. Social accountability and corporate greenwashing. J. Bus. Ethics 2003, 43, 253–261, JSTOR. [Google Scholar] [CrossRef]
  52. Delmas, M.A.; Burbano, V.C. The drivers of greenwashing. Calif. Manag. Rev. 2011, 54, 64–87. [Google Scholar] [CrossRef] [Green Version]
  53. Lyon, T.P.; Maxwell, J.W. Greenwash: Corporate environmental disclosure under threat of audit. J. Econ. Manag. Strategy 2011, 20, 3–41. [Google Scholar] [CrossRef]
  54. Sharma, K.; Qian, F.; Jiang, H.; Ruchansky, N.; Zhang, M.; Liu, Y. Combating fake news: A survey on identification and mitigation techniques. ACM Trans. Intell. Syst. Technol. 2019, 10, 21:1–21:42. [Google Scholar] [CrossRef]
  55. Mateo-Márquez, A.J.; González-González, J.M.; Zamora-Ramírez, C. An international empirical study of greenwashing and voluntary carbon disclosure. J. Clean. Prod. 2022, 363, 132567. [Google Scholar] [CrossRef]
  56. Seele, P.; Gatti, L. Greenwashing revisited: In search of a typology and accusation-based definition incorporating legitimacy strategies. Bus. Strategy Environ. 2017, 26, 239–252. [Google Scholar] [CrossRef]
  57. Chen, Y.-S.; Chang, C.-H. Greenwash and green trust: The mediation effects of green consumer confusion and green perceived risk. J. Bus. Ethics 2013, 114, 489–500. [Google Scholar] [CrossRef]
  58. Fabrizio, K.R.; Kim, E.-H. Reluctant Disclosure and Transparency: Evidence from Environmental Disclosures. Organ. Sci. 2019, 30, 1207–1231. [Google Scholar] [CrossRef]
  59. Kapantai, E.; Christopoulou, A.; Berberidis, C.; Peristeras, V. A systematic literature review on disinformation: Toward a unified taxonomical framework. New Media Soc. 2021, 23, 1301–1326. [Google Scholar] [CrossRef]
  60. Allcott, H.; Gentzkow, M. Social media and fake news in the 2016 election. J. Econ. Perspect. 2017, 31, 211–236. [Google Scholar] [CrossRef] [Green Version]
  61. Tandoc, E.C.; Lim, Z.W.; Ling, R. Defining “Fake News”: A typology of scholarly definitions. Digit. J. 2018, 6, 137–153. [Google Scholar] [CrossRef]
  62. Lazer, D.M.J.; Baum, M.A.; Benkler, Y.; Berinsky, A.J.; Greenhill, K.M.; Menczer, F.; Metzger, M.J.; Nyhan, B.; Pennycook, G.; Rothschild, D.; et al. The science of fake news. Science 2018, 359, 1094–1096. [Google Scholar] [CrossRef]
  63. Vosoughi, S.; Roy, D.; Aral, S. The spread of true and false news online. Science 2018, 359, 1146–1151. [Google Scholar] [CrossRef]
  64. Chen, Y.; Conroy, N.J.; Rubin, V.L. Misleading online content: Recognizing clickbait as “false news”. In Proceedings of the 2015 ACM on Workshop on Multimodal Deception Detection, Seattle, WA, USA, 9–13 November 2015; pp. 15–19. [Google Scholar] [CrossRef]
  65. Zubiaga, A.; Aker, A.; Bontcheva, K.; Liakata, M.; Procter, R. Detection and resolution of rumours in social media: A survey. ACM Comput. Surv. 2018, 51, 32:1–32:36. [Google Scholar] [CrossRef] [Green Version]
  66. Ireton, C.; Posetti, J.; UNESCO. Journalism, “Fake News” and Disinformation: Handbook for Journalism Education and Training; United Nations, Educational, Scientific and Cultural Organization: London, UK, 2018; Available online: http://unesdoc.unesco.org/images/0026/002655/265552E.pdf (accessed on 10 May 2020).
  67. Zhou, X.; Zafarani, R. Network-based fake news detection: A pattern-driven approach. ACM SIGKDD Explor. Newsl. 2019, 21, 48–60. [Google Scholar] [CrossRef]
  68. Finn, K. BP Lied about Size of U.S. Gulf Oil Spill, Lawyers Tell Trial. Reuters, 30 September 2013. Available online: https://www.reuters.com/article/us-bp-trial-idUSBRE98T13U20130930 (accessed on 20 December 2020).
  69. Ferns, G.; Amaeshi, K. Fueling climate (in)action: How organizations engage in hegemonization to avoid transformational action on climate change. Organ. Stud. 2021, 42, 1005–1029. [Google Scholar] [CrossRef]
  70. Feng, S.; Banerjee, R.; Choi, Y. Syntactic Stylometry for Deception Detection. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), Jeju Island, Republic of Korea, 8–14 July 2012; pp. 171–175. Available online: https://www.aclweb.org/anthology/P12-2034 (accessed on 28 May 2020).
  71. Zhou, X.; Jain, A.; Phoha, V.V.; Zafarani, R. Fake news early detection: A theory-driven model. Digit. Threat. Res. Pract. 2020, 1, 12:1–12:25. [Google Scholar] [CrossRef]
  72. Buller, D.B.; Burgoon, J.K. Interpersonal deception theory. Commun. Theory 1996, 6, 203–242. [Google Scholar] [CrossRef]
  73. Ekman, P. Telling Lies: Clues to Deceit in the Marketplace, Politics, and Marriage; W.W. Norton: New York, NY, USA, 2001. [Google Scholar]
  74. Fuller, C.M.; Biros, D.P.; Burgoon, J.; Nunamaker, J.F., Jr. An examination and validation of linguistic constructs for studying high-stakes deception. Group Decis. Negot. 2013, 22, 117–134. [Google Scholar] [CrossRef]
  75. Sporer, S.L. Reality monitoring and detection of deception. In The Detection of Deception in Forensic Contexts; Strömwall, L.A., Granhag, P.A., Eds.; Cambridge University Press, Cambridge Core: Cambridge, UK, 2004; pp. 64–102. [Google Scholar] [CrossRef]
  76. Vrij, A.; Edward, K.; Roberts, K.P.; Bull, R. Detecting deceit via analysis of verbal and nonverbal behavior. J. Nonverbal Behav. 2000, 24, 239–263. [Google Scholar] [CrossRef]
  77. Grice, P. Studies in the Way of Words; Harvard University Press: Cambridge, MA, USA, 1989. [Google Scholar]
  78. Ten Brinke, L.; Porter, S. Cry me a river: Identifying the behavioral consequences of extremely high-stakes interpersonal deception. Law Hum. Behav. 2012, 36, 469–477. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  79. Johnson, M.K.; Raye, C.L. Reality monitoring. Psychol. Rev. 1981, 88, 67–85. [Google Scholar] [CrossRef]
  80. Steller, M.; Köhnken, G. Criteria-Based Content Analysis. In Psychological Methods in Criminal Investigation and Evidence; Raskin, D.C., Ed.; Springer Pub Co.: New York, NY, USA, 1989; pp. 217–245. [Google Scholar]
  81. Qin, T.; Burgoon, J.; Nunamaker, J.F., Jr. An exploratory study on promising cues in deception detection and application of decision tree. In Proceedings of the 37th Annual Hawaii International Conference on System Sciences, Big Island, HI, USA, 5–8 January 2004; pp. 23–32. [Google Scholar] [CrossRef]
  82. Courtis, J.K. Corporate report obfuscation: Artefact or phenomenon? Br. Account. Rev. 2004, 36, 291–312. [Google Scholar] [CrossRef]
  83. Bachenko, J.; Fitzpatrick, E.; Schonwetter, M. Verification and implementation of language-based deception indicators in civil and criminal narratives. In Proceedings of the 22nd International Conference on Computational Linguistics (COLING 2008), Manchester, UK, 18–22 August 2008; pp. 41–48. [Google Scholar]
  84. Duran, N.D.; Hall, C.; Mccarthy, P.M.; Mcnamara, D.S. The linguistic correlates of conversational deception: Comparing natural language processing technologies. Appl. Psycholinguist. 2010, 31, 439–462. [Google Scholar] [CrossRef]
  85. Fuller, C.M.; Biros, D.P.; Wilson, R.L. Decision support for determining veracity via linguistic-based cues. Decis. Support Syst. 2009, 46, 695–703. [Google Scholar] [CrossRef]
  86. Bond, G.D.; Lee, A.Y. Language of lies in prison: Linguistic classification of prisoners’ truthful and deceptive natural language. Appl. Cogn. Psychol. 2005, 19, 313–329. [Google Scholar] [CrossRef]
  87. DePaulo, B.M.; Lindsay, J.J.; Malone, B.E.; Muhlenbruck, L.; Charlton, K.; Cooper, H. Cues to deception. Psychol. Bull. 2003, 129, 74–118. [Google Scholar] [CrossRef] [PubMed]
  88. Whissel, C. The dictionary of affect in language. In Emotion: Theory, Research and Experience: The Measurement of Emotions; Plutchik, R., Kellerman, H., Eds.; Academic Press: Cambridge, MA, USA, 1989; Volume 4. [Google Scholar]
  89. Gosselt, J.F.; van Rompay, T.; Haske, L. Won’t get fooled again: The effects of internal and external CSR Eco-labeling. J. Bus. Ethics 2019, 155, 413–424. [Google Scholar] [CrossRef] [Green Version]
  90. Parguel, B.; Benoît-Moreau, F.; Larceneux, F. How sustainability ratings might deter ‘greenwashing’: A closer look at ethical corporate communication. J. Bus. Ethics 2011, 102, 15. [Google Scholar] [CrossRef] [Green Version]
  91. Ekwurzel, B.; Boneham, J.; Dalton, M.W.; Heede, R.; Mera, R.J.; Allen, M.R.; Frumhoff, P.C. The rise in global atmospheric CO2, surface temperature, and sea level from emissions traced to major carbon producers. Clim. Change 2017, 144, 579–590. [Google Scholar] [CrossRef] [Green Version]
  92. Barnett, M.L.; Henriques, I.; Husted, B.W. Beyond good intentions: Designing CSR initiatives for greater social impact. J. Manag. 2020, 46, 937–964. [Google Scholar] [CrossRef]
  93. Egginton, J.F.; McBrayer, G.A. Does it pay to be forthcoming? Evidence from CSR disclosure and equity market liquidity. Corp. Soc. Responsib. Environ. Manag. 2019, 26, 396–407. [Google Scholar] [CrossRef]
  94. Jizi, M. The influence of board composition on sustainable development disclosure. Bus. Strategy Environ. 2017, 26, 640–655. [Google Scholar] [CrossRef]
  95. Garcia-Castro, R.; Francoeur, C. When more is not better: Complementarities, costs and contingencies in stakeholder management. Strateg. Manag. J. 2016, 37, 406–424. [Google Scholar] [CrossRef]
  96. Lu, J.; Herremans, I.M. Board gender diversity and environmental performance: An industries perspective. Bus. Strategy Environ. 2019, 28, 1449–1464. [Google Scholar] [CrossRef]
  97. Surroca, J.; Tribó, J.A.; Waddock, S. Corporate responsibility and financial performance: The role of intangible resources. Strateg. Manag. J. 2010, 31, 463–490. [Google Scholar] [CrossRef]
  98. McHaney, R.; George, J.F.; Gupta, M. An exploration of deception detection: Are groups more effective than individuals? Commun. Res. 2018, 45, 1103–1121. [Google Scholar] [CrossRef]
  99. Wang, W.; Hernandez, I.; Newman, D.A.; He, J.; Bian, J. Twitter analysis: Studying US weekly trends in work stress and emotion. Appl. Psychol. 2016, 65, 355–378. [Google Scholar] [CrossRef]
  100. MSCI. MSCI ESG KLD STATS: 1991–2015 Data Sets; MSCI: New York, NY, USA, 2016. [Google Scholar]
  101. Awaysheh, A.; Heron, R.A.; Perry, T.; Wilson, J.I. On the relation between corporate social responsibility and financial performance. Strateg. Manag. J. 2020, 41, 965–987. [Google Scholar] [CrossRef]
  102. Zhang, W.; Wang, Q.; Li, X.; Yoshida, T.; Li, J. DCWord: A novel deep learning approach to deceptive review identification by word vectors. J. Syst. Sci. Syst. Eng. 2019, 28, 731–746. [Google Scholar] [CrossRef]
  103. Castelo, S.; Almeida, T.; Elghafari, A.; Santos, A.; Pham, K.; Nakamura, E.; Freire, J. A topic-agnostic approach for identifying fake news pages. In Proceedings of the 2019 World Wide Web Conference, San Francisco, CA, USA, 13–17 May 2019; pp. 975–980. [Google Scholar] [CrossRef] [Green Version]
  104. Pennebaker, J.W.; Boyd, R.L.; Jordan, K.; Blackburn, K. The Development and Psychometric Properties of LIWC2015; The University of Texas at Austin: Austin, TX, USA, 2015; Available online: https://repositories.lib.utexas.edu/handle/2152/31333 (accessed on 24 May 2020).
  105. Braun, M.T.; Van Swol, L.M. Justifications Offered, Questions Asked, and Linguistic Patterns in Deceptive and Truthful Monetary Interactions. Group Decis. Negot. 2016, 25, 641–661. [Google Scholar] [CrossRef]
  106. Ho, S.M.; Hancock, J.T.; Booth, C.; Liu, X. Computer-mediated deception: Strategies revealed by language-action cues in spontaneous communication. J. Manag. Inf. Syst. 2016, 33, 393–420. [Google Scholar] [CrossRef]
  107. Ludwig, S.; van Laer, T.; de Ruyter, K.; Friedman, M. Untangling a web of lies: Exploring automated detection of deception in computer-mediated communication. J. Manag. Inf. Syst. 2016, 33, 511–541. [Google Scholar] [CrossRef] [Green Version]
  108. Fuller, C.M.; Biros, D.P.; Twitchell, D.P.; Burgoon, J.K. An analysis of text-based deception detection tools. In Proceedings of the AMCIS 2006 Proceedings, Americas Conference on Information Systems, Acapulco, Mexico, 4–6 August 2006; Available online: http://aisel.aisnet.org/amcis2006/418 (accessed on 24 May 2020).
  109. Whissell, C. Using the Revised Dictionary of Affect in Language to Quantify the Emotional Undertones of Samples of Natural Language. Psychol. Rep. 2009, 105, 509–521. [Google Scholar] [CrossRef]
  110. Jaidka, K.; Zhou, A.; Lelkes, Y. Brevity is the soul of Twitter: The constraint affordance and political discussion. J. Commun. 2019, 69, 345–372. [Google Scholar] [CrossRef]
  111. Venkatraman, N. The concept of fit in strategy research: Toward verbal and statistical correspondence. Acad. Manag. Rev. 1989, 14, 423–444. [Google Scholar] [CrossRef]
  112. Sabherwal, R.; Chan, Y.E. Alignment between business and IS strategies: A study of prospectors, analyzers, and defenders. Inf. Syst. Res. 2001, 12, 11–33. [Google Scholar] [CrossRef]
  113. Chen, Y.-Y.; Huang, H.-L. Knowledge management fit and its implications for business performance: A profile deviation analysis. Knowl.-Based Syst. 2012, 27, 262–270. [Google Scholar] [CrossRef]
  114. Barki, H.; Rivard, S.; Talbot, J. An integrative contingency model of software project risk management. J. Manag. Inf. Syst. 2001, 17, 37–69. [Google Scholar] [CrossRef]
  115. Hult, G.T.M.; Ketchen, D.J.; Cavusgil, S.T.; Calantone, R.J. Knowledge as a strategic resource in supply chains. J. Oper. Manag. 2006, 24, 458–475. [Google Scholar] [CrossRef]
  116. Venkatraman, N.; Prescott, J.E. Environment-strategy coalignment: An empirical test of its performance implications. Strateg. Manag. J. 1990, 11, 1–23. [Google Scholar] [CrossRef] [Green Version]
  117. Vorhies, D.W.; Morgan, N.A. A configuration theory assessment of marketing organization fit with business strategy and its relationship with marketing performance. J. Mark. 2003, 67, 100–115. [Google Scholar] [CrossRef] [Green Version]
  118. Vorhies, D.W.; Morgan, N.A. Benchmarking marketing capabilities for sustainable competitive advantage. J. Mark. 2005, 69, 80–94. [Google Scholar] [CrossRef]
  119. Brett, J.M.; Olekalns, M.; Friedman, R.; Goates, N.; Anderson, C.; Lisco, C.C. Sticks and stones: Language, face, and online dispute resolution. Acad. Manag. J. 2007, 50, 85–99. [Google Scholar] [CrossRef] [Green Version]
  120. Jensen, M.L.; Lowry, P.B.; Burgoon, J.K.; Nunamaker, J.F. Technology dominance in complex decision making: The case of aided credibility assessment. J. Manag. Inf. Syst. 2010, 27, 175–201. [Google Scholar] [CrossRef]
  121. Jensen, M.L.; Lowry, P.B.; Jenkins, J.L. Effects of automated and participative decision support in computer-aided credibility assessment. J. Manag. Inf. Syst. 2011, 28, 201–233. [Google Scholar] [CrossRef]
  122. Berrone, P.; Fosfuri, A.; Gelabert, L. Does greenwashing pay off? Understanding the relationship between environmental actions and environmental legitimacy. J. Bus. Ethics 2017, 144, 363–379. [Google Scholar] [CrossRef]
  123. Breusch, T.S.; Pagan, A.R. A Simple Test for Heteroscedasticity and Random Coefficient Variation. Econometrica 1979, 47, 1287–1294. [Google Scholar] [CrossRef]
  124. Wooldridge, J.M. Econometric Analysis of Cross Section and Panel Data, 2nd ed.; MIT Press: Cambridge, MA, USA, 2010. [Google Scholar]
  125. Henriques, I.; Sadorsky, P. The relationship between environmental commitment and managerial perceptions of stakeholder importance. Acad. Manag. J. 1999, 42, 87–99. [Google Scholar] [CrossRef]
  126. Topal, İ.; Nart, S.; Akar, C.; Erkollar, A. The effect of greenwashing on online consumer engagement: A comparative study in France, Germany, Turkey, and the United Kingdom. Bus. Strategy Environ. 2020, 29, 465–480. [Google Scholar] [CrossRef]
  127. Lacroix, K.; Gifford, R. Psychological barriers to energy conservation behavior: The role of worldviews and climate change risk perception. Environ. Behav. 2018, 50, 749–780. [Google Scholar] [CrossRef] [Green Version]
  128. van der Linden, S. The social-psychological determinants of climate change risk perceptions: Towards a comprehensive model. J. Environ. Psychol. 2015, 41, 112–124. [Google Scholar] [CrossRef]
  129. MacKay, B.; Munro, I. Information warfare and new organizational landscapes: An inquiry into the Exxonmobil–Greenpeace dispute over climate change. Organ. Stud. 2012, 33, 1507–1536. [Google Scholar] [CrossRef]
  130. Vilone, G.; Longo, L. Explainable artificial intelligence: A systematic review. arXiv 2020, arXiv:abs/2006.00093. [Google Scholar]
  131. Kumar, N.; Venugopal, D.; Qiu, L.; Kumar, S. Detecting anomalous online reviewers: An unsupervised approach using mixture models. J. Manag. Inf. Syst. 2019, 36, 1313–1346. [Google Scholar] [CrossRef]
  132. Siering, M.; Koch, J.-A.; Deokar, A.V. Detecting fraudulent behavior on crowdfunding platforms: The role of linguistic and content-based cues in static and dynamic contexts. J. Manag. Inf. Syst. 2016, 33, 421–455. [Google Scholar] [CrossRef]
  133. Vo, N.; Lee, K. The rise of guardians: Fact-checking URL recommendation to combat fake news. In Proceedings of the 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, Ann Arbor, MI, USA, 8–12 July 2018; pp. 275–284. [Google Scholar] [CrossRef]
  134. Altowim, Y.; Kalashnikov, D.V.; Mehrotra, S. Progressive approach to relational entity resolution. Proc. VLDB Endow. 2014, 7, 999–1010. [Google Scholar] [CrossRef] [Green Version]
  135. Asudeh, A.; Jagadish, H.V.; Wu, Y.; Yu, C. On detecting cherry-picked trendlines. Proc. VLDB Endow. 2020, 13, 939–952. [Google Scholar] [CrossRef] [Green Version]
  136. Castillo, C.; Mendoza, M.; Poblete, B. Information credibility on Twitter. In Proceedings of the 20th International Conference on World Wide Web—WWW ’11, Hyderabad, India, 28 March–5 April 2011; pp. 675–684. [Google Scholar] [CrossRef]
  137. Charrad, M.; Ghazzali, N.; Boiteau, V.; Niknafs, A. NbClust: An R package for determining the relevant number of clusters in a data set. J. Stat. Softw. 2014, 61, 1–36. [Google Scholar] [CrossRef] [Green Version]
  138. Ferrara, E.; Varol, O.; Davis, C.; Menczer, F.; Flammini, A. The rise of social bots. Commun. ACM 2016, 59, 96–104. [Google Scholar] [CrossRef] [Green Version]
  139. Han, Y.; Lappas, T.; Sabnis, G. The importance of interactions between content characteristics and creator characteristics for studying virality in social media. Inf. Syst. Res. 2020, 31, 576–588. [Google Scholar] [CrossRef]
  140. Hoffart, J.; Suchanek, F.M.; Berberich, K.; Weikum, G. YAGO2: A spatially and temporally enhanced knowledge base from Wikipedia. Artif. Intell. 2013, 194, 28–61. [Google Scholar] [CrossRef] [Green Version]
  141. Hopkins, B.; Skellam, J.G. A new method for determining the type of distribution of plant individuals. Ann. Bot. 1954, 18, 213–227. [Google Scholar] [CrossRef]
  142. Kang, B.; Deng, Y. The maximum Deng entropy. IEEE Access 2019, 7, 120758–120765. [Google Scholar] [CrossRef]
  143. Kassambara, M.A. Practical Guide to Cluster Analysis in R: Unsupervised Machine Learning, 1st ed.; STHDA, 2017; pp. 1–38. Available online: http://www.sthda.com (accessed on 4 March 2020).
  144. Kaufman, L.; Rousseeuw, P.J. Finding Groups in Data: An Intro to Cluster Analysis; John Wiley & Sons: Hoboken, NJ, USA, 2009. [Google Scholar]
  145. Mather, P.M. Computational Methods of Multivariate Analysis in Physical Geography; John Wiley & Sons: Hoboken, NJ, USA, 1976. [Google Scholar]
  146. McQueen, J.B. Some methods for classification and analysis of multivariate observations. In Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability: Weather Modification; Cam, L.M.L., Neyman, J., Eds.; University of California Press: Berkeley, CA, USA, 1967; pp. 281–297. [Google Scholar]
  147. Nickel, M.; Murphy, K.; Tresp, V.; Gabrilovich, E. A review of relational machine learning for knowledge graphs. Proc. IEEE 2016, 104, 11–33. [Google Scholar] [CrossRef]
  148. Rubin, V.L.; Chen, Y.; Conroy, N.K. Deception detection for news: Three types of fakes. Proc. Assoc. Inf. Sci. Technol. 2015, 52, 1–4. [Google Scholar] [CrossRef]
  149. Shao, C.; Ciampaglia, G.L.; Varol, O.; Yang, K.-C.; Flammini, A.; Menczer, F. The spread of low-credibility content by social bots. Nat. Commun. 2018, 9, 4787. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  150. Sitaula, N.; Mohan, C.K.; Grygiel, J.; Zhou, X.; Zafarani, R. Credibility-based fake news detection. In Disinformation, Misinformation, and Fake News in Social Media: Emerging Research Challenges and Opportunities; Shu, K., Wang, S., Lee, D., Liu, H., Eds.; Springer International Publishing: Berlin/Heidelberg, Germany, 2020; pp. 163–182. [Google Scholar] [CrossRef]
  151. Spirin, N.; Han, J. Survey on web spam detection: Principles and algorithms. ACM SIGKDD Explor. Newsl. 2012, 13, 50–64. [Google Scholar] [CrossRef]
  152. Theocharis, Y.; Lowe, W.; van Deth, J.W.; García-Albacete, G. Using Twitter to mobilize protest action: Online mobilization patterns and action repertoires in the Occupy Wall Street, Indignados, and Aganaktismenoi movements. Inf. Commun. Soc. 2015, 18, 202–220. [Google Scholar] [CrossRef]
  153. Thorndike, R.L. Who belongs in the family? Psychometrika 1953, 18, 267–276. [Google Scholar] [CrossRef]
  154. Tibshirani, R.; Walther, G.; Hastie, T. Estimating the number of clusters in a data set via the gap statistic. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 2001, 63, 411–423. [Google Scholar] [CrossRef]
  155. Wardle, C.; Derakhshan, H. Information Disorder: Toward an Interdisciplinary Framework for Research and Policy Making (No. 162317GBR). Council of Europe. 2017. Available online: https://edoc.coe.int/en/media/7495-information-disorder-toward-an-interdisciplinary-framework-for-research-and-policy-making.html (accessed on 10 May 2020).
  156. Wu, Y.; Ngai EW, T.; Wu, P.; Wu, C. Fake online reviews: Literature review, synthesis, and directions for future research. Decis. Support Syst. 2020, 132, 113280. [Google Scholar] [CrossRef]
  157. Ye, J.; Skiena, S. Mediarank: Computational ranking of online news sources. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Anchorage, AK, USA, 4–8 August 2019; pp. 2469–2477. [Google Scholar] [CrossRef]
  158. Zannettou, S.; Sirivianos, M.; Blackburn, J.; Kourtellis, N. The web of false information: Rumors, fake news, hoaxes, clickbait, and various other shenanigans. J. Data Inf. Qual. 2019, 11, 10:1–10:37. [Google Scholar] [CrossRef] [Green Version]
  159. Zhang, D.; Zhou, L.; Kehoe, J.L.; Kilic, I.Y. What online reviewer behaviors really matter? Effects of verbal and nonverbal behaviors on detection of fake online reviews. J. Manag. Inf. Syst. 2016, 33, 456–481. [Google Scholar] [CrossRef]
  160. Zhou, L.; Burgoon, J.K.; Nunamaker, J.F.; Twitchell, D. Automating linguistics-based cues for detecting deception in text-based asynchronous computer-mediated communication. Group Decis. Negot. 2004, 13, 81–106. [Google Scholar] [CrossRef]
Figure 1. Quantile profile plot on greenwashing cues.
Figure 2. Average marginal effects and simple slope plots of greenwashing with 95% CI on share price at different levels of environmental controversies. (a) Marginal effects on share price. (b) Simple slopes on share price.
Table 1. Cues and Indicators used in Profile Deviation.
Cue | Indicator(s) (Valence) | Measurement Software
Quantity | Word quantity (Truthful); Sentence quantity (Truthful) | LIWC
Specificity | Descriptive words (Truthful); Spatio-temporal words (Truthful); Generalizing terms (Deceptive) | LIWC
Complexity | Cognitive processes (Truthful); Insights (Truthful); Sentence length (Truthful) | Custom-written Python code
Diversity | Type-token ratio (Truthful); Content-word diversity (Truthful) | LIWC
Hedging/uncertainty | Tentative words (Deceptive); Impersonal pronouns (Deceptive); Weak modal verbs (Deceptive); Negations (Deceptive); Exclusive words (Truthful) | LIWC
Affect | Affective processes (Deceptive); Emotional tone (Deceptive); Negative affect (Deceptive); Positive affect (Truthful) | LIWC
Vividness/dominance | Activation (Deceptive); Imagery (Deceptive) | Whissel Dictionary
Table 2. Average Greenwashing Score per Quantile Range.
Quantile Range | No. of Observations | Average Greenwashing Score | Example Tweet
1 (low greenwashing) | 8223 | 75.15 | Oil & Gas firm (score: 75.62): "New forecast predicts #oilsands output in #Alberta will more than triple by 2030, to 5M barrels a day http://on.wsj.com/KjcUjF" (accessed on 15 April 2020)
2 | 8223 | 77.64 |
3 | 8222 | 82.94 |
4 | 8223 | 83.95 | Auto firm (score: 84.88): "[Company] sources battery cells from carbon-neutral production for the first time. That’s significantly more than 30% savings on the carbon footprint of the entire battery of future models. #Sustainability"
5 | 8222 | 84.79 |
6 | 8223 | 86.50 |
7 | 8222 | 87.12 | Oil & Gas firm (score: 91.24): "Read about #[company’s] commitment to a #lowcarbon future http:// [company website]"
8 | 8223 | 88.75 |
9 | 8222 | 89.52 |
10 (high greenwashing) | 8223 | 91.11 |
Table 3. Outcome of Greenwashing: Financial Market Performance.
Dependent Variable: Share Price
Variables | Model (1) | Model (2) | Model (3) | Model (4)
Greenwashing (GW) | | −0.47 ** (0.04) | | −0.77 ** (0.07)
ESG Controversies (ESGC) | | | −0.05 ** (0.00) | −1.10 ** (0.18)
GW × ESGC | | | | 0.24 ** (0.04)
Industry (0 = Auto, 1 = Oil) | −0.48 (0.33) | −0.51 (0.33) | −0.89 * (0.36) | −0.92 * (0.37)
Region (0 = NA, 1 = Global) | −0.58 (0.37) | −0.57 (0.38) | −0.68 (0.42) | −0.63 (0.43)
Size (0 = B20; 1 = T20) | 0.76 * (0.31) | 0.74 * (0.31) | 0.40 (0.35) | 0.36 (0.36)
Gross Income | −0.21 ** (0.01) | −0.21 ** (0.01) | −0.03 † (0.02) | −0.05 * (0.02)
Return on Assets | 0.08 ** (0.00) | 0.07 ** (0.00) | 0.10 ** (0.00) | 0.09 ** (0.00)
Operating Income | 0.39 ** (0.01) | 0.40 ** (0.01) | 0.28 ** (0.01) | 0.29 ** (0.01)
Profit | 0.07 ** (0.00) | 0.07 ** (0.00) | −0.03 ** (0.01) | −0.03 * (0.01)
Revenue | 0.05 ** (0.01) | 0.06 ** (0.01) | 0.19 ** (0.01) | 0.20 ** (0.01)
Constant | 1.99 ** (0.43) | 4.01 ** (0.47) | 1.74 ** (0.50) | 5.07 ** (0.60)
Observations | 29,271 | 29,271 | 19,791 | 19,791
Number of Firms | 50 | 50 | 42 | 42
Random effects regression estimates with standard errors in parentheses. Region NA: North America. Size T, B: Top and Bottom 20 rank by market cap (Forbes Global 2000). ** p < 0.01, * p < 0.05, † p < 0.1.
