Article

The Vices and Virtues of Instrumentalized Knowledge

1
Independent Researcher, 2311 EZ Leiden, The Netherlands
2
Institute of Philosophy Faculdade de Letras, Universidade do Porto, 4150-564 Porto, Portugal
*
Author to whom correspondence should be addressed.
Philosophies 2023, 8(5), 84; https://doi.org/10.3390/philosophies8050084
Submission received: 6 June 2023 / Revised: 11 September 2023 / Accepted: 12 September 2023 / Published: 14 September 2023
(This article belongs to the Special Issue Between Virtue and Epistemology)

Abstract:
This article starts by defining instrumentalized knowledge (IK) as the practice of selectively valuing some set of reliable beliefs for the promotion of a more generally false or unreliable worldview. IK is typically exploited by conspiratorial echo chambers, which display systematic distrust of and opposition towards mainstream epistemic authorities. We argue that IK is problematic in that it violates core epistemic virtues, and this gives rise to clear and present harms when abused by said echo chambers. Yet, we contend, mainstream epistemic authorities (MEAs) are also complicit in practices resembling IK; we refer to these practices as instrumentalized knowledge* (IK*). IK* differs from IK in that the selective valuing of beliefs corresponds to a “reliable” worldview, namely, one independently verified by the relevant epistemic experts. We argue that IK* is also problematic: despite its apparent veracity and its aim of promoting true beliefs, it violates the same epistemic virtues as IK. This renders it counterproductive in its goal of producing knowledge for the sake of the pursuit of truth, raising the question of what distinguishes virtuous from nonvirtuous practices of instrumentalized knowledge. In an attempt to avoid this violation and to distinguish IK* from IK, we investigate whether and how IK* could still be epistemically virtuous. We conclude that IK* can be virtuous if its goal is to produce understanding as opposed to mere knowledge.

1. Introduction

Conspiracy theories have recently come to the forefront of mainstream philosophical discourse. From flat-earthism to the “Stop the Steal” movement, conspiracy theories have become a prominent political force. Much discussion is now devoted to analyzing the epistemic practices of these groups, especially when there is a significant risk of harm. Take, for example, the role that QAnon played in the events leading up to the January 6th storming of the US Capitol building or the role of antivaccine rhetoric in fueling hesitancy towards disease inoculation and other crucial medical procedures. Yet, while research on the implications of the tactics used by these groups is rapidly expanding—leading many to censure conspiracy theories because of their bad epistemic practices (see, e.g., [1,2,3,4])—little attention has been given to comparable tactics used by the mainstream scientific and intellectual authorities1. This raises important questions concerning the responsibility of mainstream epistemic authorities (MEAs) to ensure the production of knowledge—namely, which tactics are acceptable?
This article will look at the practice of instrumentalized knowledge (IK), specifically as used by MEAs. To do so, let us first consider how IK is employed by conspiratorial echo chambers (CECs). CECs are echo chambers displaying systematic distrust of and opposition towards MEAs, including both scientific communities and science communicators, alongside reputable journalists and institutions. A signature characteristic of CECs is their strategic use of IK, which can be understood as the practice of valuing (communicating) some set of reliable2 beliefs merely for their effectiveness in justifying their own false or unverified worldview3. The problem with IK is not only that it stands in opposition to core epistemic virtues (most notably open-mindedness and valuing truth for its own sake), but also that it can be used to promote worldviews with evident harms. Where many in social epistemology have focused on the challenges produced by CECs, we explicitly focus on the novel epistemic problem of the use of IK by mainstream epistemic authorities, which we refer to as instrumentalized knowledge* (IK*). IK* is differentiated from IK in that the worldview (the summation of beliefs) in question is independently verified by the relevant epistemic authorities or experts—these, of course, differ from context to context, and so we must remain somewhat ambiguous as to what determines whether some individual or scientific body counts as an authority. We argue that the practice of IK* is problematic in its own right, in part, because it violates the same core epistemic virtues that we draw upon to censure conspiracy theories and their echo chambers. This renders it counterproductive in its self-assigned goal of producing knowledge, i.e., true worldviews (or as close to true as we can manage given our current best scientific and epistemic capabilities)4.
In this way, we are using IK as a foil to explore the vices (and virtues) of IK*, which have hitherto been unexplored in the epistemic literature. The centrality of IK as a foil stems from the fact that conspiratorial echo chambers display systematic distrust of and opposition towards mainstream scientific and intellectual authorities. This illustrates why and how IK* is a separate but equally important epistemic issue. Nevertheless, there are reasons to think that IK* could be epistemically virtuous in some cases. We conclude with a discussion of when and how IK* can be epistemically virtuous while still differentiating it from IK.

2. Conspiratorial Echo Chambers

The literature on conspiracy theories is developing rapidly, and both the use of the term “conspiracy theory” and its definition are matters of significant debate ([7,8]). For the purposes of this article, we define conspiracy theories as systems of beliefs directly contradicted by the relevant (mainstream) epistemic authorities, i.e., relevant scientists, academics, and journalists/science communicators [4]5. This provides a basis with which to define “conspiratorial echo chambers” (CECs)6.
Before addressing the relationships linking conspiratorial echo chambers to IK and mainstream epistemic authority to IK*, let us first introduce the concept of an echo chamber. C. Thi Nguyen defines an echo chamber as a “social epistemic structure from which other relevant voices have been actively excluded and discredited” ([11], p. 141). Echo chambers are, in a sense, epistemically exclusive: they consist of insiders, who are considered trustworthy by other insiders, and outsiders, who are considered untrustworthy. Echo chambers are markedly different from epistemic bubbles in that those within an epistemic bubble merely lack access to the relevant factual information needed to form true beliefs (or to build cognitive capacities for true belief); yet, they are said to retain trust in the mainstream epistemic authorities (to the degree that they are aware of them). In short, once the relevant information is provided, epistemic bubbles are easily “ruptured” by new (approximately true) information, whereas echo chambers do not rupture so easily because they do not recognize these authorities in the first place.
We define a conspiratorial echo chamber (CEC) as an echo chamber that falls under our earlier definition of conspiracy theories. One such example would be the “Stop the Steal” movement in the US. This movement exploded amongst the Republican Party following the 2020 US presidential elections, claiming Democrats had rigged the election. It has since been regaining traction following the US 2022 primaries [12]. This movement was founded on distrust, has been extremely polarizing in its “us versus them” attitude, and has repeatedly sought to obstruct and contradict the mainstream epistemic authorities (such as independent election monitors), who challenge this movement with polling data [13].
It is clear to see how CECs can be harmful. The “Stop the Steal” movement was partially but directly responsible for the insurgency at the US Capitol in January 2021, alongside numerous other problematic movements, such as antivaxxers, QAnon, and Holocaust deniers. In addition to the clear and present real-world harms caused by spreading violence and misinformation, some of the practices displayed by CECs are fundamentally incongruent with traditional epistemic virtues. Epistemic virtues are typically defined as the virtues that not only serve to define truth but also to promote truth-conducive behaviors [14,15,16,17]. Hence, the targeted and explicit dismissal of epistemic authority, as displayed by CECs, is clearly incongruent with core epistemic virtues: it is impossible to claim to genuinely pursue and promote knowledge while simultaneously disregarding the mainstream epistemic authorities (assuming their degree of accreditation)7.
An essential element of CECs is the seemingly arbitrary role that trust and distrust play in their composition. When the mainstream epistemic authorities are not recognized, false beliefs quickly propagate, lacking any accountability to established or highly probable truths. In addressing CECs, this lack of trust in epistemic authorities needs to be reconciled. Most members of CECs do not distrust epistemic authorities for any rational reason; rather, it is through highly emotional and sociological factors that people lose trust in authorities [18]. This is illustrated by Nguyen [11] through the story of former neo-Nazi Derek Black. Growing up in a fascist CEC, what finally broke Black out of his conspiratorial worldview was not some extraordinary rational argument; rather, it was being invited to Shabbat dinner by a Jewish fellow student. It was this display of kindness and trust that allowed Black to truly reevaluate his worldview. Nguyen uses this story to illustrate that to break people out of CECs, more is required than an appeal to true belief via rational argument. To dispel the attitude of hostility and distrust towards epistemic authorities that is characteristic of CECs, building interpersonal trust is essential. In order for established truths to be trusted, the people who are in the position of communicating truth need to be trusted as well. If those making up the mainstream epistemic authority are considered trustworthy, the content of their communication is more likely to be taken as true (or highly probable).

3. Instrumentalized Knowledge

The most problematic practice that is prevalent in (but not limited to) CECs is the strategic use of IK. The value of a specific set of beliefs is measured and set, in part, according to their effectiveness in supporting some worldview. To illustrate IK, consider the following case: Some conspiracy theorists hold the position that human-caused global warming is a myth. Yet, when scientific evidence of climate change is presented, this is either ignored or actively discredited; it holds no epistemic value to the CEC members. On the other hand, when evidence of the mitigated effects of climate change is presented, such as a relatively unexpected restoration of the coral reefs [19], this is considered of great epistemic value to the CEC. It directly strengthens the worldview that human-caused climate change is a myth after all.
When discussing IK, some clarifications are in order. IK, as we define it, is concerned exclusively with non-normative evidential propositions. Normative propositions may exploit instrumentalized knowledge in various ways, some more acceptable than others. A prime example is political campaigning, which frequently does exactly this. It references news items or other points of interest only insofar as they further the goals of the candidate or party. American progressives, for example, will highlight the ease with which a shooter purchased weapons to promote more stringent weapons legislation, while conservatives might highlight prior mental health issues of the perpetrator to argue to the contrary. There is a lot to say about this strategy, but this paper will restrict itself to non-normative applications of IK only.
A key aspect of IK in CECs is that the underlying worldview is typically incompatible with a larger body of beliefs that is accepted or promoted by the relevant epistemic authorities. The claim that human-caused climate change is a myth, for example, has been repeatedly debunked, and human-caused climate change itself is clearly observable [20,21]. The evidence against this claim is clear, and there is sufficient consensus among the epistemic experts (i.e., climate scientists) for the myth to be roundly discredited (see, e.g., [22,23]). This all being said, the practice of IK by the mainstream (IK*) takes a different form.

4. Mainstream Instrumentalized Knowledge (IK*)

Before discussing the role of IK*, it is important to first define what is meant by mainstream epistemic authorities (MEAs). Consider MEAs as an antithesis to CECs: where CECs are characterized by systematic distrust of MEAs and, by extension, epistemic experts, MEAs are characterized by a systematic trust of these experts8.
Though it is not inconceivable for MEAs to employ IK towards a false belief, this paper looks at a different case, which we define as IK*. Such cases involve MEAs instrumentalizing knowledge in the service of an independently verified, reliable set of beliefs. To understand how IK differs from IK*, consider again the claim that the coral reefs have recovered much better than expected. The Australian Institute of Marine Science recently released their annual report in which they show a 36-year high in coral reef recovery [19]. Climate-denial CECs could use this claim as a counterpoint in “proving” their position that human-caused climate change is a myth. MEAs, however, often instrumentalize such knowledge the other way around: They consider findings suggesting that risks were exaggerated to be threatening to their respective beliefs, namely, that human-caused climate change is harmful. The evidence, thus, undermines their broadly reliable set of beliefs about climate change, and they will seek ways to ignore or deflate the claim. This is done, for example, by emphasizing the remaining damage to the ocean or by simply ignoring the finding altogether. This is exemplified in a recent article in The Conversation, titled “Record coral cover doesn’t necessarily mean the Great Barrier Reef is in good health (despite what you may have heard)” [24].
Though the article does not necessarily contain serious inaccuracies, it places unambiguous emphasis on how this finding actually is not a source of optimism. This is done by trivializing some of the findings (“[These] findings can be deceptive”, “are we being catfished by coral cover”) and highlighting the remaining threats to the point of removing any notion of this being good news. Even though this is just one example, it illustrates the larger trend in the mainstream communication of climate change of selectively communicating only those findings that fit the larger narrative of the danger of climate change while seeking to downgrade or dismiss findings that diminish the apparently real danger. This is further illustrated by Havranek et al. [25], who found a nontrivial degree of selective reporting in studies on climate change effects: Studies underreported findings of a very low social cost of carbon, resulting in a skewed consensus. Though this discrepancy is not nearly big enough to warrant doubts about the overall risks of climate change, it does show a widespread tendency of scientists to evaluate results, in part, based on how they fit into a broader narrative. This can also be found in more mainstream modes of communication, as outlined by Olausson [26]. They found a strong prevalence of what they define as a “frame of certainty” in Swedish media regarding climate change, meaning outlets dismissed any kind of uncertainty regarding the existence or extent of, or any conflicts regarding, climate effects in favor of a manufactured sense of certainty between cause (climate change) and effect (specific, extreme weather effects) [27]. Though media outlets are not climate change experts, in many cases they are, from the perspective of lay people, the main epistemic authority when it comes to such issues [28,29].
The dismissal of seemingly contrary evidence is one way for MEAs to instrumentalize knowledge in the pursuit of more generally recognized truths. Another way is by giving excess (i.e., undue) credibility to claims supporting the reliable worldview. In the case of climate change, this can be done by appealing to the modeled effects of climate change; long-term modeling of the effects of climate change is extremely imprecise and difficult. Although almost all reputable climate scientists agree that the effects are dire indeed, the exact economic and existential threats vary widely [30]. This is because climate models are immensely complex, and the smallest change to input variables can drastically alter the future forecasts [31]. As such, the mainstream can capitalize on these discrepancies by highlighting the most pessimistic model, in pursuit of the goal of spreading the worldview that climate change is a serious and urgent risk9.

5. The Problem of Instrumentalized Knowledge*

IK* might seem harmless at first, considering its aim of creating true beliefs and, hence, a more approximately true worldview. Nevertheless, we argue that IK* is a problem for the following three reasons: First and most importantly, IK* is problematic when considered through the lens of virtue epistemology. The epistemic virtue of open-mindedness alone should discourage the strategic use of instrumentalized knowledge. That is, any seemingly contradictory claim or evidence should encourage curiosity, not apprehension. This is also clear when considering Zagzebski’s interpretation of epistemic virtue as the pursuit or “love of truth” (2003). This seems congruous with IK* at first; after all, the primary goal of IK* is to promote an apparently verified true belief. This, however, assumes too binary a view of truth (or what constitutes true belief). Climate change, for example, is not a single proposition to be judged as true or false. It is a complex and still unfolding scientific investigation, with many unanswered questions—many of which cannot be qualified as merely “true” or “false”. Questions such as: How bad are the harms? How quickly will the harms occur? How avoidable are they? These questions are all indicative of the complexity of climate change; yet, IK* impedes the analysis of the interconnected truths that make up this event. It is clear that climate change is a difficult science, and different forecasts need to be interpreted with great care, if not with skepticism. What IK* does instead is reduce the complexity of this issue to pivot upon a binary truth evaluation: that human-caused climate change is (either) real and harmful (or not).
A second reason why IK* is problematic is that it gives rise to epistemic injustice, specifically testimonial injustice. Miranda Fricker [34] identifies testimonial injustice as the attribution of a credibility excess or deficit towards an agent. IK does just this: It assigns a credibility excess to proponents of the held belief and their claims, and a credibility deficit to those whose claims are deemed threatening to that worldview. This credibility assessment is not derived from the epistemic qualities of the agent but merely from the gain or loss in promoting one’s epistemic position. The epistemic injustice is made even clearer when considering the asymmetry in epistemic judgement when comparing CECs against the epistemic authorities who are utilizing similar practices. The mainstream epistemic authorities may cite the use of IK in CECs as a reason for publicly censuring them while simultaneously employing similar tactics when promoting their own aims. It is this asymmetry of epistemic judgement and communication that not only contributes to injustice but also reinforces the distrust of CECs towards the mainstream epistemic authorities.
A critic might respond by arguing that the above criticisms of IK* presuppose an unjustified emphasis on epistemic procedure. Just epistemic procedure is a good thing, but sometimes the public good is more important. Heather Douglas, for example, has long argued for scientists specifically to take a moral responsibility in weighing the value of knowledge against potential harms [35,36]. One such example is the accelerated development of COVID-19 vaccines. Proper epistemic procedure would dictate a long-term study on vaccine efficacy and side effects, putting development in line with historical trends10. This would ignore public health implications, however, specifically the harm of a delayed roll-out. One could reasonably argue that given a base level of quality control, the public health benefits of an early roll-out outweighed the potential risks of the unknown (long-term) side-effects11.
This is a worthy consideration. From a broadly utilitarian perspective, IK* might, indeed, be justified in cases where avoiding clear public harm warrants priority. However, there are two problems with this counterargument. First, cases of clear public harm are rare. Climate change skepticism, for example, is relatively harmless on an individual level and on a short time scale. It is hard to justify sacrificing just epistemic procedure in IK when the harms of a small group of dissenters are similarly small, while there is a lot of important knowledge to be gained through open discussion of the interconnected facts making up this complex issue. Even if this were not the case, however, IK* suffers a bigger problem. IK* fails exactly in that which it aims to achieve: To convince skeptics to adopt a true worldview—or, more specifically, to adopt cognitive capacities that promote recognition and acceptance of reliable beliefs about the world. But instead of producing and disseminating knowledge, it cultivates something more akin to a Gettier scenario, that is, potentially fostering true beliefs without knowledge. This brings us to the third problem for IK*.

6. Is Instrumentalized Knowledge* Self-Defeating?

Recall that for IK*, the epistemic position in question is already independently verified by the relevant epistemic authorities. Those yet unconvinced of these beliefs can be categorized into two main groups: Those who are ignorant of said beliefs as presented by the epistemic authorities, and those aware of the reliable beliefs but who have chosen to reject them, perhaps because of a conspiratorial worldview. In the first case, IK* is redundant. People are ignorant; what they lack is clear information from epistemic authorities. This should be achievable without instrumentalizing knowledge. Recall the example of the unexpected coral reef recovery. This finding is perfectly suited for educating the ignorant. All that is required is a certain level of communication from accredited authorities, like marine biologists, not only in communicating their findings but also in placing these findings in a broader picture. The Australian Institute of Marine Science report does exactly this, providing context on the study by placing it in the broader picture of the dangers of global warming for marine life [19]. If the knowledge is instrumentalized, people will potentially accrue true beliefs without knowledge. The belief is correct (people correctly recognize the dangers of climate change), but their understanding of the complexity of the event is misinformed. Hence, the belief comes apart from the larger web of facts (or highly probable propositions) that constitute the reliably true belief, putting the quality of knowledge into serious doubt. This can have real-world harms as a result: members of the public believe that climate change is harmful specifically because of the damage to the coral reefs, potentially resulting in a disproportionate focus of resources or an undue dependence of climate change beliefs on specific occurrences.
The second group of recipients are those who are aware of the current scientific/epistemic consensus but have rejected it. In cases of exceedingly verifiable truths, such as climate change, rejecting these truths amounts to rejecting the epistemic authorities directly. IK* is, therefore, in effect trying to convince members of CECs who believe otherwise: those who do not trust the epistemic authorities and, by extension, the mainstream itself. This is where IK* becomes self-defeating. Recall Nguyen’s proposed solution of trust building for dealing with (conspiratorial) echo chambers [11]. IK* does the opposite. By sacrificing epistemic procedure in favor of furthering knowledge dissemination, the core sentiment driving CECs is reinforced: epistemic authorities are untrustworthy and, instead of being concerned with the pursuit of truth, they are more concerned with an apparent agenda.
Critics might respond that IK* does not aim to convince either conspiracy theorists or the ignorant. Rather the aim is to convince those who already possess the minimum knowledge and open attitude to appreciate diverse information (the center); even though IK* might reinforce skepticism among conspiracy theorists and increase the polarization of worldviews, it might also convince a (potentially large) group of centrist epistemic agents who were previously undecided but are capable of assimilating new truths into their worldview.
We concede the public is not divided exclusively between trusting and distrusting epistemic agents. Even for the undecided center, however, IK* does more harm than good. When presenting the center with IK* towards some belief with significant scientific consensus, one of three things can happen: (1) some will be convinced, accruing true beliefs without gaining knowledge of the interconnected facts that make up the original position (e.g., climate change); (2) some will remain undecided, not changing their beliefs one way or another; and (3) some will reject the presented beliefs based on the nonvirtuous epistemic procedure, resulting in greater skepticism towards mainstream epistemic authorities.
When comparing a large, undecided center against two increasingly polarized sides, we argue the former is much preferred. An undecided center will encourage genuine discovery; instead of being coerced into what to believe through nonvirtuous epistemic practices, the public is invited to participate in the debate. The harms of polarization are clear, too. The benefits of having more people subscribe to reliable beliefs recognized by the mainstream do not outweigh the societal harms of polarization and misconstrued knowledge.
That being said, this attempt at reconciliation is speculative at best and invites us to make ethical judgments rather than epistemic ones. Ultimately, IK and IK* remain similarly understood as epistemic practices, and so, as of yet, we lack sound principles for distinguishing one from the other. In the remainder of the paper, we aim at providing a defense of IK* that deviates from the above speculative justifications.

7. Two Counterarguments to the Critique of IK*

One final consideration against our critique of IK* is that IK* may be justified by the demands of good scientific practice. In particular, one could argue that scientific progress is marked by the frequent use of false assumptions, idealizations, or unknown variables, which play various important roles in theoretical development and modeling12.
One such example can be found in quantum physics, where Schrödinger’s equation has proven to be extremely useful in determining the positions of quantum particles; yet, the underlying nature of the equation and the state of superposition remain a matter of speculation and ongoing debate [43]. Another example can be found in microeconomics, in which consumer choice models posit agents with highly psychologically unrealistic properties; yet, when used in the proper modeling contexts, these models can be shown to reliably predict certain kinds of consumer behavior [44]. Both of these examples could be viewed as instances of IK* insofar as they purposefully incorporate incomplete or nonfactive data into theories and models in the pursuit of producing some form of knowledge13.
Alternatively, the case can also be made that science communication, independent of knowledge production, also warrants IK*, especially when it comes to solving collective-action problems where the evidence or testimony of epistemic authorities gives rise to conflicting verdicts on the causes or potential outcomes of known threats. In climate modeling, for instance, much of the underlying processes and predictions are poorly understood, yet most would consider the urgency of preservative actions communicated by experts on climate change justified in spite of the unknowns regarding specific outcomes, be they economic or environmental [30].
In other words, some may be inclined to say that IK* is justified either if it promotes good scientific practices as determined by the relevant epistemic authorities or if it avoids or postpones existential harms caused by collective inaction because of uncertainty among the relevant epistemic authorities.
Both counterarguments are, without a doubt, compelling. Yet, their justifications fail to tell us why IK* is epistemically more virtuous than IK, which is what we are ultimately concerned with. Consider the latter counterargument first. The idea that IK* is justified because it reduces existential harms is certainly consistent with some virtues. But one will notice that these are moral virtues, not epistemic ones. If there is any epistemic value in harm reduction, it is the preservation of the lives and health of epistemic agents, and one could, thus, argue this is a necessary condition for any epistemic performance. So be it. But if we concede this point for IK*, then we also have to concede it for IK, and this is exactly what we are trying to avoid. In fact, conspiracy theorists already make appeals to harm reduction in support of their false worldviews—think about the rhetoric used by antivaxxers and antiscience subgroups. So, when it comes to harm reduction, even if it is fundamental to epistemic practice, there is nothing intrinsically epistemically virtuous about it—at least, nothing that draws a principled distinction between IK and IK*. This brings us to the former counterargument—that IK* is consistent with and promotes good scientific practice.
The problem with appealing to good scientific practice as a counterargument is that, while IK* may be a common feature in theory formation and model building and, thus, an important component of knowledge production, it is certainly not the case that all knowledge is valuable or, rather, equally valuable14. This question becomes particularly apt when we reflect on the history of “ignoble” scientific achievements, i.e., achievements whose relevance or application is either trivial or whose motivations are revealed to be self-serving or involve gross conflicts of interest15.
Consider the ongoing replication crisis in psychology and the behavioral sciences. While there is clear evidence of bad scientific practices (e.g., washing of bad data, republishing of the same research), it is evident that many psychologists and behavioral scientists take themselves to be doing honest work. This stands in contrast to the many claims that their research either lacks conceptual novelty or traffics in some form of self-plagiarism or data reproduction [48,49]. This is more than just a sign that scientists and epistemologists differ on what counts as “knowledge” production; it is a sign that much of what is produced under the heading “knowledge” either does not always advance understanding of the subject in question or (more worryingly) produces knowledge that runs contrary to understanding the subject.
We do not mean to suggest here that theorizing with false assumptions, idealizations, or unknown variables cannot aid in knowledge production; nor do we mean to suggest that knowledge production should be evaluated on the basis of the frequency of novel discoveries or ground-breaking developments. We recognize that much of the important work in science—whether measured in predictive or explanatory power—is often mundane and without major discovery; it is the grind that Kuhn describes as “normal science” [42]. Our point in claiming that not all knowledge is equal is that knowledge production may not be the best marker of epistemic virtue, precisely because what counts as knowledge production will vary across diverse disciplinary norms of prediction and explanation [50]. As such, what counts as knowledge with respect to each discipline is a question for philosophy of science, not epistemology. We think, then, that the counterargument from good scientific practice, like harm reduction, does not provide the right kind of justification for IK*.
This brings us back to the central issue. What we are concerned with here is whether IK* can be epistemically virtuous as a matter of principle, independently of the nonepistemic justifications that may warrant its practice. While we think that this question cannot be answered in a single paper, we do believe that one important step can be made towards resolving or, at least, advancing this issue. We propose that if IK* can be epistemically virtuous (in principle) and, therefore, distinct from IK, the answer resides in how IK* promotes understanding, not knowledge.

8. The Virtues of Understanding

From the perspective of virtue epistemology, the value of understanding differs from that of knowledge in a few key respects. First, understanding (on some readings of it) permits epistemic agents to hold some peripheral false beliefs about a subject or theory provided that their set of “core” beliefs is true and justified. This is sometimes referred to as the “weak-factive” view of understanding [51,52]. We can illustrate weak-factive understanding with the following example: one might understand how a car engine works by being able to provide a basic functional description. The engine consists of a fixed cylinder and a moving piston. The expanding combustion gases push the piston, which, in turn, rotates the crankshaft. Ultimately, through a system of gears in the powertrain, this motion drives the vehicle’s wheels.
This understanding, though rudimentary, not only expresses familiarity with the core principles of combustion and engine design but is also enough for one to make at least elementary interventions if the engine breaks down. The same individual may have false beliefs (or no beliefs) about what type of metal the engine parts are forged from or what chemicals the fuel is composed of. But these beliefs do not hinder the agent from understanding the engine’s basic operations. Further, an understanding of this sort allows the agent to acquire new true beliefs, e.g., that there are two kinds of internal combustion engines, the spark ignition gasoline engine and the compression ignition diesel engine, or that most of these are four-stroke cycle engines, meaning four piston strokes are needed to complete a cycle, which aid in new practical interventions. Thus, even under the “weak-factive” reading, understanding (contra mere knowledge) provides a better basis for assessing epistemic virtue. Or better: it provides a basis for assessing what makes certain sets of knowledge more virtuous than others.
The virtues of understanding can be extolled further. Not only does understanding accommodate false beliefs in support of greater or future knowledge production, but it can also be argued that understanding is a superior mark of epistemic virtue precisely because it is a different, “higher” kind of cognitive achievement compared to knowledge accumulation [15,47,53,54,55]. That is to say, it involves more than merely memorizing propositions; it turns on a more comprehensive and abstract appreciation of the topic or subject at hand, allowing the agent to engage with and test their understanding through self-guided means. To appreciate the practical value of understanding, consider the following passage by Pritchard:
Imagine someone who, for no good reason, concerns herself with measuring each grain of sand on a beach, or someone who, even while being unable to operate a telephone, concerns herself with remembering every entry in a foreign phonebook. In each case, such a person would thereby gain lots of true beliefs but, crucially, one would regard such truth-gaining activity as rather pointless. ([56] p. 102)
Returning to the example above, understanding engine combustion is not merely valuable because it contains independent facts about how engines and combustion work; it is valuable because, assuming one possesses it, it affords new cognitive and practical opportunities for making sense of and producing true beliefs about car engines and their operations. Merely possessing (many) true beliefs, by contrast, is not always very interesting or practically useful.
This brings us back to the counterargument from good scientific practice. Even if IK* is consistent with good scientific practice, there are clearly cases of knowledge production without evident understanding. But this is only a problem if we think that knowledge production is a mark of epistemic virtue. Conversely, if one adopts understanding as the ultimate aim of epistemic virtue, then it seems that IK* is not merely warranted but unavoidable in many scientific, as well as ordinary, epistemic practices. As Batterman [57] argues, it is not just that idealizations and fictions play various important roles in theory development and model building; it is that there could be no scientific progress without them, since any speculation into the unknown necessarily involves assumptions and missing variables to be filled in along the way. In this sense, the value of understanding is not exhausted by the weak-factive view, in which peripheral false beliefs are permitted for making sense of perfunctory epistemic activities; even our most trusted sciences engage, or have in the past engaged, in IK* practices.
How does this address the question of whether IK* can be, in principle, epistemically virtuous? On the one hand, the brief discussion above hints that there are cases or situations in which IK* plays a central role in understanding, and if we take understanding to be a cognitively “higher” achievement (e.g., because it affords gaining new knowledge and, hence, new understanding), then this does constitute an important distinction from the previous evaluation of IK* as mere knowledge production. However, a looming question arises: what stops IK from producing understanding and, as such, assuming the same merits as IK* in terms of epistemic virtue? After all, if understanding permits epistemic agents to hold false beliefs, and if scientific progress requires leaps of faith that are codified in assumptions and unknown variables, why does IK not produce understanding where IK* apparently does?
The way to potentially resolve some of this tension is to reevaluate the specific conceptualization of what constitutes understanding and, specifically, what level or type of false beliefs are permissible for something to still qualify as understanding. A strong-factive view of understanding is not very useful, as such a view permits few to no false beliefs, disqualifying both IK and IK* on the same grounds.
A weak-factive view (as mentioned earlier) is equally problematic in that it might be overly permissive, qualifying both IK and IK* as promoting understanding. A more moderate view, in line with Kvanvig [53], might provide some resolution. If the moderate view of understanding states that false beliefs are acceptable as long as they do not constitute “core” beliefs, we could build a case that IK differs from IK* in that IK has core false beliefs, whereas IK* only has “peripheral” false beliefs (which, on this view, are permissible for building understanding). This does raise the question of what differentiates a core belief from a peripheral belief, a question currently left largely unexplored in the literature. Though a comprehensive resolution falls outside the scope of this paper, a simple distinction could be argued as follows:
Assuming IK* is rooted in “good” scientific practice, we can argue that scientific practice is by definition focused on ensuring that core beliefs are as true as possible. The scientific method places high emphasis on (i) rooting understanding in carefully curated evidence, grounded in statistical methods and error terms, and (ii) preserving accepted and established facts. IK lacks both characteristics. Though these two elements are far from a guarantee of virtuous understanding, the fact that IK does not attempt to pursue either in the first place differentiates the two in epistemic virtue.

9. Conclusions

From the perspective of virtue epistemology, IK is part of what makes CECs so problematic. However, this problem concerns mainstream and respected epistemic authorities as well. The mere fact that the mainstream instrumentalizes knowledge towards a set of verified truths does not, in and of itself, justify such epistemic practices. It is problematic in its incompatibility with the many cognitive virtues that support the production of knowledge, and so it exhibits the same vicious character traits as IK, such as testimonial injustice. Moreover, the instrumentalization of knowledge threatens to further polarize politicized issues, thus reinforcing the motivations of CECs and their potential harms.
Yet, by treating understanding as epistemically more virtuous than mere knowledge, we can draw a principled distinction between IK and IK*: while IK may occasionally traffic in true beliefs, it can never give rise to understanding, given its general lack of respect for epistemic virtue. By contrast, IK* can, when pursued virtuously, promote understanding. But this raises the question: when is IK* epistemically virtuous, and when is it not?
Perhaps the answer is teleological: IK* is epistemically virtuous if and only if it is used to arrive at a broader understanding. The problem is that, in practice, it remains up for debate which criteria are most appropriate for distinguishing the procedures that promote understanding and, moreover, how to determine when an agent possesses or truly aims at understanding (as opposed to merely promoting one’s worldview). One criterion put forth by this article is to distinguish different kinds of understanding according to the truth of their core beliefs. IK by definition traffics in false core beliefs, whereas IK* only sometimes involves false beliefs, and these beliefs are not always crucial to understanding the subject in question. This then raises the question of how to differentiate between core and peripheral beliefs, for which there is currently no principled method. Even if one were conceived, there remains the problem of differences in epistemic procedure across fields (e.g., one cannot use the same criteria to adjudicate debates about instrumentalized knowledge in psychology versus, say, physics).
What we do know is that we should emphasize the importance of epistemic procedure in the evaluation of politicized arguments, even in cases where the worldview is a widely established truth. Rather than spreading evident truths with whatever tools are available, mainstream discourse should focus on promoting understanding of those truths, as opposed to merely disseminating factual claims. Conflicting evidence should be seen as an opportunity to explore the underlying interconnected facts that constitute a contested worldview. This is how understanding flourishes.

Author Contributions

Writing—original draft preparation, J.S. and J.G.; writing—review and editing, J.S. and J.G. All authors have read and agreed to the published version of the manuscript.

Funding

James Grayot is funded by “Fundação para a Ciência e a Tecnologia—CEEC 4th edition”.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

We would like to thank Anand Vaidya and two anonymous referees for their highly valuable comments.

Conflicts of Interest

The authors declare no conflict of interest.

Notes

1. There is some debate over what constitutes an epistemic authority and on what grounds they should or should not be relied upon (see, e.g., [5,6]). This article understands mainstream epistemic authorities as those authorities necessarily relied upon for information (for practical purposes), such as the free press, certain institutions, and science communicators, the same authorities many conspiracy theorists typically hold in high distrust ([3], pp. 121–122).
2. Reliable beliefs are defined as beliefs that can reasonably be taken to be true based strictly on an objective epistemic evaluation.
3. We use the phrase “worldview” in the context of IK to refer to a collection of beliefs or belief-like states that shape the point of view of the conspiracy theorist.
4. For the sake of distinguishing IK from IK*, we need to set aside classical debates concerning whether knowledge consists in justified true belief. The examples we use (e.g., climate science) suggest that questions over knowledge production and dissemination cannot be resolved merely by appeals to traditional epistemic concepts, like justified true beliefs. This is because the topics under investigation are composed of dense networks of beliefs, which have diverse evidential and evaluational bases. The underlying epistemic structure of such topics does not lend itself to evaluation according to binary truth relations, i.e., true or false beliefs.
5. Rather than a conclusive definition, consider this the key characteristic of conspiracy theories in the context of this paper.
6. There is an ongoing debate in the conspiracy theory literature about whether conspiracy theories should be evaluated as a class or assessed individually, which has implications for the blameworthiness of belief in conspiracy theories. This article refers to conspiracy theories as a class in the context of echo chambers but otherwise aims to avoid discussions of blameworthiness and questions of delineation. For further reference, see [8,9,10].
7. This is not to say that there are no conceivable cases in which it could be both rational and virtuous to disregard mainstream epistemic authorities (under authoritarian governments, for example). In most cases, however (and in the instances this paper is concerned with), the outright dismissal of mainstream epistemic authorities is very hard to justify.
8. Mainstream epistemic authorities are not necessarily the same as epistemic experts; it is sufficient to say they are those authorities that both hold some perceived epistemic authority with the general public and recognize the authority of epistemic experts (think of government institutions and certain press outlets and newspapers).
9. Climate change is a serious and urgent risk despite the mentioned uncertainties [32,33]; the question is whether that is sufficient to engage in IK*.
10. The shortest vaccine development process previously was that of the mumps vaccine, which took four years [37].
11. For reference, see [38,39,40].
12. Of course, whether truth is ever achievable in practice [41] and how this plays out historically [42] is another matter.
13. However, whether the inclusion of unknown variables and idealizations in theories constitutes having false beliefs is a question that cannot be addressed in full here (see [45] for discussion). Nevertheless, the case can be made that, even if they do constitute false beliefs, they are still epistemically valuable and do not run contrary to the pursuit of truth [46].
14. Note, our claim that not all knowledge is valuable is not a remark about the comparative values or merits of knowledge produced by different disciplines (e.g., natural sciences versus social sciences); rather, it is a remark about what makes knowledge valuable per se: what makes knowledge worth having? (see [17,47]).
15. This is not meant to be a critique of the similarly named “Ig Nobel” prize (see, e.g., https://improbable.com/, accessed on 5 June 2023), which, despite its name, aims at communicating serious scientific achievements that are relevant despite their comic or apparently irrelevant nature.

References

  1. Clarke, S. Conspiracy theories and conspiracy theorizing. In Conspiracy Theories; Routledge: Oxfordshire, UK, 2019; pp. 77–92. [Google Scholar]
  2. Cohnitz, D. On the rationality of conspiracy theories. Croat. J. Philos. 2018, 18, 351–365. [Google Scholar]
  3. Keeley, B.L. Of Conspiracy Theories. J. Philos. 1999, 96, 109–126. [Google Scholar] [CrossRef]
  4. Levy, N. Radically socialized knowledge and conspiracy theories. Episteme 2007, 4, 181–192. [Google Scholar] [CrossRef]
  5. Kruglanski, A.W.; Raviv, A.; Bar-Tal, D.; Raviv, A.; Sharvit, K.; Ellis, S.; Mannetti, L. Says who? Epistemic authority effects in social judgment. Adv. Exp. Soc. Psychol. 2005, 37, 345–392. [Google Scholar]
  6. Zagzebski, L.T. Epistemic Authority: A Theory of Trust, Authority, and Autonomy in Belief; Oxford University Press: Oxford, UK, 2012. [Google Scholar]
  7. Duetz, J.C.M. What Does It Mean for a Conspiracy Theory to Be a ‘Theory’? Soc. Epistemol. 2023, 37, 4. [Google Scholar] [CrossRef]
  8. Harris, K.R. Some problems with particularism. Synthese 2022, 200, 447. [Google Scholar] [CrossRef]
  9. Buenting, J.; Taylor, J. Conspiracy theories and fortuitous data. Philos. Soc. Sci. 2010, 40, 567–578. [Google Scholar] [CrossRef]
  10. Dentith, M.R.; Keeley, B.L. The Applied Epistemology of Conspiracy Theories: An Overview; Routledge: London, UK, 2018. [Google Scholar]
  11. Nguyen, C.T. Echo chambers and epistemic bubbles. Episteme 2020, 17, 141–161. [Google Scholar] [CrossRef]
  12. Levine, S.; Chang, A. The ‘Big Lie’ Advocates Threatening Free and Fair Elections across the US. Guardian. 2022. Available online: https://www.theguardian.com/us-news/2022/may/24/big-lie-candidates-election-tracker-trump (accessed on 3 April 2023).
  13. Eggers, A.C.; Garro, H.; Grimmer, J. No evidence for systematic voter fraud: A guide to statistical claims about the 2020 election. Proc. Natl. Acad. Sci. USA 2021, 118, e2103619118. [Google Scholar] [CrossRef]
  14. DePaul, M. Value monism in epistemology. In Knowledge, Truth, and Duty: Essays on Epistemic Justification, Responsibility, and Virtue; Oxford University Press: Oxford, UK, 2001; pp. 170–183. [Google Scholar]
  15. Riggs, W.D. Balancing our epistemic goals. Noûs 2003, 37, 342–352. [Google Scholar] [CrossRef]
  16. Zagzebski, L. From reliabilism to virtue epistemology. In The Proceedings of the Twentieth World Congress of Philosophy; Philosophy Documentation Center: Charlottesville, VA, USA, 2000; Volume 5, pp. 173–179. [Google Scholar]
  17. Zagzebski, L. The search for the source of epistemic good. Metaphilosophy 2003, 34, 12–28. [Google Scholar] [CrossRef]
  18. Van Prooijen, J.W.; Douglas, K.M. Belief in conspiracy theories: Basic principles of an emerging research domain. Eur. J. Soc. Psychol. 2018, 48, 897–908. [Google Scholar] [CrossRef] [PubMed]
  19. Emslie, M. Long-Term Monitoring Program Annual Summary Report of Coral Reef Condition 2021/22: Continued Coral Recovery Leads to 36-Year Highs Across Two-Thirds of the Great Barrier Reef; Australian Government & Australian Institute of Marine Science. 2022. Available online: https://www.aims.gov.au/sites/default/files/2022-08/AIMS_LTMP_Report_on%20GBR_coral_status_2021_2022_040822F3.pdf (accessed on 24 May 2023).
  20. IPCC. Climate Change 2023: Synthesis Report. Contribution of Working Groups I, II and III to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change; Core Writing Team, Lee, H., Romero, J., Eds.; IPCC: Geneva, Switzerland, 2023; p. 184. [CrossRef]
  21. Bellard, C.; Bertelsmeier, C.; Leadley, P.; Thuiller, W.; Courchamp, F. Impacts of climate change on the future of biodiversity. Ecol. Lett. 2012, 15, 365–377. [Google Scholar] [CrossRef]
  22. Cook, J.; Oreskes, N.; Doran, P.T.; Anderegg, W.R.; Verheggen, B.; Maibach, E.W.; Carlton, J.S.; Lewandowsky, S.; Skuce, A.G.; Green, S.A.; et al. Consensus on consensus: A synthesis of consensus estimates on human-caused global warming. Environ. Res. Lett. 2016, 11, 048002. [Google Scholar] [CrossRef]
  23. Oreskes, N. The scientific consensus on climate change. Science 2004, 306, 1686. [Google Scholar] [CrossRef]
  24. Richards, Z. Record coral cover doesn’t necessarily mean the Great Barrier Reef is in good health (despite what you may have heard). Guardian. 2022. Available online: https://theconversation.com/record-coral-cover-doesnt-necessarily-mean-the-great-barrier-reef-is-in-good-health-despite-what-you-may-have-heard-188233 (accessed on 27 May 2023).
  25. Havranek, T.; Irsova, Z.; Janda, K.; Zilberman, D. Selective reporting and the social cost of carbon. Energy Econ. 2015, 51, 394–406. [Google Scholar] [CrossRef]
  26. Olausson, U. Global warming—Global responsibility? Media frames of collective action and scientific certainty. Public Underst. Sci. 2009, 18, 421–436. [Google Scholar] [CrossRef]
  27. Weingart, P.; Engels, A.; Pansegrau, P. Risks of communication: Discourses on climate change in science, politics, and the mass media. Public Underst. Sci. 2000, 9, 261. [Google Scholar] [CrossRef]
  28. Boykoff, M.T. Who Speaks for the Climate?: Making Sense of Media Reporting on Climate Change; Cambridge University Press: Cambridge, UK, 2011. [Google Scholar]
  29. Boykoff, M.T.; Yulsman, T. Political economy, media, and climate change: Sinews of modern life. Wiley Interdiscip. Rev. Clim. Chang. 2013, 4, 359–371. [Google Scholar] [CrossRef]
  30. Saltelli, A.; Funtowicz, S. When all models are wrong. Issues Sci. Technol. 2014, 30, 79–85. [Google Scholar]
  31. Saltelli, A.; Giampietro, M. The fallacy of evidence based policy. arXiv Preprint 2015, arXiv:1607.07398. [Google Scholar]
  32. Leuschner, A. Uncertainties, plurality, and robustness in climate research and modeling: On the reliability of climate prognoses. J. Gen. Philos. Sci. 2015, 46, 367–381. [Google Scholar] [CrossRef]
  33. Lloyd, E.A. Model robustness as a confirmatory virtue: The case of climate science. Stud. Hist. Philos. Sci. Part A 2015, 49, 58–68. [Google Scholar] [CrossRef]
  34. Fricker, M. Epistemic Injustice: Power and the Ethics of Knowing; Oxford University Press: Oxford, UK, 2007. [Google Scholar]
  35. Douglas, H.E. The moral responsibilities of scientists (tensions between autonomy and responsibility). Am. Philos. Q. 2003, 40, 59–68. [Google Scholar]
  36. Douglas, H. Science, Policy, and the Value-Free Ideal; University of Pittsburgh Press: Pittsburgh, PA, USA, 2009. [Google Scholar]
  37. Ball, P. What the lightning-fast quest for Covid vaccines means for other diseases. Nature 2021, 589, 16–18. [Google Scholar] [CrossRef] [PubMed]
  38. Bok, K.; Sitar, S.; Graham, B.S.; Mascola, J.R. Accelerated COVID-19 vaccine development: Milestones, lessons, and prospects. Immunity 2021, 54, 1636–1651. [Google Scholar] [CrossRef]
  39. Riad, A.; Pokorná, A.; Attia, S.; Klugarová, J.; Koščík, M.; Klugar, M. Prevalence of COVID-19 vaccine side effects among healthcare workers in the Czech Republic. J. Clin. Med. 2021, 10, 1428. [Google Scholar] [CrossRef]
  40. El-Shitany, N.A.; Harakeh, S.; Badr-Eldin, S.M.; Bagher, A.M.; Eid, B.; Almukadi, H.; Alghamdi, B.S.; Alahmadi, A.A.; Hassan, N.A.; Sindi, N.; et al. Minor to moderate side effects of Pfizer-BioNTech COVID-19 vaccine among Saudi residents: A retrospective cross-sectional study. Int. J. Gen. Med. 2021, 14, 1389–1401. [Google Scholar] [CrossRef]
  41. Popper, K. The Logic of Scientific Discovery; Routledge: Oxfordshire, UK, 2005. [Google Scholar]
  42. Kuhn, T.S. The Structure of Scientific Revolutions; University of Chicago Press: Chicago, IL, USA, 2012. [Google Scholar]
  43. Maudlin, T. Philosophy of Physics: Quantum Theory; Princeton University Press: Princeton, NJ, USA, 2019; Volume 33. [Google Scholar]
  44. Ross, D. Philosophy of Economics; Springer: Berlin/Heidelberg, Germany, 2016. [Google Scholar]
  45. Weisberg, M. Three kinds of idealization. J. Philos. 2007, 104, 639–659. [Google Scholar] [CrossRef]
  46. Reiss, J. Idealization and the aims of economics: Three cheers for instrumentalism. Econ. Philos. 2012, 28, 363–383. [Google Scholar] [CrossRef]
  47. Pritchard, D. Anti-luck virtue epistemology. J. Philos. 2012, 109, 247–279. [Google Scholar] [CrossRef]
  48. Flis, I. Psychologists psychologizing scientific psychology: An epistemological reading of the replication crisis. Theory Psychol. 2019, 29, 158–181. [Google Scholar] [CrossRef]
  49. Hanfstingl, B. Should we say goodbye to latent constructs to overcome replication crisis or should we take into account epistemological considerations? Front. Psychol. 2019, 10, 1949. [Google Scholar] [CrossRef] [PubMed]
  50. Beck, L.; Grayot, J.D. New functionalism and the social and behavioral sciences. Eur. J. Philos. Sci. 2021, 11, 103. [Google Scholar] [CrossRef]
  51. Elgin, C. Understanding and the facts. Philos. Stud. 2007, 132, 33–42. [Google Scholar] [CrossRef]
  52. Zagzebski, L. On Epistemology; Wadsworth Pub Co., Ltd.: Wadsworth, OH, USA, 2009. [Google Scholar]
  53. Kvanvig, J.L. The Value of Knowledge and the Pursuit of Understanding; Cambridge University Press: Cambridge, UK, 2003. [Google Scholar]
  54. Pritchard, D. Knowledge, understanding and epistemic value. R. Inst. Philos. Suppl. 2009, 64, 19–43. [Google Scholar] [CrossRef]
  55. Pritchard, D. Knowledge and understanding. In Virtue Epistemology Naturalized: Bridges between Virtue Epistemology and Philosophy of Science; Springer International Publishing: Cham, Germany, 2014; pp. 315–327. [Google Scholar]
  56. Pritchard, D. Recent work on epistemic value. Am. Philos. Q. 2007, 44, 85–110. [Google Scholar]
  57. Batterman, R.W. Idealization and modeling. Synthese 2009, 169, 427–446. [Google Scholar] [CrossRef]

Share and Cite

MDPI and ACS Style

Siegmann, J.; Grayot, J. The Vices and Virtues of Instrumentalized Knowledge. Philosophies 2023, 8, 84. https://doi.org/10.3390/philosophies8050084
