*3.4. Is Equity Possible?*

As with any data-driven decision-making, the data itself can be flawed, and the deductions made from the data can be equally problematic. First, developing nations will likely encounter challenges gathering accurate data on their people, particularly where race, gender, and socioeconomic status are concerned. Many developing countries have social and religious systems that discriminate against people from certain racial and ethnic backgrounds [4,23], while other countries marginalize people from Queer backgrounds [24]. As a result, many developing nations may lack leadership that values human beings equally or equitably, producing large datasets that are incomplete or that misrepresent people's true, authentic identities.

When discussing how data can inform decision-making, Macfadyen et al. called for a more effective overall assessment paradigm in education, as many data-driven decisions are made using incomplete data and may inform targeted interventions that are neither timely nor efficient [18]. Similarly, Nazarenko and Khronusova asserted that some forms of data are much easier to collect than others, comparing electronic standardized test data to word-of-mouth communication [15]. The authors argued that word-of-mouth communication may be essential to understanding how educational organizations can implement change, but "verified data collection of this kind is practically impossible" [15] (p. 678).

Johnson went further in their discussion of the integrity of big data, explaining that data mining and the problems with big data go deeper than poor methodology [21]. Johnson claimed that an inherent feature of science and technology is how data collection instruments and strategies are woven into "a complex web of technical and social interdependencies" [21] (p. 7), such as administrator priorities, changing student demographics, and unsteady influxes of human and financial resources. Johnson therefore argued that "Design intent and assumptions about user behavior are especially significant sources of embedded values in technologies" [21] (p. 7), suggesting that educational leaders and policymakers must understand who implemented the data collection measures and which specific social forces may have influenced those approaches.

Regarding data-driven decision-making, Crossley raised important questions, including whether researchers "should ask whose capacity will be strengthened by new initiatives, whose values and approaches to research will be prioritized, whose modalities will be applied—and do these meet local needs, priorities and agendas?" [17] (p. 22). Using the European Union and the United Kingdom as an example, Crossley questioned the value of "expensive big science approaches to social research that are increasingly favored in the UK" and whether such approaches "have the best potential to foster the strengthening of research capacity within low-income countries" [17] (p. 22). Here, Crossley understood that what may be good for one educational organization or context may not be good for another, yet both developing and developed nations may be tempted to overgeneralize data and its implications when individual data initiatives are best suited to a particular educational context [17].

Buchanan and McPherson elaborated on this false sense of data integrity when discussing Australia's national testing program, which evaluates primary and secondary student progress and teaching effectiveness [16]. In their critique, the authors suggested that Australia had modeled its high-stakes testing program on those of other developed nations, a strategy ill suited to Australia, which is known for its stark contrast between rural and urban school districts [16]. In all, Buchanan and McPherson argued that the primary justification for the testing program was to formalize a mechanism that measures and produces good teaching, but the program ultimately equated "student achievement to a crude test result" [16] (p. 31), which did little to inform Australia's idiosyncratic school system at both the regional and national levels.

In U.S. contexts, Gillborn et al. criticized common uses of big data, again targeting national testing programs as Buchanan and McPherson did [22]. With a critical lens toward racial equity, Gillborn et al. argued that "National testing programs, such as the No Child Left Behind (NCLB) reforms in the US and the use of school performance tables in England, have popularized the idea that numbers can be used to expose (and change) failing schools" [22] (p. 161). However, as the authors reasoned, "commentaries [on these programs] rarely include any detail about the relatively small samples" [22] (p. 161), in some instances numbering only 200, yet the results were generalized across tens of thousands of schools. In this regard, Gillborn et al. implied that data can be flawed or can fail to measure what it is intended to measure, while the interpretation and implementation of that data to inform policy and practice can be equally harmful [22]. Furthermore, such neoliberal policies have redirected public resources to private sectors, reducing public educational funding, widening the equity gap between low- and high-income communities, and exacerbating racial equity gaps in U.S. contexts [22].
