Article

Plague and Climate in the Collapse of an Ancient World-System: Afro-Eurasia, 2nd Century CE

by
Daniel Barreiros
Institute of Economics, Federal University of Rio de Janeiro, Rio de Janeiro 22290-902, Brazil
Soc. Sci. 2025, 14(9), 536; https://doi.org/10.3390/socsci14090536
Submission received: 23 July 2025 / Revised: 28 August 2025 / Accepted: 1 September 2025 / Published: 4 September 2025

Abstract

This article examines the potential role of the Antonine Plague (165–180 CE) and climate change in the mid-2nd-century collapse of the Afro-Eurasian world-system. In the model proposed by Gills and Frank, the world-system cycles between phases of integration (A) and disintegration (B). Integrative phases are marked by increasingly complex exchanges of goods, services, information, and populations, which enhance connectivity and intensify the circulation of matter and energy. Yet, this very complexity, while driving growth and expansion, also generates systemic vulnerabilities. The plague and climate change are examined here as critical shocks that triggered the shift from an A phase to a B phase, destabilizing interconnected regions such as the Roman Empire in the West and the Han Dynasty in China. The demographic losses and logistical strains of the pandemic eroded the integrative structures underpinning Afro-Eurasian connectivity, creating conditions for prolonged disintegration. These developments are further situated within the broader history of the Silk Roads, whose role in fostering transcontinental connections had reached a peak in the centuries preceding the crisis. The analysis underscores how pandemics like the Antonine Plague, together with episodes of abrupt climate change, can act as decisive agents in the disintegration phases of world-systems, reshaping the trajectories of complex societies and accelerating the collapse of established networks.

1. Introduction

Between the first century BCE and the second century CE, civilizations stretching from the Mediterranean basin to the far eastern fringes of Asia were bound together by a dense network of overland and maritime routes that facilitated an unprecedented exchange of goods, people, and ideas. Networks such as the famed Silk Roads functioned as the arteries of a vast Afro-Eurasian world-system, through which precious goods, essential raw materials, technologies, information, and even religious doctrines circulated with increasing regularity. This circulation sustained the prosperity of empires such as the Roman, Parthian, Kushan, and Han, securing not only the wealth of urban elites but also the political and administrative cohesion of territories that ultimately depended on the constant flow of energy, matter, and information.
However, this phase of virtuous integration contained within it a structural paradox: the more complex and functionally specialized this intercontinental network became, the more vulnerable it was to systemic shocks. Against this backdrop, this article proposes to examine how this world-system took shape, and how a “perfect storm” of climatic, economic, and biological factors precipitated its fragmentation. To this end, we will first discuss the theoretical and methodological foundations that underpin the concept of an ancient world-system, then reconstruct the evidence of its material and symbolic integration, analyze the succession of climatic and epidemic catastrophes that undermined it, and finally reflect on the role of pandemics—especially the Antonine Plague—as decisive vectors of systemic collapse.

2. Could There Have Been a World-System in the Ancient World?

For many decades, historiography and historical sociology have underestimated the importance of the flows of information, goods, services, and people in promoting integration between different societies prior to the European maritime expansion of the 16th century. These flows were, in most cases, regarded as intermittent or incapable of generating a genuine systemic dynamic capable of linking the destinies of various macro-human collectives dispersed across vast territories—even though scholars such as McNeill (McNeill 1982, 1997) had already pointed out that pre-modern societies were interconnected through systemic networks. More recently, a methodological renewal—particularly through computational modeling and statistical tools—has been consolidating a clearer image of the dynamics of integration, conflict, and cooperation among ancient civilizations, revealing their entanglement rather than their isolation within segregated local circuits (Crabtree 2016; Christiansen and Altaweel 2006; Djurdjevac Conrad et al. 2018; Leidwanger 2013).
Long before the advancement of these methodological transformations, the intellectual community of world-system scholars had already expressed growing dissatisfaction with the notion that the pre-modern world was a patchwork of civilizations, composed of societies that were related, but not effectively integrated. This dissatisfaction stemmed from critiques by several world-systems scholars who challenged Wallerstein’s claim (Wallerstein 1974, 1976, 2011) that the framing of hierarchical relations between socioeconomic units across vast geographic spaces is inextricably tied to capitalism and to the history of the past five centuries. What if there had not been a pioneering world-economy originating in Europe, preceded by archaic integration experiences such as world-empires and micro-systems? What if, instead, there had existed a single, non-hyphenated World System evolving over the last five millennia—of which the contemporary interstate, economic, and intersocietal capitalist system would be merely the latest iteration? It was through the exploration of this possibility, and in many respects diverging from Wallerstein’s ideas, that the literature on the “5000 Years World System” emerged, with pioneering contributions by Frank (1990), Gills and Frank (1990, 1992), Frank and Gills (1993).
A few years later, Russian world-system researchers would present convincing mathematical evidence in support of Frank and Gills’ main conclusions, while also stressing their limitations (Korotayev 2008; Korotayev and Zinkina 2017; Korotayev et al. 2006a, 2006b). They sought to demonstrate empirically that the World System, far from constituting a modern phenomenon, has functioned as an expanding and continuous entity for millennia, predating not only Wallerstein’s “long sixteenth century,” but also extending well before the 30th century BCE, as posited by Gills and Frank (Korotayev 2008, p. 152). Focusing on the dynamics of technological innovation and employing a quantitative and mathematical approach, Korotayev and his colleagues postulate a deep history of social macrodynamics dating back to the Neolithic period, when the domestication of cereals and animals, the invention of the plow and the wheel, and the adoption of metallurgy spread progressively throughout the region that would one day become the Afro-Eurasian Oikumene (Korotayev et al. 2006b, pp. 21–24). Within this framework, the entanglement of China and West Asia within technological diffusion networks between the 3rd and 2nd millennia BCE—manifested in the circulation of innovations such as wheat and barley cultivation, bronze metallurgy, and wheeled transport—marked a decisive step in the consolidation of an interconnected system; and by the 1st century CE (or even earlier), this network extended from the Atlantic to the Pacific, with cultures displaying broadly comparable levels of technological and social complexity, sustained by the widespread diffusion of iron metallurgy, plow-based agriculture, and advanced transport technologies (Korotayev and Zinkina 2017, pp. 82–85; Korotayev et al. 2006a, pp. 30–32).
This long-standing technological interdependence would represent the main explanation for the hyperbolic trajectory of global population growth observed since 10,000 BCE, a dynamic interpreted as the demographic signature of a genuinely systemic whole. The global population between 1 CE and 1958 CE can be described with exceptional accuracy by a hyperbolic equation, displaying a remarkably high degree of fit with empirical data. The expansion of this model to earlier millennia revealed extremely high correlations between predicted and observed values (R² = 0.990 for 40,000 BCE–200 BCE and R² = 0.9966 for 500 BCE–1962 CE). The underlying logic rests on a non-linear positive-feedback mechanism: the larger the population, the greater the number of potential inventors, which accelerates the pace of technological innovation; these innovations, in turn, expand the environmental carrying capacity in relative terms and further reinforce population growth. This process, however, can only be sustained if innovations circulate widely, implying the existence of sufficiently dense diffusion networks capable of linking different regions. Indeed, the integration of the World System reached a qualitatively new stage in the first millennium BCE, when iron metallurgy spread within just a few centuries—rather than millennia. This episode highlighted the decisive role of innovation-diffusion networks, capable of explaining both the coherence of global population growth and the long-term durability of the World System (Korotayev 2008, pp. 136–38; Korotayev et al. 2006b, pp. 10–28, 147–62).
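The feedback mechanism described above can be made concrete with a short numerical sketch. Treating the number of potential inventors as proportional to population N, and innovation as raising carrying capacity in proportion, yields the quadratic growth law dN/dt = a·N², whose closed-form solution is the hyperbola N(t) = 1/(a·(t₀ − t)). The constants a and t₀ below are hypothetical, chosen purely for illustration; they are not the fitted values reported by Korotayev and colleagues.

```python
# Minimal sketch of the non-linear positive feedback described above.
# Assumption: potential inventors scale with population N, so dN/dt = a * N**2.
# The parameters a and t0 are illustrative, not empirically fitted values.

def hyperbolic_population(t, a=1e-6, t0=2027.0):
    """Closed-form solution of dN/dt = a * N**2: N(t) = 1 / (a * (t0 - t))."""
    return 1.0 / (a * (t0 - t))

def simulate(n0, a, t_start, t_end, dt=0.01):
    """Euler integration of the quadratic feedback dN/dt = a * N**2."""
    n, t = n0, t_start
    while t < t_end - 1e-9:
        n += a * n * n * dt  # more people -> more inventors -> faster growth
        t += dt
    return n

a, t0 = 1e-6, 2027.0
n_1900 = hyperbolic_population(1900.0, a, t0)
n_2000_closed = hyperbolic_population(2000.0, a, t0)
n_2000_sim = simulate(n_1900, a, 1900.0, 2000.0)

# The step-by-step feedback closely tracks the closed-form hyperbola,
# and growth accelerates sharply as t approaches the singularity t0.
print(n_2000_closed, n_2000_sim)
```

The contrast with ordinary exponential growth is the crux of the argument: under the quadratic law the per-capita growth rate itself rises with population size, which is why the curve bends ever more steeply rather than growing at a constant relative rate.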
Returning to Wallerstein, we note that Pieterse (1988) suggested that he did not, in fact, develop a genuine theory of systems, but rather a rhetorical analogy intended to address structural economic inequalities on an international scale within the context of capitalism. We partly agree with this assessment and argue that some theoretical improvement is necessary for the concept of world-systems to transcend the arbitrary boundaries imposed by Wallerstein—especially the privilege granted to modern history at the expense of the many millennia of connected “world” histories that predate the year 1500 (McNeill 1995). We will attempt, albeit briefly, to propose an alternative that foregrounds the interconnections and mutual dependencies constituting a systemic dynamic, within which power relations and asymmetries are inscribed. Furthermore, we aim to acknowledge—although it is not the central focus of this work—“the autonomy of (sub)systems, in terms of spheres, levels and dimensions of social existence” (Pieterse 1988, p. 260), thus allowing us to investigate world-systems as nested structures (Chase-Dunn and Hall 1993; Mariani et al. 2019). In this framework, the notion of “world-system” sheds its fundamentally internationalist appeal (centered on relations between national economies demarcated by Westphalian political boundaries) and comes to incorporate both horizontal and vertical interconnections among entities situated at different levels of social organization.
By emphasizing interconnection, mutual dependency, and the chain of agency transmission—that is, the capacity for local actions to trigger analogous effects in other segments of the network—Frank (1991, pp. 1–2) grounds his proposal to expand the historical and geographical boundaries of world-systems analysis:
“I plead for writing a world history that is as comprehensive and systematic as possible. (…) This history should seek maximum ‘unity in the diversity’ of human experience and development. Therefore, we should not only make comparisons over time and space, we should also seek more connections among distant and seemingly disparate events at each historical point in time (…). The principal idea I advance is the principle, indeed the imperative, of doing a ‘macro’ world system history. The main reason to do so is that, as the old adage goes, this historical whole is more than the sum of its parts. This holistic principle does not deny the necessary ‘micro’ history of its parts. However, it is necessary to remember that all the parts are also shaped by—and can only be adequately understood in relation to—their participation in the whole and their relations with other parts”.
Frank argues that world history should focus on the historical connections and interrelations between distant and seemingly unrelated events at any given moment in time, and that these connections must be investigated in a systematic and coherent manner. The systemic network thus appears as more than a mere collection of parts; each component is shaped by its insertion into the whole, and its nature can only be fully understood in relation to the other parts. To a significant extent and despite their broader divergences from Frank and Gills, Chase-Dunn and Hall (1993, p. 855) uphold a convergent idea: world-systems are
“(…) intersocietal networks in which the interactions (e.g., trade, warfare, intermarriage) are important for the reproduction of the internal structures of the composite units and importantly affect changes that occur in these local structures”.
Before turning to aspects of ancient world-systems, let us first consider how a world-system—particularly in its contemporary form, the capitalist world-economy—might be described, drawing on elements first proposed by Wallerstein. Some of these elements undoubtedly merit sustained attention, for they speak to structural realities that unfold in the slowest rhythms the longue durée can accommodate.
First and foremost, we must state that world-systems are networks of hierarchically organized collective agents within a division of labor:
“We have defined a world-system as one in which there is extensive division of labor. This division is not merely functional—that is, occupational—but geographical. That is to say, the range of economic tasks is not evenly distributed throughout the world-system. In part, this is the consequence of ecological considerations, to be sure. But for the most part, it is a function of the social organization of work, one which magnifies and legitimizes the ability of some groups within the system to exploit the labor of others, that is, to receive a larger share of the surplus”.
Within these networks of agents, the complementarity among local economic subsystems generates a structure of power and privilege in which certain human collectives asymmetrically appropriate a disproportionate share of the economic product of others. As a result, socioeconomic inequalities emerge with a fractal-like structure, displaying self-similar patterns across different scales (Grimes 2017): local elites reproduce their power and wealth by imposing an exploitative structure upon economically and politically subordinate masses, while systemic elites operate a similar structure of exploitation on a macro-spatial scale. This latter structure is imposed not only on the masses directly subordinate to it, but also on subaltern and “foreign” elites and masses, which bear the burdens of a dual-layered exploitation (Baker 1993; Gunaratne 2007).
This systemic division of labor is inherently necessary: the units that comprise the world-system depend on the flows of goods, services, and information that circulate through this hierarchical network to sustain their local structural integrities. Consequently, economic exchange—despite the unequal appropriation of its benefits—retains a functional dimension: entities at multiple scales—families, local groups, regional polities—are variably exposed to the risk of exchange-network collapse, in inverse proportion to the degree of economic autarky within which they can subsist.
In other words, the more the subsistence and/or expansion of these units within a multilevel structure depends on the regularity and predictability of flows, the more exposed they are to systemic rupture.
The general tendency is for the loss of autarky—and the ensuing dependence on the network—to unfold in a top-down movement. Accordingly, at a more superficial level of systemic integration, regional polities—chiefdoms and states—are expected to be the first entities whose structural integrity becomes dependent on intersocietal flows of goods, services, and information. Such dependence manifests through multiple channels, notably the circulation of prestige goods—crucial both for the regulation of intra-elite hierarchies and the symbolic consolidation of elite authority vis-à-vis the wider populace—alongside the strategic exchange of precious metals and raw materials for military purposes. It is only at a deeper level of integration—not so alien to the ancient world, as we shall see—that the most basic units of this nested structure (i.e., family groups) are absorbed, to varying degrees of intensity, by this circuit of systemic dependence—for example, when the supply of basic energy resources, such as grains and cereals, becomes wholly or partly dependent on circuits linked to long-distance trade.
The idea that the world-system constitutes a nested structure draws on Fernand Braudel’s original conception of the social phenomenon: society, he observed, is un ensemble des ensembles—a whole composed of connected and interdependent parts; systems within systems, arranged in successive, articulated layers (Shannon 1996, pp. 15–16).
Braudel conceived the world-economy through three analytical dimensions: one vertical, one horizontal, and one chronological. The vertical dimension reprises the classical formula that structures economic relations on a macro scale, with a core and a periphery. However, subtler are the effects of the horizontal and chronological dimensions on the concept of world-system. On the horizontal plane, Braudel (1981, 1982, 1984) proposed a tripartite stratification of the economy: (1) material civilization, corresponding to everyday practices of production and subsistence; (2) an intermediate level, represented by market relations; and (3) a superior stratum, in which mechanisms of control and ownership operate (the sphere of capital and dominant structures). This horizontal organization finds resonance in the chronological plane, with the famous division between short-term, medium-term, and longue durée: material civilization expresses itself in the temps événementiel, with its daily reproduction, though conditioned by behavior patterns rooted in deeper temporalities—the medium-term (market) conjunctures and the long-term structural rhythms, which function as probabilistic attractors affecting human agency.
With regard to pre-modern world systems, it seems essential to recognize that, at intermediate levels, forms of economic integration other than the market were in operation. Likewise, at the macro level, other forms of social organization—different from those prevailing in Western Europe during the so-called “long sixteenth century”—played a central role. Finally, we suggest an expansion of the analytical model into a four-dimensional framework, explicitly incorporating human agency at the three levels of social complexity outlined by Johnson and Earle (2000), and discussed previously.
Wallerstein was a reader of Polanyi, yet this influence was insufficient to confer a substantivist orientation upon world-systems theory. This limitation was significant, as it helped shape Wallerstein’s emphasis on the insurmountable differences between the modern world-system—conceived as a capitalist world-economy—and other systemic networks deemed “primitive,” such as the so-called world-empires or mini-systems. These differences do exist, without question; the issue is that they seem more or less profound depending on how one defines “economic”.
According to Polanyi (2012, pp. 63–65), the concept of economy has historically encompassed two meanings: the so-called “formal” and the “substantive”. In its formal sense, the economy refers to a set of behavioral phenomena governed by the relationship between means and ends, and economics, to the study of human decisions determined by the principle of scarcity. In short, both the phenomenon and its study are linked to the expectation of maximizing rationality—obtaining the greatest benefit, more for less. However, Polanyi argues (2012, pp. 69–72) that economic choice (no longer in the formal sense, but in the substantive sense) can be motivated independently of scarcity, being instead driven by political, ethical, cultural, moral, or religious concerns.
For scarcity to function as a socially determined driver of economic behavior, it is necessary for society’s institutions to recognize the possibility of multiple uses for a given means, and for these uses to be organized according to a hierarchy of ends. This normative structure creates a decision-making heuristic: the scarcity of a particular means comes to justify the choice of one use over another. The same principle applies to the relationship between different means and a single end—institutions must legitimate the prioritization of certain means over others in achieving a given objective.
When, despite the existence of alternatives, tradition—or any other social principle, such as patterns of distinction or displays of status—establishes that a given good or service is irreplaceable, scarcity ceases to function as an immediate driver of economic behavior. Even if the means in question are physically scarce, their symbolic or normative function removes them from the realm of rational substitution. For this reason, it is conceptually insufficient to define the economy solely as a problem of rational choice among alternative means.
Nevertheless, Wallerstein’s work has largely remained anchored in a dichotomy that positions the capitalist world-economy—driven by rational agents oriented toward profit maximization—as the paradigmatic object of world-systems analysis (Wallerstein 1976, p. 348). In contrast, earlier forms of intersocietal articulation in systemic networks are, in some way, relegated to a “primitive” condition. It is no coincidence that when classifying them as world-empires or mini-systems, the word “economy”—present in world-economy—disappears. It is implicitly assumed that, in the absence of a global market as an integrative mechanism and of rational, maximizing behavior, these systems were not “economically” articulated.
However, if we move away from Wallersteinian formalism and approach world-systems through a substantivist lens, we begin to perceive economic relations in everything that concerns the human need to subsist in their environments—just like any other living being. The economic phenomenon thus ceases to refer to a problem of decision-making: there are many historically attested ways of determining how means are to be produced, distributed, and used that do not necessarily involve maximizing returns or making ostensibly rational choices.
Economic relations, therefore, come to describe the dependencies that systemic agents develop in relation to their environment (in the form of resources) and their cooperation or conflict with other agents, considering both horizontal and vertical flows among entities situated across the three levels of social organization described by Johnson and Earle (2000).
“At the interactive level, therefore, the economy encompasses man as gatherer, cultivator, transporter, and creator of useful things; it encompasses nature, at times as a silent obstacle, at others as a facilitator; and it encompasses the interrelation of the two in a sequence of physical, chemical, physiological, psychological, and social events that occur on a greater or smaller scale”.
This dependency is mediated by institutionalized interactions governed by social norms that provide predictability to the behavior of the agents involved. Over the past five thousand years, these interactions have frequently involved relations of power, exploitation, and unequal appropriation. Through them, both physical needs (such as food, security, and shelter) and symbolic or social needs (such as status, prestige, honor, or devotion) are fulfilled through material means. The economic phenomenon thus consists precisely in the production, appropriation, and circulation of material goods to meet a variety of ends—material or immaterial—and this dynamic can occur with or without the presence of institutions that authorize maximizing behavior among agents.
It is therefore possible to have an understanding of the nature of world-systems that is compatible with the principles of substantive economics:
“The reaching across natural and cultural boundaries of human societies certainly takes concrete form in trade and commerce. But it also takes form in movements of technology, skills, ideas and faiths. In all these, the dynamics are complex. Some may expand, others may withdraw. But they may also overlap, interpenetrate, or diffuse according to rhythms of their own”.
From a substantivist perspective, the crucial difference between pre-capitalist world-systems and the capitalist world-economy becomes less pronounced, although one element raised by Wallerstein cannot be disregarded: the importance of essential goods. While the needs addressed by systemic horizontal and vertical flows at three levels (among families, local groups, and regional polities) may be both material and immaterial, when these flows involve sources of energy (such as food or fuel), the organicity of a world-system is significantly amplified.
By organicity, we refer to the condition of a system in which flows of goods, people, services, and information become structurally indispensable to the integrity of the entities that compose it. Inspired by Polanyi’s (2012) substantivist economics and Smil’s (2024) analysis of energy-complex systems, this concept expresses the extent to which such flows cease to be ancillary and come to constitute the very foundation of systemic cohesion: their interruption or significant reduction compromises not only the performance of individual components but also the continuity of the system as an emergent entity. A system’s organicity increases in proportion to the functional specialization of its parts; in other words, as a division of labor emerges, the internal units become increasingly incapable of operating in isolation, as they depend on one another to fulfill functions necessary for their reproduction. The greater the specialization and interdependence, the greater the organicity: a high degree of functional differentiation renders the system more sensitive to the discontinuity of the flows that sustain it. Thus, a highly organic system is one whose stability is directly linked to the continuity and intensity of its internal exchanges, without which its components tend to disintegrate, which means reverting to less complex organizational forms.
In this sense, Wallerstein is correct in emphasizing the role of essential goods in systemic dynamics. In his analysis, peripheral status manifests precisely in societies that integrate into the network as suppliers of food, energy, and raw materials, typically relying on subproletarian labor (precarious labor subjected to extra-economic coercion), with their surpluses appropriated by core societies, which possess greater technological capabilities and produce manufactured goods. It is this interdependence, he argues, that constitutes the world-system itself, because it cannot be interrupted without jeopardizing the operation of both core and peripheral economies (Wallerstein 2007, pp. 22–26). Internal exploitation and inequality thus emerge as products of a highly organic network, in which exchanges involving luxury goods are seen as ancillary rather than constitutive of the systemic relationship: suppliers of luxury goods are considered part of an “external area” to the world-system, and adaptation to a potential disruption in this trade is not regarded as problematic (Wallerstein 2011, p. 162).
Nevertheless, from a substantivist perspective, it could be argued that Wallerstein goes too far in dismissing the relevance of prestige goods. The sharp distinction between these two dynamics—the flows of essential goods and those of luxury items—fails to recognize that, substantively speaking, needs can be both material and immaterial; both must be understood as capable of causing disruptions, whether in the functioning of the network nodes (the entities that compose a world-system across the three levels mentioned earlier) or in the web of interactions itself. Rather than a dichotomy, we should see between these two poles a gradient of potency, expressed by the degree to which alterations in a given flow affect the organicity of the constituent entities and the systemic network as a whole. Within this gradient, it is clear that energy flows represent those with the greatest disruptive potential for the organicity of a system; and although luxury goods may have a comparatively lower potential, this is by no means negligible, depending on subsystemic aspects related to the political, religious and cultural organization of the constituent social units.
Jane Schneider (1991, p. 53) was among the first to recognize the significance of the luxury goods economy in the formation of pre-modern world-systems, rejecting the notion that such flows were merely epiphenomenal. Their importance was considerable in the symbolic reproduction of sub-systemic (i.e., local) power structures, as well as in the status games played among elites connected within the intersocietal network.
“Following Malinowski and Mauss on the power of the gift, a case can be made that luxury goods served more fundamental ends. The relationship of trade to social stratification was not just a matter of an elevated group distinguishing itself through the careful application of sumptuary laws and a monopoly on symbols of status; it further involved the direct and self-conscious manipulation of various semiperipheral and middle-level groups through patronage, bestowals, and the calculated distribution of exotic and valued goods”.
In addition, Schneider (1991, p. 61) suggested that the flows of precious metals, considered by Wallerstein as mere “preciosities”, functioned as systemic proxies for energy transfers at the subsystemic level:
“Because before the capitalist transformation, primitive means of transportation restricted the flow of bulk goods, we are inclined to think that energy was stagnant too. If, however, some luxuries, and in particular gold and silver, were readily convertible into energy resources across much of the Old World, their movement constituted a disguised transfer of essential goods”.
In short, gold and silver—being easily transportable over long distances given the technological conditions of the time—allowed agents across a vast transregional network to project their capacity to mobilize local energy resources. In other words, while the power to mobilize resources was exercised locally, the transfer of wealth enabled actors to extend their influence and resource-mobilization capabilities far beyond their immediate region. This operated alongside the mechanisms of reciprocity and redistribution practiced in the ancient world.
Furthermore, precious metals—and their circulation—could play a significant role in mobilizing surplus energy for tasks related to the maintenance and expansion of state structures (such as warfare and bureaucracy), among others. In all these cases, goods regarded as sumptuary from a Wallersteinian perspective fulfilled a relevant functional role in the production of order and social complexity within pre-capitalist world-systems, and must therefore be necessarily incorporated into the analysis.
Chase-Dunn and Hall (1993, pp. 855–60), however, observe that flows of essential and luxury goods constitute distinct, often nested, circuits. The network for essential goods typically defines the smallest zone of regional interaction, whereas the luxury-goods network can span much wider areas—exemplified by the Silk Roads, which linked centers in China, India, and Rome into an Afro-Eurasian supersystem. At times, as in the modern world, these networks may converge into the same circuits—a convergence that can serve as an important distinguishing feature among different types of world-systems. The fact that they are nested, however, does not mean these networks fail to feed back into each other. Although the circuits for the transit of energy resources are more geographically restricted, disruptions in the long-distance links that carry luxury goods can still cause local damage, as we shall see.
Equally important is the issue of “capital,” reinterpreted by Frank in a broader sense than by Wallerstein. Far from conceiving accumulation as a specific feature of the so-called capitalist world-economy and as an outcome of profit-seeking, maximizing behavior, Frank (1991, pp. 18–23) suggests that through the systemic flows of people, ideas, and material goods, “capital” has been unequally accumulated in different parts of the network over the past 5000 years. This accumulation takes the form of stocks of mobile wealth, material goods, infrastructural works, and human capital—a key source of innovation.
“For millennia and throughout the world (system), there has been capital accumulation through infrastructural investment in agriculture (e.g., clearing and irrigating land) and livestock (cattle, sheep, horses, camels, and pasturage for them); industry (plant and equipment as well as new technology for the same); transport (more and better ports, ships, roads, way stations, camels, and carts); commerce (money capital, resident and itinerant foreign traders, and institutions for their promotion and protection); military (fortifications, weapons, war ships, horses, and standing armies to man them); legitimacy (temples and luxuries); and of course the education, training, and cultural development of “human capital”.
I suggest that capital possesses, in and of itself, an expansive tendency, insofar as its accumulation provides the means for further accumulation. In this sense, capital can be understood as the result of mobilizing free energy to generate work, which then transforms matter according to specific ends. While this mobilization tends to renew and expand productive capacities through technological innovation, social organization, and systemic feedback, it ultimately relies on a finite stock of free energy. Thus, the expansionary logic of capital operates within a physical world that imposes real energetic and ecological limits.
“Is this process of accumulation, and the associated production, trade, finance, and their political organization independent of ecological possibilities and limitations? Just posing this question seems to answer it, especially in this age of heightened ecological degradation and awareness. Human social, economic, and political history has always been an adaptation to ecological circumstances and changes. Ecological possibilities and limitations helped determine the development of alluvial valley agricultural civilizations like ancient Sumer and Egypt”.
The concentration of people fostered major urban hubs7 of intense innovation, contributing in intangible ways to expanding the capacity to harness greater amounts of free energy and generate more work. This is, in fact, one of the main sources of asymmetry between the constituent entities of the network, giving rise to a distinct topography in which the interconnection between the parts does not imply symmetry, but rather a hierarchy of agency powers. Some entities are not only more capable than others of influencing and modifying systemic flows in their favor8, but also of threatening the structural integrity of other entities through the use of force.
Gills and Frank (1990, pp. 28–30) further argue that the analysis of pre-capitalist systemic networks should replace Wallerstein’s tripartite structure of core, periphery, and semiperiphery with a more comprehensive model, which they designate by the acronym CPH (center–periphery–hinterland). This framework challenges the traditional view commonly attributed to the pre-capitalist past, namely, the existence of geographically isolated world-empires surrounded by a disconnected “outer zone” or hinterland, and instead highlights the crucial role played by nomadic peoples in linking civilizational cores. In this perspective, nomadic societies—particularly those of Central Asia—are understood as a functional component of the fundamentally agrarian Afro-Eurasian world-system, serving as privileged vectors of integration among different spatial sectors of the network.
In contrast to the concept of a semiperiphery, the hinterland is inhabited by populations that maintain ties with both core and peripheral regions, yet remain relatively insulated from the hegemonic societies’ mechanisms for extracting energy and material resources from their rivals. These populations also retain a significant degree of social and political autonomy. According to the authors, conflicts between core societies and those of the hinterland often arise from efforts to peripheralize the latter—and from their resistance to such peripheralization.
It is also essential to underscore the fundamental role of climate and environmental phenomena in shaping the dynamics of systemic networks. In alignment with Polanyi (2012), and drawing from Gills and Frank (1990, pp. 20–27), the substantive economy of world-systems should first and foremost be understood as an ecological entanglement, full of feedback relations between organisms, institutions, and technologies on one side, and the ecosystems within which they operate on the other. This entanglement is vividly illustrated by the emergence of the first urban civilizations in the Fertile Crescent—a pattern repeated elsewhere. There, the availability of water and geological resources enabled the expansion of intensive agriculture, elevating urban centers to a new level of social complexity. Crucially, this very complexity led to the scarcity of certain inputs—such as timber, lithic materials, and metals. This scarcity, in turn, necessitated the acquisition of such goods beyond local ecological niches, generating a systemic impulse for long-distance material exchanges and, often, for politico-military expansion as an adaptive response.
This ecological interdependence among different regions, in turn, constituted a source of structural instability. The maintenance of complex urban systems required not only the management of internal surpluses but also the continual expansion of provisioning zones and the assurance of uninterrupted flows of matter and energy once they had been established. Consequently, the search for external resources tended to fuel the expansion of the world-system, progressively incorporating new ecological niches into an interconnected and asymmetrical economic network—one that was, by its very nature, increasingly prone to conflict.
Technological innovation and diffusion also emerged as an adaptive response, as it potentially enhanced both productive and logistical capacities.
“If a society borrows systematically important technological innovations, its evolution already cannot be considered as really independent, but should rather be considered as a part of a larger evolving entity, within which such innovations are systematically produced and diffused. (…) [T]he information network turns out to be the oldest mechanism of the World System integration, and remained extremely important throughout its whole history, remaining important up to the present”.
And as networks of contact intensified, the potential for the transmission of such innovations likewise increased, with nomadic peoples playing a crucial role in facilitating their diffusion among urban centers.
The pressure for more energy and materials to sustain increasingly complex systems was met with new technologies and implements—dams, roads, irrigation systems, and tools. However, while these innovations temporarily expanded the environment’s carrying capacity, they also generated unmistakable environmental impacts: increased deforestation, salinization, and siltation. Crucially, when these localized anthropogenic stresses combined with large-scale geological and climatic processes—earthquakes, volcanic eruptions, solar cycles, shifts in the planet’s eccentricity and obliquity—they often triggered a chain reaction with profound systemic consequences. This synergy could lead to the long-term drying or desertification of vast regions, temperature drops, agricultural crises, famine, and forced migrations. These disruptions frequently pushed nomadic peoples from hinterlands toward sedentary population centers. Within these destabilized contexts, climate-driven catastrophes also caused biotic and zoonotic imbalances that, in many cases, culminated in major epidemics (McMichael 2012; Tian et al. 2017).
In a deeply interconnected world-system, with human and non-human animals traveling vast distances toward high-density urban zones, the transformation of a local epidemic into a pandemic became a possible—if not probable—outcome:
“Every day, archaeologists uncover, and reinterpret, additional evidence for maritime and overland diffusion over the longest distances, and at earlier and earlier times. Diffusion spread, among other things, foodstuffs; agricultural, industrial, transport, and military technology; culture and religion; language and writing; mathematics and astronomy; disease, first plague deaths and then resistance to the same, and medicine; and, of course, genes”.

3. A World-System in Ancient Afro-Eurasia

Although the dates are approximate, a broader view of the economic, political, and cultural relations among the major Afro-Eurasian urban centers suggests that, between 50 BCE and 200 CE, a network of agency transmission emerged with sufficient intensity to characterize what may be termed a systemic “A” phase. As argued by Frank (1990), Gills and Frank (1992), Grinin (2017), and Zinkina et al. (2019), systemic networks periodically undergo phases of intensified integration among their constituent nodes. In this study, these phases are conceptualized as processes of expansion of systemic complexity, which can be conceived as an increase in (a) the number of simultaneous connections between agents; (b) the flows of free energy mobilized within and between network nodes; and (c) the impact of these connections on strategic decisions and social practices.
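Dimension (a)—the density of simultaneous connections—can be made concrete with a minimal sketch. The node labels below are purely illustrative stand-ins for the four imperial cores discussed later, and the link sets are hypothetical, not a historical reconstruction:

```python
from itertools import combinations

# Illustrative stand-ins for the four imperial cores (hypothetical, not data).
nodes = ["Rome", "Parthia", "Kushan", "Han"]

# A phase: dense connectivity — every pair of cores linked, directly or
# through intermediaries such as the Silk Roads.
a_phase_links = {frozenset(pair) for pair in combinations(nodes, 2)}

# B phase: long-distance links drop out, leaving only adjacent pairs.
b_phase_links = {frozenset(("Rome", "Parthia")), frozenset(("Kushan", "Han"))}

def connectivity(links, n):
    """Share of all possible pairwise connections actually realized."""
    return len(links) / (n * (n - 1) / 2)

print(connectivity(a_phase_links, len(nodes)))  # 1.0 — fully integrated A phase
print(connectivity(b_phase_links, len(nodes)))  # ~0.33 — disintegrated B phase
```

On this toy measure, the A-to-B transition registers as a drop in realized connectivity; dimensions (b) and (c) would require analogous, and far harder, proxies for energy flows and decision impact.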
During integrative phases, production and exchange expand, and capital accumulation intensifies, especially within the system’s most dynamic cores. Investment in transportation infrastructure increases markedly, reinforcing both the cumulative growth of the integrative network and the process of capital accumulation itself. Regions historically peripheral to global circuits of information are drawn into large-scale economic, political, and migratory flows, though with varying degrees of intensity and depth. In such periods, substantial energy and material resources are devoted to promoting social order through the imposition of norms and rules enforced across the territory. This entails not only the creation of legal frameworks and administrative structures but also sustained investment in mechanisms of social control and coercive apparatuses to ensure compliance. Resources are increasingly allocated to securing commercial routes, maintaining safe corridors for the movement of goods and people, and developing institutions that facilitate economic interchange—currency, standardized weights and measures, and credit arrangements among them. For all this to function effectively, a continuous supply of human and non-human labor is essential, alongside inputs of energy and raw materials for the production of tools, vehicles, infrastructure, and the technical means required to sustain such complex networks of integration and exchange.
“Although a myriad of individuals and smaller cultures contributed to the First Silk Roads Era, such as the Sogdians and smaller states and consortiums of South Asia, trade and cultural exchange on such an unprecedented scale was predicated on the political and economic stability created by the four large imperial states that controlled much of Eurasia during the First Silk Roads Era—those of the Han Dynasty in China, the Kushan and Parthian Empires of Inner Eurasia and the Roman Empire. (…) These powerful administrations established law and order over enormous areas; they created political and military stability (although there were also intermittent periods of instability, particularly between the Romans and Parthians); they minted and used coinage; and they constructed sophisticated roads and maritime infrastructure”.
“It should also be noted that the connectivity of network space increased within the empires, not least because the latter developed material infrastructure (such as roads, bridges, and canals). Moreover, the integration of various areas under the aegis of a single empire significantly improved their openness to trade links. Passing through the territory of a single state was safer and cheaper for merchants than crossing the lands of multiple small states, which were often at war with each other”.
As a result, local trajectories become progressively shaped by macro-historical and macro-spatial dynamics, although these impacts become perceptible at different moments, depending on when various thresholds of intensity or diffusion are crossed.
“This rhythm affects all of the parts of the world system simultaneously, though differently (not necessarily all at exactly the same moment), and thus accounts for the synchronization we observe. Therefore, this rhythm should be regarded as specific to the world system and not simply to the parts. Nor should this rhythm be regarded as a mere coincidence in parallel patterns among various regions”.
In contrast, “B” phases represent periods of disruption in the structural integrity of world-systems and, to varying degrees, in the constituent nodes of the network (Frank 1990, pp. 160–62). These are times of economic crisis, historically triggered by non-anthropogenic climatic and geological processes, but more frequently by pressures on environmental carrying capacity caused by the intensive exploitation of land. Environmental degradation and ecosystem imbalances may create favorable conditions for local epidemics, which, under circumstances of prior intense spatial integration, can escalate into pandemics. B phases are thus marked by crises in long-distance trade networks, which initially contract significantly before being reoriented, eventually laying the groundwork for a new integrative phase (Gills and Frank 1992, pp. 677–78).
The environmental impacts of the economic intensification during A phases, along with non-anthropogenic climate changes, give rise during B phases to demographic tensions and significant population movements across territories, often involving pressure from nomadic groups on urban centers already in crisis (Beaujard 2010, pp. 4–5). What emerges, then, is a contraction and obstruction of contact networks, accompanied by a decline in the availability of free energy flows necessary for the generation of order. This is evidenced by the downturn in productive activities, the interruption of construction projects, and the degradation of established infrastructure such as canals, roads, and similar systems.
These processes are further intensified by a relative social, political, and economic insulation of the network’s constituent nodes, and by a shrinking of production and distribution circuits, elements which together indicate a process of declining systemic complexity. And although Gills and Frank (1992, p. 628) argue that the systemic network in B phases does not “collapse,” as it “alternates cyclically between periods of relatively high (hegemonic) integration and concomitant economic prosperity, and periods of relatively less integrated hegemonies and concomitant economic retrogression or contraction,” collapse is precisely what these phases entail—if we accept, as Tainter (1988, p. 31) does, that:
“[a]s the development of complexity is a continuous variable, so is its reverse. Collapse is a process of decline in complexity. Although collapse is usually thought of as something that afflicts states, in fact it is not limited to any ‘type’ of society or ‘level’ of complexity. It occurs any time established complexity rapidly, noticeably, and significantly declines. Collapse is not merely the fall of empires or the expiration of states. It is not limited either to such phenomena as the decentralizations of chiefdoms. Collapse may also manifest itself in a transformation from larger to smaller states, from more to less complex chiefdoms, or in the abandonment of settled village life for mobile foraging (where this is accompanied by a drop in complexity)”.
Around 50 BCE, and for at least the following two centuries, the civilizations of Afro-Eurasia entered a phase of substantial economic and commercial expansion, accompanied by a notable degree of diplomatic integration (Balland et al. 1992, p. 65; Leslie and Gardiner 1995, pp. 61–67). This vast communications network linked the Roman Empire, the Parthian Empire in Mesopotamia and Persia, the Kushan Empire in Central Asia and the northwest of present-day India, and the Han Empire in China, fostering significant processes of synchronization. These large-scale dynamics, however, were inherently constrained by the pronounced logistical and information-transmission limitations of the period—constraints that become especially evident when contrasted with the technological capacities of the twenty-first century CE.
Around the territories most directly controlled by the bureaucracies and armed forces of these archaic imperial states—especially their main centers of power, such as the major cities—there were rural and urban populations nominally subject to central authority. However, the actual exercise of power over these populations was conditioned by a series of factors, including negotiation with local elites and the maintenance of advanced military garrisons, among others. Further from these centers of authority, political control became increasingly tenuous. Rather than rigid demarcations, the borders of these empires are better understood as diffuse gradients, more distinct and vibrant around major urban centers, and progressively attenuated with distance from them. This gradation was shaped by the cumulative effects of geography and the rising costs of transportation and communication, which limited infrastructural integration across peripheral zones.
The system was also characterized by center-periphery relations, albeit of a nature somewhat distinct from those that would later define the capitalist world-economy. The system’s centers, particularly the imperial capitals and their immediate surroundings, followed by the larger cities, disproportionately accumulated capital in various forms: architectural structures (many of which served powerful symbolic functions essential to social cohesion), military force, hydraulic works, productive infrastructures (both agricultural and artisanal), and mobile wealth (in the form of precious metals). This accumulation was driven, in part, by tributary and extractive relations imposed on settlements within the empire’s formally recognized territory, as well as on other civilizational spaces unable to resist the political, military, and symbolic dominance exerted by hegemonic empires (Frank 1990, pp. 235–38; Balland et al. 1992, pp. 68–69). According to Frank, the Afro-Eurasian system thus constituted a “network of hegemonies,” defined by a
“hierarchical structure of the accumulation of surplus among political entities, and their constituent classes, mediated by force. A hierarchy of centres of accumulation and polities is established that apportions a privileged share of surplus, and the political economic power to this end, to the hegemonic centre/state and its ruling/propertied classes”.
Peripheral regions, subordinated to imperial centers primarily through tribute and plunder, played a crucial role as suppliers of both energy resources—most notably high-energy-density cereals9—and human labor in the form of slaves and mercenaries, “biological machines” capable of converting that energy into organized action (Georgescu-Roegen 1971, pp. 212, 372). This conversion of energy underpinned the generation of structural order within imperial systems: from the material transformation of the landscape (public works, agriculture, craftsmanship) to the exercise of state coercion (warfare, internal repression, urban control), and even the reproduction of the very mechanisms of extraction and redistribution (Frank 1990, pp. 182–83). Under these conditions, the periphery not only nourished the center but also furnished the thermodynamic means for the maintenance and expansion of imperial order—an order that, in turn, subjugated peripheral populations who, in providing energy and labor, became ensnared in the systemic logic they sustained.
Superimposed upon these regional networks was a vast and intricate web of land and maritime routes stretching across Afro-Eurasia, linking distant regions such as China, Europe, and North Africa through the trade of luxury goods. Multiple, well-connected corridors facilitated the transcontinental movement of durable, high-value-to-weight commodities, including silk, spices, gemstones, and porcelain.
“The terrestrial silk roads enabled commerce to move from China through Central Asia and Persia to the Mediterranean basin. The sea lanes linked lands from South China, through Southeast Asia, Ceylon, and India, to Persia and East Africa. One sea lane may have enabled Malayan mariner-merchants to sail directly from the islands of Southeast Asia to Madagascar and ports in East Africa. From the Persian Gulf, the Red Sea, and the East African ports, it was a simple matter to gain access to the Mediterranean basin”.
These same roads, land routes, and maritime passages also served local and regional purposes, facilitating the transport of essential and bulky goods—grain, oil, salt, live non-human animals, preserved foods, timber, plain textiles, and common pottery. Such commodities typically did not traverse the entire continent but circulated among towns and cities along the way. This pattern was evident along the routes linking coastal and inland urban centers under Han Dynasty control (Barisitz 2017, p. 10). In the Roman Empire, the organic integration of provinces beyond Italy into the economy of the imperial capital has long been recognized: Rome itself—and the military garrisons stationed across the imperial territory—depended heavily on imported grain, particularly from North Africa and, above all, the Nile Valley (Benjamin 2018, pp. 73, 257; Duncan-Jones 1990; Fulford 1987).
During the peak of silk trade between China and the Roman Empire (c. 90–130 CE), a highly lucrative transcontinental exchange network was consolidated, though marked by the absence of direct contact between its eastern and western extremes. The overland Silk Road began in northern Chinese cities such as Chang’an and Luoyang, reaching the Tarim Basin, which had come under Chinese control by 90 CE. After crossing the Pamir Mountains, caravans entered the territory of the Kushan Empire, where they likely paid customs duties. From there, the silk was sold in Merv, a Parthian city beyond which Chinese agents did not venture. Parthia played a central role as an intermediary, profiting from the resale of silk to the Romans while deliberately obstructing any attempt at direct Sino-Roman contact. As a result, Roman products—chiefly gold, silver, and luxury items such as amber, coral, glass, embroidered textiles, and aromatic substances—were acquired by the Chinese through Parthian mediation.
On the Roman side, control of overland routes west of the Euphrates, consolidated around 106 CE, ensured the flourishing of commercial hubs such as Palmyra, Damascus, Petra, Antioch, and Ephesus. In these cities, archaeological records reveal intense commercial activity involving silk and other valuable goods. Raw silk, known for its thickness and weight, was entirely transformed in the looms of Syrian cities like Tyre, Sidon, and Berytus into a light, translucent gauze—the form in which Romans recognized the fabric. This final product was highly prized among Roman elites, primarily worn by aristocratic women to emphasize their social status. The silk trade operated largely on a barter basis, focused on the exchange of prestige goods and precious metals, particularly since China lacked significant production of gold and silver (Thorley 1971, pp. 71–79).
Contrary to the general pattern in which food trade remained largely regional in scope, archaeological evidence attests to the importance of some long-distance supply routes linking Roman cities as both origins and destinations. The grain trade (Arruñada 2016; Bowman and Wilson 2009, pp. 7–25, 55; Frank 2006) was indispensable for sustaining Rome, whose population exceeded the productive capacity not only of its immediate agricultural hinterland but perhaps of the entire Italian peninsula. Under Augustus (27 BCE–14 CE), Egypt is estimated to have supplied roughly 135,000 tons of grain annually—amounting to between one-half and two-thirds of the imperial capital’s total consumption (Erdkamp 2005, pp. 226–27; Rickman 1980, pp. 261–64). This dependence on the southern provinces persisted for centuries.
The supply logistics involved tribute, a centralization/redistributive system—under the supervision of the praefectus annonae, who was responsible for the grain storage and distribution system—and a robust network of private merchants (negotiatores), whose activities were encouraged but not strictly regulated by the state. Archaeological evidence, such as documents from Pompeii dated to 40 CE, demonstrates the significant involvement of these agents in trade, particularly in the import of Alexandrian wheat. The supply routes were predominantly maritime and highly organized: approximately 800 shipments by large vessels (carrying 50,000 modii, or 340 tons) or 4000 shipments by smaller vessels (carrying 10,000 modii, or 70 tons) would have been required to transport grain to Rome via ports such as Puteoli and Ostia. From these ports, the cargo was redistributed via river transport to Rome, stored in granaries, and sold locally, revealing the interdependence between state logistics and regional commercial autonomy (Casson 1980, pp. 20–28; Kessler and Temin 2007, pp. 315–17).
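Taking the figures above at face value (and allowing for the rounding in the cited sources), the shipment arithmetic is internally consistent, as a back-of-the-envelope check shows:

```python
# Back-of-the-envelope check of the grain-supply figures quoted above
# (a consistency sketch, not an independent historical estimate).

MODIUS_KG = 6.8  # implied by the text's equivalence: 50,000 modii ≈ 340 tons

large_ship_tons = 50_000 * MODIUS_KG / 1000  # 340.0 t per large vessel
small_ship_tons = 10_000 * MODIUS_KG / 1000  # 68.0 t, rounded to 70 in the text

# Both fleet configurations imply the same annual total.
total_large = 800 * large_ship_tons    # 272,000 t via 800 large shipments
total_small = 4000 * small_ship_tons   # 272,000 t via 4000 small shipments

# Egypt's reported 135,000 t under Augustus against that implied total.
egypt_share = 135_000 / total_large    # ≈ 0.50, matching "one-half" of consumption

print(total_large, total_small, round(egypt_share, 2))
```

The two fleet scenarios coincide at roughly 272,000 tons per year, and Egypt’s 135,000 tons then lands at about half of the implied total—the lower bound of the “one-half to two-thirds” range cited above.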
Further east10, excavations at the port of Berenice on the Red Sea have revealed the presence of rice, imported by Indian merchants who not only consumed it locally but also offered it as a tradable commodity. Roman ships returning from northern India supplied East African markets with basic Indian staples, including grain, rice, ghee, sesame oil, cotton textiles, and sugar cane. Written records indicate that special grain shipments were sent to locations such as Muziris (in southern India), suggesting that these deliveries were intended for resident Romans who needed to supplement the local rice-based diet. In Qana (now Bir Ali, Yemen), markets offered Mediterranean wines and a limited amount of Egyptian wheat supplied by Roman merchants. Additionally, Roman merchants residing in Muza (near the Bab el-Mandeb Strait at the entrance to the Red Sea) received a modest quantity of Mediterranean grain.
The ostraca records from Berenice provide valuable evidence regarding transport and provisioning practices within the context of eastern Roman trade, particularly with respect to the circulation of food, beverages, and maritime rations. Roman amphorae carried a remarkable variety of liquid products, including wines from various regions (Italian, Laodicean, Rhodian, Aminean, Ephesian, and Colophonian), as well as Egyptian varieties stored in previously used containers. In addition to wine, these amphorae also transported olive oil and fermented fish sauces (garum), both staples of the Mediterranean diet (McLaughlin 2010, pp. 15–27, 68–76, 142–49). On board, provisions included beets and onions, along with small containers of a quince-flavored mixture of water and honey, likely part of a preventive strategy against scurvy, a common ailment on long sea voyages. Other products consumed for this purpose included dried fruits such as amla, rich in vitamin C, and legumes such as mung beans (Fitzpatrick 2011, pp. 27–33, 45–49).
In the Parthian Empire, similar products (wines, fruits, and nuts) were part of the export economy to China via the Silk Roads. Central Asian regions, such as those surrounding the cities of Samarkand and Bukhara, contributed high-quality grain and various fruits, respectively, reflecting a diet based mainly on cereals and vegetables, complemented by dried fruits such as dates and figs. The Romans, on the other hand, valued spices and condiments highly, with records of up to 142 different varieties in ancient texts. These included laurel, cassia, and cumaru, with pepper being particularly significant for both preserving and improving the flavor of food (Kron 2012, pp. 170–72). At the same time, the Silk Roads acted as a vector for the spread of new agricultural crops, promoting the circulation of species such as rice, sugar cane, wheat, spinach, artichokes, and eggplants, and fruits such as oranges, mangoes, and melons, as well as the introduction of Western grapes and wines to Han China (Bentley 1996, pp. 765–66).
In addition to the circulation of food and spices, the Afro-Eurasian trade included a wide range of raw materials and manufactured products essential for everyday life and productive activity. Metals such as copper, tin, lead, and iron were among the main Roman exports to regions such as India and Aksum (largely on the territory that now constitutes the modern state of Ethiopia), playing a key role in the supply of building materials, tools, and weapons. Textile products, including wool, linen, and cotton, circulated widely; and the reuse of Egyptian garments, which were washed, dyed, and sent back to African markets, was notable. The export of Roman ceramics and glassware to the East illustrates the spread of medium-value household and decorative goods. Teak wood from Asia was imported for shipbuilding in Roman Egypt, valued for its strength and durability. Specialized tools such as axes, knives, and carpentry utensils were also sent to Aksum, reinforcing the economic integration between producing and consuming areas.
The Afro-Eurasian lines of connection also fostered an ecosystem of technological, cultural and religious exchanges. This transcontinental connectivity significantly shaped patterns of sociability, power, and production across the intercontinental landmass (Benjamin 2014, pp. 376–77; Teggart 1969). In the domains of technology and production, the Silk Roads functioned as conduits for the dissemination and adaptation of important innovations during the 1st and 2nd centuries CE. Agricultural techniques such as the iron plow, the multi-tube seeder, and the rotary threshing fan spread widely, undergoing adaptations to suit the diverse ecological contexts of Eurasia. Strategic innovations like the stirrup—developed on the steppes around the 1st century BCE—revolutionized mobility, warfare, and the social organization of both nomadic and sedentary societies (Williams 2024). The invention of paper under the Han dynasty, and its gradual dissemination, illustrates the close link between technological innovation and imperial bureaucracy. Likewise, the near-industrial production of Chinese silk, driven by growing demand in the Roman and Parthian worlds, stimulated advances in weaving techniques as well as in the logistics of long-distance transportation.
On a cultural level, this network favored an unprecedented miscegenation of religious, philosophical, aesthetic, and institutional ideas (Christian 2000, pp. 1–4, 16–18). Buddhism, which originated in the Indian subcontinent, spread intensely through Central Asia and Chinese territory, favored by mercantile and missionary networks. The materialization of this movement can be seen in centers such as Dunhuang (China), whose decorated caves bear witness not only to Buddhist diffusion, but also to the confluence of diverse religious traditions, such as Eastern Christianity, Zoroastrianism, Hinduism, and Manichaeism (Benjamin 2018, pp. 273–81; Tan 2020). These faiths not only coexisted along the routes but also often shared practices, terminologies, and cosmologies, building an environment of relative syncretism and tolerance, which is thought to have paved the way for peaceful trade and inter-societal relations in the zones where civilizations met.
Elements of Chinese administration were assimilated by Central Asian peoples, while Greek thought and Hellenistic ritual practices influenced Buddhism visually and doctrinally, as evidenced in the syncretic sculptures produced in Gandhara and Mathura during Kushana rule (Karimjonova 2024). Music, instruments, and concepts of harmony of nomadic cultures were absorbed by Eastern and Western traditions. The introduction of plant species such as the grape, as well as animals such as the Ferghana horse and the Bactrian camel, changed agricultural and logistical practices in previously isolated regions. Western demand for Chinese silk impacted not only trade but also cultural and consumption patterns among Roman elites (Xu 2023).
Therefore, between the 1st century BCE and the 2nd century CE, a virtuous cycle of economic and cultural integration was underway, even amid the inherently conflictual dynamics of the empires along the Silk Roads. However, the emergence and persistence of this integrative network should not be understood as spontaneous or self-sustaining. Like any complex, multi-layered system of communication and exchange, it was subject to entropic decay, as framed by the Second Law of Thermodynamics: without a continuous input of free energy and deliberate work, its structural order would inevitably erode.
It was not an exogenous shock that eclipsed this transcontinental network, but rather the very intensity of its contacts, transits, and exchanges, which generated the conditions for its collapse. If human beings—and their captive non-human servants—appeared to be the protagonists along these routes, other agents, straddling the boundary between the living and non-living, also found a niche in the roads, trails, and navigable waterways. Countless carriers and hosts, moving over long distances from one densely populated urban center to another, created an unprecedented adaptive opportunity. Once the presence of these invisible agents became apparent, the flows of energy and the material transformations essential to the negentropic processes sustaining the thermodynamic complexity of the Silk Roads—and the empires that depended on them—began to contract. With their decline, the first Afro-Eurasian world-system also entered its own period of dissolution.

4. The Plague

In 165 CE, reports emerged of a mysterious disease in Smyrna (modern-day İzmir), then one of the principal Roman cities in Anatolia, and in Nisibis (present-day Nusaybin, a predominantly Kurdish city in southeastern Turkey), which at the time served as a major commercial hub on the caravan routes from Bactra in the Kushan Empire and from Merv in Parthian territory. The most prominent witness to this outbreak was the sophist Aelius Aristides, who described episodes of a severe and highly contagious epidemic that he himself contracted. At the time, Nisibis had only recently been seized from the Parthians by Roman forces under Lucius Verus, and the return of these troops to Rome likely served as one of several vectors for introducing the disease to the Italian Peninsula, where it spread rapidly between 166 and 168 CE (Geoffroy 2025, p. 172).
“So, perhaps the Parthian campaign contributed to, exacerbated, an outbreak of pestilence which was already developing in the East, and would have hit Smyrna anyway; rather than being the primary cause and driver of the epidemic. Soldiers passing through on their way to the fighting, dragging resources with them, added to some displaced civilians, would all have disruptive effects, increasing both the possibilities for transmission of and susceptibility to disease. The subsequent relocation of these troops then helped make this plague a more decisively and severely imperial affair than it would otherwise have been: both geographically and militarily”.
Before reaching Rome, Greece is believed to have been affected as early as 166 CE, along with Macedonia and the Istrian region. After the epidemic advanced through the Italian Peninsula, Gaul, as well as territories beyond the Rhine and the Danube, would likewise be struck (Gourevitch 2005, p. 59). This marked the beginning of the succession of outbreaks now commonly referred to as the Antonine Plague, so named for its devastating impact on the Roman Empire under the Antonine dynasty (Gaia 2020).
There were, however, clear indications that this was not merely a localized crisis but, rather, a genuine Afro-Eurasian pandemic. We know that epidemics flared across Mesopotamia, and that the city of Seleucia (on the banks of the Tigris)—another key caravan stop between Persepolis and Nisibis and near Ctesiphon, the Parthian capital—was severely affected. Around this time, the disease appeared among the Parthian troops in Mesopotamia, causing a sudden outbreak of “boiling and fizzing”, as sources from the time put it. Thousands of Parthian soldiers fell ill with a deadly disease with fever and swelling of the skin (Zaviyeh and Golshani 2020). From Pelusium, the plague spread via caravans to Antioch, a major commercial and customs center, the third largest city of the Roman Empire after Rome and Alexandria, and one of the principal gateways between the Mediterranean and Asia (Berche 2022, p. 2). In Ephesus, another important mercantile hub, oracles were consulted in search of a remedy for the plague (Sabbatani and Fiorino 2009, pp. 263–68). Palmyra, once a modest settlement that transformed into a thriving node of the long-distance trade network, witnessed its caravan traffic collapse just a few years before the plague reached Rome. In the 160s and 170s CE, the construction of new funerary monuments in Palmyra introduced an unusual practice: owners of newly built tombs granted perpetual use of sections to third parties. After 170 CE, the construction of new monuments virtually ceased, remaining stagnant until the century’s end (Duncan-Jones 2018, p. 57). This could point to a cultural shift; however, when we consider the dwindling caravan traffic in Palmyra, the evidence of epidemics in other caravan cities, as well as in the Tarim Basin and North Africa before 165 CE (see below), the sharing of funerary monuments may suggest cost reductions likely driven by mounting economic hardship.
Contemporary accounts suggested that the disease that ravaged Smyrna, Nisibis, Pelusium, Antioch, and perhaps Palmyra did not originate in Asia but rather in the Aksumite Empire, and that it reached Asia Minor from Egypt around 164 CE. This spread is believed to have occurred along the Red Sea and Indian Ocean trade routes, passing through ports such as Berenice, Ailana, Adulis, and Qani, which connected the Roman world to southern Arabia, the Kushan Empire (Mehlhorn 2023, pp. 7–8), and China.
“Trade connections between the four largest empires of Eurasia—the Roman, Parthian, Kushan, and Han empires—reached their zenith. Maritime routes across the Indian Ocean and overland highways via the Silk Roads were replete with travelers. We know that diseases hitched rides along these routes, even the lonely paths across empty wilderness and desolate plains. (…) Analysis of two-thousand-year-old human feces found in a latrine along one Silk Road waypoint confirms that travelers carried parasites with them over thousands of kilometers. Could traders have also carried the Antonine plague pathogen?”.
The answer is: probably yes. Given Alexandria’s central role as a nexus within the long-distance trade networks reaching Rome (especially with regard to the grain trade), it would be plausible for the disease to have traveled aboard cargo fleets from North Africa to Italy (Cravioto and García 2007; De Romanis 2007, p. 201; Duncan-Jones 1996, p. 116, 2018, p. 43; McDonald 2021, pp. 385–87).
An alternative hypothesis speculates that the origin of the Antonine Plague lay in East Asia, particularly in territories controlled by the Han dynasty or the Mongolian region (Fears 2004, p. 74; Ferreira et al. 2023, pp. 1–2). In 162 CE, approximately two years before the plague was first reported in Africa, an epidemic during a military campaign in Sinkiang and Kokonor is said to have killed between 30% and 40% of the troops; these areas are located near the Tarim Basin, where three important oases and commercial centers linked the northern branches of the Silk Roads (Boyd 2022, p. 18). Numerous reports describe successive epidemics affecting several Chinese cities in 166 CE, which, according to Xiang Kai’s account from Pingyuan, were the result of a “universal pestilence” (Elliott 2024, p. 104). That same year, in an effort to bypass the Parthian Empire’s monopoly on communications and trade between Rome and China, a Roman embassy is thought to have traveled by sea to the Han court, sailing around the Indian subcontinent and through Southeast Asia. It is plausible that pathogens accompanied the delegation, particularly as it passed through ports with high biological risk (Harper 2017). Later, in the 11th century CE, the historian Sima Guang would claim that major pestilences swept across China over roughly a decade, between 151 CE and 185 CE, a span that aligns with the canonical timeline of the Antonine Plague (165–180 CE) (Duncan-Jones 1996, p. 117, 2018, p. 42; Oddo et al. 2023, pp. 5–6; McDonald 2021, p. 387; Ruiz-Patiño 2020, p. 180).
The Antonine Plague was referred to by the eminent Pergamene physician Galen (c. 129–c. 216 CE) using the Greek term loimos (Cravioto and García 2014), which denotes a widespread epidemic event, “when lots of people in a single place are stricken in the same way at the same time—which is particularly sustained and deadly” (Flemming 2019, p. 226). Unlike specific diseases classified as phrenitis or podagra, loimos does not refer to a precise clinical entity but rather to an outbreak of devastating scope. Identifying the disease responsible for the Antonine Plague is therefore an exceedingly difficult task. Galen never described it systematically, focusing instead on the treatment of physical symptoms. As noted by Littman and Littman (1973, p. 244) and Duncan-Jones (1996, p. 108), the absence of a comprehensive clinical account makes it impossible to reconcile what we know about the Antonine Plague with a definitive diagnosis of any single contemporary disease. This incongruity, however, may reflect not only the paucity of evidence but also the evolutionary transformations of pathogens over time, with viruses and bacteria undergoing mutations that produce distinct strains with varying degrees of virulence and clinical presentation.
Galen’s observations included the emergence of darkened exanthems among the afflicted, a phenomenon particularly prevalent among individuals concurrently experiencing hemorrhagic diarrhea. The skin eruptions were ulcerated and remained dry in all cases, a feature distinguishing them from those of smallpox (the primary candidate for the enigmatic disease), whose exanthems are pustular and dry out only at a later stage. The ulcerations were regarded by Galen as necessary, a somatic response to contamination through which putrefied blood was expelled (Berche 2022, p. 2; Flemming 2019, pp. 228–29). Galen also described fever, abdominal pain, and blackened stools, probably due to hemorrhage; vomiting; coughing; inflammation and ulcerations of the airways with blood-tinged discharge that could persist for many days (Harper 2021b; Retief and Cilliers 2000, p. 268). As for the disease’s clinical course, it lasted between nine and twelve days until reaching its most severe crisis; three days later, Galen reports, a young man who survived would already be able to rise from his bed (Littman and Littman 1973, pp. 247–48). One can only imagine the medium-term debilitating effects until full recovery, with a significant impact on the availability of the labor force.
In China, the eminent physician Hua Tuo (c. 140–c. 208 CE), in what is considered one of the earliest likely descriptions of smallpox in East Asia, described an illness that caused what he termed “red herpes” in patients who developed a mild fever, and “black herpes” in those with a high fever. Hua Tuo further noted that about one in five patients with red herpes would succumb to the illness, while none of those who developed the more severe form (the black herpes) survived (Harper 2021b). During the epidemic of 166 CE, under Emperor Huan (146–168 CE), Chinese tax records indicated a mortality rate of 30–40% among taxpayers, attributed to a disease that spread across the imperial territory while the army was engaged on the northwestern frontier (Boyd 2022, p. 15).
Despite the symptomatic differences, some form of orthopoxvirus, an ancestral variant of modern smallpox, may well have been the cause of this possible Afro-Eurasian pandemic (Andorlini 2012, p. 16; Berche 2022; Geoffroy and Díaz 2020; Littman and Littman 1973, p. 245; McNeill 1989). Evidence suggests a correlation between low rainfall, low temperatures, and outbreaks of smallpox, and it is known from Galen that the city of Aquileia was severely affected by the Antonine Plague between 168 and 169 CE, during an exceptionally harsh winter. It is also known that winter caused significant mortality in the settlement of Soknopaiou Nesos in Egypt’s Fayum Oasis, and that it was during winter and spring that the highest death tolls occurred in the Chinese epidemics of 173, 179, and 182 CE (Duncan-Jones 2018, p. 44). In epidemics of Variola major (the modern form of the disease), it is estimated that 5% to 9% of unvaccinated survivors develop largely irreversible ocular complications, an outcome that accounted for roughly one-third of all blindness cases in Europe prior to the introduction of vaccination. In Rome, following the Antonine Plague, the cult of Bona Dea (a goddess traditionally associated with healing, especially of eye diseases) saw its authority and efficacy called into question, likely due to the persistence of blindness among her devotees (Ambasciano 2016). This adds yet another piece of evidence in favor of the hypothesis that the Afro-Eurasian world-system was struck by an archaic poxvirus akin to smallpox.
Although a proteobacterium such as Rickettsia prowazekii (the causative agent of epidemic typhus in humans, cattle, goats, and sheep) is highly susceptible to mutations driven by anthropogenic and environmental factors, smallpox exhibits more frequent variations in intensity, given that the Poxviridae are highly prone to rapid adaptive genetic changes (Gourevitch 2005, p. 65). This has been attested by the genetic sequencing of an ancestral form of orthopoxvirus extracted from a mummified body discovered in Lithuania, dated to the 17th century CE (Flemming 2019, p. 236; Wertheim 2017). Although no genetic material from the 2nd century CE is available, it is plausible that other variants of orthopoxvirus circulated along the trade routes of the Afro-Eurasian world-system, possibly producing symptoms that were generally compatible, albeit with characteristics distinct from those caused by modern smallpox.
Therefore, symptomatic differences should not be deemed definitive grounds for ruling out the possibility of some sort of “smallpox” pandemic between 164 and 180 CE, even though Flemming (2019, pp. 232–34) points out a puzzling omission: despite the symbolic and moral weight Romans attached to skin lesions—which drove extensive efforts to develop treatments for concealing or removing scars—contemporary and later ancient historians failed to mention the disfiguring marks left by smallpox in accounts of the Antonine Plague. Galen’s silence on this point is especially significant. As a physician well-versed in treatments for skin lesions and one who meticulously documented the plague’s effects, his failure to connect such disfigurements with survivors undermines the case for modern smallpox. However, it remains plausible that earlier, less severe poxvirus strains, which might not have produced distinctive scarring, could have been responsible.
The continental outbreak is unlikely to have been alastrim (Variola minor), a mild smallpox form with minimal scarring and low mortality (~1%), given the Antonine Plague’s estimated 10–30% death rate according to the specialized literature. This leaves an archaeoviral poxvirus strain as the most compelling hypothesis. Supporting this, in Baghdad, the Persian physician Rhasis (Abu Bakr Muhammad ibn Zakariya al-Razi) described hasbah seven centuries after the Antonine Plague: an endemic illness with flat pustules that left no scars, resembling a mild hemorrhagic fever. Though Galen’s description of the Antonine Plague’s vesiculopustular eruptions differs from hasbah, the disparity may reflect viral attenuation over time. Centuries of cyclical exposure would have conferred population-wide immunity, converting the once-devastating archaeovirus into a manageable endemic illness—clinically consistent with the mild disease described by Rhasis (Flemming 2019, p. 240; Gourevitch 2005, p. 65).
The skeletal evidence from Cirencester (Roman Britain) presented by Zhao and Wilson (2025) provides further crucial support for the hypothesis that a proto-variolar virus introduced during the Antonine Plague had undergone significant attenuation already by the 3rd-4th centuries CE. The adult male skeleton (sk847) exhibits characteristic bilateral lesions of osteomyelitis variolosa—severe ankylosis of the left elbow at 90 degrees, bilateral foot deformities with shortened calcanei, and systemic articular pathology—representing sequelae from a childhood infection that the individual survived to reach 41–50 years of age. This survival pattern is particularly significant when considered within the evolutionary framework of poxviral attenuation: while the initial introduction of an ultra-archaic poxvirus during the Antonine Plague would have caused substantial mortality in immunologically naïve populations, the Cirencester case demonstrates that by the late Roman period, children were not only contracting this disease but surviving it in sufficient numbers to reach adulthood bearing its skeletal signatures. The bilateral and systemic nature of the bone lesions confirms that this was indeed a severe childhood infection capable of causing the same pathological sequelae as later variola strains, yet the individual’s survival and longevity suggest a viral form that, while still capable of causing significant morbidity, had evolved toward reduced lethality compared to its initial pandemic manifestation.
This archaeological evidence supports the proposed evolutionary trajectory wherein the ultra-archaic poxvirus that caused the Antonine Plague represents either the last common ancestor of both aVARV and mVARV lineages or a more ancient viral form that preceded their divergence, with the two lineages subsequently separating around the 4th century CE along distinct evolutionary pathways in geographically separate regions. The Cirencester skeleton, dating precisely to this critical divergence period, captures an aspect of this transitional phase when the ancestral virus had already begun its attenuation process in European populations through sustained human-to-human transmission over two centuries since its initial introduction. Duncan-Jones (2018, p. 44) estimated an average fatality rate of 25% to 30% for the Antonine Plague, with regional variations consistent with a virulent form of smallpox. He further suggested that child mortality may have been particularly high, citing the 1774 smallpox outbreak in Chester, England, which resulted in 202 deaths (180 of them children) as a comparative reference. This maximalist view regarding the disease’s death toll has been gaining traction, even though not all its proponents endorse figures as high as those proposed by Duncan-Jones—often estimating between 10% and 20% fatal casualties, with regional differences (Elliott 2024; Harper 2021b; Kennedy 2024). In any case, the minimalist view of Gilliam (1961, p. 250) and Bruun (2007, p. 209), who suggested a mortality rate of 1% to 2%, has been losing relevance in the literature.
Aligning with the archaeoviral hypothesis, Littman and Littman (1973, pp. 253–55) proposed a moderate mortality rate of 7–15%—substantially below Variola major yet exceeding alastrim—while stressing how contextual variables (population density, sanitation, seasonality, comorbidities, and local response capacity) mediated outcomes. Though their estimates are now considered conservative against maximalist reappraisals, they retain relevance for the archaeovirus hypothesis. Critically, an ancestral poxvirus phylogenetically linked to mVARV would align both with the symptomatological deviations from modern smallpox and with moderate mortality—precisely the epidemiological profile suggested by Littman and Littman for Afro-Eurasia’s pandemic chain.
Apart from mortality rates, Littman and Littman proposed that widespread epidemics could reach contagion rates of 60% to 80%. If that is correct, and considering nearly three decades of sustained exposure to the pathogen across the Afro-Eurasian world-system, a new hypothesis begins to take shape. The attempt to measure the impact of an epidemic or pandemic solely through mortality rates yields a limited picture, even though this remains the primary focus of the specialized literature. Societal collapse need not stem from extreme mortality alone: even modest fatality rates can prove catastrophic when coupled with high contagion, widespread incapacitation, and enforced isolation. Such conditions readily disrupt highly integrated systems, eroding internal cohesion among subunits and overwhelming their pre-existing redundancies11 for sustaining critical linkages during crises (Clemente-Suárez et al. 2021; Quandt et al. 2022). The COVID-19 pandemic, although not historically comparable12 to the Antonine Plague, offers a useful analogy: despite its moderate lethality, its economic, social, and administrative effects have been enormous, especially in regions whose logistical and productive structures depend on in-person work.
The incapacitation of 20% to 30% of the workforce—or, more drastically, 60% to 80%, as suggested by Littman and Littman (1973)—undermines agricultural output, tax collection, military readiness, and even the routine of state activities. These effects are generally more severe in societies with limited capacity for technical substitution or automation, such as those of ancient Afro-Eurasia (Ayoub et al. 2021; Ivanov 2021; Pujawan and Bah 2022). These scenarios are even worse among the poorest segments of the population, who continue to work in precarious conditions, facing greater exposure to biological risks, a higher incidence of injuries, and lower productivity.

5. A Pathocenotic Transition

Scholarship by Grmek (1969), Gourevitch (2005), and Gonzalez et al. (2010) further suggests the Antonine Plague may have arisen not from a single pathogen, but from a systemic rupture in the region’s pathocenosis—defined by Grmek (1969, pp. 1475–76) as the dynamic equilibrium of co-circulating diseases within a specific population, time period, social organization, and ecological context. This framework posits that the plague emerged when critical parameters governing interactions among pathogens (including their vectors and reservoirs), hosts, environmentally sensitive comorbidities, and the physical environment underwent destabilizing shifts. Such transformations—whether through introduced virulent strains, climatic stressors, famine-driven vulnerability, or socially amplified transmission (Gourevitch 2005; Gonzalez et al. 2010)—would have disrupted existing disease balances until a new epidemiological equilibrium was established. “This makes a structural and dynamic system, tending to reach equilibrium, especially if the ecosystem is stable, but liable too to lengthy episodes of evolution and dramatic breakdowns” (Gourevitch 2005, p. 57).
“This equilibrium can be disturbed, however, leading to sharp variations in the frequencies of certain diseases and even to the emergence of new diseases in a particular population or territory. Thus, a succession of disturbances can lead to a dynamic sequence of pathocenoses, one health state giving way to a new one after a period of upheaval. Such perturbations include the introduction of infectious agents and environmental changes of ‘natural’ or human origin”.
The Afro-Eurasian commercial networks, along with the accumulation of capital through large-scale infrastructure projects (such as aqueducts, caravanserais, markets, warehouses, and granaries) and the maintenance of land and maritime routes, facilitated an unprecedented degree of interaction between populations during the 1st and 2nd centuries CE, in a context of substantial demographic growth. The intense movement of humans and non-human animals over long distances, linking densely populated urban centers with heterogeneous sanitary conditions and constant exposure to herd animals, would likely have disrupted the pathocenotic balances previously established on a transcontinental scale. In such cases, the transformation of the “pathocenotic landscape” can impose significant costs in terms of mortality, reproduction rates, and longevity on the affected organisms, until a new homeostatic (endemic) equilibrium is reached.
In summary, the continuous interaction between organisms, both macroscopic and microscopic, may have facilitated the emergence of diseases interrelated through dynamics of support, antagonism, and feedback. The overlap of symptoms and co-occurring diseases likely hindered contemporary sources from clearly describing distinct clinical presentations and made it challenging for modern scholarship to retroactively identify them with known nosological entities. A feedback loop between a probable poxvirus infection (disseminated by the movement of humans and non-human animals) and pre-existing health conditions at each node of the Afro-Eurasian networks—such as chronic malnutrition, intestinal infections due to poor sanitation, and dermatoses that compromised the skin barrier and facilitated opportunistic infections—could have contributed to the emergence of a chaotic pathological landscape. Diseases like smallpox “burst under favor of an immunosuppressive situation or any lowering of natural immunity, as in the cases of a real famine, or bad, poor and scanty food, for agriculture and health deteriorate on a par” (Gourevitch 2005, p. 64).
Paleopathological data and textual sources indicate the circulation of pustular diseases across various regions of Afro-Eurasia since the second millennium BCE, although more recent paleogenomic literature suggests that the oldest known human poxviruses (aVARV) date from the 7th to 10th centuries CE and exhibit significant genetic divergence from mVARV (the modern form of the variola virus)13. It is therefore possible that diseases somewhat similar to smallpox existed in the past, caused by viral lineages evolutionarily related to mVARV but now extinct. Based on paleogenomic analyses, Newfield et al. (2022, p. 913) suggest that aVARV may have caused a disease distinct from smallpox as we know it, differing in epidemiological aspects (such as transmissibility and outbreak patterns), etiological features (due to genomic divergences affecting virulence), clinical presentation (observable symptomatology), and pathogenic mechanisms (modes of immune response and tissue damage).
The hypothesis of an African origin for the poxvirus likely responsible for the Antonine Plague is supported by molecular analyses pointing to a zoonotic origin from a common ancestor with the Taterapox virus (found in African rodents) between 2000 and 4000 years ago, possibly in East Africa (Babkin and Babkina 2015, pp. 1100–8; Harper 2017, 2021b). In Egypt, the mummified body of Pharaoh Ramses V (c. 1157 BCE) displays pustular lesions consistent with clinical manifestations of smallpox, and other mummified remains from the same period show similar markings, suggesting the circulation of a poxviral disease in North Africa since at least the second millennium BCE (Moyer 2005, p. 2; Oldstone 2010, pp. 56–57; Sherman 2006).
This scenario supports the idea that the virus or its precursors may have spread from Africa through trade networks, military expeditions, and population movements, as also evidenced by reports of major pestilences in the Arabian Peninsula in the decade preceding the outbreak of the Antonine Plague in Rome. In India, ancient medical texts such as the Charaka Samhita and the Sushruta Samhita, dated between 1500 and 1000 BCE, describe illnesses with symptoms compatible with smallpox, while local religious practices were already directed toward a “goddess of smallpox,” suggesting a longstanding familiarity with the disease (Babkin and Babkina 2015, p. 1102; Moyer 2005, pp. 2–3). By the 130s CE, classical depictions of the goddess Hariti had begun to circulate in the Kushan Empire—an early sign, perhaps, of changing disease ecologies in the region. Although her association with smallpox emerged only in later sources, Hariti’s rising prominence in sculpture and religious practice during the late 2nd century suggests that South Asia may already have been a reservoir for the disease, or for a closely related pathogen (Oliveira 2022, p. 173). In China, although clinical descriptions only emerge more clearly in texts from the 4th century CE, it has been suggested that the virus may have reached the country via northern migration, possibly as early as the 3rd or even the 12th century BCE (Sabbatani and Fiorino 2009, p. 263). In the Near East, Hittite records from the 14th century BCE mention a prolonged pestilence acquired during conflicts with the Egyptians, which may have included variola-like manifestations (Cravioto and García 2014, p. 67). By contrast, robust evidence for the presence of smallpox in Europe prior to the 6th century CE remains scarce and contested, raising doubts about its early circulation in the western Mediterranean (Hughes et al. 2010, p. 53).
Newfield et al. (2022, p. 913) rightly challenge the hypothesis of the presence of any viral form capable of producing the modern form of smallpox prior to the 4th century CE, and they point to methodological limitations in the application of the molecular clock technique previously used to date the emergence of aVARV. Instead, they propose that aVARV and mVARV diverged sometime between the 4th and 16th centuries CE—thus significantly after the Antonine Plague and any supposed ancient smallpox epidemics. This conclusion, however, tells us nothing about: (1) the actual pathological potential of the last common ancestor (LCA-VARV) between the currently sequenced poxviruses (aVARV and mVARV), as well as that of even more primitive variants that are phylogenetically related to but predate this LCA, which may have been responsible for epidemics similar—but never identical!—to modern smallpox prior to the 4th century CE; and (2) how different the epidemiological, clinical, and pathogenic manifestations of these ultra-archaic strains may have been when compared to mVARV. So, the authors’ assertion that “it is time to eradicate smallpox from our histories of the ancient world and ancient plagues from our histories of smallpox” appears to be a hasty proposal14.
It is well established that the close coexistence of humans and herd animals—typical of pastoral and livestock-based economies—created biological bridges between host species, establishing a speciation niche particularly favorable to microorganisms with high rates of genetic mutation. In the 2nd century CE, as an increasing number of Afro-Eurasian populations exposed to livestock came into contact with densely populated urban centers, the conditions for zoonotic processes became increasingly plausible.
The emergence of new strains capable of establishing themselves in and adapting to novel host species has been a key driver of spillover events throughout history (Quammen 2013). However, the modern smallpox virus (mVARV) has Homo sapiens as its only active habitat, lacks non-human reservoirs, and requires direct interpersonal contact for transmission (Haller et al. 2014, pp. 18–19; Moyer 2005, p. 1; Thèves et al. 2014, pp. 210–11; Thèves et al. 2016). It is, therefore, a microorganism already specialized in a narrow ecological niche, enjoying evolutionary advantages that (most of the time) outweigh the risks associated with habitat restriction. In contrast, aVARV (the ancient variant of the virus, dated to the 7th–10th centuries CE) shows genetic markers that suggest its possible presence in non-human reservoirs (Elliott 2024, p. 199).
How, then, should we hypothesize the main features of the common ancestor of these variants—or those of even more primitive phylogenetically related strains? It is plausible to suggest that LCA-VARV was a zoonotic (Wolfe et al. 2007, p. 281), generalist virus capable of infecting multiple mammalian species. From this ancestral form, progressively more specialized lineages may have diverged, each adapting to specific ecological niches, with mVARV representing the most host-restricted and ecologically specialized variant among them. In light of this, contemporary reports suggesting that outbreaks of the disease in humans also affected non-human animals should not be dismissed outright. While references to cattle dying in ancient sources could very well have served a rhetorical function, as Duncan-Jones (1996, p. 111), Elliott (2024, p. 200), and Flemming (2019, p. 235) argue, this does not entirely rule out the possibility of zoonotic events occurring. These references, often found in the works of Seneca, Livy, and Herodian (Duncan-Jones 1996, p. 112, 2018, p. 51; Sabbatani and Fiorino 2009, pp. 261–63), may reflect a literary pattern, but we must consider that zoonotic outbreaks involving poxviruses are far from uncommon. Indeed, poxviruses, in general, infect a broad range of mammalian species, and one of their most distinctive evolutionary features is their capacity for interspecies host transfer (Haller et al. 2014, pp. 33–34; Hughes et al. 2010, p. 53).
Therefore,
“(…) one hypothesis is that Variola evolved from a rodent orthopoxvirus to become an obligate human pathogen, in Africa, sometime before the Antonine Plague. The biological agent of the second-century pestilence could represent an especially virulent lineage of Variola that went extinct, or an ancestral form of the virus that evolved into a milder medieval form of smallpox. And it still could have been caused by some other biological agent altogether, although there are no serious candidates at present”.
In sum, poor evidence is still better than no evidence at all, which makes it unduly hasty to dismiss past epidemiological reports, however unreliable they may seem. While paleogenomic analyses do not yet allow us to reconstruct the evolutionary history of the common ancestor of aVARV and mVARV (that is, prior to the 4th century CE), we do have ancient accounts of illnesses that resemble smallpox in some respects, consistent with the expected variation in the epidemiological, clinical, and pathogenic expression of these ultra-archaic poxviruses when compared to modern smallpox. Recent literature seems to be moving toward defending mortality rates lower than those caused by Variola major, though higher than the minimalist estimates—consistent with the finding that generalist pathogens, capable of developing in multiple species, tend to be less virulent and lethal than specialist pathogens (Brown et al. 2012; Kirchner and Roy 2002; Leggett et al. 2013; Visher and Boots 2020).
Beyond the frenzied spatial integration of societies, what else could have destabilized the pathocenotic equilibrium to the point of plunging previously exposed populations into an epidemiological process with pandemic features? Could there be an overlooked factor?

6. Not by the Plague Alone: Climate Change

There is solid evidence that the pathocenotic imbalance resulted not only from intense territorial integration, the movement of organisms over long distances, growing urbanization, sanitary problems, and the chronic malnutrition common to Afro-Eurasian cities. One crucial element enhanced the compound effects of these processes, enabling the emergence of scenarios that had once appeared unlikely. If a poxvirus was indeed responsible for the Antonine Plague (as current evidence suggests), then, based on what is known about these pathogens, its survival and transmission would have been favored by temperatures below 22 °C and low humidity, while its infectivity would have diminished at temperatures exceeding 30 °C and relative humidity levels above 55% (McDonald 2021, pp. 374, 398). It is perhaps no coincidence that the Antonine Plague was triggered precisely under environmental conditions conducive to massive viral proliferation.
Planetary-scale climatic transformations plunged southern Europe, Central Asia, and the Near East into a prolonged cooling period that began around 150 CE (McCormick et al. 2012, p. 202). A high-resolution reconstruction of temperature and precipitation patterns in southern Italy, based on marine evidence, indicates climatic instability marked by a cooling trend and a decline in river discharge (suggesting prolonged droughts) beginning around 100 CE and intensifying by 130 CE (Erdkamp 2021, p. 426; Marzano 2021, pp. 506–7; Zonneveld et al. 2024, p. 5).
Multiple paleoclimatic records from the Eastern Mediterranean—including data from the Sofular and Uzuntarla caves (in present-day Turkey), Jeita (Lebanon), and Soreq (Israel)—point to a trend toward aridification in the mid-second century CE. These findings are corroborated by dendrochronological analyses of samples from Central Europe and the Altai Mountains in Siberia (McDonald 2021, pp. 373–82). In Egypt, although comprehensive paleoclimatic data are lacking, there is evidence of a decrease in water volume from the Nile’s annual floods during the 160s CE. The advance of glaciers (such as the Great Aletsch in present-day Switzerland) between 155 and 180 CE suggests that reduced temperatures may have limited both snowmelt and precipitation in the Ethiopian Highlands, thereby affecting the hydrological dynamics of the Nile, in contrast to the intense flood periods recorded between 30 BCE and 155 CE.
Evidence of declining precipitation across a broad latitudinal band stretching from the Mediterranean to East Asia is further supported by dendrochronological data from central China (McDonald 2021, p. 374); even more compelling are similar findings from North America, indicating low precipitation levels over the Rio Grande between 148 CE and 173 CE, suggesting a period of drought (Elliott 2016, p. 24). The fact that the central Sahara experienced a wetter phase between approximately 160 CE and 200 CE, however, illustrates the latitudinal limits of the aridification and cooling trend, a prelude to what would later become known as the Late Antiquity Little Ice Age (LALIA), beginning in the sixth century CE (Büntgen et al. 2016; Shi et al. 2022).
Let us consider, then, that during the first three and a half centuries of the Roman Warm Period (from 250 BCE to 400 CE), average temperatures higher than those of the overall pre-industrial period—especially in the Mediterranean region—combined with relative climatic stability (marked by low frequency of extreme events) and agricultural expansion (along with greater energy availability), enabled the maintenance of pathocenotic equilibrium. However, beginning around 100 CE, the last three centuries of the Roman Warm Period were characterized by a downward trend in environmental stability, creating favorable conditions for the spread of microorganisms that had previously remained endemic.
Volcanic activity plays a complex and significant role in shaping global cooling and regional aridification, mainly through the injection of sulfur dioxide (SO2) into the stratosphere, which forms sulfate aerosols that reflect incoming solar radiation. The cooling caused by frequent or clustered volcanic events can generate persistent aridification (Cooper et al. 2018; Stenchikov 2009; Monerie et al. 2017). Geological records point to an eruption in the far east of present-day Russia in 163 CE (probably of the Ksudach volcano, in present-day Kamchatka province) and indicate that, around 170 CE, a second eruption of even greater size, but of undetermined origin, left its mark on Eurasia. The abrupt drop in temperature and rainfall—its impact diminishing with distance from the source—most severely affected Central Asia, where average temperatures plummeted by approximately 4 °C, an outcome clearly linked to this series of volcanic events (Duncan-Jones 2018, pp. 60–61; McDonald 2021, p. 383).
Elliott (2016, p. 24) further suggests that this series of volcanic phenomena may have had broader global effects, establishing a climatic link between North America and parts of Afro-Eurasia, through which changes in temperature and ocean currents (particularly El Niño and La Niña events) directly influenced the hydrological dynamics of the Nile River. In addition, volcanic phenomena originating outside the Afro-Eurasian territory may also need to be taken into account:
“Several of the eruptions in the AD 150s register high numbers (4 and above) on the VEI scale. The most impactful eruption was that of the Masaya Volcano in Nicaragua, which ejected between one and ten cubic kilometers of ejecta, placing it between 5 and 6 on the VEI scale (…). Readings from several Greenland ice cores show the levels of volcanic sulphates at a high enough resolution to demonstrate that volcanic activity from the early 150s to the mid 160s was severe enough to produce climatic changes (…) The data confirms either an uncommonly large eruption or several smaller eruptions which produced increased levels of sulphate aerosols between AD 150 and 163. Even more interesting, however, is that these eruptions coincided with spikes of non-volcanic atmospheric sulphate, perhaps from wildfires”.
The Antonine Plague thus coincided with a period of worsening climatic conditions, linked to significant geological processes. The causal connections that might have integrated these phenomena in a unidirectional way—from climate change to pathocenotic imbalance, within the broader context of spatial integration among societies—are plausible, and this remains true even if, as Haldon et al. (2018, pp. 4–5) argue, the climatic impacts were regionally and temporally uneven, particularly in southern Europe.
Indeed, that appears to have been the case, although the authors’ critique of Harper (2017) rests on a mistaken assessment: Harper is explicit in asserting that the climatic transformation of the second century CE was non-linear, unstable, and geographically heterogeneous, unfolding across multiple temporal and spatial scales simultaneously—like a “carousel” moving in different directions and at varying speeds (Harper 2017, p. 41). Thus, although there is a long-term trend toward cooling and aridification between the second and third centuries CE, this trend is not a “straitjacket” that automatically defines all microclimates within the empire. The Mediterranean, for example, is described as a “tessellation of microclimates” (Harper 2017, p. 43), with the consequences of climate change portrayed as “eminently local”. This implies that regions across the entire latitudinal zone affected by the exhaustion cycle of the Roman Warm Period could locally exhibit hydrological, moisture-related, and climatic profiles that diverged from the broader long-term trend.
What is most crucial—though increasingly obscured as the critique by Haldon et al. (2018) descends into rhetorical excess—is the fact, not unknown to the authors, that we are dealing with climatic-geological systems in which abrupt changes, even if local, possess a considerable potential for large-scale, long-term disruption. What compounds the severity of this situation is the entanglement between the climatic-geological system and a socio-economic world-system. A trend toward cooling and aridification affecting key nodes of the world-system (such as the Nile Valley, whose agricultural economy was vital to the energy supply of the Italian peninsula) would already be sufficient to trigger significant disturbances in overall systemic integrity. When this climatic-economic shock is compounded by the outbreak of a devastating plague, we are faced with what Cline (2021) has termed a “perfect storm of calamities,” echoing his description of the collapse of Bronze Age civilizations.

7. Broken Links and System Failure

The Antonine Plague was enmeshed in a series of processes that led to the collapse of established connections, marking the decline of the Afro-Eurasian world-system that had existed since the 1st century BCE. Critiques highlighting the cumulative, yet non-deterministic nature of the plague’s impact seem well-founded (Haldon et al. 2018), given that only within the broader context of intense organismic flows, tightly knit exchange networks, and significant climate change can the full extent of the disease’s catastrophic effects be understood. Still, its central role in the collapse of the systemic network should never be underestimated.
Events signaling the collapse of the world-system—and, at the local level, the disorder afflicting its constituent units—unfolded in an almost synchronized manner, likely reinforcing one another. Since at least 100 CE, the relative climatic stability of the Roman Warm Period—which, despite local fluctuations, had characterized the era—gave way to more frequent and intense climatic oscillations, marked by a sustained, long-term trend of decreasing temperatures in the Northern Temperate Zone (between the Tropic of Cancer and the Arctic Circle). Climatic unpredictability, along with shifting hydrological and rainfall patterns, can be as disruptive to economic stability as catastrophic droughts or torrential rains. Agricultural economies operating with the technology of the second century CE (and, to a significant extent, even today) depend on the development of sophisticated anticipatory skills. Since the gap between production decisions and their outcomes creates a substantial time lag, the capacity for short-term adaptation is severely limited. As the frequency and intensity of climatic fluctuations rise, these anticipatory abilities are increasingly undermined, leading almost inevitably to agricultural crises.
Within this framework, it is crucial to consider the central role of the Nile Valley as a key grain (essentially, energy) supplier to the more central regions of the Roman Empire, including the imperial capital. There are indications that Egypt’s population as a whole was less affected by the plague, despite the fact that it undoubtedly passed through the region—either advancing from the south or brought by caravans traveling along the Silk Road from the east. This outcome can be partly explained by the region’s relatively warmer climate, which would have limited the survival of the viral agent responsible for the plague, even during the winter months, despite the overall cooling trend. McDonald (2021, pp. 399–400) attributes the higher incidence of plague evidence in the Nile Delta—an area cooler and more humid than the southern reaches of the valley near the First Cataract—to this climatic factor. This does not suggest that Egyptian populations were immune to the disease, but rather that its impact was likely less severe there than in other imperial centers further north.
However, counteracting the potentially mitigating effects of relatively higher temperatures, Egypt is estimated to have hosted the highest population density in the Roman Empire—with average densities of up to 300 people per square kilometer, while urban clusters such as Alexandria attained the astonishing figure of 50,000 inhabitants per square kilometer, comparable to modern metropolises (Scheidel 2001). The Nile, in turn, functioned as a natural artery for the transport of people, goods, and—consequently—pathogens, facilitating the circulation of diseases among population centers. This connectivity was amplified by Egypt’s strategic position as a corridor connecting Sub-Saharan Africa, North Africa, Asia, and Europe, increasing its exposure to migratory flows and commercial exchanges that heightened the risk of epidemic outbreaks (Harper 2017).
Thus, balanced between high population density and elevated temperatures, Egypt remained an epidemiological risk zone, though it faced somewhat more moderate exposure compared to the imperial capital, Rome. However, the network of interactions must be understood in three dimensions: two correspond to geographical space, while the third refers to the systemic significance of historical events occurring at certain nodes within this network. In this context, Egypt played a critical thermodynamic role in maintaining the structural integrity of the Roman Empire. Its relatively lower exposure to the disease did not spare the region from plunging into a whirlwind of environmental, economic, and social crises; and, as we shall see, the centrality of Egypt’s agricultural economy in supplying food to the imperial capital made it, in a sense, the first domino to trigger the destructive effects that would ultimately paralyze the systemic integration of Afro-Eurasia.
The crisis that emerged from the apparent abandonment of many rural Egyptian villages during the second century CE—such as those in the Arsinoite nome (in the Fayum) and the Mendesian nome (in the Delta)—was undoubtedly fueled by the plague, though it was not its sole cause. The primary cause of the Egyptian crisis can be attributed to climate change, further aggravated by the extensive cultivation of marginal lands—a process encouraged by the more stable conditions of the first phase of the Roman Warm Period, up until 150 CE. “Roman rule introduced Egypt and Romanized Egyptian elites to new agricultural techniques, storage technologies and, perhaps most importantly, the Roman cultural impulses which prized farming and the domination of nature as a mark of status and civilization,” and in this spirit, agricultural development on marginal lands emerged as a civilizing mission—an imposition of order upon chaotic nature—an endeavor incentivized by tax exemptions (Elliott 2016, p. 28). Needless to say, agriculture under such conditions was heavily dependent on human intervention, so that any failure in artificial irrigation systems—whether due to a water crisis or difficulties in replenishing capital in the form of hydraulic works—had severe consequences. Diamond (2011) suggests that among the causal factors of collapse, ideological recklessness and lack of social memory should not be underestimated; temporarily favorable climatic conditions, combined with a strong dose of civilizing voluntarism, had definitively pushed Egyptian agrarian systems in unsustainable directions (Scheidel 2012a).
In the event of insufficient Nile floods, beginning in the reign of Antoninus Pius (138–161 CE), landowners could declare their lands as abrochos (ἄβροχος) in order to obtain full exemption or a significant reduction in taxes, since the inability to meet tax obligations, in such cases, resulted from causes beyond the farmer’s control. These exemption requests appear in papyri from the nomes of Arsinoe, Oxyrhynchus, and Hermopolite between 158 and 245 CE, with the state waiving taxes on those lands or granting discounts, according to administrative records from that period (Bruun 2007, p. 205; Elliott 2024, pp. 31–32; Harper 2017, p. 134). It is significant that these records first appear in the 150s CE, a moment when climatic deterioration began to leave its mark and the Nile’s inundations were notably insufficient (until 180 CE) compared with earlier and later periods (Elliott 2024, pp. 33–34; McCormick et al. 2012, pp. 188–89, 202).
The abrochia declarations thus served as relevant proxies for the intensity of Nile floods, and we find that in the years 163–164 CE they reached their peak, suggesting the occurrence of water scarcity. The number of such records declines over the course of the decade, disappears entirely after 172 CE, reappears in 190 CE, and vanishes again after 195 CE (Duncan-Jones 2018, p. 47; Elliott 2016, pp. 25–26).
What does this mean? First, it may suggest that important nomes were facing hydraulic challenges coinciding with the general advance of aridification and declining temperatures, which affected snowmelt and monsoons—especially at high elevations—throughout the 150s and 160s CE, a situation that would continue to affect Egyptian agricultural villages throughout the third century (Huebner 2020). In that case, the impact on Egyptian grain production, which was of crucial importance in the imperial supply networks, would have been inevitable (McCormick et al. 2012, p. 188; McDonald 2021, pp. 379, 400). At the same time, the absence of records concerning unirrigated lands may also point to administrative disarray stemming from the spread of the Antonine Plague through Egyptian urban settlements, which would imply inefficiencies in the social mechanisms necessary to sustain imperial socioeconomic complexity. A severe and widespread calamity tends to inhibit, rather than stimulate, the production of official records and private communications. If this applies to the context of the Antonine Plague, it is plausible that the pandemic contributed to the scarcity of detailed documentation regarding various aspects of economic and social life, including records relating to unirrigated lands (Andorlini 2012, p. 22; Duncan-Jones 2018, p. 47).
Another possibility is that the abandonment of irrigated lands stemmed from difficulties in mobilizing local labor to maintain the accumulated capital embedded in hydraulic infrastructure. The maintenance of Roman Egypt’s hydraulic infrastructure depended on an abundant rural workforce, but papyrological records suggest that this demographic base experienced substantial depopulation precisely during the critical period of the Antonine Plague (Harper 2017, p. 111). In the Fayum, a steady decline in the number of taxpayers is observable: in Karanis, for example, the tax-paying population shrank by between 33% and 47% over twenty-five years, up to 171 CE, while in Soknopaiou Nesos, of the 244 men registered in 178 CE, no fewer than 59 had disappeared by January 179 CE, and another 19 by February (Elliott 2024, p. 117; Sabbatani and Fiorino 2009, p. 269). These figures reveal how villages became vulnerable to the interruption of indispensable collective tasks such as canal dredging and dike repair—activities that required an active and numerous community to avoid the risk of turning fertile land into dry fields (Gourevitch 2005, p. 61).
This population decline in the censuses cannot be explained solely by mortality caused by the epidemic. Mass flight (anachoresis) was a central factor, triggered by an inflexible fiscal system that continued to demand per capita taxes even as the population diminished. Elliott (2016, p. 4) emphasizes that the capitatio (laographia) imposed on peasants remained rigid, exerting pressure on rural villages already weakened by climatic fluctuations and short-term agricultural crises (Bruun 2012). When a taxpayer fled, their share of the tax burden was redistributed among those who remained, creating a perverse incentive for others to leave as well. Gilliam (1961) illustrates this cycle in Karanis: the fewer people who stayed, the heavier the individual burden became, encouraging even more abandonment. This pattern was repeated in various Delta localities, such as Kerkenouphis, where the combination of epidemic, famine, and banditry drove entire populations off the land.
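The runaway fiscal dynamic described by Gilliam (1961)—a fixed village quota redistributed per capita, with flight rising as the individual burden grows—can be sketched as a simple iterative model. The parameter values below (initial taxpayers, quota, “sustainable” burden, flight sensitivity) are hypothetical illustrations, not estimates derived from the papyrological record:

```python
# Illustrative model of fiscal flight (anachoresis): a fixed village tax
# quota is redistributed per capita among remaining taxpayers; each year,
# a fraction of taxpayers proportional to the burden in excess of a
# "sustainable" level flees. All parameter values are hypothetical.

def fiscal_flight(taxpayers, quota, sustainable, sensitivity, years):
    """Return (per-capita burdens by year, taxpayers remaining)."""
    burdens = []
    for _ in range(years):
        burden = quota / taxpayers
        burdens.append(burden)
        excess = max(0.0, burden - sustainable)
        leavers = int(taxpayers * min(1.0, sensitivity * excess))
        taxpayers = max(1, taxpayers - leavers)
    return burdens, taxpayers

# A village of 1000 taxpayers owing a quota only 10% above the sustainable level:
burdens, remaining = fiscal_flight(
    taxpayers=1000, quota=1100.0, sustainable=1.0, sensitivity=0.5, years=6
)
print([round(b, 2) for b in burdens], remaining)
```

Even a modest initial excess produces an accelerating spiral: the per-capita burden grows each year as the base shrinks, and the annual outflow grows with it, qualitatively echoing the rapid depopulation documented at Karanis and in the Delta villages.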
Karanis, which may have housed between 3636 and 4063 inhabitants, saw its population fall to somewhere between 2160 and 2560 between 171 and 174 CE—an estimated loss of about 40% in just three years (Gilliam 1961, pp. 240–41). This reduction aligns with other estimates of taxpayer disappearance, reinforcing the notion that flight was not an isolated phenomenon. Still, the indirect relationship remains evident: even if the plague did not kill everyone, it certainly exacerbated economic and fiscal pressures, making continued peasant presence in the fields unsustainable. Duncan-Jones (1996, p. 120) underscores that flight was a well-documented and frequent response in epidemic contexts—recognized even as a prophylactic measure by Roman medicine—yet always linked to a scenario of unbearable fiscal exploitation.
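As a quick arithmetic check, the population ranges reported for Karanis are internally consistent with the approximately 40% loss cited above:

```python
# Sanity check on the Karanis figures (Gilliam 1961): a population of
# 3636-4063 inhabitants falling to 2160-2560 between 171 and 174 CE.

loss_min = 1 - 2560 / 4063   # most optimistic pairing of the bounds
loss_max = 1 - 2160 / 3636   # most pessimistic pairing
print(f"implied loss: {loss_min:.0%} to {loss_max:.0%}")
```

The implied loss falls in the high thirties to low forties of percent, bracketing the roughly 40% figure.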
This collapse in the labor force had immediate practical effects on the agrarian system. The size of leased plots, which often exceeded 20 aroura before 166 CE, shrank to no more than 8 aroura by 191 CE, highlighting the contraction of cultivated land due to a shortage of hands to work the soil and maintain active irrigation systems (Andorlini 2012, pp. 22–23). In parallel, the duration of tenancy contracts expanded from annual terms to periods of four years or more—a reflection of landowners’ attempts to secure some degree of income predictability amid an acute labor shortage (Duncan-Jones 2018, p. 45; Haas 2006, p. 1093).
The Roman imperial administration faced practical limitations in taxing a province as strategically vital and peculiar as Egypt. Nevertheless, the papyri reveal that fiscal pressures persisted even amid acute crises of labor and transportation (De Romanis 2007, pp. 209–15). These fiscal demands continued unabated despite a chronic shortage of manpower, highlighting a sustained tension between the tax-enforcing state apparatus acting in the name of Rome and the increasingly burdened local labor force. In 165 CE, for instance, the prefect Flavius Titianus had to threaten the strategoi of the Arsinoite nome for failing to provide donkeys and handlers (ktēnotróphoi) to transport grain from the previous year’s harvest; 255 out of the 411 donkeys dispatched from the Oxyrhynchite had disappeared with their handlers, a 62% loss, reflecting desertions on a “vast scale” in the Heptanomia16. Even as late as 197 CE, the prefect Aemilius Saturninus was still forced to acknowledge that the local granaries remained unfilled because of the dwindling number of individuals who had traditionally carried out grain transport (Duncan-Jones 1996, p. 121).
These internal difficulties had a direct impact on the security of Rome’s grain supply. Between 180 and 191/192 CE, fears arose that delays in the collection and shipment of Egyptian grain might compromise its arrival in Alexandria before the mare clausum (closed-sea season). The creation of the classis Africana by Commodus (r. 180–192 CE) is direct testimony to imperial anxiety: a Roman naval fleet based in Proconsular Africa, whose main function was to protect the sea routes supplying grain, especially between Africa and Rome. Although its direct focus was African wheat, the fleet was also involved in the wider logistics of the annona, the state food supply system.
Likewise, the reorganization of the transporters of Arles and the appointment of a procurator ad annonam prouinciae Narbonensis et Liguriae under the Severan dynasty (193–235 CE) illustrate a broader pattern of fragility in the logistical flow. The existence of private grain imports to Rome during this period further underscores that the state provisioning apparatus was insufficient to cover the deficit caused by structural failures in Egypt. This combination of sanitary crisis, logistical disorganization, and fiscal exhaustion created a cycle of inefficiency, in which the lack of labor and draft animals delayed transport, raised service costs, and threatened Egypt’s primary function: to serve as the empire’s granary.
Despite these obstacles, tax collection in kind was not relaxed to a proportional degree. Bagnall (2000, p. 288) shows that in 184–185 CE, the Herakleides division in the Arsinoite was being charged a total of 814,862 artabas17 of wheat—an amount comparable to pre-plague production capacity. By contrast, in the fourth century, the Oxyrhynchite nome, of similar size, recorded only 321,278 artabas (Bagnall 2000, p. 288). This disparity reveals that the imperial administration in the second century, dealing with a higher proportion of public lands—taxed at rates of 4 to 5.2 artabas per aroura18—maintained assessment levels that ignored the plague’s impact on actual production (Bagnall 2000, p. 289). Evidence from villages such as Kerkenouphis, with a 70% to 93% population decline, and Karanis, with losses between 33% and 47% from 168/169 to 171, demonstrates that the productive base could not sustain the same fiscal burden without severe social consequences (Bagnall 2000, p. 292). This helps explain the widespread flight in the Fayum, where the high proportion of public land resulted in harsher fiscal pressure than in regions dominated by private holdings.
The result of this mismatch was the deterioration of the peasant surplus. It is significant that in many papyri, such as P. Oxy. LXVI 4527, the imperial tax remained fixed in absolute value, even though the actual pace of collection was so slow that more than a year was required to complete it (van Minnen 2001, p. 176). This rigidity, combined with declining production, shrank the surplus available for commercial exchange, pushing entire communities into a subsistence economy.
The imperial insistence on extracting the same level of tribute from a weakened productive base reinforced the pattern of flight. Bagnall (2000, p. 292) emphasizes that in cases such as the Mendesian nome, documentation clearly shows that “the taxpayers were exhausted by the burden placed on them by the government’s attempt to collect taxes from a declining population.” A village depopulated for years would see its dikes and canals silt up without maintenance, requiring enormous reconstruction efforts to restore productivity. Periodic flight in alternating regions fed a cycle of abandonment, technical collapse, and contraction of the market economy—precisely the dynamic that, over the medium term, transformed parts of the Fayum into abrochos zones: formerly productive lands rendered dry by the inability to maintain the accumulated hydraulic capital.
In addition to efforts to escape excessive taxation amid a context of climatic and productive crisis, other potential factors contributed to the dispersal of populations in rural Egyptian villages. Since at least the emergence of Hippocratic medicine, it was understood that air, water, or location—in short, environmental conditions—had an immediate impact on human health. Illnesses such as malaria (from the Italian mal’aria, meaning “evil air”), respiratory ailments, and diarrheal diseases were directly associated with seasonal effects produced by vapors emanating from stagnant water. These vapors, resulting from the putrefaction of organic matter, were referred to as miasmas, which led to the notion of “miasmatic diseases” caused by the inhalation of contaminated air (Cravioto and García 2007, pp. 10–11; Harper 2021b; Karamanou et al. 2021, p. 58). Galen subscribed to the miasmatic hypothesis, and it was through this framework that he interpreted the Antonine Plague, attributing it to “a changing of the air which does not let the specific character of the seasons keep the same.” It was clear to the physician of Pergamon how dangerous it was to be near a patient suffering, for example, from ophthalmia, leprosy, tuberculosis, or any of the so-called “pestilential fevers,” whose victims exuded a repulsive odor—whether from their wounds or their breath (Gourevitch 2005, p. 62).
There existed a narrative with mythical overtones, later disseminated by Ammianus Marcellinus, a fourth-century CE historian from Antioch, which attributed the origin of the Antonine Plague to the desecration of a golden sarcophagus in the temple of Apollo at Seleucia, in the context of the siege and sack of the city by Roman troops during the war against the Parthians19. From the sarcophagus, it was said, emanated a spiritus pestilens that cursed the troops and was brought back to Rome. A similar version is also recounted in the Historia Augusta, although in this account, the city of Babylon is named as the source of the plague, rather than Seleucia. The term spiritus is associated both with the soul (as an immaterial substance) and with breath and moving air; it is therefore reasonable to postulate that this metaphysical description ultimately pointed, albeit indirectly, to a material and empirically observable phenomenon—the transmission of contagious diseases through respiratory droplets and close human contact (Cravioto and García 2007, p. 12; Ruiz-Patiño 2020, p. 177; Sáez 2016, p. 218).
Social distancing and physical isolation were traditional practices for evading the harmful action of spiritus and miasmata. In Leviticus, for example, there are prescriptions for the control of leprosy that we would today classify as differential diagnosis, isolation, quarantine, and disinfection (Karamanou et al. 2021, p. 59). It is known that Galen fled from Rome as soon as the disease began to spread; his journey to Aquileia, however, did not spare him from encountering the plague once again (Duncan-Jones 2018, p. 42). During the outbreak of 190 CE, Emperor Commodus was advised to flee to a location far from the miasmas, and those who remained in Rome were instructed to burn copious amounts of incense and other aromatics as a means of purifying the air (Flemming 2019, p. 225). Thus, the belief in the noxious nature of air was not only known to physicians but also communicated to the general populace, and it is unsurprising that urban populations migrated to less densely inhabited regions whenever that was a feasible option (Andorlini 2012, p. 15; Duncan-Jones 1996, p. 110).
“People migrated in shoals to territories free from the plague and escaping—from nowadays perspective—was essentially the most effective method of avoiding the infection (…). Observing our ancestors’ accomplishments, one may spot that many people tried to do fuga (escaping), especially when the morbidity was escalating or one had received proven information about the plague’s spectrum”.
It is worth noting, however, that abandoning a major city to escape the effects of miasmas entailed a delicate trade-off. Centuries of urbanization had, by definition, produced vast populations deeply dependent on the functioning of the urban economy, and thus on a highly organic and interdependent system. Leaving imperial centers such as Rome, Ctesiphon, or Alexandria was no trivial decision for families whose survival depended not only on the redistributive economy operated by these imperial hubs, but also on food markets and on earning income through artisanal labor, commerce, and various urban trades. The division of labor, which enhanced the efficiency and productivity of these economies, also drastically reduced their resilience in the face of disruptions or obstructions to systemic flows—precisely the kind of crisis triggered jointly by climate change and the Antonine Plague.
In the second century CE, urban workers within the Roman Empire faced considerable challenges in reintegrating into rural life due to increasing occupational specialization. Most urban professions were tied to collegia, professional and religious associations that structured the economic and social lives of workers through internal regulations, patrons, and mutual support (Perry 2011). These guilds demanded specific technical skills, developed through lengthy apprenticeship processes that led to the erosion of more generalist competencies, such as agricultural know-how or familiarity with the seasonal rhythms of the countryside, rendering readaptation extremely difficult in times of crisis.
In Roman Italy, rural migrant laborers maintained functional ties to the land and were therefore capable of adapting quickly; this, however, was a far cry from the reality of permanent urban residents, whose reintegration into subsistence economies was constrained both logistically and socially (Lo Cascio 2016). Moreover, as De Ligt and Tacoma (2016) point out, although population movements within the Empire were frequent, they rarely entailed a successful transition from urban occupations to agricultural labor.
By contrast, rural Egyptian populations—who supplied food both to the granaries sustaining the state’s redistributive system and to the private market—were less entangled in the organic structure of urban economies and thus possessed alternative avenues for escaping social disorder and disease. Peasants could respond to the crisis by reducing the socio-economic complexity of the networks to which they were connected—and many indeed did so. This retreat into simpler forms of economy further exacerbated the labor shortage in large-scale agriculture, thereby intensifying the difficulties of provisioning the imperial urban centers.
The Nile Delta, with its fish-rich wetlands and abundance of edible aquatic plants, offered ideal conditions for entire villages to abandon agriculture—and, in doing so, to evade the grain taxes owed to the Roman authorities. Instead, they could survive in small, mobile groups made up of families or extended kin networks, sustaining themselves through foraging, hunting, fishing, and the tending of small herds such as cattle or water buffalo. In what was perhaps the more common scenario, only the men left the villages, seeking to provide for their families. Papyrological sources indicate that local authorities attempted to regulate these activities through the imposition of license fees, signaling both their widespread nature and their economic relevance (Elliott 2016, pp. 14–17, 28–29).
The depopulation of rural villages was further exacerbated by social chaos in the form of banditry, which fed upon—and was in turn fueled by—the fear of epidemic disease and the desire to evade taxation. The main protagonists of this disorder were the boukoloi, semi-nomadic herders of the Nile Delta who are identified in both literary and papyrological sources as key actors in a series of rebellions against Roman rule in the late second century CE (Andorlini 2012, p. 9; Oddo et al. 2023, p. 8). Documents such as the papyrus P. Thmouis 1 refer to the boukoloi as the “impious Nikochites”, with “impious” being a common designation for insurgent elements in administrative documents from the Roman East. Cassius Dio and Achilles Tatius described the boukoloi as “desperate men” (ἀποροῦντες), who took advantage of opportunities to plunder the traditional agrarian economy, spreading terror among the peasantry. It is not unlikely that the ranks of the boukoloi were swelled by their own victims, who, fleeing disease and the burdens of rural taxation, took up the semi-nomadic life of marauders themselves (Blouin 2010).
The impact of boukoloi activity on the Delta was substantial. Villages such as Thmouis (attacked in 166–167 CE) and Kerkenouphis (almost entirely destroyed in 170–171 CE) stand out as notorious examples of this social upheaval. The papyrus P. Thmouis 1 reports that of 19 settlements mentioned, 11 had been depopulated due to “attacks and ruin”. In 167–168 CE, villages such as Petetei, Psenharpokratis, and Psenbienchon, which had sought an alliance with the boukoloi, became targets of violent repression, with dozens of inhabitants executed by imperial troops. The apex of this process of social crisis took shape in a massive insurrection in the Delta around 172–173 CE, which even threatened the city of Alexandria—nearly a decade after climatic and epidemic crises had eroded Egypt’s rural fabric and almost certainly swelled the boukoloi into a formidable force of mobile fighters. The Roman response to the rebellion of 172–173 CE was swift and severe, led by the Roman army under the command of Avidius Cassius, a veteran of the Parthian campaigns (Elliott 2016, pp. 28–31). This scenario of armed conflict, epidemic outbreaks, and fiscal exploitation imposed a heavy burden on Egypt’s agrarian economy and, consequently, on the populations living at the heart of the imperial territory.
The impact of the Egyptian crisis must be understood within a broader framework: that of a pre-industrial Roman economy constrained by traditional Malthusian checks. Not even the period of relative prosperity—characterized by far-reaching economic integration and a stable political-territorial order between the first century BCE and the second century CE—was sufficient to decisively alter these structural constraints. The provisioning of the largest Roman cities posed a colossal challenge, further aggravated by population growth, which imposed high costs in terms of sanitary infrastructure. The diet of an ordinary imperial subject in the Italian peninsula was poorly diversified and heavily dependent on wheat. The quality of the wheat itself was often poor: it was frequently adulterated with vetch, rye, straw, and gravel, and infested with insects and their larvae. Galen himself remarked that the shipments arriving in Rome were replete with darnel, a weed host to harmful fungi capable of causing neurological illnesses (Elliott 2024, pp. 6–17, 213–16; Harper 2017, pp. 66–67).
In addition to being nutritionally poor (although calorie-dense in the case of wheat), food sources were far from abundant and were typically subject to significant logistical bottlenecks. It is estimated that every three to four years, a food shortage would strike the Mediterranean, often triggered by failures in transportation and distribution. Food insecurity was thus a persistent problem. The bounty of a plentiful harvest could be undone by a higher incidence of cargo ship sinkings, especially during seasons of severe storms. Market failures or breakdowns in state redistribution schemes frequently kept food out of reach for thousands, even when supplies were theoretically available. It was not uncommon for populations in the major cities of Italy to resort to animal fodder—such as oats or millet—or to raw, rotten, or even poisonous plants, including acorns, grasses, tree bark, roots, and wild mushrooms. Thus, subsistence was dependent on a system, but one with ineffective or virtually non-existent redundancies (Elliott 2024, pp. 40–47).
Famine left its traces inscribed in bones and teeth. Bioarchaeological studies reveal a high incidence of linear enamel hypoplasia (LEH) in the imperial capital—a growth defect that forms during childhood and indicates episodes of physiological stress due to malnutrition, infectious disease, or the interaction of both. In a large necropolis near Rome, between 80% and 92% of individuals exhibited signs of LEH, while approximately 77% displayed porotic hyperostosis, a bone condition associated with chronic anemia. Furthermore, analyses of average stature suggest that the population of Rome was shorter and of slighter build than other contemporary populations, reinforcing the notion that the Romans were exposed to poor health conditions. (Harper 2017, pp. 74–79; Scheidel 2009, pp. 11–12).
An economy of this nature, undergoing demographic expansion, thus operated at the very limits of its energetic capacity. Rome, with over a million inhabitants—whose living conditions and monotonous diet compromised their nutritional and immunological status—also faced chronic problems of waste and refuse accumulation in its streets. Even the considerable investments in infrastructure (aqueducts, sewer networks, and drainage systems) proved insufficient to contain the proliferation of parasites, which persisted at levels comparable to or even higher than those of earlier periods. Malaria, endemic in Rome and parts of the Campagna, fueled cyclical epidemics, raised mortality rates, and kept life expectancy in unhealthy zones exceptionally low—at times around 20 years (Sherman 2006). More critically, malaria further weakened immune defenses and heightened susceptibility to other infections, including respiratory illnesses (Elliott 2016, pp. 9–10).
Malnutrition was therefore a persistent and structural factor, rendering Roman urban centers perpetually vulnerable to epidemic outbreaks (Sajovec et al. 2024). Galen himself observed that desperate populations, forced to consume poor-quality food, became “fodder for local epidemics” (Elliott 2024, p. 43). The connection between food scarcity (limos) and epidemic disease (loimos) appears, indeed, to have been intuited long before the Classical period, as suggested by the etymological similarity between the terms.
The political priority of ensuring the provisioning of the imperial capital and the military forces mobilized significant agricultural surpluses but simultaneously consumed them inefficiently, disrupting distribution networks and straining the entire supply chain (Sabbatani and Fiorino 2009, pp. 266–74). This forced redistribution diverted vital resources from the countryside, intensified the threat of famine among rural populations, and pushed impoverished peasants toward already densely populated urban centers such as the imperial capital Rome, Antioch, and Ephesus. These cities functioned as true “population sinks”, continually absorbing migrants and refugees from the interior—individuals who were immunologically naïve to the disease pools of the cities and who were often compelled to beg and subsist on scraps and alms in overcrowded districts.
It is against this structural backdrop that we should understand the compounded impact of plague, climate change, tax evasion, fear of miasmas, and the disruptive actions of the Egyptian boukoloi.
The city of Rome was struck hard by the Antonine Plague for nearly two decades, and within the circumstances of the Afro-Eurasian world-system, this represented an unprecedented crisis at one of its most critical nodes, with far-reaching effects on the system as a whole. In 189 CE, Cassius Dio reportedly recorded a peak mortality of two thousand deaths in a single day—a certainly impressionistic testimony, but one that has long fueled maximalist interpretations of the pandemic’s consequences (Ferreira et al. 2023, p. 5; Ruiz-Patiño 2020, p. 179). He claimed that a quarter of the imperial capital’s population had perished over the span of twenty years. Decades earlier, Galen had remarked on the high lethality of the disease in Aquileia during the winter of 168–169 CE, reinforcing this devastating picture (Cravioto and García 2007, p. 19; Karasaridis and Chalupa 2025, pp. 13–14). It is likely that the disease struck the capital in two massive waves—one in 166 CE and another in 191 CE—infecting around 40% of the population and claiming roughly 40% of those infected. Overall, Karasaridis and Chalupa (2025) estimate that the plague imposed an additional mortality burden of up to 7% above the city’s baseline death rate, a figure aligning with the total death toll suggested by Littman and Littman (1973) more than four decades earlier.
Beyond Rome, the impact of the plague was equally devastating, though scholarly estimates vary: between 5 and 10 million deaths, representing approximately 7% to as much as 13% of the Empire’s estimated population of around 75 million (Geoffroy and Díaz 2020). More conservative estimates, such as those of J. F. Gilliam (1961), place the figure at 500,000 to 1 million deaths (1–2% of the population), while Walter Scheidel (2009) has suggested a mortality rate as high as 25%. The true number likely lies somewhere within this range, though it must be emphasized that perhaps even more significant than the mortality itself were the medium-term incapacitating effects and long-term consequences that jammed the operating mechanisms of the broader Afro-Eurasian world-system.
Under these structural conditions, evidence suggests that the convergence of catastrophes—climate, disease, famine—imposed a substantial cost on the imperial economy. Indicators point to systemic disruption in extractive supply chains between 164 and 190 CE, with simultaneous interruptions in quarrying and mining operations. Records document a hiatus in lead mining in the British Isles, in marble extraction at Teos, and in the Dacian mines—none of which resumed immediately after the suspensions of the 160s. Duncan-Jones (2018, pp. 54–55) interprets these breakdowns as evidence of catastrophic collapse in both underground and surface mining, exacerbated by labor shortages and contracting demand. Epigraphic evidence further supports this view: operations at the imperial marble quarries of Docimium (Phrygia)—which supplied Rome with pavonazzetto (a highly prized polychrome marble)—abruptly ceased in 166 and did not resume until 173 (Duncan-Jones 1996, p. 129).
In parallel, archaeometric data from Greenland ice cores reveal a drastic decline in atmospheric lead emissions beginning in the 160s CE, indicating a significant contraction in silver production. The Romans exploited deposits of galena, a lead ore that often contains varying amounts of silver. During the extraction and smelting of lead, the silver was separated and refined, making galena a dual source of valuable metals: lead for technical uses (such as plumbing, weights, anchors, epigraphic inscriptions, etc.) and silver for coinage. Thus, any contraction in lead mining would simultaneously indicate a drop in silver output (McDonald 2021, pp. 234–35). Modern archaeometry allows researchers to track this decline through the analysis of Greenland ice cores: lead released during smelting was carried by winds and deposited in annual layers of ice, serving as an atmospheric archive of metallurgical activity in Europe. The data show that, after a prolonged period of elevated emissions beginning in 17 BCE, there was an abrupt drop in lead concentrations in the ice around the 160s CE, coinciding with the onset of the Antonine Plague (McConnell et al. 2018, pp. 5727–28).
This collapse is substantiated by evidence such as the abrupt cessation of operations at the Rio Tinto mines in southwestern Spain between 170 and 180 CE, and by a wax tablet from Alburnus Maior (Dacia), dated to 167 CE, which records the dissolution of a collegium funeraticium due to the absence of two-thirds of its members—clear evidence of the mines being abandoned because of mortality or flight (Mitrofan 2014; Silver 2011, pp. 133–34). In ancient Rome, a collegium funeraticium was a funerary association composed of workers, whose purpose was to ensure proper burial rites and, by extension, reflected communal cohesion and stability. Its continued existence required a stable local population, regular contributions, and an internal network of mutual support. Its dissolution thus signals a moment of drastic discontinuity, such as large-scale mortality, the flight of laborers, or economic collapse, that rendered both funerary rites and the communal life sustaining them unviable.
The possibility of a pronounced economic contraction, triggered by the combined disruptions of climatic crisis, plague, and social turmoil, is further reinforced by the analysis of dendrochronological records across Europe (Bernabei et al. 2019; Büntgen et al. 2011). These records indicate significant fluctuations in deforestation patterns over the course of the second century CE, with a notable peak in the early decades followed by a steep decline and two partial recoveries around 150 CE and in the early 200s. A marked advance in reforestation, especially during the 170s, 180s, and 190s, suggests a significant contraction in construction activity and a corresponding drop in the demand for timber and charcoal—essential inputs for building and metallurgical operations. This scenario aligns with large-scale economic dislocation driven by population dispersal, the high mortality caused by the plague, and, above all, its medium-term incapacitating effects on the labor force (Duncan-Jones 2018, p. 59).
Understanding the impact of events related to the Antonine Plague on income distribution in the wealthiest regions of the Roman Empire is crucial to assessing the extent to which long-distance trade networks were affected. The available evidence is fragmentary and often contradictory, but its underlying elements can offer clues about the weakening of luxury goods circulation networks when combined with indications of the plague striking cities along the Silk Roads. In theory, it is reasonable to expect that a mortality crisis marked by sharp peaks—as appears to have been the case in 166 CE and 191 CE—would have led to significant increases in the relative earnings of surviving workers and a reduction in inequality, given a likely rise in their bargaining power vis-à-vis the propertied classes (Harper 2021b; Scheidel 2009, pp. 15–18).
“Perhaps the biggest unacknowledged question of Roman economic history is whether population pressure was already mounting before the imperial power structure started to unravel or whether the epidemics of the second and third centuries CE provided temporary relief (or instead made matters worse). Empirical data are consistent with the presence of Malthusian mechanisms: real wages rose in the wake of epidemics and body height, a marker of physiological well-being, declined under Roman rule but recovered afterward. This suggests that in the long run, the Roman economy was unable to overcome fundamental demographic constraints on intensive economic growth”.
And even if the issue did not involve widespread mortality, the evidence of evasion from agricultural labor and tax obligations would lead to the same outcome: a greater amount of land available per worker willing to cultivate it (Elliott 2016, p. 4). There are indications that daily and monthly wages for unskilled rural laborers in Egypt rose more rapidly than the prices of key consumer staples such as wheat, wine, and olive oil; as for tenant farmers, it appears that land rents fell significantly. In Tebtunis (in the Fayum), papyrological records show a rise in wages between 152 and 169–170 CE, with average daily rates increasing from 6 obols20 (with variations between 4 and 7) in 152, to 8 obols in 166, and reaching between 10 and 14 obols (with a median of 12) in 169–170 (Duncan-Jones 1996, p. 124). Nonetheless, it is doubtful that this improvement in bargaining power was sustained over time, especially given the gradual recovery of the population over the following century and the extra-economic measures taken by the elite to contain the erosion of their relative wealth (Elliott 2024; Oddo et al. 2023, p. 22).
The decline in annual rents from wheat fields in Egypt is also significant: while in the aggregate for the pre-pandemic period between 100 and 165 CE, the median rent in the Arsinoite nome stood at 7.63 artabas per aroura, this average fell to 3.55 in the aftermath of the crisis, between 211 and 268 CE (the most extreme example); in Oxyrhynchus, the drop was from 8 artabas per aroura (103–165 CE) to 6 artabas per aroura (205–262 CE), illustrating regional inequalities in the reduction of land income (Scheidel 2009, pp. 18–23). According to Duncan-Jones (1996, p. 123), the highest monetary rents recorded between 165 and 183 CE amounted to only 50%—or less—of the highest values observed in earlier periods. A comparable phenomenon appears to have occurred in Asia Minor and the Italian peninsula, suggesting a pattern not confined to Roman Egypt alone (Scheidel 2007).
Even though imperial elites mobilized their political power to prevent significant advances in wealth redistribution, the late second century CE was marked by developments that, while preserving their relative status, nonetheless point to a possible absolute impoverishment of these elites. In addition to the decline in land rents and the rise in wages, the impact of the Antonine Plague on slavery—an institution central to maintaining the Empire’s economic and social hierarchies—must also be considered (Scheidel 2012b, p. 89).
Estimates for Egypt suggest that about 15% of the urban population and more than 8% of the rural population in Middle Egypt were enslaved, with regional variations. In Italy, estimates place the rural slave population between 250,000 and 750,000 individuals, while urban slavery is thought to have exceeded this figure, totaling between 1 million and 1.5 million enslaved people—roughly 15 to 20% of the local population. On this basis, it is calculated that slaves constituted about 10% of the total population of the Empire (Scheidel 2012b, pp. 90–92).
The outbreak of the Antonine Plague had unequal effects across social strata, with the enslaved population disproportionately affected. Due to their precarious living conditions, communal sleeping arrangements, and inferior diets, slaves were particularly vulnerable to the disease (Duncan-Jones 1996, p. 113). Although the Roman slave system encompassed a wide range of forms—from agricultural slaves to administrators of large estates and even professionals such as physicians and midwives—all remained in a subordinate position within the social hierarchy. The norms governing the distribution of food within elite households, for instance, reflected this hierarchy, structurally disadvantaging the enslaved, even when their general material conditions were relatively better than those of rural captives (Scheidel 2012b, pp. 90–91; Veyne 1995, pp. 61–63).
Contemporary testimonies corroborate the disproportionate impact of the plague on enslaved populations: Galen reported losing all his slaves, while Aelius Aristides noted that many of his neighbors’ household servants fell ill. Chronicles of earlier outbreaks, such as that of 174 BCE, already indicated particularly high mortality among slaves, and the absence of proper burial practices for these groups revealed their social marginalization. Accounts of unburied slave corpses point to a dangerous sanitary overload (Duncan-Jones 1996, 2018).
These data suggest that the Antonine Plague may have played a decisive role in exhausting the viability of the slave-based model, at least in Italy. Scheidel (2012b) argues that the only way to sustain an enslaved population of such magnitude for centuries would have been through natural reproduction, as external sources—piracy, warfare, or infant abandonment—were demographically insufficient. In this sense, the population decline caused by the plague would have undermined this reproductive base, rendering the system increasingly unviable. As Bruun (2007, p. 216) observes, it is difficult to support the notion of a large captive population by the late second and early third centuries, reinforcing the hypothesis that the plague eroded its demographic foundations. The particular vulnerability of slaves to infectious mortality would have further compromised the feasibility of reproduction as a means of replenishing the labor force.
Thus, when combined with declining land rents, heightened competition for surplus with the peasant masses, and the blow dealt to the enslaved labor force, there is a basis for hypothesizing that the events surrounding the Antonine Plague imposed, albeit with regional variations, a tangible impact on the imperial elites’ patterns of luxury consumption. These elites were a crucial link connecting the westernmost regions of the then Afro-Eurasian world-system with its easternmost segments.
The vigorous cycle of construction that characterized Italian cities and towns since the early first century CE virtually disappeared by the end of the second century CE; in places such as Forum Popilii, a progressive abandonment of luxurious houses is observed in the third century CE, with no evidence of restoration or upkeep—revealing either a prior loss of economic capacity or a waning interest among local elites in maintaining their urban properties. This decline is likewise reflected in the cessation of production of ceramic goods such as amphorae, tiles, and bricks from the 170s CE onwards, likely caused by interrelated factors including the collapse of trade networks, a drastic reduction in demand, or shortages of natural resources such as the fuel required for kilns.
In southern Tuscany, rural villas showed even clearer signs of this collapse: rooms once used as living quarters were repurposed for productive activities such as smelting, reflecting a forced adaptation to new economic conditions. At the same time, there was a shift away from large-scale viticulture—long associated with aristocratic prestige—toward pig farming and the cultivation of cheaper crops such as grains, particularly on estates absorbed into the imperial fiscus, where luxury ceased to be a priority. However, this pattern was not uniform: in northern Tuscany, senatorial families continued to invest in and maintain their properties, demonstrating that the reorganization of landholding and specific regional contexts played a crucial role in shaping how the decline affected different segments of the Roman elite (Marzano 2021, pp. 505–15, 520–24).
This process of economic weakening among the Roman elite was accompanied by an equally profound transformation in their civic attitudes and practices. From the late first to the end of the second century CE, provincial urban elites actively funded the construction of fora, temples, theatres, bathhouses, and other public monuments that shaped the character of Roman cities in the West, often using their own resources in acts of munificence designed to affirm their social and political standing. This practice was also linked to investments in festivals, food provisioning, and other forms of public beneficence, actions that reaffirmed their dominant position within local communities. By the end of the century, however, this civic impulse went into marked decline, as shown by the drastic reduction in public inscriptions commemorating the achievements and offices held by members of ruling families. This trend can be observed both in Gaul and in Hispania, where from the second half of the second century onward, elites saw less utility in inscribing their deeds in stone or in sustaining the monumental fabric of their cities. Thus, by the third century CE, a reconfiguration of the relationship between elites and urban spaces had taken place, with the collapse of a culture of civic expenditure that had been central to the identity and power of the Roman ruling class (Erdkamp 2019, pp. 456–57).
To all these events was added a monetary crisis, already foreshadowed by the contraction in silver production mentioned earlier. The minting of coins in Rome collapsed between December 166 and December 167 CE, falling to roughly one-third of its previous average—a drop attributed to social disorganization and the mortality of workers in the minting services (Elliott 2024, p. 88; Sabbatani and Fiorino 2009, p. 269). Nor was this impact limited to the imperial capital: in Egypt, a sharp decline in the silver content of coins was observed beginning in 164–165 CE, intensifying in 167–168 CE, and culminating in a complete halt of silver coinage in Alexandria between 170 and 180 CE. Similar disruptions occurred in local mints in Palestine (166–176 CE) and Syria (169–177 CE), pointing to a systemic crisis in the issuance of currency (Harper 2017, p. 113; Kennedy 2024). Analysis of hoarded coin stocks confirms an abrupt drop in 167 CE, although followed by a partial recovery, indicating that the plague caused a sharp but ultimately temporary interruption in monetary production (Duncan-Jones 1996, p. 133). The chronological alignment between the Roman monetary crisis and Egyptian numismatic evidence strengthens the hypothesis of a common origin linked to the pandemic (Duncan-Jones 1996, p. 134).
Under Commodus (r. 180–192 CE), devaluation deepened: although the silver content of the denarius fell only slightly, the weight was drastically reduced, resulting in a loss of approximately 25% in the purchasing power of wages paid in silver between 178 and 190 CE (Elliott 2024, pp. 179–80, 224). Coinage remained “chronically low,” signaling that the shortage of metals (already evident under Marcus Aurelius) persisted or even worsened. This translated into a significant contraction of the supply of high-quality coinage, accompanied by the proliferation of debased coins. This transformation led to the partial demonetization of some regions of the Empire, especially in the West, with profound impacts on land contracts, which depended on monetary payments. Devaluation eroded trust in currency, forcing landlords to renegotiate terms or accept less advantageous agreements in times of monetary uncertainty (Howgego 1992).
This crisis of monetary confidence had collateral effects on the economic integration of the Empire. Before the second century CE, monetary circulation was closely tied to tax collection in cash, land rents, and government spending in distant provinces, all of which fostered economic cohesion and commercial fluidity (Hopkins 1980, pp. 12–13). From the late second century CE onward, however, this integration began to unravel: the supposed “completely mixed coinage”—implying the free circulation of currency throughout the Empire—did not withstand empirical scrutiny. The distribution of certain coin types became highly regionalized, with some variants found predominantly in specific regions, such as Britain or Gaul (De Ligt 2002, pp. 46–48). The movement of troops and the transfer of military pay could temporarily redistribute coins, but were insufficient to override established regional patterns. Thus, monetary circulation became increasingly localized, especially in frontier provinces such as Pannonia, the Rhineland, and Britain, where the presence of large armies sustained monetization (Blois 2002, pp. 214–16).
This process of economic disintegration was also reflected in the Empire’s external sphere. Pliny the Elder (23–79 CE) had been among the most vocal critics of Rome’s trade deficit with the eastern cities and production centers, which, he claimed, drained an impressive 100 million sesterces per year from the imperial economy (Harper 2017, p. 94). From the reign of Marcus Aurelius onward, however, archeological finds of Roman coins in India became notably scarce, suggesting either a decline in long-distance trade or changes in financial practices that limited the outflow of Roman currency (Paolilli 2008; Young 2003, pp. 282–85). This drain of currency formed part of a broader net outflow of precious metals, driven both by compulsory payments to frontier peoples and by persistent trade imbalances with the East. The end of mining expansion under Trajan, the suspension of mining operations during Marcus Aurelius’s reign, and the lack of significant new sources all contributed to this metal supply crisis (Howgego 1992, pp. 7–10). Although the denarius was favored in eastern trade for its high purity, its gradual debasement under Commodus and the Severans—accelerating sharply from 238 CE—led to a loss of confidence in the Roman monetary standard and to the contraction of Red Sea trade by the late third century (Nappo 2007, pp. 236–38).
In response to this deterioration, the Roman state increasingly resorted to levies in kind—collecting goods instead of money—a practice that intensified during the third century CE but whose roots can be traced to the late second century. Faced with the impossibility of creating new taxes and spiraling inflation, local governments began collecting taxes directly in agricultural products, particularly to cover the costs of the army (Howgego 1992, pp. 5–8; Hopkins 1980, pp. 123–25). Debasement allowed military commanders to use inferior coinage to procure supplies, harming local producers. As a result, an expansion of the fiscal bureaucracy became necessary to enforce in-kind taxation, which reduced the monetization of the urban economy, diminished commercial activity, and weakened cities as economic hubs. The transition to a fiscal regime based on direct exchange and payments in goods would culminate, in the following centuries, in the collapse of the classical Roman monetary system and the emergence of more localized and less complex economic structures (Paolilli 2008, p. 283).
This accumulation of crises thus imposed a significant cost on the Empire’s commercial integration with the East; intercontinental trade structures buckled under the weight of multi-level disruptions. While some indicators—such as the number of amphorae found at archeological sites—do not reveal an immediate commercial collapse, more dynamic evidence, such as the decline in documented shipwrecks, points to a contraction in maritime trade, with a sharper downturn during the reign of Marcus Aurelius (Paolilli 2008, pp. 274–78; Whittow 2015, pp. 138–39). This economic contraction affected not only the circulation of luxury goods but also severely weakened the logistical infrastructure of the Roman Empire. A striking illustration is the collapse of caravan trade through Palmyra between 161 and 193 CE, coupled with an entire decade of silence in trade records from Antioch (Sabbatani and Fiorino 2009, pp. 269–70). The Antonine Plague, which disproportionately impacted urban centers linked by the Silk Roads in Asia, further disrupted long-distance commercial networks. This is evidenced by marked shifts in ceramic distribution patterns, reflecting a breakdown in large-scale market connectivity (Elliott 2024, pp. 230–31).
The decline of Afro-Eurasian integration was not limited to the movement of goods but profoundly affected the commercial agents operating within it. The inscription of Tyrian merchants in Puteoli (174–175 CE) records that they had once been “numerous and wealthy,” but now were “fewer in number,” to the point of requesting subsidies (Elliott 2024, pp. 161–62). The retreat of connectivity was accompanied by the progressive dismantling of logistical systems, such as Nile river transport, which became increasingly coercive by the late second century CE. The obligation to carry state grain at below-market prices turned maritime commerce into a burden, driving marginal enterprises to bankruptcy and causing the irreversible loss of productive capital. With the collapse of the profitability of long-distance transport, even basic commodities like oil and wine circulated less widely and became more expensive, while luxury products ceased to be imported regularly (Elliott 2024, pp. 239–40).

8. Collapse, West and East

Roman trade and power networks were a remarkable feature of the Afro-Eurasian world-system, and it is not surprising that a major crisis affecting its key nodes ultimately produced transcontinental cascading effects. One might argue, however, that it takes more than the Roman Empire alone to sustain a continental world-system, and that the long-distance networks could have contracted rather than collapsed entirely, maintaining connections between the Asian empires. Yet the easternmost nodes of this system—the major cities under Han political control—were simultaneously experiencing their own period of crisis, much of it correlated with the same factors that pushed the Roman integrative web toward collapse.
The fall of the Han dynasty in 220 CE brought down one of China’s most durable imperial institutions and plunged the realm into centuries of political fragmentation known as the Era of Disunity. While traditional historiography has emphasized political factors—succession crises, factional struggles, and administrative corruption—recent scholarship has increasingly highlighted the crucial role of environmental and epidemiological pressures in undermining the dynasty’s structural foundations. The Han collapse represents a paradigmatic case of how climate change, natural disasters, and epidemic disease can interact with political fragility to produce systemic crisis in pre-modern agrarian empires. The Romans found themselves in troubled company: from west to east, the great empires of the Afro-Eurasian world-system were simultaneously convulsed by crisis.
The imperial authority of the Han dynasty during the second century CE was characterized by inherent fragility resulting from structural political factors that undermined governmental stability and left the dynasty ill-equipped to withstand profound pressures from climate change and recurring epidemics. Beginning with Emperor He (r. 88–106 CE), who ascended the throne at age nine, all subsequent rulers of the Later Han came to power as children, with none living beyond forty years, creating a series of brief and unstable regimes led by emperors who seldom reached political maturity or accumulated necessary governing experience (De Crespigny 2017, pp. 110, 146–49). The monarchy’s weakness was compounded by prolonged regencies, as powerful empress dowagers exercised absolute authority by selecting infant successors and refusing to cede power even when emperors reached nominal adulthood, while instability was further intensified by relentless struggles for power among three competing factions: consort families, eunuchs, and scholar-officials. Consort families rose to dominance by controlling access to the throne through strategic marriages with the imperial household, repeatedly overthrowing rivals and consolidating authority through regencies, with some family leaders even accused of poisoning young emperors who showed signs of independence.
Meanwhile, eunuchs gained increasing influence due to their proximity to child emperors, organizing coups to place new rulers on the throne and achieving such dominance that emperors referred to leading eunuchs as parental figures, while their conflicts with the bureaucracy consumed much of the court’s energy and weakened administrative efficiency. The scholar-officials, traditional advisors to the throne, found themselves marginalized through systematic purges and dismissals, with senior bureaucratic positions becoming ceremonial roles whose advice was frequently ignored, and appointments rarely lasting more than a year under the final emperors, making coherent policy implementation impossible (Tse 2018, pp. 8–15, 127–40). This political fragmentation culminated in the rise of regional strongmen who openly challenged imperial authority, symbolized by the warlord takeover that contemporaries regarded as a “sacrilege against sovereign authority” (Tse 2018, p. 3), ultimately opening the way for centuries of political fragmentation as the Han bureaucracy proved unable to maintain state unity, and internal disputes forced reliance on military leaders who became the foundation of regional warlordism (Levenson and Schurmann 1969, pp. 24–35).
Frequent natural disasters triggered peasant uprisings, while the growing power of regional elites eroded the authority of the central government. Simultaneously, dramatic population shifts and mounting economic strain from imperial policies and military conflicts further destabilized the dynasty (Kidder et al. 2016, pp. 72–80). Peasant unrest was one of the most striking symptoms of this decline. Droughts, the most common natural disaster in the Han dynasty, were a primary cause of famine, which in turn drove large-scale rural flight and recurrent revolts. Severe famines often preceded dynastic collapses and played a decisive role in major uprisings, such as the Lulin and Chimei rebellions at the end of the Western Han. In the early decades of the Common Era, catastrophic flooding of the Yellow River (14–17 CE) devastated the Central Plains, destroying harvests, collapsing infrastructure, and producing widespread starvation and epidemic disease. Bureaucratic incompetence, corruption, and political indifference compounded the government’s inability to suppress the subsequent unrest, leading to rebellions and what contemporary sources described as “banditry.”
Powerful clans often diverted taxpayer households from state registers, transforming them into private tenants and retainers. This not only drained imperial resources but also enriched the families, who prospered during famine years by trading grain and lending at interest, often seizing the land of indebted peasants. Such dynamics fueled a strong sense of regionalism among elites, leading them to privilege local over imperial interests. The Later Han state itself was, to a significant degree, sustained and dominated by these great families, whose rivalries destabilized politics from within.
Population decline and large-scale demographic shifts further weakened the Han. Registered populations in northern and northwestern frontier commanderies, such as Beidi and Anding, fell dramatically—from over 350,000 at the beginning of the dynasty to fewer than 50,000 by the 140s CE. In some regions devastated by the Qiang Wars, the registered population plummeted to just 5–10% of Former Han levels. Civil war following the fall of Wang Mang, foreign conflicts with the Xiongnu and Qiang, and recurrent natural disasters all contributed to this demographic disaster. Tens of thousands perished in war, while many more fled their homes to escape taxation, corvée labor, or military service.
Finally, the economic burden of imperial expansion and persistent warfare placed unrelenting strain on the Han state. Ambitious campaigns, such as Dou Xian’s wars against the Xiongnu, depleted the treasury, forcing currency devaluation and heavier taxation. Disaster relief policies—including tax remissions and the opening of granaries—were costly and only partially effective, given the frequency of disasters and the inefficiency of the bureaucracy. Fiscal weakness became a permanent feature of the Later Han, exacerbated by the loss of revenue from rebel-held territories, particularly in Liang and Bing provinces (Levenson and Schurmann 1969, pp. 87–96, 119; Liu and Yan 2020).
The decades leading up to 160 CE witnessed an escalating convergence of destabilizing forces: intensifying political strife within the imperial court, growing fiscal pressures on the state, widespread peasant unrest, and the increasingly severe impacts of climate change. During the late Western Han and Eastern Han periods (202 BCE–220 CE), China was affected by a weakening East Asian monsoon, leading to greater aridity and increased variability in the amplitude and frequency of floods and droughts. North China became increasingly arid as the Inter-Tropical Convergence Zone (ITCZ) retreated southward and the East Asian monsoon weakened. This general trajectory toward aridity is supported by both climate proxy data and historical records from across the Yellow River Valley. During the Han period, the climate system also exhibited greater amplitude variation, with droughts and floods becoming more frequent and intense compared to earlier times, a shift that profoundly affected Han farmers and agricultural stability.
Anthropogenic pressures exacerbated these changes: widespread cultivation with iron tools accelerated sediment erosion on the Loess Plateau, while the increasingly arid climate reduced vegetation cover, intensifying erosion early in the Han. Human interventions such as the construction of dikes and levees further worsened the problem by causing rapid sediment accumulation in the riverbed. Over time, the riverbed rose above the surrounding floodplain, producing a “rigidity trap” in which ever-greater investment in flood control was required to stave off catastrophic failure from infrequent but high-amplitude floods (Kidder et al. 2016, p. 75).
The early Common Era saw several catastrophic flood events with devastating consequences for the Western Han. Historical records describe major floods in 1–2 CE, 11 CE, and, most destructively, between 14 and 17 CE. The latter inundated vast areas of the Central Plains, including the Sanyangzhuang region, which was buried beneath a massive sediment splay. Affecting more than 40% of the population in the lower Yellow River Valley, this flood destroyed harvests and infrastructure, buried entire communities, and unleashed famine, disease, and widespread displacement. Its impact was not merely environmental but political and social, acting as a trigger for cascading crises that pushed the Western Han state beyond its capacity for adaptation. Popular unrest and peasant uprisings that followed, including the revolts leading to the downfall of Wang Mang, were directly shaped by this disaster, which stands as a critical factor in the collapse of the Western Han Empire (Kidder et al. 2016, pp. 78–85).
Even during the climatic phase known as the “Han Warm Period” (210 BCE–180 CE), which encompassed nearly the entire duration of both Western and Eastern Han rule and demonstrated striking temporal alignment with the contemporaneous Roman Warm Period, there remained a strong correlation between social decline and cold-dry conditions: dynastic weakening coincided with reduced precipitation in over 66% of cases, while phases of social consolidation and expansion aligned with warm-wet conditions in more than 57% of cases. This evidence strongly suggests that any increase in aridity would have negatively impacted social and political stability (Yin et al. 2016).
Towards the end of the Han dynasty, a decisive climatic transition becomes evident. The termination of the Han Warm Period around 180 CE ushered in the “Wei, Jin, Northern and Southern Dynasties Cold Period” (181–540 CE), bringing lower temperatures and reduced precipitation that coincided with the dynasty’s final decades and intensified existing structural pressures. Climate-society correlations operated on different temporal scales: temperature changes influenced social outcomes over multi-decadal periods (30–90 years), while precipitation oscillations affected shorter cycles (10–30 years), suggesting that rapid precipitation declines during the late Eastern Han delivered acute shocks that accelerated the dynasty’s collapse (Liu and Yan 2020; Yin et al. 2016). The Guliya ice core in the western Kunlun Mountains confirms this transition, registering sharp declines in both temperature and precipitation between ca. 250 and 280 CE that marked the end of warm, humid conditions characteristic of the Han period. This colder, drier phase persisted into the mid-first millennium CE, supported by broader evidence of aridification in the Tarim Basin after ca. 150 CE and the subsequent abandonment of oasis settlements after ca. 450 CE. Floods struck Jiuquan in 180 CE and Jincheng in 183 CE, while droughts and locust invasions further exacerbated socioeconomic vulnerability, leading to forced migrations and refugee crises (Zhang and Zhang 2019; Yang et al. 2019; Mischke et al. 2019).
Intensified irrigation farming led to dramatic landscape changes, including the near or complete desiccation of several large lakes in arid western China. Expansion into the arid west brought a significant intensification of irrigation farming along the rivers draining the Qilian, Tianshan, and Kunlun Mountains, accompanied by large westward population movements and substantial land-use change. Historical data on oasis agriculture in the Tarim and Yanqi basins indicate that the area under cultivation was significantly larger during the Han period. The drying of Lop Nur and the decline of the Loulan Kingdom appear to have resulted not from regional climatic deterioration (adjacent regions experienced wet conditions at the same time) but from water withdrawals from feeder rivers for irrigation, representing some of the earliest known anthropogenic ecological collapses. Lop Nur lake sediments were largely replaced by aeolian sands around 1800 years BP (before present), indicating a period of near-desiccation (Mischke et al. 2019).
Lastly, the Later Han period saw an unprecedented epidemic that fundamentally changed social and religious patterns throughout the empire. Prior to the 170s CE, there were few references to widespread sickness in the earlier reigns of the Later Han dynasty. As noted earlier, the first recorded outbreaks of disease in Han China appeared in Luoyang in the spring of 151 CE, the city that would later receive a Roman “embassy.” A decade later Luoyang was again afflicted, and in 162 CE a “great pestilence” struck the army serving on the northwestern frontier (Xinjiang and Kokonor) against the nomads, causing significant military losses: Chinese chroniclers recorded the outbreak’s arrival in the ranks in February, during intense border wars, and three or four of every ten soldiers reportedly died. In 166 CE, a report to Emperor Huan described widespread pestilence; that same year, the Antonine Plague ravaged Rome (De Crespigny 2017, p. 344). Another “great pestilence” struck in 171 CE, followed by regional epidemics in 173, 179, 182, and 185 CE, usually during the colder seasons: Chinese sources show a concentration of winter or spring deaths in 173, 179, and 182.
The last of these major epidemics coincided with the Yellow Turban Rebellion (Turchin 2008, p. 185). Due to the frequent and frightening outbreaks of illness and widespread death, there was an increased interest in supernatural means to combat or avoid the plague, leading to the appearance of several sects in different regions of the empire. Between 172 and 183 CE, Luo Yao in the Three Adjuncts (around Chang’an) taught the Method of Contemplating the Concealed, Zhang Jue in the east founded the Way of Great Peace (taiping dao), and Zhang Xiu in Hanzhong established the Way of the Five Dou of Rice (wudoumi dao). The frequent epidemics in China, combined with other natural disasters, had profound social and political impacts, notably influencing the perception of the Han emperor’s legitimacy and fueling significant rebellions. For the rebels involved in the Yellow Turban Rebellion of 184 CE, the sheer number of these disasters—epidemics, famines, and floods—confirmed that the Han emperor had lost the “Mandate of Heaven,” the divine sanction of his rule. This belief undermined imperial authority and, although the rebellion was suppressed, provided a powerful justification for challenging the existing dynasty (Elliott 2024, pp. 103–5; Levenson and Schurmann 1969, pp. 124–31; Tse 2018, p. 9).
This mixture of political instability, rebellions, and climatic stress fundamentally undermined the Chinese imperial state’s capacity to project power westward into Central Asia, thereby weakening its control over the lucrative long-distance trade networks that had previously sustained imperial revenues and facilitated cultural exchange across Eurasia. The Han dynasty’s commercial vulnerabilities were compounded by its dependence on intermediaries, particularly from India and Parthia, in trade with the Roman Empire. The Parthian stranglehold on overland routes forced greater reliance on alternative networks, with Indian merchants playing an increasingly crucial role in linking China and Rome through maritime routes. Chinese silk and other goods were often exported to Rome via India, with Han products passed from Central Asian peoples to Kushan, Sogdian, or South Asian merchants, who then transported them further west or south to the Mediterranean Basin. However, even these alternative routes became increasingly precarious as Han authority over the Western Regions diminished from the 130s CE onward, when the imperial state proved neither willing nor able to maintain its commitment to the region.
The collapse of Chinese control over its western zones of influence created a cascade of commercial disruption that was further exacerbated by the simultaneous spread of epidemic disease along the Silk Road networks. Military disasters in frontier areas made trade routes extremely perilous: the catastrophic campaign against the Xianbi in 177 CE resulted in three-quarters of Chinese forces being killed or captured, while the massacre of the Xianlian Qiang in the late 160s provided only a facade of peace. By 184 CE, widespread conflict continued across the provinces, with the Southern Xiongnu government in Bing province collapsing, Wuhuan maintaining their rebel state, and Xianbi conducting frequent raids across the frontier (Benjamin 2018, pp. 170–73; Fitzpatrick 2011, pp. 44–45; McLaughlin 2010; Thorley 1971, p. 73).
The collapse of the Han dynasty was not an isolated event but part of a broader Eurasian crisis that fundamentally disrupted the interconnected world-system of the ancient Silk Roads. The third century CE marked a significant downturn for the Silk Roads, characterized by a substantial decrease in transregional commercial and cultural exchange. This decline was attributed to a Eurasian-wide cycle of contraction, as key agrarian civilizations were compelled to withdraw from the network.
The Later Han Dynasty officially ended in 220 CE, with warlords gaining control and dividing China into the Three Kingdoms (220–280 CE). This period was one of economic retrogression and a significant decline in internal and external trade. Simultaneously, the Parthian Empire collapsed in 224 CE under a wave of Sasanian invaders led by Ardashir, while the Kushan Empire went into decline after 225 CE, with its royal cities destroyed by Sasanian forces by 262 CE. The disintegration of the Kushan Empire ended what was considered the “Golden Age” of ancient Central Asia, even though its cultural, political, and economic achievements continued to influence regional successors (Turchin 2008, pp. 185–86).

9. Conclusions

A complex sequence of events gradually brought to an end the once-thriving network of economic, demographic, cultural and technological integration that had taken root across Afro-Eurasia since the first century BCE—a network that would not regain its vitality for nearly five centuries after the events surrounding the Antonine Plague. Climate change, the waning dynamism of the agrarian economy, the decline in land-based revenues and its repercussions on the elites’ standards of status and prestige, the heavy tax burdens imposed on rural settlements leading to social unrest in the Nile Delta, supply disruptions of African grain to the Italian peninsula, famine, and heightened vulnerability to a pathogen traveling along commercial routes—all these factors, combined with a contraction in demand for luxury goods within the Roman Empire and the impact of the plague on cities linked by the Silk Roads in Asia, culminated in the collapse of the Afro-Eurasian trade corridors and a pronounced fragmentation of civilizational zones that had once been deeply interconnected.
The collapse of the Afro-Eurasian world-system between the second and third centuries CE provides a revealing perspective on the entropic nature of complex systems. It demonstrates that world-systems are, fundamentally, structures that operate against the natural tendency toward entropy, requiring constant flows of energy, matter, and information to maintain their organizational cohesion. This creates a fundamental thermodynamic paradox: the very process of building complex organization generates its own vulnerability to entropic collapse. World-systems exist in a state of dynamic tension against entropy, constantly employing such flows to maintain improbable patterns of order. However, each increase in organizational complexity—while improving systemic efficiency—simultaneously creates new points of potential failure and raises the system’s overall maintenance requirements. This complexity enabled unprecedented coordination across vast distances and diverse populations, but it also created a web of interdependencies in which disruption at any critical node could trigger systemic cascade failures. When climatic and epidemic perturbations interrupted these critical flows, the system could no longer maintain itself far from thermodynamic equilibrium, initiating an irreversible process of disintegration.
B phases represent the moment when disruptive phenomena—climate, plague, political instability—negatively interfere with necessary flows, provoking destabilizing, self-reinforcing feedbacks and accelerating the entropic tendency. The Egyptian case perfectly illustrates this process: expansion over marginal lands during favorable climatic periods became unsustainable with aridification, leading to village abandonment, collapse of hydraulic infrastructure, and disintegration of supply chains that sustained Rome. The failure of Nile floods, rural depopulation, and lack of labor for maintaining hydraulic works created a vicious cycle where fiscal pressure continued even amid crisis, driving further flight and banditry. This Egyptian crisis, in turn, triggered a catastrophic disruption in grain supply to other regions of the Roman Empire, exacerbating malnutrition and supply problems that made those populations even more susceptible to the viral disease spreading throughout the system. The colder, more arid climate not only amplified pathogen virulence but also reduced agricultural output, creating a deadly synergy between climatic stress, nutritional vulnerability, and epidemic disease. Malnutrition was already structural in the Roman Empire, creating what amounted to a demographic time bomb waiting for the shift in pathocenotic equilibrium to explode.
The Chinese case reinforces this pattern: political fragilities in the Han state diminished ordering capacity in the face of natural disasters and peasant revolts, creating a feedback loop of governmental collapse. Weakening of the East Asian monsoon, aridity, and climatic instability around 160 CE, combined with anthropogenic impacts on the environment, imposed structural pressure on the dynasty. Population decline and demographic changes undermined corvée labor and tribute collection, while the state’s capacity to project power over western territories was exhausted, weakening control over long-distance trade routes.
In sum, the interconnected nature of these crises exemplifies how systemic integration transforms localized perturbations into cascading system-wide failures.
Ugo Bardi represented the cascade of events in a collapse situation as the Seneca Effect, a phenomenon observed ubiquitously in complex systems where growth occurs slowly but collapse happens rapidly. Named after the ancient Roman philosopher Lucius Annaeus Seneca’s observation that “increases are of slow growth, but the way to ruin is rapid,” this effect describes how complex, networked systems exhibit fundamentally non-linear responses to external perturbations. Unlike simple systems that react proportionally to disturbances, complex systems displaying the Seneca Effect can amplify the consequences of perturbations many times over through positive feedback mechanisms, driving the system toward more extreme states and accelerating decline. The collapse itself is defined as the rapid rearrangement of a large number of links within a complex, interconnected system, including their breakdown and disappearance—a process that is not a “bug” but rather an inherent “feature” of the universe that serves as a tool for discarding the old and creating space for the new. This phenomenon manifests as a collective process occurring exclusively in systems that are both “complex” and “networked,” formed by “nodes” connected by “links,” since single elements cannot collapse in the same manner as an entire network.
“Complex systems don’t just tend to flow in a certain direction. They tend to dissipate the thermodynamic potential at the maximum possible speed. It is the principle of maximum entropy production. That doesn’t mean that the system has a conscious will, it is just that it is a networked system of interactions where the various parts tend to move along the easiest route downhill. (…) [A]n avalanche is a fast way to dissipate the gravitational energy accumulated in a pile of snow or rock. At the same time, the rocks in the pile don’t know anything about avalanches, they just push against each other. If there is some space for a rock to roll down, it will. And if there is a way to dissipate more energy by making other rocks rolling down, they will. This is the very essence of the ‘Seneca Effect,’ the fact that the system tends to dissipate potentials at the maximum possible speed. When the system finds a way to collapse, it will”.
The underlying mechanisms driving the Seneca Effect operate through several interconnected principles, with positive (amplifying/enhancing) feedback serving as the central mechanism. This occurs when the system itself amplifies an external perturbation, pushing it toward more extreme states through self-reinforcing processes that accelerate decline rather than promoting stability. The collapse process is characterized by tipping points—critical thresholds where the system undergoes drastic changes from one state to another through an unstable intermediate state, after which the process becomes self-feeding and accelerates exponentially.
The collapse spreads hierarchically: first, the world-system itself, as the most general level of the nested structure, loses cohesion; then, meso-level units, such as regional polities, suffer the effects if they are highly integrated into the network; finally, local groups with greater capacity for autonomous subsistence can resist entropic disintegration. This pattern reflects the fundamental principle that systems decay from the top down, even though shocks may originate from below—like an earthquake that shakes the pillars of a building, causing the upper floors to collapse while the foundation remains intact.
The disruption of trade in luxury goods exemplifies how entropy also operates through cultural and material channels. These goods fall under Polanyi’s principle of non-substitutability and are fundamentally not subject to rational choice criteria on the part of economic agents. When the flow of luxury goods lost intensity, it precipitated a profound crisis because status patterns are not modified abruptly and depend on deeply embedded cultural structures that transform only over the longue durée, creating a temporal mismatch between immediate material disruption and the glacial pace of cultural adaptation required for elite reconfiguration. This occurred not only because of the lack of revenue, but also because of the rigidity of a moral economy underlying competition for elite status and codes of representation of power.
The second-century crisis clearly set in motion a downward trend in the multisecular cycles that shape world-systems dynamics, and it is not far-fetched to suggest that a B phase would emerge from the combined effects of climate change and a likely intercontinental pandemic. Although long-distance trade links regained some intensity in the following century, this was not without consequences: the Plague of Cyprian (249–262 CE) once again plunged this high-degree node of the systemic network—the Roman Empire—into turmoil, leaving a lasting imprint on the decisive Third-Century Crisis. The gradual collapse of Rome and the Han dynasty in China, together with the Plague of Justinian (541–549 CE), must be understood within these structural and long-term macrotrends. Subsequent A phases of the world-system remained susceptible to slipping into B phases due to the interaction between climate fluctuations and the movement of pathogens, as exemplified by the Black Death (1346–1353 CE).
However, from a long-term perspective, the second-century crisis appears even more consequential (Korotayev et al. 2006b, pp. 146–59). It should be understood as a decisive demographic and integrative turning point, signaling a transition from one fundamental mode of growth to another. Mathematical models of world-system evolution outline a macroperiodization defined by an overarching trajectory of hyperbolic growth—sometimes described as a blow-up regime—where the absolute growth rate of population and other key indicators tends to be proportional to the square of their current values, producing an acceleration far more intense than simple exponential growth. Within this hyperbolic framework, Korotayev and colleagues distinguish two successive epochs, the ‘older hyperbola’ and the ‘younger hyperbola,’ separated by a turning point explicitly placed between the late first and the second centuries CE. This juncture is interpreted not as a mere fluctuation within a cyclical pattern, but as a structural transformation that redefined the trajectory of the World System, dividing its history into two broadly comparable macrophases.
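The blow-up property of such a regime can be shown in a few lines. If the growth rate is proportional to the square of the current value, the trajectory diverges at a finite moment rather than merely growing without bound, which is why the older hyperbola could not have continued. The parameter values here are arbitrary illustrations, not empirical estimates from the models cited above:

```python
# Hyperbolic ("blow-up") growth vs. exponential growth, minimal illustration.
# dN/dt = a * N**2 integrates to N(t) = N0 / (1 - a*N0*t), which diverges
# at the finite time t* = 1 / (a*N0); exponential growth never does.
import math

N0 = 1.0    # initial level (arbitrary units)
a = 0.01    # growth coefficient (arbitrary)

t_star = 1.0 / (a * N0)  # finite-time singularity of the hyperbolic regime

def hyperbolic(t):
    return N0 / (1.0 - a * N0 * t)

def exponential(t, r=a * N0):  # exponential with the same initial growth rate
    return N0 * math.exp(r * t)

# Just before the singularity, the hyperbolic trajectory dwarfs the
# exponential one, even though both start from identical growth rates.
t = 0.999 * t_star
ratio = hyperbolic(t) / exponential(t)
```

At 99.9% of the way to the singularity the hyperbolic value is already hundreds of times the exponential one, and it grows without limit as t approaches t*; a real system must therefore leave this regime before that point, which is the substance of the transition Korotayev and colleagues identify.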
The passage from the older to the younger hyperbola was not a return to an earlier equilibrium or a temporary downturn but rather a radical change in the very dynamics of growth. Whereas the older hyperbola was characterized by a steeper trajectory of hyperbolic acceleration, the younger hyperbola introduced a smoother but more stable pattern of acceleration, which redefined the parameters of long-term development. In mathematical terms, had the explosive growth trajectory of the older hyperbola continued without alteration, the system would have reached absolute impossibility, with its main indicators approaching infinity within only a few centuries. For example, extrapolations of urban population size under the earlier regime suggest that the largest city in the world would have surpassed fifty billion inhabitants by the mid-second century CE, a figure that exposes the unsustainability of the trajectory. The continuation of this hyperbolic pattern was impossible by definition, and thus the system was compelled to shift to a new regime within a relatively short historical span (Korotayev et al. 2006b, p. 156).
The causes of this transformation were multiple and interrelated, reflecting demographic, epidemiological, political, and ecological processes. By the end of the first millennium BCE, the global population had reached nine-digit figures (hundreds of millions), which provided fertile ground for the emergence of new generations of pathogens of unprecedented lethality and epidemic destructiveness. The medical technologies available at the time were utterly inadequate to cope with these threats, resulting in episodes of large-scale depopulation, most notably the Antonine pandemic of the second century and later the Justinian pandemic of the sixth century, both of which dramatically slowed demographic expansion.
Simultaneously, the degree of political centralization in the World System had reached a critical level. By the end of the first century CE, the majority of the global population was concentrated under the authority of only four great empires—the Roman, Parthian, Kushana, and Han polities. This unprecedented consolidation entailed a radical increase in sociopolitical complexity and in the costs of maintaining vast administrative, military, and infrastructural systems. As a result, the minimum necessary product required for social reproduction rose sharply, while the per capita surplus declined, producing structural constraints on population and technological growth rates.
The cumulative impact of these factors generated conditions under which the explosive hyperbolic regime could no longer be sustained, forcing the system to diverge from the blow-up dynamic and adopt a new developmental pathway. This divergence has been described as the First Transitional Epoch (Korotayev et al. 2006b, p. 156), a period during which growth slowed markedly and key indicators such as economic performance, literacy, and urbanization either stagnated or declined, not due to saturation or prosperity, as in later demographic transitions, but as a result of systemic stress and crisis.
Unlike later episodes of global disruption, such as the fourteenth-century CE Black Death or the seventeenth-century CE general crisis, which are usually interpreted as cyclical pulsations within larger secular patterns, the transformation of the second century CE represented a qualitative break in the structural dynamics of the world-system. Those later crises did not fundamentally alter the regime of hyperbolic growth, whereas the second century inaugurated a transition to a distinct hyperbolic trajectory, one that was smoother and more sustainable while still maintaining long-term acceleration. In this sense, the crisis of the second century CE must be recognized not simply as a downturn but as a necessary and fundamental reconfiguration of the growth mechanisms that had driven world development since the Neolithic. It thus marks the most profound transformation in the long arc of world-system history: a turning point without which humanity’s trajectory might have collapsed under the impossible demands of its own unsustainable acceleration. At the core of this transformation lay the critical interplay between climate change and pandemic diseases.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing is not applicable to this article.

Acknowledgments

I would like to thank Tatiana Massuno for our ongoing conversations and reflections on the fluid and shifting boundaries between humans and non-humans, new ontologies, and alternative perspectives on the question of agency. These exchanges have deeply informed my thinking throughout this work. I would also like to thank Eduardo Crespo and Pedro Rocha, with whom I co-teach the course “Complex Systems and Collapses in the Longue Durée” in the Graduate Program in International Political Economy at the Federal University of Rio de Janeiro. This article is a product of both the collaborative teaching experience shared with them and the many stimulating discussions we have had. I am equally grateful to the students of this course, whose questions, critiques, and insights have significantly enriched our debates and contributed to the development of several of the ideas explored here. Early findings from this research were first presented at the seminar “Global Disasters and World Society: The Ecological Dimension of the Crisis of the Modern World System,” hosted by the Federal University of Santa Catarina, Brazil, 25–27 March 2024. I am grateful to Christopher Chase-Dunn, editor of this issue and one of the seminar organizers, as well as to fellow organizers Fabio dos Santos and Pedro Vieira. Special thanks are due to Andrey Korotayev for his incisive comments during the event and valuable suggestions that have shaped the direction of this research. I also extend my appreciation to Leonid and Anton Grinin for their constructive feedback and insights. This article stems directly from the study group “Pandemics, World-Systems, and the Collapse of Classical Civilizations (1st century BCE–6th century CE),” which took place between 2020 and 2022, and I am especially grateful to Bernardo Sá, Daniel Vainfas, Felipe Blois, and Gabriel Gonçalves, who were part of this group. 
The discussions we held on collapse, complexity, and historical transformation during that period have significantly informed the arguments developed in these pages. Finally, I would like to thank the two anonymous peer reviewers for their thoughtful comments, suggestions, and critiques, which have significantly improved this work.

Conflicts of Interest

The author declares no conflicts of interest.

Notes

1
The debate on the “5000-year World System” is fundamental to the conceptual framework of this article. However, I do not engage with the controversy over the existence of several world-systems in a diachronic perspective, as argued by Wallerstein, or the existence of multiple synchronic world-systems, as argued by Chase-Dunn and Hall (1991, 1993) and Chase-Dunn and Lawrence (2010), or the emergence of a single system in constant evolution, as proposed by Frank (1990). For the sake of clarity and consistency, I will use the term world-system throughout the article to refer to the asymmetrical network of economic, social, cultural and political interactions that emerged in Afro-Eurasia around the second century BCE. Whether this integrative network consisted of flows between world-systems, or a single World System, is of no essential consequence to this study. It may also be that the relationship between interconnected blocks of civilizations consists of a world-system of world-systems, respecting the idea that we are dealing with nested structures.
2
Although Chase-Dunn and Hall (1993) and Chase-Dunn and Grimes (1995) question the existence of well-defined long cycles of integration and disintegration in Afro-Eurasia, they identified a relative synchronicity in patterns of demographic and urban spatial growth, as well as in the territory claimed by empires, between the Near East (gateway to Roman territory) and China from 450 BCE to 1600 CE. Between approximately 100 CE and 360 CE (a period encompassing the systemic shock caused by the Antonine Plague) both the major urban centers under Roman imperial authority and those under Chinese state control experienced intense demographic contraction (Chase-Dunn and Willard 1993; Chase-Dunn et al. 2000). See the discussion on the impacts of plague and climate change on the Afro-Eurasian integration network further below.
3
For Johnson and Earle (2000), a family group represents the basic subsistence unit: typically a nuclear or slightly extended family (5–8 people), sometimes co-residing in a small “camp” or “hamlet” (up to ~25–50 individuals). Originally, its economy is based on foraging or early horticulture, operating within a generalized-reciprocity household model. Labor is shared informally among kin, with little to no specialization or surplus, no formal leadership, and considerable residential mobility. As population density increases, resource competition prompts families to disperse into less contested areas. A local group, on the other hand, is made up of several families, ranging from a few hundred to several thousand, organized through kinship structures such as lineages or clans. From an economic point of view, these groups engage in modest intensification: itinerant agriculture, herding, gathering rich natural resources or horticulture. Labor becomes more collective, with group coordination needed for tasks such as irrigation or large-scale hunting. Leadership emerges via consensus, lineage elders, or a charismatic “Big Man” who coordinates inter-family ceremonies, resource storage, and trade networks. Warfare and defense of territory grow more common as group integration deepens. At the highest level of complexity, a regional polity (such as a chiefdom or state) exhibits centralized leadership, formal institutions, and marked social stratification. Intensive agriculture supports population concentrations and surpluses. Labor is hierarchically organized: non-elite families contribute labor or tribute (“rent”) to elites, who in turn manage large-scale infrastructure, bureaucracy, and professional military forces. Elites control resource accumulation through storage and redistribution systems, derive authority from office (not just charisma), and defend or expand territorial domains. 
Importantly, Johnson and Earle emphasize that the rise of a regional polity does not eliminate lower levels of social complexity. Family and local groups persist, but are functionally transformed: they are no longer politically autonomous, becoming instead subordinated units embedded within the broader institutional framework. Family groups serve as domestic labor units; local groups retain cultural importance but lose decision-making independence. This reflects a principle of hierarchical integration, whereby each new level incorporates and reorganizes earlier ones to extend administrative and economic control.
4
According to Wallerstein (1976, 2011), world-empires and mini-systems represent two historical forms of social systems. World-empires are defined as a type of world-system that integrates a single political system across most of its territory, even if the effective control varies. Historically prevalent for millennia, such as those of China, Persia, and Rome, they primarily functioned through a centralist-redistributive economic form where economic flows from the periphery to the center were enforced by tribute, taxation, and monopolistic trade advantages. Despite their strengths in guaranteeing these flows, their political centralization often led to internal inefficiencies and increased military expenditures due to revolts, marking them as a primitive means of economic domination compared to later capitalist systems. Wallerstein argues that capitalism cannot flourish within the framework of a world-empire. In contrast, mini-systems are social systems characterized by a complete division of labor within a single cultural framework. These small, highly autonomous subsistence economies, typically found in simple agricultural or hunting and gathering societies, are considered self-contained entities whose developmental dynamics are largely internal. Crucially, a mini-system ceases to be a “system” if it becomes entangled in a larger tribute-demanding network, as it loses its self-contained division of labor and integrates into a redistributive economy. These mini-systems no longer exist in the modern world.
5
“We are, as you see, coming to the essential feature of a capitalist world-economy, which is production for sale in a market in which the object is to realize the maximum profit. In such a system production is constantly expanded as long as further production is profitable, and men constantly innovate new ways of producing things that will expand the profit margin”. (Wallerstein 1976, p. 398); “If we say that a system ‘gives priority’ to such endless accumulation, it means that there exist structural mechanisms by which those who act with other motivations are penalized in some way, and are eventually eliminated from the social scene, whereas those who act with the appropriate motivations are rewarded and, if successful, enriched” (Wallerstein 2007, p. 24).
6
Schneider’s article was originally published in 1977.
7
For the relationship between demographic concentration and cultural innovation before the emergence of cities, see Powell et al. (2009). For a discussion of the relationship between innovation and urbanization today, see Bettencourt et al. (2007).
8
In a sense, this is convergent to what Frank (1991, p. 22) calls an “interpenetrating transfer of surplus”, something that has “long characterized and related different parts of the same world system”.
9
The concept of energy density, as defined by Smil, refers to the “amount of energy per unit mass of an energy source” (Smil 2024, pp. 9–10). This parameter is crucial for food production, as it determines how much metabolizable energy a food can offer in relation to its weight. Smil argues that even though foods such as fruit, tubers and milk are abundant in certain regions, their low energy density makes it impossible to use them as a staple food on a large scale. In contrast, cereals concentrate around six times more energy per unit mass than these foods, which explains their structural role in the human diet since the first agricultural revolutions. Low energy density foods do not become socially basic items in a diet, precisely because they do not meet the minimum metabolic demands of growing populations based on viable volumes of production and consumption.
10
Seligman (1937) dismissed the significance of the Silk Roads in supplying essential goods; however, this view has been increasingly challenged by recent scholarship (as referenced throughout this paper).
11
Redundancy can be defined as a variable systemic property that emerges from the deliberate operation of institutional, infrastructural, logistical, economic, political, and cultural mechanisms to ensure the continued functioning of its essential flows—including goods, services, people, and information—in the event of systemic crises or disruptions. It constitutes a form of resilience, in which elements that appear underutilized or excessive during periods of normalcy play a strategic role under exceptional conditions. Contrary to the logic of maximum efficiency, redundancy operates as a structural safeguard against natural, social, economic, or political shocks, through the preservation of alternative or idle capacities. From a thermodynamic perspective, redundancy entails an apparently useless short-term energy expenditure, as it requires the mobilization of free energy not only to construct backup systems and infrastructures, but also to maintain them over time—a work investment that produces no immediate return, yet functions as a hedge against systemic entropy. Typical examples include: secondary roads maintained despite low usage, but which become essential when main transport routes fail; public stockpiles of food, fuel, or medicine; reserve hospital beds; complementary or decentralized energy sources; alternative currencies and local exchange systems; multi-layered governance structures; and even the conservation of knowledge and skills deemed obsolete, yet vital during technological collapse. Redundancy, therefore, should not be conflated with wastefulness; rather, it must be understood as a foundational principle for building resilient social systems, capable of withstanding uncertainty and mitigating systemic risk through functional diversity and organizational plasticity. The concept, as defined here, integrates insights from Ashby (1956), Meadows (2022), and Taleb (2012) into an approach grounded in systemic variety, feedback loops, and antifragility.
12
A fundamental distinction must be drawn between modern capitalist societies—where populations depend overwhelmingly on market mechanisms for subsistence, economic inputs, and wage-based income—and ancient economic systems characterized by substantially lower market dependency. In pre-modern contexts, only a minority of individuals relied on market exchange for their livelihood, whether as consumers or income earners. Consequently, when trade networks collapsed, the impact fell disproportionately on major urban centers rather than the predominantly rural populations who maintained greater economic autonomy. This study, however, examines the breakdown of the broader systemic networks that interconnected these urban centers through complex flows of information, goods, services, and people. Our analysis focuses on understanding this infrastructural collapse itself, independent of whether it precipitated severe social crises—though as we shall see in the following sections, such crises did emerge in several regions.
13
VARV (Variola virus) is the etiological agent of human smallpox, a disease eradicated in 1980. Belonging to the Orthopoxvirus genus, modern VARV (or mVARV) includes the Variola major and Variola minor strains, characterized by exclusive human-to-human transmission, high virulence, and a distinctive clinical profile involving high fever, pustular rash, and significant mortality. In contrast, aVARV (ancient Variola virus) refers to ancestral Variola lineages identified in human remains dated between the 7th and 10th centuries CE through paleogenomic sequencing. Although genetically related to mVARV, these ancient lineages exhibit distinct patterns of gene inactivation (including in genes associated with virulence), suggesting that they caused diseases with clinical manifestations and epidemiological dynamics that were likely different from those of clinically recognized smallpox. The existence of aVARV does not confirm the presence of classical smallpox in antiquity, but rather the circulation of poorly understood, now likely extinct, human orthopoxviruses. See Duggan et al. (2016).
14
Zhao and Wilson (2025) share this critique, arguing that reductive nominalism compromises the analysis presented by Newfield and colleagues.
15
The Volcanic Explosivity Index (VEI) is a semi-quantitative, logarithmic scale developed by Christopher Newhall and Stephen Self to assess the magnitude of explosive volcanic eruptions. The VEI ranges from 0 to 8, where each increment represents a tenfold increase in the volume of erupted material. It considers multiple parameters including the volume of tephra (ash and rock), eruption column height, duration, and qualitative descriptions of eruption intensity (e.g., “gentle,” “cataclysmic,” “colossal”). For example, a VEI 3 eruption ejects between 0.01 and 0.1 km³ of material and may reach 15 km in plume height, while a VEI 7 eruption, like Mount Tambora in 1815, exceeds 100 km³ and can impact global climate. VEI is particularly useful in volcanic hazard assessment and paleoclimate research, though it has limitations—such as underrepresenting long-duration eruptions or effusive activity. The scale was first introduced to address inconsistencies in eruption descriptions across disciplines and has since become a standard tool in volcanology (Newhall and Self 1982).
16
Heptanomia refers to the “Land of the Seven Nomes” in Middle Egypt, a regional administrative division under both the Ptolemaic and Roman rule. During the Roman period, the Heptanomia remained a significant cultural and economic area, stretching from Hermopolis (modern El Ashmunein) in the south to the outskirts of Memphis in the north. It was distinguished from Lower Egypt (the Delta) and the Thebaid to the south. The term underscores the persistence of ancient Egyptian regional identities despite the imposition of Greco-Roman administrative systems.
17
The artaba was a unit of capacity primarily used for dry goods, such as grains, in Ancient Egypt and other regions of the Near East, including the Persian Empire. Its exact value varied depending on the time and place, but in Egypt, it is estimated that one artaba corresponded to approximately 30 to 40 L. This unit was essential in agricultural administration, as it was used to calculate taxes, record harvests, organize food storage, and regulate commercial exchanges. Records in papyri, inscriptions, and accounts by historians such as Herodotus indicate that the artaba played a significant role in the fiscal and logistical structure of the state, enabling efficient control over the production and distribution of basic resources. Beyond its practical use, the artaba also symbolizes the degree of bureaucratic organization and the standardization of measurements that ancient civilizations developed to manage complex societies and agriculture-based economies.
18
An aroura (plural: arourae) was a unit of land measurement used in ancient Egypt, equivalent to approximately 2756 square meters (or about 0.68 acres). It was traditionally defined as a square measuring 100 cubits by 100 cubits, and was commonly employed to assess agricultural land, particularly in administrative and fiscal records under Ptolemaic and Roman rule.
19
Fourth-century literary accounts, particularly those found in Ammianus Marcellinus and the Historia Augusta, attribute the plague’s origin to the sacking of Seleucia on the Tigris by troops under the command of Avidius Cassius, a general of Lucius Verus, during the Parthian War. However, this narrative likely represents historical misdirection, as the record appears to have been colored by Cassius’s later rebellion against Marcus Aurelius, which branded him a traitor and tainted his memory. This association also tainted Lucius Verus’s reputation, suggesting that linking the devastating plague outbreak to Cassius may have served as propaganda to discredit him. More importantly, some evidence contradicts this official story: as we have shown, the pestilence had already reached the empire’s territory by 165 CE, before Cassius’s armies concluded their Seleucia campaign in 166 CE. See (Harper 2021a, p. 23).
20
Obols were small coins in the ancient Greek monetary system, worth 1/6 of a drachma. In Roman Egypt, which retained the Ptolemaic monetary system, obols continued to represent very modest sums, sufficient to purchase basic food portions or pay for unskilled daily labor.
21
The sestertius (plural: sestertii) was a bronze or brass coin widely used in the Roman Empire, especially during the 1st and 2nd centuries CE. Within the monetary system of the time, one silver denarius was equivalent to four sestertii, meaning that each sestertius theoretically represented about 0.975 g of silver. However, this ratio was affected by the progressive debasement of the denarius, whose silver content began to decline under the reign of Marcus Aurelius and deteriorated significantly during the 3rd century CE, reducing the intrinsic value associated with each sestertius. Despite its limited metal content, the sestertius remained central to daily economic life, typically sufficient to cover small expenses such as a modest meal or part of a day’s wage for a laborer.

References

  1. Ambasciano, Leonardo. 2016. The fate of a healing goddess: Ocular pathologies, the Antonine Plague, and the ancient Roman cult of Bona Dea. Open Library of Humanities 2: e13. [Google Scholar] [CrossRef]
  2. Andorlini, Isabella. 2012. Considerazioni sulla “peste antonina” in Egitto alla luce delle testimonianze papirologiche. In L’impatto Della “Peste Antonina”. Edited by Elio Lo Cascio. Bari: Edipuglia, pp. 15–28. [Google Scholar]
  3. Arruñada, Benito. 2016. How Rome enabled impersonal markets. Explorations in Economic History 61: 68–84. [Google Scholar] [CrossRef]
  4. Ashby, William Ross. 1956. An Introduction to Cybernetics. London: Chapman & Hall. [Google Scholar]
  5. Ayoub, Houssein H., Ghina R. Mumtaz, Shaheen Seedat, Monia Makhoul, Hiam Chemaitelly, and Laith J. Abu-Raddad. 2021. Estimates of global SARS-CoV-2 infection exposure, infection morbidity, and infection mortality rates in 2020. Global Epidemiology 3: 100068. [Google Scholar] [CrossRef] [PubMed]
  6. Babkin, Igor, and Irina Babkina. 2015. The origin of the variola virus. Viruses 7: 1100–12. [Google Scholar] [CrossRef]
  7. Bagnall, Roger S. 2000. P. Oxy. 4527 and the Antonine Plague in Egypt: Death or flight? Journal of Roman Archaeology 13: 288–92. [Google Scholar] [CrossRef]
  8. Baker, Patrick L. 1993. Chaos, order, and sociological theory. Sociological Inquiry 63: 123–49. [Google Scholar] [CrossRef]
  9. Balland, Daniel, Thomas J. Barfield, Mansura Haider, and Andre Gunder Frank. 1992. Comments on Andre Gunder Frank’s “The centrality of central Asia”. Bulletin of Concerned Asian Scholars 24: 75–82. [Google Scholar] [CrossRef]
  10. Banaś, Agnieszka. 2021. Antonine Plague, Black Death and Smallpox epidemic versus COVID-19. How did humankind cope with the grapple against the biggest epidemics, and what does it look like today? Studia Orientalne 20: 82–98. [Google Scholar] [CrossRef]
  11. Bardi, Ugo. 2017. The Seneca Effect: Why Growth Is Slow but Collapse Is Rapid. Cham: Springer. [Google Scholar]
  12. Barisitz, Stephan. 2017. Central Asia and the Silk Road: Economic Rise and Decline Over Several Millennia. Cham: Springer. [Google Scholar]
  13. Beaujard, Philippe. 2010. From three possible Iron-Age world-systems to a single Afro-Eurasian world-system. Journal of World History 21: 1–43. [Google Scholar] [CrossRef]
  14. Benjamin, Craig. 2014. “But from this time forth history becomes a connected whole”: State expansion and the origins of universal history. Journal of Global History 9: 357–78. [Google Scholar] [CrossRef]
15. Benjamin, Craig. 2018. Empires of Ancient Eurasia: The First Silk Roads Era, 100 BCE–250 CE. Cambridge: Cambridge University Press. [Google Scholar]
  16. Bentley, Jerry. 1996. Cross-cultural interaction and periodization in world history. American Historical Review 101: 749–70. [Google Scholar] [CrossRef]
  17. Berche, Patrick. 2022. Life and death of smallpox. La Presse Médicale 51: 104117. [Google Scholar] [CrossRef]
  18. Bernabei, Mauro, Jarno Bontadi, Rossella Rea, Ulf Büntgen, and Willy Tegel. 2019. Dendrochronological evidence for long-distance timber trading in the Roman Empire. PLoS ONE 14: e0224077. [Google Scholar] [CrossRef] [PubMed]
  19. Bettencourt, Luis M. A., José Lobo, Dirk Helbing, Christian Kühnert, and Geoffrey B. West. 2007. Growth, innovation, scaling, and the pace of life in cities. Proceedings of the National Academy of Sciences 104: 7301–6. [Google Scholar] [CrossRef] [PubMed]
  20. Blois, Lukas de. 2002. The Crisis of the third century A.D. in the Roman Empire: A modern myth? In The Transformation of Economic Life Under the Roman Empire. Edited by Lukas de Blois and John Rich. Boston: Brill, pp. 204–17. [Google Scholar]
21. Blouin, Katherine. 2010. La révolte des ‘Boukoloi’ (delta du Nil, Égypte, ca 166–172 de notre ère): Regard socio-environnemental sur la violence. Phoenix 64: 386–422. [Google Scholar] [CrossRef]
  22. Bowman, Alan Keir, and Andrew Wilson. 2009. Quantifying the Roman economy: Integration, growth, decline? In Quantifying the Roman Economy: Methods and Problems. Edited by Alan Bowman and Andrew Wilson. Oxford: Oxford University Press, pp. 3–86. [Google Scholar]
  23. Boyd, Douglas. 2022. Plagues and Pandemics: Black Death, Coronaviruses and Other Killer Diseases Throughout History. Havertown: Pen & Sword. [Google Scholar]
  24. Braudel, Fernand. 1981. Civilization and Capitalism, 15th–18th Century. Volume I: The Structures of Everyday Life: The Limits of the Possible. New York: Harper & Row. [Google Scholar]
  25. Braudel, Fernand. 1982. Civilization and Capitalism, 15th–18th Century. Volume II: The Wheels of Commerce. New York: Harper & Row. [Google Scholar]
  26. Braudel, Fernand. 1984. Civilization and Capitalism, 15th–18th Century. Volume III: The Perspective of the World. New York: Harper & Row. [Google Scholar]
  27. Brown, Sam P., Daniel M. Cornforth, and Nicole Mideo. 2012. Evolution of virulence in opportunistic pathogens: Generalism, plasticity, and control. Trends in Microbiology 20: 336–42. [Google Scholar] [CrossRef]
  28. Bruun, Christer. 2007. The Antonine Plague and the “third-century crisis”. In Crises and the Roman Empire. Edited by Olivier Hekster, Gerda de Kleijn and Daniëlle Slootjes. Leiden: Brill, pp. 201–17. [Google Scholar] [CrossRef]
  29. Bruun, Christer. 2012. La mancanza di prove di un effetto catastrofico della “peste antonina” (dal 166 d.C. in poi). In L’impatto Della “Peste Antonina”. Edited by Elio Lo Cascio. Bari: Edipuglia, pp. 123–65. [Google Scholar]
  30. Büntgen, Ulf, Vladimir S. Myglan, Fredrik Charpentier Ljungqvist, Michael McCormick, Nicola Di Cosmo, Michael Sigl, Johann Jungclaus, Sebastian Wagner, Paul J. Krusic, Jan Esper, and et al. 2016. Cooling and societal change during the Late Antique Little Ice Age from 536 to around 660 AD. Nature Geoscience 9: 231–36. [Google Scholar] [CrossRef]
  31. Büntgen, Ulf, Willy Tegel, Kurt Nicolussi, Michael McCormick, David Frank, Valerie Trouet, Jed O. Kaplan, Franz Herzig, Karl-Uwe Heussner, Heinz Wanner, and et al. 2011. 2500 years of European climate variability and human susceptibility. Science 331: 578–82. [Google Scholar] [CrossRef]
  32. Casson, Lionel. 1980. The role of the state in Rome’s grain trade. Memoirs of the American Academy in Rome 36: 21–33. [Google Scholar] [CrossRef]
  33. Chase-Dunn, Christopher, and Alice Willard. 1993. Systems of cities and world-systems: Settlement size hierarchies and cycles of political centralization, 2000 BC to 1988 AD. Paper presented at the Annual Meeting of the International Studies Association, Acapulco, Mexico, March 25; Available online: https://irows.ucr.edu/papers/irows5/irows5.htm (accessed on 2 July 2025).
  34. Chase-Dunn, Christopher, and Kirk S. Lawrence. 2010. Alive and well: A response to Sanderson. International Journal of Comparative Sociology 51: 470–79. [Google Scholar] [CrossRef]
  35. Chase-Dunn, Christopher, and Peter Grimes. 1995. World-systems analysis. Annual Review of Sociology 21: 387–417. [Google Scholar] [CrossRef]
  36. Chase-Dunn, Christopher, and Thomas D. Hall. 1991. Conceptualizing core/periphery hierarchies for comparative study. In Core/Periphery Relations in the Precapitalist Worlds. Edited by Christopher K. Chase-Dunn and Thomas D. Hall. Boulder: Westview Press, pp. 5–44. [Google Scholar]
  37. Chase-Dunn, Christopher, and Thomas D. Hall. 1993. Comparing world-systems: Concepts and working hypotheses. Social Forces 71: 851–86. [Google Scholar] [CrossRef]
  38. Chase-Dunn, Christopher, Susan Manning, and Thomas D. Hall. 2000. Rise and fall: East-West synchronicity and Indic exceptionalism reexamined. Social Science History 24: 727–54. [Google Scholar] [CrossRef]
  39. Christian, David. 2000. Silk roads or steppe roads? The Silk Roads in world history. Journal of World History 11: 1–26. [Google Scholar] [CrossRef]
  40. Christiansen, John H., and Mark Altaweel. 2006. Simulation of natural and social process interactions: An example from Bronze Age Mesopotamia. Social Science Computer Review 24: 209–26. [Google Scholar] [CrossRef]
  41. Clemente-Suárez, Vicente Javier, Eduardo Navarro-Jiménez, Libertad Moreno-Luna, María Concepción Saavedra-Serrano, Manuel Jimenez, Juan Antonio Simón, and Jose Francisco Tornero-Aguilera. 2021. The impact of the COVID-19 pandemic on social, health, and economy. Sustainability 13: 6314. [Google Scholar] [CrossRef]
  42. Cline, Eric. 2021. 1177 B.C.: The Year Civilization Collapsed (Revised and Updated). Princeton: Princeton University Press. [Google Scholar]
  43. Cooper, Claire L., Graeme T. Swindles, Ivan P. Savov, Anja Schmidt, and Karen L. Bacon. 2018. Evaluating the relationship between climate change and volcanism. Earth-Science Reviews 177: 238–47. [Google Scholar] [CrossRef]
  44. Crabtree, Stefani. 2016. Simulating littoral trade: Modeling the trade of wine in the Bronze to Iron Age transition in Southern France. Land 5: 5. [Google Scholar] [CrossRef]
  45. Cravioto, Enrique G., and Inmaculada García. 2007. La primera peste de los Antoninos (165–170). Una epidemia en la Roma imperial. Asclepio 59: 7–22. [Google Scholar] [CrossRef]
  46. Cravioto, Enrique G., and Inmaculada García. 2014. Una aproximación a las pestes y epidemias en la antigüedad. Espacio Tiempo y Forma. Serie II, Historia Antigua 26: 63–82. [Google Scholar] [CrossRef]
  47. De Crespigny, Rafe. 2017. Fire over Luoyang: A History of the Later Han Dynasty, 23–220 AD. Leiden: Brill. [Google Scholar]
  48. De Ligt, Luuk. 2002. Tax transfers in the Roman Empire. In The Transformation of Economic Life Under the Roman Empire. Edited by Lukas de Blois and John Rich. Boston: Brill, pp. 48–66. [Google Scholar]
  49. De Ligt, Luuk, and Laurens E. Tacoma. 2016. Approaching migration in the early Roman Empire. In Migration and Mobility in the Early Roman Empire. Edited by Luuk De Ligt and Laurens Ernst Tacoma. Leiden: Brill, pp. 1–22. [Google Scholar]
50. De Romanis, Federico. 2007. In tempi di guerra e di peste. Horrea e mobilità del grano pubblico tra gli Antonini e i Severi. Antiquités Africaines 43: 187–230. [Google Scholar] [CrossRef]
  51. Diamond, Jared. 2011. Collapse: How Societies Choose to Fail or Succeed. New York: Penguin. [Google Scholar]
52. Djurdjevac Conrad, Nataša, Luzie Helfmann, Johannes Zonker, Stefanie Winkelmann, and Christof Schütte. 2018. Human mobility and innovation spreading in Ancient Times: A stochastic agent-based simulation approach. EPJ Data Science 7: 24. [Google Scholar] [CrossRef]
  53. Dudbridge, Glen. 2018. Reworking the world system paradigm. Past & Present 238: 297–316. [Google Scholar] [CrossRef]
  54. Duggan, Ana T., Maria A. Perdomo, Lucy H. Piombino-Mascali, Gino Fornaciari, Alberto Marota, Hendrik N. Poinar, and Edward C. Holmes. 2016. 17th century variola virus reveals the recent history of smallpox. Current Biology 26: 3407–12. [Google Scholar] [CrossRef]
  55. Duncan-Jones, Richard. 1990. Structure and Scale in the Roman Economy. Cambridge: Cambridge University Press. [Google Scholar]
  56. Duncan-Jones, Richard. 1996. The impact of the Antonine plague. Journal of Roman Archaeology 9: 108–36. [Google Scholar] [CrossRef]
  57. Duncan-Jones, Richard. 2018. The Antonine plague revisited. Arctos: Acta Philologica Fennica 52: 41–72. [Google Scholar] [CrossRef]
  58. Elliott, Colin. 2016. The Antonine Plague, climate change and local violence in Roman Egypt. Past & Present 231: 3–31. [Google Scholar] [CrossRef]
  59. Elliott, Colin. 2024. Pox Romana: The Plague That Shook the Roman World. Princeton: Princeton University Press. [Google Scholar]
  60. Erdkamp, Paul. 2005. The Grain Market in the Roman Empire: A Social, Political and Economic Study. New York: Cambridge University Press. [Google Scholar]
  61. Erdkamp, Paul. 2019. War, food, climate change, and the decline of the Roman Empire. Journal of Late Antiquity 12: 422–65. [Google Scholar] [CrossRef]
  62. Erdkamp, Paul. 2021. Climate change and the productive landscape in the mediterranean region in the Roman period. In Climate Change and Ancient Societies in Europe and the Near East: Diversity in Collapse and Resilience. Edited by Paul Erdkamp, Joseph G. Manning and Koenraad Verboven. Cham: Springer, pp. 411–42. [Google Scholar]
  63. Fears, Jesse Rufus. 2004. The plague under Marcus Aurelius and the decline and fall of the Roman Empire. Infectious Disease Clinics of North America 18: 65–77. [Google Scholar] [CrossRef]
  64. Ferreira, Claudia, Marie-Françoise J. Doursout, and Joselito S. Balingit. 2023. 2000 Years of Pandemics: Past, Present, and Future. Cham: Springer. [Google Scholar]
  65. Fitzpatrick, Matthew P. 2011. Provincializing Rome: The Indian Ocean trade network and Roman imperialism. Journal of World History 22: 27–54. [Google Scholar] [CrossRef]
  66. Flemming, Rebecca. 2019. Galen and the Plague. In Galen’s Treatise Περὶ Ἀλυπίας (De Indolentia) in Context: A Tale of Resilience. Edited by Caroline Petit. Leiden: Brill, pp. 219–44. [Google Scholar]
  67. Frank, Andre G., and Barry K. Gills. 1993. The World System: Five Hundred Years or Five Thousand? New York: Routledge. [Google Scholar]
  68. Frank, Andre Gunder. 1990. A theoretical introduction to 5,000 years of world system history. Review (Fernand Braudel Center) 13: 155–248. [Google Scholar]
  69. Frank, Andre Gunder. 1991. A plea for world system history. Journal of World History 2: 1–28. [Google Scholar]
  70. Frank, Tenney. 2006. An Economic History of Rome. Kitchener: Batoche. [Google Scholar]
  71. Fulford, Michael. 1987. Economic interdependence among urban communities of the Roman Mediterranean. World Archaeology 19: 58–75. [Google Scholar] [CrossRef]
  72. Gaia, Deivid Valério. 2020. Os Antoninos: O apogeu e o fim da Pax Romana. In História de Roma Antiga: Volume II: Império Romano do Ocidente e Romanidade Hispânica. Edited by José Luís Brandão and Francisco de Oliveira. Coimbra: Imprensa da Universidade de Coimbra, pp. 175–215. [Google Scholar]
  73. Geoffroy, Andrés Sáez. 2025. Redes de difusión de la Plaga de Atenas y la Peste Antonina: Notas para una comprensión histórica de la transmisión de las enfermedades en la Antigüedad. Revista Chilena de Infectología 42: 169–75. [Google Scholar] [CrossRef]
  74. Geoffroy, Andrés Sáez, and Joel Parra Díaz. 2020. De la Peste Antonina a la peste de Cipriano: Alcances y consecuencias de las pestes globales en el Imperio Romano en el siglo III d.C. Revista Chilena de Infectología 37: 450–55. [Google Scholar] [CrossRef]
  75. Georgescu-Roegen, Nicholas. 1971. The Entropy Law and the Economic Process. Cambridge: Harvard University Press. [Google Scholar]
  76. Gilliam, James F. 1961. The plague under Marcus Aurelius. The American Journal of Philology 82: 225–51. [Google Scholar] [CrossRef]
  77. Gills, Barry K., and Andre Gunder Frank. 1990. The cumulation of accumulation: Theses and research agenda for 5000 years of world system history. Dialectical Anthropology 15: 19–42. [Google Scholar] [CrossRef]
78. Gills, Barry K., and Andre Gunder Frank. 1992. World system cycles, crises, and hegemonial shifts, 1700 BC to 1700 AD. Review (Fernand Braudel Center) 15: 621–87. [Google Scholar]
  79. Gonzalez, Jean-Paul, Micheline Guiserix, Frank Sauvage, Jean-Sébastien Guitton, Pierre Vidal, Nargès Bahi-Jaber, Hechmi Louzir, and Dominique Pontier. 2010. Pathocenosis: A holistic approach to disease ecology. EcoHealth 7: 237–41. [Google Scholar] [CrossRef]
  80. Gourevitch, Danielle. 2005. The Galenic plague: A breakdown of the imperial pathocoenosis: Pathocoenosis and longue durée. History and Philosophy of the Life Sciences 27: 57–69. [Google Scholar]
  81. Grimes, Peter E. 2017. Evolution and world-systems: Complexity, energy, and form. Journal of World-Systems Research 23: 678–732. [Google Scholar] [CrossRef]
  82. Grinin, Leonid E. 2017. On systemic integration in the World System since the Bronze Age. Social Evolution and History 16: 76–111. [Google Scholar]
  83. Grmek, Mirko Drazen. 1969. Préliminaires d’une étude historique des maladies. Annales: Economies, Sociétés, Civilizations 24: 1473–83. [Google Scholar] [CrossRef]
  84. Gunaratne, Shelton A. 2007. World-system as a dissipative structure: A macro model to do communication research. The Journal of International Communication 13: 11–38. [Google Scholar] [CrossRef]
  85. Haas, Charles. 2006. La peste antonine. Bulletin de l’Académie Nationale de Médecine 190: 1093–98. [Google Scholar] [CrossRef]
  86. Haldon, John, Hugh Elton, Sabine R. Huebner, Adam Izdebski, Lee Mordechai, and Timothy P. Newfield. 2018. Plagues, climate change, and the end of an empire: A response to Kyle Harper’s The Fate of Rome (1): Climate. History Compass 16: e12508. [Google Scholar] [CrossRef]
  87. Haller, Sherry L., Chen Peng, Grant McFadden, and Stefan Rothenburg. 2014. Poxviruses and the evolution of host range and virulence. Infection, Genetics and Evolution 21: 15–40. [Google Scholar] [CrossRef]
  88. Harper, Kyle. 2017. The Fate of Rome: Climate, Disease, and the End of an Empire. Princeton: Princeton University Press. [Google Scholar]
  89. Harper, Kyle. 2021a. Germs and empire: The agency of the microscopic. In Empire and Religion in the Roman World. Edited by Harriet Flower. Cambridge: Cambridge University Press, pp. 13–34. [Google Scholar]
  90. Harper, Kyle. 2021b. Plagues Upon the Earth: Disease and the Course of Human History. Princeton: Princeton University Press. [Google Scholar]
  91. Hopkins, Keith. 1980. Taxes and trade in the Roman Empire (200 B.C.–A.D. 400). Journal of Roman Studies 70: 101–25. [Google Scholar] [CrossRef]
92. Howgego, Christopher. 1992. The supply and use of money in the Roman world, 200 B.C. to A.D. 300. Journal of Roman Studies 82: 1–31. [Google Scholar] [CrossRef]
  93. Huebner, Sabine R. 2020. Climate change in the breadbasket of the Roman Empire: Explaining the decline of the Fayum villages in the third century CE. Studies in Late Antiquity 4: 486–518. [Google Scholar] [CrossRef]
  94. Hughes, Austin L., Stephanie Irausquin, and Robert Friedman. 2010. The evolutionary biology of poxviruses. Infection, Genetics and Evolution 10: 50–59. [Google Scholar] [CrossRef] [PubMed]
  95. Ivanov, Dmitry. 2021. Supply chain viability and the COVID-19 pandemic: A conceptual and formal generalisation of four major adaptation strategies. International Journal of Production Research 59: 3535–52. [Google Scholar] [CrossRef]
  96. Johnson, Allen W., and Timothy K. Earle. 2000. The Evolution of Human Societies: From Foraging Group to Agrarian State. Redwood City: Stanford University Press. [Google Scholar]
  97. Karamanou, Marianna, George Panayiotakopoulos, Gregory Tsoucalas, Antonis A. Kousoulis, and George Androutsos. 2021. From miasmas to germs: A historical approach to theories of infectious disease transmission. Le Infezioni in Medicina 20: 58–62. [Google Scholar]
  98. Karasaridis, Anestis, and Aleš Chalupa. 2025. Comparative SIR/SEIR modeling of the Antonine Plague in Rome. PLoS ONE 20: e0313684. [Google Scholar] [CrossRef] [PubMed]
  99. Karimjonova, Shahlo Ravshanjonovna. 2024. The historical and cultural heritage of the Silk Road: Urgent issues and research prospects. Madani Multidisciplinary Journal 4: 1557–65. [Google Scholar] [CrossRef]
  100. Kennedy, Jonathan Ryan. 2024. Pathogenesis: A History of the World in Eight Plagues. New York: Crown. [Google Scholar]
  101. Kessler, David, and Peter Temin. 2007. The organization of the grain trade in the early Roman Empire. The Economic History Review 60: 313–32. [Google Scholar] [CrossRef]
102. Kidder, Tristram R., Liu Haiwang, Michael J. Storozum, and Qin Zhen. 2016. New perspectives on the collapse and regeneration of the Han Dynasty. In Beyond Collapse: Archaeological Perspectives on Resilience, Revitalization, and Transformation in Complex Societies. Edited by Ronald K. Faulseit. Carbondale: Southern Illinois University Press, pp. 70–98. [Google Scholar]
  103. Kirchner, James W., and Bitty A. Roy. 2002. Evolutionary implications of host–pathogen specificity: Fitness consequences of pathogen virulence traits. Evolutionary Ecology 14: 665–92. [Google Scholar] [CrossRef]
104. Korotayev, Andrey. 2008. Compact mathematical models of world-system development: How they can help us to clarify our understanding of globalization processes. In Globalization as Evolutionary Process: Modeling Global Change. Edited by George Modelski, Tessaleno C. Devezas and William R. Thompson. New York: Routledge, pp. 133–60. [Google Scholar]
  105. Korotayev, Andrey, and Julia Zinkina. 2017. Systemic boundaries issue in the light of mathematical modeling of the world-system Evolution. Journal of Globalization Studies 8: 78–96. [Google Scholar]
  106. Korotayev, Andrey, Artemy Malkov, and Daria Khaltourina. 2006a. Introduction to Social Macrodynamics: Compact Macromodels of the World System Growth. Moscow: URSS. [Google Scholar]
107. Korotayev, Andrey, Artemy Malkov, and Daria Khaltourina. 2006b. Introduction to Social Macrodynamics: Secular and Millennial Trends. Moscow: URSS. [Google Scholar]
  108. Kron, Geoffrey. 2012. Food production. In The Cambridge Companion to the Roman Economy. Edited by Walter Scheidel. Cambridge: Cambridge University Press, pp. 156–72. [Google Scholar]
  109. Leggett, Helen C., Angus Buckling, Gregory H. Long, and Mike Boots. 2013. Generalism and the evolution of parasite virulence. Trends in Ecology & Evolution 28: 592–96. [Google Scholar] [CrossRef]
  110. Leidwanger, Justin. 2013. Modeling distance with time in ancient Mediterranean seafaring: A GIS application for the interpretation of maritime connectivity. Journal of Archaeological Science 40: 3302–8. [Google Scholar] [CrossRef]
  111. Leslie, Donald D., and Kenneth Herbert James Gardiner. 1995. “All roads lead to Rome”: Chinese knowledge of the Roman Empire. Journal of Asian History 29: 61–81. [Google Scholar]
  112. Levenson, Joseph, and Franz Schurmann. 1969. China: An Interpretive History, from the Beginnings to the Fall of Han. Berkeley: University of California Press. [Google Scholar]
  113. Littman, Robert J., and Maxwell Littman. 1973. Galen and the Antonine Plague. The American Journal of Philology 94: 243–55. [Google Scholar] [CrossRef]
114. Liu, Yanxin, and Xiaodong Yan. 2020. Comparison of regional droughts impacts and social responses in the historical China: A case study of the Han Dynasty. Physics and Chemistry of the Earth, Parts A/B/C 117: 102854. [Google Scholar] [CrossRef]
  115. Lo Cascio, Elio. 2016. The impact of migration on the demographic profile of the city of Rome: A reassessment. In Migration and Mobility in the Early Roman Empire. Edited by Luuk De Ligt and Laurens Ernst Tacoma. Leiden: Brill, pp. 23–33. [Google Scholar]
  116. Mariani, Manuel S., Zhuo-Ming Ren, Jordi Bascompte, and Claudio Juan Tessone. 2019. Nestedness in complex networks: Observation, emergence, and implications. Physics Reports 813: 1–90. [Google Scholar] [CrossRef]
117. Marzano, Annalisa. 2021. Figures in an imperial landscape: Ecological and societal factors on settlement patterns and agriculture in Roman Italy. In Climate Change and Ancient Societies in Europe and the Near East: Diversity in Collapse and Resilience. Edited by Paul Erdkamp, Joseph G. Manning and Koenraad Verboven. Cham: Springer, pp. 505–34. [Google Scholar]
118. McConnell, Joseph R., Andrew I. Wilson, Andreas Stohl, Monica M. Arienzo, Nathan J. Chellman, Sabine Eckhardt, Elisabeth M. Thompson, A. Mark Pollard, and Jørgen Peder Steffensen. 2018. Lead pollution recorded in Greenland ice indicates European emissions tracked plagues, wars, and imperial expansion during antiquity. Proceedings of the National Academy of Sciences 115: 5726–31. [Google Scholar] [CrossRef] [PubMed]
  119. McCormick, Michael, Ulf Büntgen, Mark A. Cane, Edward R. Cook, Kyle Harper, Peter Huybers, Thomas Litt, Sturt W. Manning, Paul Andrew Mayewski, Alexander F. M. More, and et al. 2012. Climate change during and after the Roman Empire: Reconstructing the past from scientific and historical evidence. The Journal of Interdisciplinary History 43: 169–220. [Google Scholar] [CrossRef]
  120. McDonald, Brandon. 2021. The Antonine crisis: Climate change as a trigger for epidemiological and economic turmoil. In Climate Change and Ancient Societies in Europe and the Near East: Diversity in Collapse and Resilience. Edited by Paul Erdkamp, Joseph G. Manning and Koenraad Verboven. Cham: Springer, pp. 373–410. [Google Scholar]
  121. McLaughlin, Raoul. 2010. Rome and the Distant East: Trade Routes to the Ancient Lands of Arabia, India and China. London: Continuum. [Google Scholar]
  122. McMichael, Anthony J. 2012. Insights from past millennia into climatic impacts on human health and survival. Proceedings of the National Academy of Sciences 109: 4730–37. [Google Scholar] [CrossRef]
  123. McNeill, William H. 1982. The Pursuit of Power: Technology, Armed Force, and Society Since A.D. 1000. Chicago: University of Chicago Press. [Google Scholar]
  124. McNeill, William H. 1989. Plagues and Peoples. New York: Anchor Books. [Google Scholar]
  125. McNeill, William H. 1995. The changing shape of world history. History and Theory 34: 8–26. [Google Scholar] [CrossRef]
  126. McNeill, William H. 1997. A History of the Human Community: Prehistory to 1500. London: Prentice Hall. [Google Scholar]
127. Meadows, Donella. 2022. Pensando em Sistemas: Como o Pensamento Sistêmico pode Ajudar a Resolver os Grandes Problemas Globais. Rio de Janeiro: Sextante. [Google Scholar]
  128. Mehlhorn, Heinz. 2023. Infectious Diseases Along the Silk Roads: The Spread of Parasitoses and Culture Past and Today. Cham: Springer. [Google Scholar]
129. Mischke, Steffen, Chengjun Zhang, Chenglin Liu, Jiafu Zhang, Zhongping Lai, and Hao Long. 2019. Landscape response to climate and human impact in Western China during the Han Dynasty. In Socio-Environmental Dynamics Along the Historical Silk Road. Edited by Liang Yang, Hans-Rudolf Bork, Xiuqi Fang and Steffen Mischke. Cham: Springer, pp. 45–66. [Google Scholar]
  130. Mitrofan, Dragoș. 2014. The Antonine Plague in Dacia and Moesia Inferior. Journal of Ancient History and Archaeology 1: 9–13. [Google Scholar] [CrossRef]
  131. Monerie, Paul-Arthur, Marie-Pierre Moine, Laurent Terray, and Sophie Valcke. 2017. Quantifying the impact of early 21st century volcanic eruptions on global-mean surface temperature. Environmental Research Letters 12: 054010. [Google Scholar] [CrossRef]
  132. Moyer, Richard W. 2005. Smallpox in human history. In Orthopoxviruses Pathogenic for Humans. Edited by Sergei N. Shchelkunov, Svetlana S. Marennikova and Richard W. Moyer. New York: Springer, pp. 1–10. [Google Scholar]
  133. Nappo, Dario. 2007. The impact of the third century crisis on the international trade with the East. In Crises and the Roman Empire. Edited by Olivier Hekster, Gerda de Kleijn and Daniëlle Slootjes. Leiden: Brill, pp. 233–44. [Google Scholar]
  134. Newfield, Timothy P., Ana T. Duggan, and Hendrik Poinar. 2022. Smallpox’s antiquity in doubt. Journal of Roman Archaeology 35: 897–913. [Google Scholar] [CrossRef]
  135. Newhall, Christopher G., and Stephen Self. 1982. The Volcanic Explosivity Index (VEI): An estimate of explosive magnitude for historical volcanism. Journal of Geophysical Research 87: 1231–38. [Google Scholar] [CrossRef]
136. Oddo, Luigi, Corrado Lagazio, and Alister Filippini. 2023. Pandemics are similar, societies are not: Roman Egypt’s reaction to the Antonine Plague. SSRN, 1–33. [Google Scholar] [CrossRef]
  137. Oldstone, Michael Beaureguard A. 2010. Viruses, Plagues, and History: Past, Present, and Future. Oxford: Oxford University Press. [Google Scholar]
  138. Oliveira, Julio Cesar Magalhães de. 2022. A Peste Antonina: A experiência e o impacto de uma pandemia na antiguidade. Phoînix 28: 168–83. [Google Scholar] [CrossRef]
  139. Paolilli, Antonio Luigi. 2008. Development and crisis in ancient Rome: The role of Mediterranean trade. Historical Social Research/Historische Sozialforschung 33: 274–88. [Google Scholar]
  140. Perry, Jonathan S. 2011. Organized societies: Collegia. In The Oxford Handbook of Social Relations in the Roman World. Edited by Michael Peachin. Oxford: Oxford University Press, pp. 499–516. [Google Scholar]
  141. Pieterse, Jan Nederveen. 1988. A critique of world system theory. International Sociology 3: 251–66. [Google Scholar] [CrossRef]
  142. Polanyi, Karl. 2012. A Subsistência do Homem e Ensaios Correlatos. Rio de Janeiro: Contraponto. [Google Scholar]
  143. Powell, Adam, Stephen Shennan, and Mark G. Thomas. 2009. Late Pleistocene demography and the appearance of modern human behavior. Science 324: 1298–301. [Google Scholar] [CrossRef] [PubMed]
  144. Pujawan, I. Nyoman, and Alpha Umaru Bah. 2022. Supply chains under COVID-19 disruptions: Literature review and research agenda. Supply Chain Forum: An International Journal 23: 81–95. [Google Scholar] [CrossRef]
  145. Quammen, David. 2013. Spillover: Animal Infections and the Next Human Pandemic. New York: W. W. Norton & Company. [Google Scholar]
  146. Quandt, Amy, Annie J. Keeney, Luis Flores, Daniela Flores, and Mercy Villaseñor. 2022. “We left the crop there lying in the field”: Agricultural worker experiences with the COVID-19 pandemic in a rural US-Mexico border region. Journal of Rural Studies 95: 533–43. [Google Scholar] [CrossRef] [PubMed]
  147. Retief, Francois P., and Louise Cilliers. 2000. Epidemics of the Roman Empire, 27 BC-AD 476. South African Medical Journal 90: 267–72. [Google Scholar]
  148. Rickman, Geoffrey E. 1980. The grain trade under the Roman Empire. Memoirs of the American Academy in Rome 36: 261–75. [Google Scholar] [CrossRef]
  149. Ruiz-Patiño, Alejandro. 2020. La plaga antonina. Medicina 42: 175–81. [Google Scholar] [CrossRef]
  150. Sabbatani, Sergio, and Sirio Fiorino. 2009. La peste antonina e il declino dell’Impero Romano. Ruolo della guerra partica e della guerra marcomannica tra il 164 e il 182 d.C. nella diffusione del contagio. Le Infezioni in Medicina 17: 261–75. [Google Scholar] [PubMed]
  151. Sajovec, Katya, Samer Abboud, and Eliza Gettel. 2024. Archaeological insights to modern pandemics: Contemporary inferences from the Antonine Plague. Veritas: Villanova Research Journal 6: 32–43. [Google Scholar] [CrossRef]
  152. Sáez, Andrés. 2016. La Peste Antonina: Una peste global en el siglo II d.C. Revista Chilena de Infectología 33: 218–21. [Google Scholar] [CrossRef]
  153. Scheidel, Walter. 2001. Death on the Nile: Disease and the Demography of Roman Egypt. Leiden: Brill. [Google Scholar]
  154. Scheidel, Walter. 2007. A model of real income growth in Roman Italy. Historia: Zeitschrift für Alte Geschichte 56: 322–46. [Google Scholar] [CrossRef]
155. Scheidel, Walter. 2009. Roman wellbeing and the economic consequences of the “Antonine Plague”. SSRN, 1–29. [Google Scholar] [CrossRef]
  156. Scheidel, Walter. 2012a. Approaching the Roman economy. In The Cambridge Companion to the Roman Economy. Edited by Walter Scheidel. Cambridge: Cambridge University Press, pp. 1–22. [Google Scholar]
  157. Scheidel, Walter. 2012b. Slavery. In The Cambridge Companion to the Roman Economy. Edited by Walter Scheidel. Cambridge: Cambridge University Press, pp. 89–113. [Google Scholar]
  158. Schneider, Jane. 1991. Was there a precapitalist world-system? In Core/Periphery Relations in the Precapitalist Worlds. Edited by Christopher K. Chase-Dunn and Thomas D. Hall. Boulder: Westview Press, pp. 46–66. [Google Scholar]
  159. Seligman, Charles G. 1937. The Roman Orient and the Far East. Antiquity 11: 5–30. [Google Scholar] [CrossRef]
  160. Shannon, Thomas R. 1996. An Introduction to the World-System Perspective, 2nd ed. Boulder: Westview Press. [Google Scholar]
  161. Sherman, Irwin W. 2006. The Power of Plagues. Washington: ASM Press. [Google Scholar]
  162. Shi, Feng, Cheng Sun, Antoine Guion, Qiuzhen Yin, Sen Zhao, Ting Liu, and Zhengtang Guo. 2022. Roman Warm Period and Late Antique Little Ice Age in an Earth System model large ensemble. Journal of Geophysical Research: Atmospheres 127: e2021JD035832. [Google Scholar] [CrossRef]
  163. Silver, Morris. 2011. Antonine plague and deactivation of Spanish mines. Arctos: Acta Philologica Fennica 45: 133–42. [Google Scholar]
  164. Smil, Vaclav. 2024. Energia e Civilização: Uma História. Porto Alegre: Bookman. [Google Scholar]
  165. Stenchikov, Georgiy. 2009. The role of volcanic activity in climate and global change. In Climate Change. Edited by Trevor M. Letcher. Amsterdam: Elsevier, pp. 77–102. [Google Scholar]
  166. Tainter, Joseph. 1988. The Collapse of Complex Societies. Cambridge: Cambridge University Press. [Google Scholar]
  167. Taleb, Nassim Nicholas. 2012. Antifragile: Things That Gain from Disorder. New York: Random House. [Google Scholar]
  168. Tan, Jinfeng. 2020. The spread and integration of religious culture along the eastern Silk Road. Journal of Cultural and Religious Studies 8: 174–77. [Google Scholar] [CrossRef]
  169. Teggart, Frederick John. 1969. Rome and China: A Study of Correlations in Historical Events. Berkeley: University of California Press. [Google Scholar]
  170. Thèves, Catherine, Eric Crubézy, and Philippe Biagini. 2016. History of smallpox and its spread in human populations. Microbiology Spectrum 4: PoH-0004-2014. [Google Scholar] [CrossRef]
  171. Thèves, Catherine, Phillipe Biagini, and Eric Crubézy. 2014. The rediscovery of smallpox. Clinical Microbiology and Infection 20: 210–18. [Google Scholar] [CrossRef]
172. Thorley, John. 1971. The silk trade between China and the Roman Empire at its height, circa A.D. 90–130. Greece and Rome 18: 71–80. [Google Scholar] [CrossRef]
173. Tian, Huidong, Chuan Yan, Lei Xu, Ulf Büntgen, Nils C. Stenseth, and Zhibin Zhang. 2017. Scale-dependent climatic drivers of human epidemics in ancient China. Proceedings of the National Academy of Sciences 114: 12970–75. [Google Scholar] [CrossRef] [PubMed]
  174. Tse, Wicky W. K. 2018. The Collapse of China’s Later Han Dynasty, 25–200 CE: The Northwest Borderlands and the Edge of Empire. New York: Routledge. [Google Scholar]
175. Turchin, Peter. 2008. Modeling periodic waves of integration in the Afro-Eurasian world-system. In Globalization as Evolutionary Process: Modeling Global Change. Edited by George Modelski, Tessaleno Campos Devezas and William R. Thompson. New York: Routledge, pp. 161–89. [Google Scholar]
  176. van Minnen, Peter. 2001. P. Oxy. LXVI 4527 and the Antonine Plague in the Fayyum. Zeitschrift für Papyrologie und Epigraphik 135: 175–77. [Google Scholar]
  177. Veyne, Paul. 1995. Do ventre materno ao testamento. In História da Vida Privada: Do Império Romano ao ano mil. Edited by Paul Veyne. São Paulo: Companhia das Letras, pp. 23–43. [Google Scholar]
  178. Visher, Emily, and Mike Boots. 2020. The problem of mediocre generalists: Population genetics and eco-evolutionary perspectives on host breadth evolution in pathogens. Proceedings of the Royal Society B 287: 20201230. [Google Scholar] [CrossRef]
  179. Wallerstein, Immanuel. 1974. The rise and future demise of the world capitalist system: Concepts for comparative analysis. Comparative Studies in Society and History 16: 387–415. [Google Scholar] [CrossRef]
  180. Wallerstein, Immanuel. 1976. A world-system perspective on the social sciences. The British Journal of Sociology 27: 343–52. [Google Scholar] [CrossRef]
  181. Wallerstein, Immanuel. 2007. World-Systems Analysis: An Introduction. Durham: Duke University Press. [Google Scholar]
  182. Wallerstein, Immanuel. 2011. The Modern World-System I: Capitalist Agriculture and the Origins of the European World-Economy in the Sixteenth Century. Berkeley: University of California Press. [Google Scholar]
  183. Wertheim, Joel O. 2017. Viral evolution: Mummy virus challenges presumed history of smallpox. Current Biology 27: R119–R120. [Google Scholar] [CrossRef]
  184. Whittow, Mark. 2015. How much trade was local, regional and inter-regional? A comparative perspective on the Late Antique Economy. In Local Economies? Production and Exchange of Inland Regions in Late Antiquity. Edited by Luke Lavan. Leiden: Brill, pp. 131–65. [Google Scholar]
  185. Williams, Tim. 2024. Silk Roads. In Encyclopedia of Archaeology, 2nd ed. Edited by Efthymia Nikita and Thilo Rehren. Amsterdam: Elsevier, pp. 618–28. [Google Scholar] [CrossRef]
  186. Wolfe, Nathan D., Claire Panosian Dunavan, and Jared Diamond. 2007. Origins of major human infectious diseases. Nature 447: 279–83. [Google Scholar] [CrossRef]
  187. Xu, Ruize. 2023. The impact for the countries joined the ancient Silk Road. Highlights in Business, Economics and Management 22: 411–15. [Google Scholar] [CrossRef]
  188. Yang, Liang Emlyn, Hans-Rudolf Bork, Xiuqi Fang, Steffen Mischke, Mara Weinelt, and Josef Wiesehöfer. 2019. On the paleo-climatic/environmental impacts and socio-cultural system resilience along the historical Silk Road. In Socio-Environmental Dynamics Along the Historical Silk Road. Edited by Liang Yang, Hans-Rudolf Bork, Xiuqi Fang and Steffen Mischke. Cham: Springer, pp. 3–24. [Google Scholar]
  189. Yin, Jun, Yun Su, and Xiuqi Fang. 2016. Climate change and social vicissitudes in China over the past two millennia. Quaternary Research 86: 133–43. [Google Scholar] [CrossRef]
  190. Young, Gary K. 2003. Rome’s Eastern Trade: International Commerce and Imperial Policy 31 BC–AD 305. Hoboken: Taylor and Francis. [Google Scholar]
  191. Zaviyeh, Seyyed Farhad Seyyed Ahmadi, and Seyyed Ali Reza Golshani. 2020. Recognition of the outbreak of deadly diseases of the Antonine or Galen Era in the second century AD (96–192 AD). Medical History 11: 81–92. [Google Scholar]
  192. Zhang, Shengda, and David Dian Zhang. 2019. Population-influenced spatiotemporal pattern of natural disaster and social crisis in China, AD 1–1910. Science China Earth Sciences 62: 1138–50. [Google Scholar] [CrossRef]
  193. Zhao, Haoyue, and Andrew Wilson. 2025. A case of osteomyelitis variolosa from Roman Britain, and the introduction of smallpox to the Roman World. Journal of Roman Archaeology, 1–32. [Google Scholar] [CrossRef]
  194. Zinkina, Julia, David Christian, Leonid Grinin, Ilya Ilyin, Alexey Andreev, Ivan Aleshkovski, Sergey Shulgin, and Andrey Korotayev. 2019. A Big History of Globalization: The Emergence of a Global World System. Cham: Springer. [Google Scholar]
  195. Zonneveld, Karin A. F., Kyle Harper, Andreas Klügel, Liang Chen, Gert De Lange, and Gerard J. M. Versteegh. 2024. Climate change, society, and pandemic disease in Roman Italy between 200 BCE and 600 CE. Science Advances 10: eadk1033. [Google Scholar] [CrossRef] [PubMed]