Article

Can Government Incentive and Penalty Mechanisms Effectively Mitigate Tacit Collusion in Platform Algorithmic Operations?

School of Economics and Management, Beijing Jiaotong University, Beijing 100044, China
* Author to whom correspondence should be addressed.
Systems 2025, 13(4), 293; https://doi.org/10.3390/systems13040293
Submission received: 10 March 2025 / Revised: 2 April 2025 / Accepted: 15 April 2025 / Published: 16 April 2025
(This article belongs to the Section Systems Practice in Social Science)

Abstract

Algorithmic collusion essentially constitutes a form of monopolistic agreement that utilizes algorithms as tools for signaling collusion, making it particularly challenging for both consumers and antitrust enforcement agencies to detect. Algorithmic collusion can be primarily categorized into two distinct types: explicit collusion and tacit collusion. This paper specifically investigates the phenomenon of platform-driven tacit algorithmic collusion within the platform economy. Employing an evolutionary game theory approach, we conduct a comprehensive simulation analysis of the economic system involving four key stakeholders: government regulators, platform operators, in-platform merchants, and consumers. We investigate the conditions that may reduce the likelihood of platforms engaging in algorithmic tacit collusion, examine how government incentive–penalty mechanisms influence such collusive behaviors, and analyze in depth the critical roles played by both in-platform merchants and consumers in detecting and exposing these practices.

1. Introduction

In the era of artificial intelligence, algorithms have become extensively integrated into the business operations of the platform economy, demonstrating a dual nature that simultaneously fosters and impedes market competition. On one hand, algorithms serve as powerful instruments for value extraction from big data, substantially enhancing market information transparency and operational efficiency while effectively augmenting consumer welfare. On the other, they introduce novel risks that transcend traditional forms of collusion [1]. The use of digital technologies by platforms may undermine algorithmic fairness, foster data monopolies, and reinforce technological hegemony [2]. In recent years, algorithmic collusion has emerged as a critical antitrust concern in the platform economy. In 2019, a Federal Trade Commission (FTC) investigation revealed that multiple third-party sellers on Amazon employed algorithmic pricing tools to automatically match competitors’ prices. Although no explicit agreement existed among sellers, these algorithms facilitated a stable high-price equilibrium through real-time monitoring and adjustments, enabling sellers to sustain competitiveness and extract supercompetitive profits in a highly contested market. The implicated algorithm was ultimately discontinued in 2019. While such tools are ostensibly designed to optimize pricing strategies, their misuse, particularly when coupled with other anti-competitive practices, can distort market outcomes, harm consumer welfare, and erode economic efficiency [3]. Distinct from traditional collusion, algorithms possess not only the capacity to facilitate collusive outcomes under human direction but also demonstrate the ability to autonomously optimize and evolve through continuous trial-and-error processes and self-learning mechanisms [4]. Algorithms demonstrate significantly superior efficiency in coordination and collusion compared to human capabilities, thereby presenting unprecedented challenges to contemporary antitrust legal frameworks [5]. Of particular concern is the deepening involvement of algorithms in economic activities, which has substantially escalated their potential to foster collusive practices among market participants, thereby significantly intensifying uncertainties within the competitive market landscape [6].
Algorithmic collusion can be primarily categorized into two distinct types: explicit collusion and tacit collusion. Explicit algorithmic collusion typically involves clear subjective intent among colluding parties, where market participants establish monopolistic agreements through explicit communication channels such as oral or written arrangements, with algorithms serving primarily as implementation tools for executing these collusive practices. Algorithmic tacit collusion primarily occurs in scenarios where market participants maintain their competitive positions through algorithmic interactions without explicit written or verbal agreements. Calvano [4] and Cont et al. [7] define algorithmic tacit collusion as a phenomenon wherein pricing algorithms with learning capabilities, when engaged in repeated market interactions, may converge on prices that deviate from competitive benchmarks. Such collusion can emerge from the decentralized interactions of autonomous algorithms that adapt to market data [8]. While tacit collusion may arise from the synergistic behavior of self-learning algorithms, it also encompasses the platform-driven dimension: algorithms are not merely static, embedded technical artifacts but are enmeshed in socio-technical systems, shaped by platform governance. Far from being neutral tools for users and workers, these algorithms actively reconfigure competitive dynamics in society [9]. Sanchez and Katsamakas [10] further analyze managerial perspectives on algorithmic pricing in platform strategy, framing algorithm adoption as an endogenous strategic choice. Their work highlights how platforms balance multiple algorithms and how profits hinge on algorithmic design [11]. This paper thus examines how platforms’ algorithmic design or selection—even absent explicit coordination—can facilitate tacit collusion, with implications for market competition. Given platforms’ capacity to design operational rules, they can not only enhance their profitability and appropriate consumer surplus but can also develop and implement sophisticated monitoring mechanisms with minimal cost implications [12]. This advantage becomes particularly pronounced when platforms proactively collaborate with government regulators in formulating and executing comprehensive oversight mechanisms, thereby effectively mitigating the incidence of explicit algorithmic collusion among in-platform merchants [13]. Nevertheless, the inherent complexity and opacity of platform algorithms often hinder effective governmental oversight and intervention, potentially enabling platforms to exploit this regulatory gray zone for substantial market premium extraction [14]. In the absence of external regulatory intervention, in-platform merchants frequently demonstrate a propensity to acquiesce to algorithm-driven market dominance [15]. Consequently, the highly covert nature of platform-driven algorithmic tacit collusion makes it particularly challenging for consumers to discern the underlying manipulation mechanisms, thereby hindering the development of widespread social awareness and consumer vigilance regarding this issue. The government regulation of algorithmic tacit collusion in digital platforms (Figure 1) faces substantial challenges.
Both explicit and tacit algorithmic collusion in platform operations fall within the purview of traditional antitrust interventions; however, judicial outcomes have demonstrated significant divergence in these two distinct cases. The landmark case involving David Topkins’ utilization of algorithms to fix product prices serves as a paradigmatic example of explicit algorithmic collusion within platform ecosystems. The U.S. Department of Justice ruled that this practice constituted a violation of federal antitrust legislation, thereby establishing its applicability within the framework of traditional antitrust intervention against collusive practices. Algorithmic tacit collusion in platform operations is characterized by its operational feasibility and exceptional concealment, rendering it particularly challenging for detection in practical scenarios. Landmark cases involving platform-initiated algorithmic tacit collusion, including the Apple E-Books case, Meyer v. Kalanick, and Meyer v. Uber, have yielded contentious judicial outcomes. This controversy stems from the sophisticated ability of algorithmic tacit collusion to masquerade as legitimate competitive practices, thereby presenting substantial challenges for antitrust enforcement agencies and creating significant practical difficulties in regulatory implementation. Such algorithmic competition under the guise of normal market behavior not only obscures the boundary between healthy competition and monopolistic practices but also accelerates the homogenization of algorithmic technologies. This phenomenon has effectively transformed platforms into covert hub-and-spoke cartels [16], thereby intensifying market distortions and perpetuating unfair competitive practices. The existing antitrust legal framework demonstrates significant limitations in both institutional mechanisms and technical capabilities when addressing antitrust interventions against algorithmic tacit collusion. In 2022, China’s newly amended Anti-Monopoly Law of the People’s Republic of China added a new provision that states the following: “Operators shall not organize other operators to enter into monopoly agreements or provide substantial assistance to other operators to enter into monopoly agreements”. In this context, it is imperative to conduct comprehensive research on how governmental incentive–penalty mechanisms can effectively shape the behavioral patterns of platform-related entities, thereby mitigating algorithmic tacit collusion in platform operations. Such research holds significant potential for enhancing market competition fairness, protecting consumer welfare, and fostering sustainable development within the platform economy.
Consequently, investigating the formation mechanisms and restraining factors of algorithmic tacit collusion in platform ecosystems holds substantial practical significance for maintaining healthy competitive environments. Given that platform-driven algorithmic tacit collusion involves multiple stakeholders, diverse strategic interactions, and complex information exchanges, the multi-agent evolutionary game model proves particularly valuable. This approach effectively captures the inherent complexity, simulates algorithmic learning and adaptation processes across various environments, and elucidates the emergence conditions of tacit collusion. Furthermore, it reveals the intricate interplay between individual behavioral patterns and collective dynamics, thereby providing a more realistic representation of real-world evolutionary processes. Accordingly, this study develops a multi-agent evolutionary game model tailored for environments characterized by incomplete information and dynamic interactions, aiming to identify the conditions conducive to platform-driven algorithmic tacit collusion. Through the design of an optimized government incentive–penalty mechanism, we systematically explore effective intervention strategies by examining governmental regulatory approaches and the behavioral dynamics of relevant stakeholders. This research provides substantial decision-making support for antitrust policymakers and market regulatory authorities, contributing significantly to the development of more effective regulatory frameworks.

2. Literature Review

This study synthesizes the extant literature on algorithmic collusion, with particular emphasis on three critical dimensions: the formation mechanisms of collusive behaviors, governance strategies for algorithmic collusion, and antitrust regulatory frameworks.

2.1. Algorithmic Collusive Behavior

The study of collusion originated in the context of oligopolistic markets, where key facilitating factors include high market concentration, cost homogeneity, and restricted opportunities for covert price reductions, all of which significantly contribute to the emergence of collusive practices [17]. Recent scholarly investigations have demonstrated the feasibility of detecting collusive practices through the analysis of dynamic market indicators [18]. Firms’ collusive behavior facilitates the formation of stronger market power, which ultimately leads to market distortions [19,20], and early antitrust laws explicitly prohibited firms’ explicit collusion regarding prices and distribution areas. To dispel antitrust suspicions, collusive practices have progressively transitioned towards tacit forms, with contemporary research focusing on mechanisms such as price leadership [21] and common ownership structures [22]. Fines have emerged as the predominant intervention mechanism in antitrust legislation across jurisdictions, serving the dual purposes of deterring anti-competitive practices and compensating affected parties. Current scholarly discourse has predominantly centered on identifying optimal fine levels that effectively deter violations while maintaining proportionality [23]. In traditional markets, both explicit and tacit collusion demonstrate limited long-term sustainability. In particular, tacit collusion necessitates a greater degree of implicit coordination, presents higher implementation challenges, and incurs substantially greater stabilization costs.
While traditional collusion tends to be unstable due to high costs associated with agreement formation, coordination, and maintenance, the deployment of pricing algorithms significantly elevates collusion risks by enhancing market transparency and facilitating rapid information exchange [24]. Algorithm-driven collusion demonstrates technical efficiency and enhanced sustainability through automated processes. Within algorithmic collusion frameworks, data sharing emerges as a potent facilitator of collusive practices across various scenarios, typically requiring minimal to no human intervention [25]. Recent scholarly investigations have explored the potential for algorithmic collusion in digital marketplaces [26]. The OECD Competition Committee (2017) (OECD (2017), Big Data: Bringing Competition Policy to the Digital Era, http://www.oecd.org/daf/competition/big-data-bringing-competition-policy-to-the-digital-era.html, accessed on 16 April 2024) conducted a comprehensive analysis of algorithmic interactions, proposing a fourfold classification of algorithmic collusion: messenger-based collusion, hub-and-spoke collusion, predictive agent collusion, and autonomous learning collusion. Among the most critical legal challenges concerning algorithmic applications is the insufficiency of high price observations within platforms as conclusive evidence for establishing collusive practices [27]. The inherent opacity of in-platform algorithmic decision-making processes, often characterized as a “black box” phenomenon, renders the verification of platforms’ exclusionary or restrictive competitive intentions particularly challenging, thereby facilitating more concealed collusive practices. In-platform algorithmic tacit collusion is easier to implement [28], and algorithms are increasingly blurring the line between tacit and explicit collusion [29]. Moreover, algorithmic tacit collusion often uses uncomplicated algorithmic choices to increase joint profits beyond competitive levels [30]. Within the platform economy, hub-and-spoke algorithmic tacit collusion demonstrates greater feasibility, primarily due to the platforms’ inherent bilateral market characteristics that foster mutual interdependence among users who exchange value and generate reciprocal revenue streams [31].

2.2. Governance of Algorithm Collusion and Antitrust Regulation

Regarding the collaborative governance between government and platform enterprises, prominent Western scholars have articulated distinct perspectives. Farrell and Katz [32] emphasize platform enterprises’ fundamental responsibility in safeguarding public interests. Within algorithmic governance frameworks, ensuring algorithmic accountability constitutes an essential component that must complement efforts to enhance interpretability and optimize design parameters [33]. Spulber [34] and colleagues highlight platforms’ supervisory obligations. However, it is crucial to recognize that governmental oversight remains indispensable in this collaborative governance framework, as the complete delegation of regulatory authority to platforms would be imprudent. This vulnerability stems from the potential for platforms’ engagement in algorithmic collusion, which, in the absence of governmental antitrust interventions, significantly jeopardizes consumer welfare and economic surplus [35]. Liu et al. [36] conducted an in-depth investigation into responsible innovation within China’s bike-sharing sector, introducing a novel governmental dual-role framework as both an alliance facilitator and platform coordinator. Complementing this perspective, Wachhaus [37] emphasized a multi-stakeholder governance model. The rapid development of the contemporary platform economy remains fundamentally dependent on governmental guidance. Even in scenarios where platforms actively collaborate with governmental regulatory efforts, complete governmental disengagement from platform oversight is inadvisable. This necessity stems from the inherent risks associated with platforms’ data resource exploitation and utilization, which cannot be automatically mitigated without appropriate regulatory intervention.
The contemporary governance of algorithmic collusion confronts dilemmas across multiple dimensions, including value objective determination, identification of algorithm-facilitated monopolistic agreements, appropriate application of antitrust enforcement mechanisms, and accountability attribution of algorithmic tacit collusion. Algorithmic hub-and-spoke agreements, representing the most prevalent form of tacit collusion, remain subject to significant legal controversy within antitrust jurisprudence. Particularly contentious is the capacity of pricing algorithms to facilitate collusive practices, which substantially exceeds the regulatory scope delineated by the Sherman Act [38]. U.S. law regulates hub-and-spoke agreements as a separate type of monopoly agreement, but the method of identification applies by analogy to the relevant provisions of horizontal agreements. Meanwhile, the European Union has been progressively establishing essential criteria for recognizing hub-and-spoke agreements, although it has not yet produced any landmark judicial rulings or regulatory determinations in this domain [39]. Under China’s current legal framework, algorithmic signaling mechanisms and dynamic pricing adjustments present significant challenges in being classified as collusive practices under the Anti-Monopoly Law. While the legislation defines monopolistic agreements as “agreements, decisions, or other concerted practices that restrict competition”, it fails to establish specific methodologies or criteria for identifying such concerted behaviors. This legislative ambiguity creates substantial interpretive challenges, particularly in characterizing hub-and-spoke arrangements as a predominant form of tacit collusion.
Existing research indicates that algorithms provide favorable tools for the sustainability and feasibility of tacit collusion. Studies also highlight that while platforms bear responsibility for participating in government–enterprise collaborative governance, such collaborative behaviors exhibit inherent instability, and antitrust laws lack clear and effective intervention tools to address algorithmic tacit collusion. Against this backdrop, this paper conducts an in-depth analysis of how the government can influence and guide platform algorithmic tacit collusion by constructing direct and indirect penalty and incentive mechanisms, with the aim of reducing the occurrence of tacit collusion.

3. Game Modeling

3.1. Scenario Construction and Research Logic

Within the game-theoretic framework, platforms face a binary strategic choice between “collusion” and “non-collusion” strategies. The collusion strategy entails platforms leveraging algorithmic capabilities to engage in covert tacit collusion through mechanisms such as coordinated pricing, thereby pursuing additional economic gains. Conversely, the non-collusion strategy reflects platforms’ commitment to regulatory compliance, abstaining from any algorithmic facilitation of anti-competitive practices. In-platform merchants face two strategic behavioral choices: “acquiescence” and “reporting”. The acquiescence strategy reflects merchants’ non-opposition to or unawareness of potential algorithmic collusion practices, potentially even participating in such activities, thereby enhancing the covert nature of platform-driven algorithmic collusion. In contrast, the reporting strategy demonstrates merchants’ proactive stance in collaborating with regulatory authorities to expose algorithmic collusion, aiming to protect their legitimate interests and maintain market integrity. As crucial market participants, consumers face a strategic choice between “complaint” and “non-complaint”. The complaint strategy reflects consumers’ proactive approach to addressing grievances or perceived rights infringements by reporting issues to relevant authorities and seeking resolution and compensation. Conversely, the non-complaint strategy indicates consumers’ decision to either remain passive or bear consequences independently, refraining from further protective actions. In platform governance, the government adopts either a “strong intervention” or “weak intervention” strategy. This strategic choice is informed by comprehensive assessments of in-platform transaction patterns, the platform’s internal governance enhancements, and consumer complaint trends.
Ideally, platforms would not only refrain from all forms of algorithmic collusion but also proactively collaborate with governmental authorities, thereby significantly alleviating regulatory burdens and facilitating an efficient governance framework under minimal governmental intervention. Under such conditions, in-platform merchants have no need to file reports, and consumer complaints are substantially reduced, collectively contributing to the stability of the platform economy. However, the reality often presents greater complexity. Certain platforms may ostensibly demonstrate proactive collaborative governance while simultaneously employing sophisticated algorithmic technologies to facilitate tacit collusion. This covert behavior not only evades external detection but also, when coupled with merchants’ tacit approval and the absence of consumer complaints, poses significant challenges for governmental oversight and intervention. Given the above dilemma, this paper conducts a comprehensive examination of strategic interactions among platforms and relevant stakeholders, with particular emphasis on analyzing how governmental incentive–penalty mechanisms influence algorithmic tacit collusion in platform ecosystems. Through systematic investigation, the research aims to develop targeted policy recommendations and strategic interventions.

3.2. Parameterization and Underlying Assumptions

3.2.1. Game Subjects and Their Behavioral Strategies

The game model constructed in this paper incorporates four key stakeholders: government regulators, platform operators, in-platform merchants, and consumers. Under the assumption of limited rationality, the government's set of behavioral strategies is $S_1 = \{G_1\ \text{(strong intervention)},\ G_2\ \text{(weak intervention)}\}$. Platform operators face the strategic choice $S_2 = \{E_1\ \text{(collusion)},\ E_2\ \text{(non-collusion)}\}$. The set of behavioral strategies for merchants selling products on the platform is $S_3 = \{A_1\ \text{(acquiescence)},\ A_2\ \text{(reporting)}\}$. The set of behavioral strategies for consumers is $S_4 = \{B_1\ \text{(complaint)},\ B_2\ \text{(non-complaint)}\}$. This study develops a four-stage sequential dynamic game model where all agents act according to the principle of sequential rationality. The decision sequence follows a government–platform–merchants–consumers cyclical pattern with the following stages:
(1) Government Intervention Stage: The regulator first determines intervention intensity (strong vs. weak) based on historical collusion data and market monitoring intelligence. This initial move establishes market participants’ expectations about the regulatory environment. (2) Platform Strategy Stage: Upon observing regulatory signals, platform firms (as followers) conduct cost–benefit analyses, weighing compliance expenditures against potential collusion gains to decide whether to implement collusive algorithms. (3) Merchant Response Stage: After detecting the platform’s strategy, merchants perform their own cost–benefit calculations: either acquiesce to algorithmic collusion for supra-competitive profits or report the misconduct for regulatory incentives. (4) Consumer Reaction Stage: Consumers identify price anomalies through comparison tools or shopping experiences, then decide whether to file complaints based on expected costs and benefits of reporting. The model features a complete cyclical decision chain: government–platform–merchants–consumers–government, with each agent’s actions contingent on prior moves in the sequence.

3.2.2. Probability of Behavioral Strategy Adoption

This study models four key economic agents: government, platform operators, in-platform merchants, and consumers. Each agent probabilistically selects optimal strategies during sequential decision-making to maximize its respective utility.
First, as regulators, government departments face a binary choice of intervention strategies: implementing strong intervention with probability $x$ or weak intervention with probability $1-x$. Strong intervention encompasses measures like strict regulatory enforcement and substantial financial penalties, while weak intervention may consist merely of warnings, minimal fines, or regulatory lapses. Second, platforms as regulated entities similarly confront two strategic options: adopting algorithmic tacit collusion with probability $y$ or maintaining non-collusion with probability $1-y$. Algorithmic tacit collusion could manifest through price manipulation, data abuse, or other anti-competitive practices, whereas non-collusion signifies adherence to fair competition principles. Third, merchants selling products within the platform choose to acquiesce to the platform’s algorithmic collusion with probability $z$ and to report the platform’s algorithmic collusion with probability $1-z$. Crucially, the model assumes that merchants necessarily participate in collusion when platforms employ algorithmic tacit collusion. Finally, consumers as end-users demonstrate behavioral responses: they may complain (with probability $m$) or refrain from complaining (with probability $1-m$) about perceived algorithmic collusion. All four agents—government, platforms, merchants, and consumers—are risk-neutral actors seeking to maximize their respective benefits, with all probability parameters $x$, $y$, $z$, and $m \in [0, 1]$.

3.2.3. Parameter Assumptions and Meanings in the Model

Assumption 1. 
The cost of strong government intervention is $C_{xh}$ and the cost of weak intervention is $C_{xl}$ ($C_{xh} > C_{xl}$). Strong governmental intervention occurs exclusively when platforms engage in algorithmic tacit collusion and in-platform merchants choose to acquiesce, coupled with the receipt of consumer complaints. Given the inherent covertness of algorithmic tacit collusion, such interventions necessitate an additional scrutiny cost, denoted as $C_{xy}$, due to increased regulatory complexity. First, strong intervention exhibits inherent limitations. When the government implements strong intervention, a critical scenario emerges: despite platforms engaging in tacit collusion and in-platform merchants choosing acquiescence, the absence of consumer complaints renders the algorithmic tacit collusion undetectable, thereby rendering the strong intervention strategy ineffective. Second, weak intervention demonstrates distinct effectiveness through a specific revelation mechanism: in-platform merchants’ reporting behavior, which operates when the government implements weak intervention strategies. In this case, even if the government does not adopt more stringent scrutiny measures, it can still discover algorithmic tacit collusion within the platform through merchants’ reports, without paying the additional scrutiny cost $C_{xy}$. This demonstrates that weak intervention strategies may prove more efficient and cost-effective under specific conditions.
Assumption 2. 
Since the marginal cost difference between algorithmic tacit collusion and competitive strategies is negligible (assumed to be zero) for platforms, this study primarily focuses on analyzing the differential in total payoffs across strategy choices, while abstracting from inter-strategy cost considerations. When platforms engage in algorithmic collusion, algorithmic tacit collusion inevitably occurs regardless of in-platform merchants’ choices, resulting in higher total benefits $\Pi_h$ for both platforms and their operators. Conversely, when platforms refrain from collusion and actively collaborate in governmental governance, algorithmic tacit collusion is absent, leading to lower total benefits $\Pi_l$ ($\Pi_h > \Pi_l$) for all parties involved. Platform revenues primarily originate from commissions and transaction management fees imposed on in-platform merchants, constituting proportion $a$ ($0 < a < 1$) of the platform’s total revenue.
Assumption 3. 
Consumers incur cost $C_m$ when filing complaints. If algorithmic tacit collusion is substantiated within the platform, consumers receive compensation equivalent to $d$ times the platform’s revenue ($d > 0$); that is, they are compensated with $ad\Pi_h$. When platforms engage in algorithmic tacit collusion and consumers choose to complain, both platforms and in-platform merchants experience negative reputational effects, quantified as $B_y$ and $B_z$ ($B_y > B_z$), respectively.
Assumption 4. 
The design of penalty and incentive mechanisms incorporates three key components. First, the government implements a penalty mechanism for platforms, imposing fines equivalent to $b$ times the platform’s revenue ($b > 0$); the penalty amount is $ab\Pi_h$. Second, an incentive scheme is established for in-platform merchants, providing rewards $S$ for proactive reporting of algorithmic collusion, thereby facilitating governmental oversight. Third, the government streamlines consumer complaint channels and reduces associated costs to enhance consumer participation in regulatory processes.
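To illustrate the relative magnitudes these mechanisms imply (the numerical values here are illustrative choices of ours, not the paper’s calibration), consider $a = 0.1$, $b = 0.5$, $d = 0.1$, and $\Pi_h = 200$:

$$\text{fine} = ab\Pi_h = 0.1 \times 0.5 \times 200 = 10, \qquad \text{compensation} = ad\Pi_h = 0.1 \times 0.1 \times 200 = 2.$$

Both deterrents scale with the platform’s revenue share $a$, so the same multipliers $b$ and $d$ bite harder on platforms that extract larger commissions.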
Table A1 in Appendix A shows the specific model parameters and their meanings.
In the game-theoretic analysis of platform algorithmic tacit collusion, the best response of each player is defined as the strategy that maximizes its expected utility, given the strategies adopted by other players in the game [40]. Table A2 in Appendix A shows the payoff payment matrix for the four-way game. The game-theoretic framework depicting strategic interactions among the four key actors is illustrated in Figure 2.

4. Model Analysis

4.1. Strategy Evolutionary Stabilization Strategy and Analysis

Within the game-theoretic framework of platform algorithmic tacit collusion, all participants demonstrate bounded rationality. From the expected payoffs of each game participant, the corresponding replication dynamic equations can be derived. Detailed formulations of the four parties’ expected payoffs, the corresponding replication dynamic equations, and the proposition derivation processes are provided in Appendix B. The subsequent analysis focuses on propositions derived from these dynamic equations.
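For reference, these equations take the standard replicator form. Writing $E_{G_1}$ and $E_{G_2}$ for the government’s expected payoffs from strong and weak intervention (the exact expressions appear in Appendix B), the dynamic for the strong-intervention probability $x$ is

$$F(x) = \frac{dx}{dt} = x(1-x)\left(E_{G_1} - E_{G_2}\right),$$

with $F(y)$, $F(z)$, and $F(m)$ defined analogously for platforms, in-platform merchants, and consumers.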
Proposition A1 indicates that when platforms exhibit a propensity toward algorithmic tacit collusion, governmental authorities are inclined to implement strong intervention strategies. Conversely, when platforms demonstrate proactive cooperation with regulatory bodies and actively avoid algorithmic collusion practices, a weak intervention approach becomes the preferred governmental strategy. Proposition A1 further elucidates that under specific conditions—characterized by low probabilities of in-platform merchant reporting and consumer complaints, significant cost differentials between strong and weak governmental interventions, exorbitant scrutiny costs for tacit collusion, relatively lenient penalties for platform collusion, yet substantial positive incentives for in-platform merchants—weak governmental intervention emerges as an evolutionarily stable strategy. Conversely, strong governmental intervention constitutes an evolutionarily stable strategy under alternative conditions.
Proposition A2 demonstrates that strong governmental intervention incentivizes platforms to eschew collusion and actively participate in collaborative market governance. Conversely, weak governmental intervention may encourage platforms to adopt algorithmic tacit collusion strategies. Proposition A2 further elucidates the multiple drivers underlying platforms’ tacit collusion behavior: low reporting rates among in-platform merchants, minimal consumer complaint rates, substantial revenue gains from tacit collusion, and relatively lenient governmental penalties and consumer compensation standards, coupled with limited reputational damage. These factors collectively reinforce platforms’ propensity for tacit collusion. Conversely, when these conditions are reversed, they incentivize platforms to abandon collusive practices and actively participate in market governance and order maintenance.
Proposition A3 demonstrates that in-platform merchants’ strategic decisions are shaped by dual influences: platform strategies and governmental incentive policies. More specifically, when merchants can accurately identify platform-driven tacit collusion and are presented with sufficiently attractive governmental rewards or when consumer complaints are present, they exhibit a propensity to adopt proactive measures by reporting algorithmic collusion to regulatory authorities.
Proposition A4 indicates that the probability of consumer complaints exhibits a positive correlation with the increasing likelihood of platform algorithmic tacit collusion. Moreover, when the probability of strong governmental intervention reaches a critical threshold, in-platform operators significantly reduce their acquiescence to collusive practices, thereby substantially improving consumers’ compensation prospects. When compensation amounts are sufficiently high and complaint costs remain low, consumer complaints emerge as an evolutionarily stable strategy. Conversely, the absence of complaints becomes the stable strategy under alternative conditions.

4.2. Stability Analysis of Equilibrium Points of a Four-Way Evolutionary Game System

Setting $F(x) = 0$, $F(y) = 0$, $F(z) = 0$, and $F(m) = 0$, the stability of the strategy combinations of the four game subjects can be assessed according to Lyapunov’s first (indirect) method. Initially, the Jacobian matrix $J$ is constructed based on the four replication dynamic differential equations.
$$J = \begin{pmatrix} \frac{\partial F(x)}{\partial x} & \frac{\partial F(x)}{\partial y} & \frac{\partial F(x)}{\partial z} & \frac{\partial F(x)}{\partial m} \\ \frac{\partial F(y)}{\partial x} & \frac{\partial F(y)}{\partial y} & \frac{\partial F(y)}{\partial z} & \frac{\partial F(y)}{\partial m} \\ \frac{\partial F(z)}{\partial x} & \frac{\partial F(z)}{\partial y} & \frac{\partial F(z)}{\partial z} & \frac{\partial F(z)}{\partial m} \\ \frac{\partial F(m)}{\partial x} & \frac{\partial F(m)}{\partial y} & \frac{\partial F(m)}{\partial z} & \frac{\partial F(m)}{\partial m} \end{pmatrix}$$
Referring to Table A3 in Appendix C, when all eigenvalues of the Jacobian matrix possess negative real parts, indicating an asymptotically stable equilibrium point, three pure-strategy Nash equilibrium solutions emerge: $E_1(0,1,0,1)$, $E_2(1,1,0,0)$, and $E_3(1,1,0,1)$.
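This sign test can be carried out numerically. The following is a minimal sketch in Python (rather than the MATLAB implementation used in this study); the function names are ours, and the vector field F stands in for the four replication dynamic equations, which would be transcribed from Appendix B:

import numpy as np

def numeric_jacobian(F, s, eps=1e-6):
    # Finite-difference Jacobian of the replicator vector field F at state s = (x, y, z, m).
    s = np.asarray(s, dtype=float)
    f0 = np.asarray(F(s))
    J = np.zeros((len(f0), len(s)))
    for j in range(len(s)):
        sp = s.copy()
        sp[j] += eps
        J[:, j] = (np.asarray(F(sp)) - f0) / eps
    return J

def is_asymptotically_stable(F, equilibrium):
    # Lyapunov's first (indirect) method: stable iff every eigenvalue has a negative real part.
    eigvals = np.linalg.eigvals(numeric_jacobian(F, equilibrium))
    return bool(np.all(eigvals.real < 0))

Applied to each candidate point, such as $(0, 1, 0, 1)$, this reproduces the eigenvalue sign test summarized in Table A3.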
Corollary 1. 
When $C_m - ad\Pi_h < 0$ and $a(\Pi_l - \Pi_h) + a(b+d)\Pi_h + B_y < 0$, if $C_{xl} - C_{xh} - S + ab\Pi_h < 0$, there exists a stabilization point $E_1(0,1,0,1)$ of the replication dynamics system.
Corollary 1 indicates that platforms exhibit a propensity for algorithmic collusion when the resultant profits substantially exceed the potential costs, including regulatory fines, consumer compensation, and reputational losses associated with such violations. Furthermore, consumers demonstrate a higher propensity to file complaints as a protective measure when the anticipated compensation from platforms significantly exceeds the associated complaint costs. In-platform merchants exhibit an increased probability of adopting whistleblowing strategies when confronted with potential platform-driven algorithmic tacit collusion, coupled with the repercussions of consumer complaints and associated reputational risks. The government’s strategic selection involves a cost–benefit calculus between strong and weak intervention. When the total costs of strong intervention prove prohibitively high relative to its opportunity benefits, the rationally optimizing government will adopt the weak intervention strategy as the cost-minimizing equilibrium choice.
Corollary 2. 
When $C_m - ad\Pi_h < 0$ and $a(\Pi_l - \Pi_h) + a(b+d)\Pi_h + B_y < 0$, if $C_{xh} - C_{xl} - ab\Pi_h + S < 0$, there exists a stabilization point $E_3(1,1,0,1)$ of the replication dynamics system.
Corollary 2 indicates that the strategic choices of platforms, in-platform merchants, and consumers remain consistent with the conclusions drawn in Corollary 1; the government’s strategic selection follows a cost–benefit analysis framework, exhibiting a propensity to adopt strong intervention strategies when comparative analysis reveals that the total cost of strong intervention is low relative to the opportunity benefit.
Corollary 3. 
A stabilization point $E_2(1,1,0,0)$ exists for the replication dynamics system when $ad\Pi_h - C_m < 0$, $a(\Pi_l - \Pi_h) + ab\Pi_h < 0$, and $C_{xh} - C_{xl} - ab\Pi_h + S < 0$.
Corollary 3 demonstrates that when consumer complaint costs substantially exceed platform-provided compensation, consumers opt against filing complaints. Similarly, when the revenue from platform-driven algorithmic tacit collusion significantly surpasses governmental fines, platforms favor collusion strategies. Given that strong intervention entails lower total costs than weak intervention, governmental authorities prefer the former approach. Concurrently, government incentives motivate in-platform merchants to actively collaborate in reporting platform violations.

5. Simulation Analysis

5.1. Validation of the Balanced Results

To better visualize the impact of critical factors on the evolutionary dynamics and outcomes within the multi-agent game system, numerical simulations were conducted using MATLAB R2023a. The simulation parameters were initialized with t = 0 as the start time and t = 100 as the end time, while all four game participants’ initial strategy probabilities were set at 0.5 (the uniform initialization of strategy probabilities at 0.5 for all agents serves dual methodological purposes: first, this symmetric initialization eliminates prior strategy biases, ensuring all emergent outcomes are purely driven by the payoff structure and game-theoretic dynamics; second, while real-world agents may exhibit asymmetric behavioral predispositions, the simulation framework (available upon request) permits examination of alternative initial distributions. This modeling choice ensures observed results reflect the game’s inherent properties rather than initialization artifacts). Three distinct parameter sets were configured to satisfy the equilibrium conditions outlined in the corollaries, thereby validating the equilibrium analysis. The first parameter configuration, satisfying conditions $C_m - ad\Pi_h < 0$, $C_{xl} - C_{xh} - S + ab\Pi_h < 0$, and $a(\Pi_l - \Pi_h) + a(b+d)\Pi_h + B_y < 0$, is as follows:
$C_{xh} = 20$, $C_{xl} = 5$, $C_{xy} = 10$, $\Pi_h = 200$, $\Pi_l = 0$, $a = 0.1$, $b = 0.1$, $d = 0.1$, $B_y = 5$, $B_z = 5$, $S = 5$, $C_m = 1$; this yielded the simulation results presented in Figure 3, demonstrating convergence to equilibrium state $E_1(0,1,0,1)$.
Following the same methodology, parameter values were assigned based on equilibrium conditions $ad\Pi_h - C_m < 0$ and $a(\Pi_l - \Pi_h) + ab\Pi_h < 0$:
($C_{xh} = 1$, $C_{xl} = 1$, $C_{xy} = 10$, $\Pi_h = 200$, $\Pi_l = 0$, $a = 0.1$, $b = 0.5$, $d = 0.5$, $B_y = 5$, $B_z = 5$, $S = 5$, $C_m = 1$). The corresponding simulation results, presented in Figure 4, demonstrate convergence to equilibrium state $E_2(1,1,0,0)$. Subsequently, parameters were configured according to equilibrium conditions $C_m - ad\Pi_h < 0$, $a(\Pi_l - \Pi_h) + a(b+d)\Pi_h + B_y < 0$, and $C_{xh} - C_{xl} - ab\Pi_h + S < 0$: ($C_{xh} = 2$, $C_{xl} = 1$, $C_{xy} = 10$, $\Pi_h = 200$, $\Pi_l = 0$, $a = 0.1$, $b = 0.1$, $d = 0.1$, $B_y = 5$, $B_z = 5$, $S = 0$, $C_m = 1$), with the simulation outcomes illustrated in Figure 5 showing convergence to equilibrium state $E_3(1,1,0,1)$. All three parameter configurations yielded simulation results consistent with the theoretical analysis derived from the corollaries.
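The first configuration can be cross-checked against the Corollary 1 inequalities directly. The following is a minimal sketch in Python (rather than the MATLAB code used for the figures); the variable names are ours:

# Figure 3 parameter configuration (Corollary 1, equilibrium E1(0, 1, 0, 1)).
Cxh, Cxl, Cxy = 20, 5, 10
Pih, Pil = 200, 0
a, b, d = 0.1, 0.1, 0.1
By, Bz, S, Cm = 5, 5, 5, 1

cond_consumer = Cm - a * d * Pih                          # complaint cost minus expected compensation
cond_platform = a * (Pil - Pih) + a * (b + d) * Pih + By  # collusion gain outweighs fine, compensation, reputation
cond_gov = Cxl - Cxh - S + a * b * Pih                    # weak- versus strong-intervention margin

print(cond_consumer, cond_platform, cond_gov)  # approx. -1, -11, -18: all negative, so E1 is stable

All three quantities are negative, consistent with the convergence to $E_1(0,1,0,1)$ observed in Figure 3.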

5.2. Impact of Governmental Rewards and Incentives

To analyze the effectiveness of governmental incentive–penalty mechanisms in curbing platform algorithmic tacit collusion, this study examines stability changes in local equilibrium points through parameter variations: b (penalty multiplier for platform collusion revenue), S (reward for successful merchant reporting), and C m (consumer complaint cost influenced by governmental facilitation of complaint channels). The analysis maintains constant values for all other parameters, with initial strategy probabilities uniformly set at 0.5 across all game participants.
First, this paper examines the impact of the governmental penalty multiplier applied to platform collusion revenues, keeping the other parameters constant at the following values: $C_{xh} = 20$, $C_{xl} = 5$, $C_{xy} = 10$, $\Pi_h = 100$, $\Pi_l = 50$, $a = 0.1$, $d = 0.1$, $B_y = 5$, $B_z = 5$, $S = 0$, $C_m = 5$. Subsequently, we analyze four distinct penalty multiplier scenarios: $b = 0$, $b = 0.5$, $b = 1$, and $b = 2$. The penalty multiplier primarily influences strategic decisions among governmental authorities, platforms, and in-platform merchants. As illustrated in Figure 6’s simulation results, increased governmental penalties ($b$) for algorithmic tacit collusion lead to the following: (1) heightened probability of strong governmental intervention, (2) reduced platform propensity for algorithmic collusion, and (3) decreased merchant reporting frequency due to perceived lower collusion risks. A deeper analysis of parameter configurations reveals that when the governmental penalty multiplier for platform revenue remains below 1, platforms may exhibit increased collusion tendencies, driven by substantial collusion-derived profits despite potential penalties. This scenario prompts in-platform merchants to enhance their reporting probabilities. However, the combination of high implementation costs for strong intervention and limited fiscal returns from collusion-related fines may incentivize governmental authorities to adopt more lenient regulatory approaches. Conversely, when the governmental penalty multiplier reaches or exceeds 1, platforms demonstrate a substantial reduction in algorithmic collusion. This reduction not only effectively mitigates unfair competitive practices but also motivates governmental authorities to intensify strong intervention measures. Simultaneously, the significantly diminished risk of platform collusion leads in-platform merchants to decrease their reporting activities. These findings clearly demonstrate that stronger governmental penalties for collusive behavior more effectively deter platforms from engaging in algorithmic tacit collusion. Notably, the government’s emphasis on strict regulatory measures to safeguard consumer welfare and maintain competitive equity, while effectively enhancing consumer protections and mitigating systemic risks, may simultaneously impose substantial operational constraints on platform enterprises [2]. Crucially, disproportionate penalty severity could potentially suppress platform innovation and market development, necessitating the careful calibration of regulatory sanctions relative to both consumer welfare considerations and broader market risk exposures.
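A compact sketch of this sensitivity sweep follows, written in Python rather than the MATLAB implementation. The payoff gaps below are simplified toy stand-ins, not the Appendix B expressions; only the fine $ab\Pi_h$ and compensation $ad\Pi_h$ terms follow Assumption 4:

import numpy as np
from scipy.integrate import solve_ivp

a, Pih, Pil, d, Cm = 0.1, 100.0, 50.0, 0.1, 5.0  # shared parameters from the text

def replicator(t, s, b):
    x, y, z, m = s
    fine, comp = a * b * Pih, a * d * Pih  # penalty a*b*Pi_h and compensation a*d*Pi_h (Assumption 4)
    gaps = (
        y * fine - 15.0,                        # government: expected fine revenue vs. extra cost (toy)
        a * (Pih - Pil) - x * fine - m * comp,  # platform: collusion gain vs. expected penalties
        -5.0 * m,                               # merchant: acquiescence penalized under complaints (toy)
        y * comp - Cm,                          # consumer: expected compensation vs. complaint cost
    )
    return [x * (1 - x) * gaps[0], y * (1 - y) * gaps[1],
            z * (1 - z) * gaps[2], m * (1 - m) * gaps[3]]

for b in (0.0, 0.5, 1.0, 2.0):
    sol = solve_ivp(replicator, (0, 100), [0.5, 0.5, 0.5, 0.5], args=(b,))
    print(f"b = {b}: final (x, y, z, m) = {np.round(sol.y[:, -1], 3)}")

Substituting the exact expected-payoff differences from Appendix B for the toy gaps would reproduce the trajectories in Figure 6.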
Second, this paper considers the influence of governmental rewards for the successful reporting of platform algorithmic collusion by in-platform merchants, while keeping other parameters constant at specified values: $C_{xh} = 20$, $C_{xl} = 5$, $C_{xy} = 10$, $\Pi_h = 100$, $\Pi_l = 50$, $a = 0.1$, $b = 1$, $d = 1$, $B_y = 5$, $B_z = 5$, $C_m = 1$. Subsequently, we analyze four distinct reward scenarios: $S = 0$, $S = 20$, $S = 40$, and $S = 60$.
The reward mechanism fundamentally influences strategic interactions among governmental authorities, platforms, and in-platform merchants. As demonstrated in Figure 7’s simulation results, increased governmental rewards for merchant reporting enhance their cooperation in governance initiatives and the proactive disclosure of potential algorithmic collusion, thereby accelerating risk mitigation. This positive development not only reduces governmental intervention pressures but also fosters a harmonious governance framework characterized by minimal intervention, high efficiency, and low collusion prevalence. These findings clearly indicate that strengthening governmental incentives for in-platform merchants serves as a crucial mechanism for promoting active collaboration, effectively deterring algorithmic collusion, and optimizing overall governance efficacy.
Finally, this paper considers how governmental facilitation of consumer complaint channels affects complaint costs, while keeping other parameters constant at specified values: $C_{xh} = 20$, $C_{xl} = 5$, $C_{xy} = 10$, $\Pi_h = 100$, $\Pi_l = 50$, $a = 0.1$, $b = 1$, $d = 1$, $B_y = 5$, $B_z = 5$, $S = 0$. Subsequently, we examine four distinct complaint cost scenarios: $C_m = 0$, $C_m = 5$, $C_m = 10$, and $C_m = 20$.
The cost of consumer complaints fundamentally influences strategic interactions among governmental authorities, platforms, and consumers. As demonstrated in Figure 8’s simulation results, reduced complaint costs significantly enhance consumer willingness to file complaints, thereby accelerating the decline in platform algorithmic collusion probabilities. This development subsequently encourages the governmental adoption of weak intervention strategies. The optimization and expansion of consumer complaint channels not only increases complaint probabilities but also strengthens governmental monitoring and response capabilities regarding algorithmic collusion, ultimately achieving the effective deterrence of such practices.

5.3. A Practical Analysis of Regulatory Cases of Algorithmic Tacit Collusion

Building on empirical findings demonstrating that enhanced governmental incentive–penalty mechanisms and streamlined consumer complaint channels effectively deter algorithmic collusion, this paper conducts a comparative case analysis of EU v. Meta and FTC v. Amazon to elucidate how these regulatory and participatory measures suppress tacit collusion in platform economies.

5.3.1. The Role of Algorithmic Transparency and Penalty Intensity

The EU’s penalty cases against Meta (2022–2024) provide crucial empirical evidence for assessing the effectiveness of government fines in suppressing algorithmic tacit collusion and anti-competitive practices. These enforcement actions demonstrate that platform-driven monopolistic behavior can be effectively restrained through progressively intensified penalties. The EU’s sanctions against Meta stemmed from policy conflicts between U.S. government intelligence collection practices and European fundamental requirements for personal privacy protection and market competition fairness. The core violation involved Meta’s advertising algorithms, which facilitated tacit collusion among advertisers through data sharing and dynamic pricing strategies without adequate user disclosure, resulting in persistent user data breaches and unfair market competition.
In 2022, the European Commission fined Meta EUR 265 million for GDPR violations related to a data breach affecting over 500 million users, while mandating specific corrective measures to ensure compliant data processing practices. Facing persistent regulatory pressures on data protection, Meta has continually adapted its business model in response to subsequent investigations and rulings. In 2023, authorities imposed a EUR 1.2 billion administrative fine and suspended Meta’s cross-border data transfers due to the inadequate protection of data subjects, prompting accelerated algorithmic compliance reforms. The 2024 EUR 798 million penalty addressed Meta’s anti-competitive integration of Facebook Marketplace, where real-time ad bidding algorithm adjustments created unfair market conditions for competitors. The EU’s Meta penalties exhibit three key characteristics: proportionality between penalty severity and infringement gravity; the progressive escalation of fine amounts; precise rectification requirements. These enforcement actions yield important regulatory insights: fines should surpass anticipated monopoly gains; penalties must incorporate ongoing algorithmic oversight; consistent multi-case enforcement generates cumulative deterrence.

5.3.2. Synergistic Governance of Incentives and Complaint Channels

The Amazon case demonstrates that enhancing merchant incentives and streamlining complaint mechanisms can effectively mitigate insufficient penalty severity while fostering a more competitive market environment.
In 2023, the U.S. Federal Trade Commission (FTC) initiated an investigation into Amazon’s algorithmic pricing practices, alleging the company leveraged its dominant market position to impose unfair trading conditions on third-party sellers through its pricing algorithms and “Buy Box” recommendation system, effectively encouraging price parallelism. The FTC further charged Amazon with employing anti-discounting measures that required third-party sellers to maintain prices on Amazon no higher than those on competing platforms, under the threat of penalties including reduced search visibility or the loss of “Buy Box” eligibility. The FTC concluded that these practices restrained market competition while harming both consumers and third-party sellers. Amazon contested these allegations, maintaining its pricing strategies actually benefited consumers through competitive discounting, and that the “Buy Box” algorithm served to identify the most favorable consumer deals rather than artificially inflate prices. Rather than imposing monetary penalties, the FTC mandated three corrective measures: revising seller incentives to decrease emphasis on low-price rankings, creating an independent merchant complaint portal for reporting algorithmic discrimination, and submitting periodic algorithmic compliance reports to regulators.
Post-implementation, these interventions reduced the stability of Amazon’s algorithmic coordination while enhancing market competition fairness. This case illustrates how enhanced platform governance mechanisms, particularly incentive restructuring and complaint resolution systems, can effectively disrupt algorithmic collusion stability and improve market competitiveness, even when regulatory penalties face political or legal constraints.

6. Conclusions and Discussions

6.1. Main Conclusions

This study investigates the primary conditions underlying the stability of platforms’ algorithmic tacit collusion and evaluates how governmental measures including penalty intensity for platforms, reporting incentives for in-platform merchants, and consumer complaint mechanisms affect platforms’ collusion strategies. The key findings are as follows:
Platforms’ algorithmic tacit collusion demonstrates strategic stability when the derived benefits substantially outweigh the associated costs and penalties. The highly covert nature of such collusion makes detection challenging for governmental authorities, in-platform merchants, and consumers, thereby increasing platforms’ opportunistic tendencies toward collusion. The effective mitigation of algorithmic tacit collusion requires robust governmental incentive–penalty mechanisms. First, direct governmental intervention through substantial penalties significantly increases the potential costs of collusion, creating a strong deterrent effect and reducing platforms’ collusion incentives. Second, the government can indirectly enhance collusion transparency by influencing the strategic behaviors of two key platform stakeholders: in-platform merchants and consumers. Specifically, increasing rewards for merchant reporting and streamlining consumer complaint channels can improve information flow within platforms, thereby enhancing collusion transparency and reducing governmental intervention complexity. These measures enable a more timely and accurate monitoring of platform behaviors, facilitating more effective detection and the prevention of algorithmic tacit collusion.

6.2. Marginal Contributions

The extensive commercial application of algorithms has enhanced economic efficiency while simultaneously reducing barriers to platform-driven algorithmic tacit collusion, thereby increasing both the risk and probability of successful collusion. Unlike explicit collusion mechanisms, such as monopoly agreements, which are constrained by oligopolistic market structures, algorithmic tacit collusion presents significant challenges for antitrust intervention and regulatory application.
This study makes three primary contributions to the existing literature. First, it provides a comprehensive analysis of the conditions facilitating algorithmic tacit collusion, its governance challenges, and potential regulatory solutions. While previous research has primarily focused on how algorithms increase collusion risks [25], this investigation extends the analysis by identifying specific conditions that influence platforms’ decisions to engage in or refrain from algorithmic tacit collusion.
Second, the study investigates potential effective approaches for mitigating algorithmic tacit collusion through enhanced incentive–penalty mechanisms and multi-stakeholder governance. Unlike previous studies that have primarily examined government-platform collusion or government–enterprise collaborative governance [35,37], our approach addresses the complexity of platform-driven algorithmic tacit collusion, which involves intricate interactions among multiple stakeholders. By developing an evolutionary game model and conducting numerical simulations, this study provides more robust insights into how parameter variations influence both the nature of algorithmic tacit collusion and participants’ strategic choices.
Finally, the study contributes to antitrust policy formulation and market regulation by exploring effective intervention strategies from both governmental and multi-stakeholder perspectives, thereby providing robust decision-making support for regulatory authorities.

6.3. Practical Implications

This study offers valuable insights into addressing algorithmic tacit collusion in platform governance. When strong governmental intervention, merchant reporting, and consumer complaints prove insufficient to prevent platform-driven algorithmic tacit collusion, enhanced regulatory measures become imperative. Governments should implement stricter penalties, including substantial fines and license revocations for platforms engaging in tacit collusion, thereby increasing violation costs and destabilizing collusive practices. Additionally, establishing robust information-sharing mechanisms through whistleblower reward systems and streamlined consumer complaint channels can improve transparency and governance efficiency. Such measures facilitate information exchange among governmental agencies, in-platform merchants, and consumers, enhancing oversight capabilities. Consequently, the effective detection and prevention of algorithmic tacit collusion necessitates transitioning from government-centric governance to a collaborative multi-stakeholder approach, actively engaging merchants and consumers in supervisory roles while maintaining strong governmental oversight.
Recommendation 1. 
The government should establish an intelligent regulatory framework combining dynamic compliance audits, real-time algorithmic monitoring of price synchronization (e.g., dynamic time warping analysis), and tiered warning systems. This framework should incorporate penalties that escalate with violation frequency, including fines, algorithm suspension periods, and executive bans. Additionally, a tiered algorithm transparency system, requiring the open-sourcing of core pricing algorithms and the publication of white papers for medium-risk algorithms, coupled with a compliance credit-scoring mechanism, would create a closed-loop “monitoring–penalty–guidance” governance structure.
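To make the monitoring idea concrete, the sketch below shows one minimal way a regulator might screen for price synchronization using dynamic time warping. It is an illustration only: the `dtw_distance` and `flag_synchronized` functions, the z-score normalization, and the 0.1 flagging threshold are hypothetical choices for demonstration, not a prescribed regulatory standard.

```python
import numpy as np

def dtw_distance(p, q):
    """Dynamic-time-warping distance between two price series (classic DP recursion)."""
    n, m = len(p), len(q)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(p[i - 1] - q[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def flag_synchronized(prices, threshold=0.1):
    """Flag seller pairs whose average per-period DTW cost falls below a
    (hypothetical) threshold; prices maps seller -> z-scored price series."""
    sellers = list(prices)
    flagged = []
    for i in range(len(sellers)):
        for j in range(i + 1, len(sellers)):
            a, b = prices[sellers[i]], prices[sellers[j]]
            score = dtw_distance(a, b) / len(a)
            if score < threshold:
                flagged.append((sellers[i], sellers[j], round(score, 4)))
    return flagged

# Synthetic example: sellers A and B track the same price path; C moves independently.
rng = np.random.default_rng(0)
base = 100 + np.cumsum(rng.normal(0, 1, 60))
raw = {
    "seller_A": base + rng.normal(0, 0.1, 60),
    "seller_B": base + rng.normal(0, 0.1, 60),
    "seller_C": 100 + np.cumsum(rng.normal(0, 1, 60)),
}
zscored = {k: (v - v.mean()) / v.std() for k, v in raw.items()}
print(flag_synchronized(zscored))
```

In practice, such a screen would feed the tiered warning and penalty escalation described above rather than trigger sanctions directly, since a low DTW distance can also reflect common cost shocks rather than collusion.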
Recommendation 2. 
A multi-stakeholder governance ecosystem should be developed, featuring encrypted whistleblower channels with uncapped rewards for merchants and multiplied damages plus fixed bonuses for consumers, all secured through blockchain technology. Concurrently, a quadripartite Algorithm Governance Committee (government–platforms–merchants–consumers) should oversee quarterly algorithm reviews and case adjudications while managing a shared data platform integrating regulatory data, price benchmarks, and complaint analytics. Pilot implementations through regulatory sandboxes in free trade zones should balance innovation protection with risk control.

6.4. Limitations and Future Research

This study has several limitations that warrant future research attention. (1) The current analysis focuses exclusively on algorithmic tacit collusion within individual platforms, neglecting inter-platform algorithmic collusion and competitive dynamics. Future research should examine both intra-platform and inter-platform collusion and competition to extend the current findings. (2) While the study addresses platforms’ intentional tacit collusion, it does not account for algorithmic learning behaviors that may lead to unintentional collusion. The legal implications of such algorithmically induced tacit collusion present an important avenue for future investigation. (3) While appropriately strengthened regulatory penalties can effectively curb algorithmic tacit collusion on platforms, this study primarily examines their deterrent effects and overlooks how excessive penalties may stifle platform innovation and sustainable development. Future research should develop an automated “penalty–incentive” balancing mechanism that dynamically adjusts penalty severity according to platforms’ innovation levels, with empirical work needed to identify the critical threshold at which penalties become counterproductive. (4) Our risk-neutral, profit-maximization framework represents an idealized scenario that insufficiently incorporates welfare-maximization considerations, behavioral economic factors such as risk aversion and psychological costs [41], and prospect theory applications. Subsequent studies could employ prospect theory to reformulate the utility functions by separating gain and loss domains (see the sketch below) and apply dynamic optimal control methods [42] to model long-term strategic interactions. (5) The assumption of zero marginal cost difference between collusive and competitive algorithmic strategies, while justified by current platform economics in which price-monitoring functions typically repurpose existing infrastructure, requires refinement. Future modeling should incorporate scenario-specific algorithm modification costs and hidden expenses, developing dynamic cost functions that distinguish fixed costs (framework development) from variable costs (data-sharing frequency) to examine structural threshold effects.
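As a pointer for the prospect-theoretic extension in item (4), the snippet below sketches the standard Tversky–Kahneman value function that future work could substitute for the risk-neutral payoffs. The parameter values are their widely cited 1992 estimates, and applying the function to this paper’s payoff differences is an illustrative suggestion, not part of the model above.

```python
def prospect_value(outcome, reference=0.0, alpha=0.88, beta=0.88, lam=2.25):
    """Tversky-Kahneman (1992) value function: concave over gains, convex and
    loss-averse (lam > 1) over losses, both measured against a reference point."""
    delta = outcome - reference
    return delta ** alpha if delta >= 0 else -lam * (-delta) ** beta

# e.g., a behavioral variant could replace the raw payoff difference V21 - V22
# with prospect_value(V21, reference=V22) before building the replicator equation F(y).
print(prospect_value(10.0), prospect_value(-10.0))  # losses loom larger than gains
```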

Author Contributions

Conceptualization, Y.W. and Y.Z.; methodology, Y.W.; software, Y.W.; validation, Y.W.; formal analysis, Y.W.; investigation, Y.W.; resources, Y.W.; data curation, Y.W.; writing—original draft preparation, Y.W.; writing—review and editing, Y.W. and Y.Z.; visualization, Y.W.; supervision, Y.Z.; project administration, Y.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Social Science Fund of China, “Research on Patent Transformation and New Quality Productivity Formation Mechanism” (24BJY018).

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

The authors are grateful to the editor and anonymous referees for their constructive comments and suggestions, which have greatly helped the authors improve the presentation of this manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Table A1. Behavioral strategy combinations of game subjects and their payoff matrices.

| Strategy Combination | Governmental Revenue | Platform Revenue | Revenue of In-Platform Merchants | Consumer Surplus |
|---|---|---|---|---|
| $(G_1, E_1, A_1, B_1)$ | $ab\Pi_h - C_{xh} - C_{xy}$ | $a(1-b-d)\Pi_h - B_y$ | $(1-a)\Pi_h - B_z$ | $(ad-1)\Pi_h - C_m$ |
| $(G_1, E_1, A_1, B_2)$ | $-C_{xh}$ | $a\Pi_h$ | $(1-a)\Pi_h$ | $-\Pi_h$ |
| $(G_1, E_1, A_2, B_1)$ | $ab\Pi_h - C_{xh} - S$ | $a(1-b-d)\Pi_h - B_y$ | $(1-a)\Pi_h + S$ | $(ad-1)\Pi_h - C_m$ |
| $(G_1, E_1, A_2, B_2)$ | $ab\Pi_h - C_{xh} - S$ | $a(1-b)\Pi_h$ | $(1-a)\Pi_h + S$ | $-\Pi_h$ |
| $(G_1, E_2, A_1, B_1)$ | $-C_{xh}$ | $a\Pi_l$ | $(1-a)\Pi_l$ | $-\Pi_l - C_m$ |
| $(G_1, E_2, A_1, B_2)$ | $-C_{xh}$ | $a\Pi_l$ | $(1-a)\Pi_l$ | $-\Pi_l$ |
| $(G_1, E_2, A_2, B_1)$ | $-C_{xh}$ | $a\Pi_l$ | $(1-a)\Pi_l$ | $-\Pi_l - C_m$ |
| $(G_1, E_2, A_2, B_2)$ | $-C_{xh}$ | $a\Pi_l$ | $(1-a)\Pi_l$ | $-\Pi_l$ |
| $(G_2, E_1, A_1, B_1)$ | $-C_{xl}$ | $a\Pi_h - B_y$ | $(1-a)\Pi_h - B_z$ | $-\Pi_h - C_m$ |
| $(G_2, E_1, A_1, B_2)$ | $-C_{xl}$ | $a\Pi_h$ | $(1-a)\Pi_h$ | $-\Pi_h$ |
| $(G_2, E_1, A_2, B_1)$ | $ab\Pi_h - C_{xl} - S$ | $a(1-b-d)\Pi_h - B_y$ | $(1-a)\Pi_h + S$ | $(ad-1)\Pi_h - C_m$ |
| $(G_2, E_1, A_2, B_2)$ | $ab\Pi_h - C_{xl} - S$ | $a(1-b)\Pi_h$ | $(1-a)\Pi_h + S$ | $-\Pi_h$ |
| $(G_2, E_2, A_1, B_1)$ | $-C_{xl}$ | $a\Pi_l$ | $(1-a)\Pi_l$ | $-\Pi_l - C_m$ |
| $(G_2, E_2, A_1, B_2)$ | $-C_{xl}$ | $a\Pi_l$ | $(1-a)\Pi_l$ | $-\Pi_l$ |
| $(G_2, E_2, A_2, B_1)$ | $-C_{xl}$ | $a\Pi_l$ | $(1-a)\Pi_l$ | $-\Pi_l - C_m$ |
| $(G_2, E_2, A_2, B_2)$ | $-C_{xl}$ | $a\Pi_l$ | $(1-a)\Pi_l$ | $-\Pi_l$ |
Table A2. Model parameters and meanings.

| Model Parameter | Meaning |
|---|---|
| $x$ | Probability of strong governmental intervention, $0 \le x \le 1$. |
| $y$ | Probability of platform algorithmic tacit collusion, $0 \le y \le 1$. |
| $z$ | Probability of in-platform merchant acquiescence, $0 \le z \le 1$. |
| $m$ | Probability of consumer complaint filing, $0 \le m \le 1$. |
| $C_{xh}$ | Cost of strong governmental intervention. |
| $C_{xl}$ | Cost of weak governmental intervention. |
| $C_{xy}$ | Additional scrutiny cost borne by the government because the covert nature of platform algorithmic collusion increases intervention complexity. |
| $\Pi_h$ | Total revenue of the platform and its in-platform merchants when algorithmic collusion is present. |
| $\Pi_l$ | Total revenue of the platform and its in-platform merchants in the absence of algorithmic collusion. |
| $a$ | Platform commission rate. |
| $b$ | Governmental penalty multiplier applied to the platform’s collusion revenue. |
| $d$ | Compensation multiplier that the platform pays to consumers out of its collusive gains. |
| $B_y$ | Negative reputational effect of consumer complaints on the platform. |
| $B_z$ | Negative reputational effect of consumer complaints on in-platform merchants. |
| $C_m$ | Cost of filing a consumer complaint. |
| $S$ | Governmental reward for in-platform merchants’ reporting of algorithmic collusion. |

Appendix B

Appendix B.1. Analysis of the Stability of the Government’s Strategy

$V_{11}$ and $V_{12}$ denote the government’s expected returns from the strong and weak intervention strategies, respectively, with $V_1$ representing the average expected return. These are defined as follows:
$V_{11} = y(1 - z + zm)ab\Pi_h - yzmC_{xy} - y(1 - z)S - C_{xh}$
$V_{12} = (1 - y)(1 - z)ab\Pi_h - C_{xl}$
$V_1 = xV_{11} + (1 - x)V_{12}$
The replication dynamic equation for governmental strategy evolution is formulated as follows:
$F(x) = \frac{dx}{dt} = x(V_{11} - V_1) = x(1 - x)(V_{11} - V_{12}) = x(1 - x)\left[(2y + z + yzm - 2yz - 1)ab\Pi_h - (C_{xh} - C_{xl}) - yzmC_{xy} - y(1 - z)S\right]$
According to the stability theorem of differential equations, governmental strategy selection reaches a stable state when $F(x) = \frac{dx}{dt} = 0$ and $\frac{dF(x)}{dx} < 0$. This is formalized in Proposition A1.
Proposition A1. 
(1) When $y = y_0$, $F(x) = \frac{dx}{dt} = 0$ holds for any $x$, and every $x$ is an evolutionarily stable state.
(2) When $y > y_0$, the government’s evolutionarily stable strategy is $x = 1$, i.e., strong intervention.
(3) When $y < y_0$, the government’s evolutionarily stable strategy is $x = 0$, i.e., weak intervention.
Here, $y_0 = \frac{(1 - z)ab\Pi_h + (C_{xh} - C_{xl})}{(2 + zm - 2z)ab\Pi_h - zmC_{xy} - (1 - z)S}$.
Proof of Proposition A1. 
The first derivative of the governmental strategy replication dynamic equation is $F'(x) = (1 - 2x)\left[(2y + z + yzm - 2yz - 1)ab\Pi_h - (C_{xh} - C_{xl}) - yzmC_{xy} - y(1 - z)S\right]$.
Setting $(2y + z + yzm - 2yz - 1)ab\Pi_h - (C_{xh} - C_{xl}) - yzmC_{xy} - y(1 - z)S = 0$ and solving for $y$ yields $y_0 = \frac{(1 - z)ab\Pi_h + (C_{xh} - C_{xl})}{(2 + zm - 2z)ab\Pi_h - zmC_{xy} - (1 - z)S}$. When $y = y_0$, $F(x) = \frac{dx}{dt} = 0$ holds identically, and any government strategy is an evolutionarily stable strategy. When $y > y_0$, $\frac{dF(x)}{dx}\big|_{x=1} < 0$ and $\frac{dF(x)}{dx}\big|_{x=0} > 0$, so $x = 1$ is the equilibrium point and strong intervention is the government’s evolutionarily stable strategy. When $y < y_0$, $\frac{dF(x)}{dx}\big|_{x=1} > 0$ and $\frac{dF(x)}{dx}\big|_{x=0} < 0$, so $x = 0$ is the equilibrium point and weak intervention is the ESS. □
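As a quick numerical check of Proposition A1, the snippet below evaluates the threshold $y_0$ for given merchant and consumer strategy frequencies. All parameter values are illustrative assumptions, not calibrated estimates from the paper’s simulations.

```python
def y0(z, m, a=0.2, b=1.5, Pi_h=10.0, C_xh=1.0, C_xl=0.3, C_xy=0.4, S=1.0):
    """Collusion-probability threshold above which strong intervention is the
    government's ESS (Proposition A1); default parameter values are hypothetical."""
    num = (1 - z) * a * b * Pi_h + (C_xh - C_xl)
    den = (2 + z * m - 2 * z) * a * b * Pi_h - z * m * C_xy - (1 - z) * S
    return num / den

print(y0(z=0.3, m=0.5))  # ~0.72: strong intervention pays only if collusion is likely
```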

Appendix B.2. Strategic Stability Analysis of the Platform

$V_{21}$ and $V_{22}$ denote the platform’s expected returns from algorithmic tacit collusion and from non-collusion with active cooperation in government governance, respectively, with $V_2$ representing the average expected return. These are defined as follows:
$V_{21} = a\Pi_h - (1 + xzm - z)ab\Pi_h - (m + xzm - zm)ad\Pi_h - mB_y$
$V_{22} = a\Pi_l$
$V_2 = yV_{21} + (1 - y)V_{22}$
The replication dynamic equation for the platform’s behavioral strategy is formulated as follows:
$F(y) = \frac{dy}{dt} = y(V_{21} - V_2) = y(1 - y)(V_{21} - V_{22}) = y(1 - y)\left[a(\Pi_h - \Pi_l) - (1 + xzm - z)ab\Pi_h - (m + xzm - zm)ad\Pi_h - mB_y\right]$
According to the stability theorem of differential equations, the platform’s strategy selection reaches a stable state when $F(y) = \frac{dy}{dt} = 0$ and $\frac{dF(y)}{dy} < 0$. This is formalized in Proposition A2.
Proposition A2. 
(1) When $x = x_0$, any $y$ is an evolutionarily stable state.
(2) When $x > x_0$, the platform’s stable strategy is to refrain from collusion and actively cooperate in government governance.
(3) When $x < x_0$, the platform’s stable strategy is algorithmic tacit collusion.
Here, $x_0 = \frac{1 - (1 - z)(b + md)}{zm(b + d)} - \frac{a\Pi_l + mB_y}{zm \cdot a(b + d)\Pi_h}$.
Proof of Proposition A2. 
The first derivative of the platform strategy replication dynamic equation is $F'(y) = (1 - 2y)\left[a(\Pi_h - \Pi_l) - (1 + xzm - z)ab\Pi_h - (m + xzm - zm)ad\Pi_h - mB_y\right]$.
Setting $a(\Pi_h - \Pi_l) - (1 + xzm - z)ab\Pi_h - (m + xzm - zm)ad\Pi_h - mB_y = 0$ and solving for $x$ yields $x_0 = \frac{1 - (1 - z)(b + md)}{zm(b + d)} - \frac{a\Pi_l + mB_y}{zm \cdot a(b + d)\Pi_h}$.
When $x = x_0$, $F(y) = \frac{dy}{dt} = 0$; at this time, any platform strategy is an evolutionarily stable strategy.
When $x > x_0$, $\frac{dF(y)}{dy}\big|_{y=1} > 0$ and $\frac{dF(y)}{dy}\big|_{y=0} < 0$, so $y = 0$ is the equilibrium point: refraining from collusion and actively cooperating in government governance is the platform’s evolutionarily stable strategy. When $x < x_0$, $\frac{dF(y)}{dy}\big|_{y=1} < 0$ and $\frac{dF(y)}{dy}\big|_{y=0} > 0$, so $y = 1$ is the equilibrium point and algorithmic tacit collusion is the platform’s evolutionarily stable strategy. □
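A parallel numerical check for Proposition A2: the snippet below evaluates the intervention-probability threshold $x_0$. The parameter values are again illustrative assumptions only.

```python
def x0(z, m, a=0.2, b=1.5, d=0.5, Pi_h=10.0, Pi_l=6.0, B_y=2.0):
    """Intervention-probability threshold below which collusion is the platform's
    ESS (Proposition A2); default parameter values are hypothetical."""
    return (1 - (1 - z) * (b + m * d)) / (z * m * (b + d)) \
        - (a * Pi_l + m * B_y) / (z * m * a * (b + d) * Pi_h)

print(x0(z=0.5, m=0.5))  # negative here: with these penalty-heavy values,
                         # even weak intervention (any x > x0) deters collusion
```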

Appendix B.3. Analysis of the Strategic Stability of Merchants Selling Products on the Platform

V 31 and V 32 denote the expected returns for in-platform merchants choosing either acquiescence or reporting algorithmic collusion, respectively, with V 3 representing the average expected return. These are defined as follows:
$V_{31} = y(1 - a)\Pi_h + (1 - y)(1 - a)\Pi_l - ymB_z$
$V_{32} = y(1 - a)\Pi_h + (1 - y)(1 - a)\Pi_l + xyS$
$V_3 = zV_{31} + (1 - z)V_{32}$
The replication dynamic equation for in-platform merchant strategy evolution is formulated as follows:
$F(z) = \frac{dz}{dt} = z(V_{31} - V_3) = z(1 - z)(V_{31} - V_{32}) = z(1 - z)(-ymB_z - xyS)$
According to the stability theorem of differential equations, the in-platform merchant’s strategy selection reaches a stable state when $F(z) = \frac{dz}{dt} = 0$ and $\frac{dF(z)}{dz} < 0$. This is formalized in Proposition A3.
Proposition A3. 
(1) When $y = 0$ or ($m = 0$ and $xS = 0$), any $z$ is an evolutionarily stable state.
(2) When $y > 0$ and ($m \neq 0$ or $xS \neq 0$), the evolutionarily stable strategy for in-platform merchants is to report platform collusion.
Proof of Proposition A3. 
The first derivative of the in-platform merchant strategy replication dynamic equation is $F'(z) = (1 - 2z)(-ymB_z - xyS)$.
Setting $-ymB_z - xyS = 0$ yields $y = 0$ or ($m = 0$ and $xS = 0$).
When $y = 0$ or ($m = 0$ and $xS = 0$), $F(z) = \frac{dz}{dt} = 0$ holds identically, and all in-platform merchant strategies represent evolutionarily stable strategies. When $y > 0$ and ($m \neq 0$ or $xS \neq 0$), $\frac{dF(z)}{dz}\big|_{z=1} > 0$ and $\frac{dF(z)}{dz}\big|_{z=0} < 0$, so $z = 0$ is the equilibrium point and reporting constitutes the evolutionarily stable strategy for in-platform merchants, thus completing the proof. □

Appendix B.4. Analysis of the Strategic Stability of Consumers

$V_{41}$ and $V_{42}$ denote the consumer’s expected returns from the complaint and non-complaint strategies, respectively, with $V_4$ representing the average expected return. These are defined as follows:
$V_{41} = (y - yz + xyz)ad\Pi_h - y\Pi_h - (1 - y)\Pi_l - C_m$
$V_{42} = -y\Pi_h - (1 - y)\Pi_l$
$V_4 = mV_{41} + (1 - m)V_{42}$
The replication dynamic equation for consumer strategy evolution is formulated as follows:
$F(m) = \frac{dm}{dt} = m(V_{41} - V_4) = m(1 - m)(V_{41} - V_{42}) = m(1 - m)\left[y(1 - z + xz)ad\Pi_h - C_m\right]$
According to the stability theorem of differential equations, the consumer complaint strategy reaches a stable state when $F(m) = \frac{dm}{dt} = 0$ and $\frac{dF(m)}{dm} < 0$. This is formalized in Proposition A4.
Proposition A4. 
(1) When $y = y_0$, any $m$ is an evolutionarily stable state.
(2) When $y > y_0$, lodging complaints is the consumer’s evolutionarily stable strategy.
(3) When $y < y_0$, refraining from complaints is the consumer’s evolutionarily stable strategy.
Here, $y_0 = \frac{C_m}{(1 - z + xz)ad\Pi_h}$.
Proof of Proposition A4. 
The first derivative of the consumer behavior strategy replication dynamic equation is $F'(m) = (1 - 2m)\left[y(1 - z + xz)ad\Pi_h - C_m\right]$.
Setting $y(1 - z + xz)ad\Pi_h - C_m = 0$ yields $y_0 = \frac{C_m}{(1 - z + xz)ad\Pi_h}$. When $y = y_0$, $F(m) = \frac{dm}{dt} = 0$ holds identically, and any consumer strategy is an evolutionarily stable strategy. When $y > y_0$, $\frac{dF(m)}{dm}\big|_{m=1} < 0$ and $\frac{dF(m)}{dm}\big|_{m=0} > 0$, so $m = 1$ is the equilibrium point and complaining is the consumer’s evolutionarily stable strategy. When $y < y_0$, $\frac{dF(m)}{dm}\big|_{m=1} > 0$ and $\frac{dF(m)}{dm}\big|_{m=0} < 0$, so $m = 0$ is the equilibrium point and not complaining is the consumer’s evolutionarily stable strategy. □
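Combining the four replication dynamic equations above, the following sketch numerically integrates the system so that the long-run strategy frequencies can be inspected directly. The parameter values and initial strategy frequencies are illustrative assumptions only, not the paper’s calibration.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameter values (hypothetical; chosen only for demonstration).
a, b, d = 0.2, 1.5, 0.5
Pi_h, Pi_l = 10.0, 6.0
C_xh, C_xl, C_xy = 1.0, 0.3, 0.4
B_y, B_z, C_m, S = 2.0, 0.5, 0.2, 1.0

def replicator(t, v):
    """Replication dynamic equations F(x), F(y), F(z), F(m) from Appendix B."""
    x, y, z, m = v
    Fx = x * (1 - x) * ((2*y + z + y*z*m - 2*y*z - 1) * a * b * Pi_h
                        - (C_xh - C_xl) - y*z*m * C_xy - y * (1 - z) * S)
    Fy = y * (1 - y) * (a * (Pi_h - Pi_l) - (1 + x*z*m - z) * a * b * Pi_h
                        - (m + x*z*m - z*m) * a * d * Pi_h - m * B_y)
    Fz = z * (1 - z) * (-y * m * B_z - x * y * S)
    Fm = m * (1 - m) * (y * (1 - z + x*z) * a * d * Pi_h - C_m)
    return [Fx, Fy, Fz, Fm]

sol = solve_ivp(replicator, (0.0, 200.0), [0.5, 0.5, 0.5, 0.5])
x, y, z, m = sol.y[:, -1]
print(f"long-run frequencies: x={x:.3f}, y={y:.3f}, z={z:.3f}, m={m:.3f}")
# With these penalty-heavy values, the collusion frequency y decays toward 0.
```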

Appendix C

Table A3. Asymptotic stability analysis of equilibrium points.

| Equilibrium Point | Jacobian Matrix Eigenvalues $(\lambda_1, \lambda_2, \lambda_3, \lambda_4)$ | Signs of Real Parts | Stability | Precondition |
|---|---|---|---|---|
| $(0,0,0,0)$ | $0,\ -C_m,\ C_{xl}-C_{xh}-ab\Pi_h,\ a(\Pi_h-\Pi_l)-ab\Pi_h$ | (0, −, −, X) | Unstable point | -- |
| $(0,1,0,0)$ | $0,\ ad\Pi_h-C_m,\ a(\Pi_l-\Pi_h)+ab\Pi_h,\ C_{xl}-C_{xh}-S+ab\Pi_h$ | (0, X, X, X) | Unstable point | -- |
| $(0,0,1,0)$ | $0,\ C_{xl}-C_{xh},\ -C_m,\ a(\Pi_h-\Pi_l)$ | (0, −, −, +) | Unstable point | -- |
| $(0,0,0,1)$ | $0,\ C_m,\ C_{xl}-C_{xh}-ab\Pi_h,\ a(\Pi_h-\Pi_l)-a(b+d)\Pi_h-B_y$ | (0, +, −, X) | Unstable point | -- |
| $(0,1,1,0)$ | $0,\ C_{xl}-C_{xh},\ -C_m,\ a(\Pi_l-\Pi_h)$ | (0, −, −, −) | Unstable point | -- |
| $(0,1,0,1)$ | $C_m-ad\Pi_h,\ -B_z,\ C_{xl}-C_{xh}-S+ab\Pi_h,\ a(\Pi_l-\Pi_h)+a(b+d)\Pi_h+B_y$ | (X, −, X, X) | ESS | ①②③ |
| $(0,0,1,1)$ | $0,\ C_m,\ C_{xl}-C_{xh},\ a(\Pi_h-\Pi_l)-B_y$ | (0, +, −, X) | Unstable point | -- |
| $(0,1,1,1)$ | $B_z,\ C_m,\ B_y-a(\Pi_h-\Pi_l),\ C_{xl}-C_{xh}-C_{xy}+ab\Pi_h$ | (+, +, X, X) | Unstable point | -- |
| $(1,0,0,0)$ | $0,\ -C_m,\ C_{xh}-C_{xl}+ab\Pi_h,\ a(\Pi_h-\Pi_l)-ab\Pi_h$ | (0, −, +, X) | Unstable point | -- |
| $(1,1,0,0)$ | $ad\Pi_h-C_m,\ -S,\ a(\Pi_l-\Pi_h)+ab\Pi_h,\ C_{xh}-C_{xl}-ab\Pi_h+S$ | (X, −, X, X) | ESS | ④⑤⑥ |
| $(1,0,1,0)$ | $0,\ C_{xh}-C_{xl},\ -C_m,\ a(\Pi_h-\Pi_l)$ | (0, +, −, +) | Unstable point | -- |
| $(1,0,0,1)$ | $0,\ C_m,\ C_{xh}-C_{xl}+ab\Pi_h,\ a(\Pi_h-\Pi_l)-a(b+d)\Pi_h-B_y$ | (0, +, +, X) | Unstable point | -- |
| $(1,1,1,0)$ | $S,\ C_{xh}-C_{xl},\ ad\Pi_h-C_m,\ a(\Pi_l-\Pi_h)$ | (+, +, X, −) | Unstable point | -- |
| $(1,1,0,1)$ | $C_m-ad\Pi_h,\ -B_z-S,\ C_{xh}-C_{xl}-ab\Pi_h+S,\ a(\Pi_l-\Pi_h)+a(b+d)\Pi_h+B_y$ | (X, −, X, X) | ESS | ①③⑥ |
| $(1,0,1,1)$ | $0,\ C_m,\ C_{xh}-C_{xl},\ a(\Pi_h-\Pi_l)-a(b+d)\Pi_h-B_y$ | (0, +, +, X) | Unstable point | -- |
| $(1,1,1,1)$ | $B_z+S,\ C_m-ad\Pi_h,\ C_{xh}-C_{xl}+C_{xy}-ab\Pi_h,\ a(\Pi_l-\Pi_h)+a(b+d)\Pi_h+B_y$ | (+, X, X, X) | Unstable point | -- |

Note: “+” means that the eigenvalue is positive; “−” means that the eigenvalue is negative; “X” means that the eigenvalue is positive or negative depending on the specific parameter values; “--” means that the equilibrium point has no preconditions. ① $C_m - ad\Pi_h < 0$; ② $C_{xl} - C_{xh} - S + ab\Pi_h < 0$; ③ $a(\Pi_l - \Pi_h) + a(b+d)\Pi_h + B_y < 0$; ④ $ad\Pi_h - C_m < 0$; ⑤ $a(\Pi_l - \Pi_h) + ab\Pi_h < 0$; ⑥ $C_{xh} - C_{xl} - ab\Pi_h + S < 0$.
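The sign patterns in Table A3 can be spot-checked numerically. The snippet below evaluates the closed-form eigenvalues at the candidate point $(0,1,0,1)$ under one illustrative (hypothetical) parameterization that satisfies conditions ①–③, confirming it as an ESS in that regime.

```python
import numpy as np

# Illustrative parameters chosen to satisfy conditions 1-3 (hypothetical values).
a, b, d = 0.2, 0.2, 0.1
Pi_h, Pi_l = 10.0, 6.0
C_xh, C_xl = 1.0, 0.3
B_y, B_z, C_m, S = 0.1, 0.5, 0.1, 1.0

# Closed-form eigenvalues at (0, 1, 0, 1) taken from Table A3.
eigs = np.array([
    C_m - a * d * Pi_h,                            # condition 1
    -B_z,
    C_xl - C_xh - S + a * b * Pi_h,                # condition 2
    a * (Pi_l - Pi_h) + a * (b + d) * Pi_h + B_y,  # condition 3
])
print("eigenvalues:", np.round(eigs, 3), "ESS:", bool(np.all(eigs < 0)))
```

Reading condition ③ at this point: when the collusion gain $a(\Pi_h - \Pi_l)$ exceeds the combined penalty, compensation, and reputational losses, collusion ($y = 1$) survives even though merchants report and consumers complain, which is exactly the regime the incentive–penalty mechanism must be calibrated to rule out.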

References

1. Hanspach, P.; Galli, N. Collusion by Pricing Algorithms in Competition Law and Economics; Robert Schuman Centre for Advanced Studies Research Paper No. 2024_06; European University Institute: Fiesole, Italy, 2024. Available online: https://hdl.handle.net/1814/76558 (accessed on 20 October 2024).
2. Kim, J.; Ahn, S. The platform policy matrix: Promotion and regulation. Policy Internet 2025, 17, e414.
3. Yang, Z.; Fu, X.; Gao, P.; Chen, Y.J. Fairness regulation of prices in competitive markets. Manuf. Serv. Oper. Manag. 2024, 26, 1897–1917.
4. Calvano, E.; Calzolari, G.; Denicolo, V.; Pastorello, S. Artificial intelligence, algorithmic pricing, and collusion. Am. Econ. Rev. 2020, 110, 3267–3297.
5. Deng, A. What do we know about algorithmic tacit collusion? Antitrust 2018, 33, 88.
6. Gata, J.E. Controlling algorithmic collusion: Short review of the literature, undecidability, and alternative approaches. SSRN 2018. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3334889 (accessed on 20 October 2024).
7. Cont, R.; Xiong, W. Dynamics of market making algorithms in dealer markets: Learning and tacit collusion. Math. Financ. 2024, 34, 467–521.
8. Assad, S.; Calvano, E.; Calzolari, G.; Clark, R.; Denicolò, V.; Ershov, D.; Johnson, J.; Pastorello, S.; Rhodes, A.; Xu, L.; et al. Autonomous algorithmic collusion: Economic research and policy implications. Oxf. Rev. Econ. Policy 2021, 37, 459–478.
9. Ferrari, F.; Graham, M. Fissures in algorithmic power: Platforms, code, and contestation. Cult. Stud. 2021, 35, 814–832.
10. Sanchez, C.J.M.; Katsamakas, E. AI pricing algorithms under platform competition. Electron. Commer. Res. 2024, 1, 1–28.
11. Abada, I.; Lambin, X. Artificial intelligence: Can seemingly collusive outcomes be avoided? Manag. Sci. 2023, 69, 5042–5065.
12. Johnson, J.P.; Rhodes, A.; Wildenbeest, M. Platform design when sellers use pricing algorithms. Econometrica 2023, 91, 1841–1879.
13. Schwalbe, U. Algorithms, machine learning, and collusion. J. Compet. Law Econ. 2018, 14, 568–607.
14. Tripathy, M.; Bai, J.; Heese, H.S. Driver collusion in ride-hailing platforms. Decis. Sci. 2023, 54, 434–446.
15. Huang, Y.S.; Wu, T.Y.; Fang, C.C.; Tseng, T.L. Decisions on probabilistic selling for consumers with different risk attitudes. Decis. Anal. 2021, 18, 121–138.
16. Sharma, A. Algorithmic Cartels and Economic Efficiencies: Decoding the Indian & EU Perspective. Nuals Law J. 2023, 17, 137. Available online: https://heinonline.org/HOL/P?h=hein.journals/nualsj17&i=169 (accessed on 20 October 2024).
17. Frass, A.G.; Greer, D.F. Market structure and price collusion: An empirical analysis. J. Ind. Econ. 1977, 26, 21–44.
18. Barta, A.; Molnar, M. Indication of organizational collusion by examining dynamic market indicators. GRADUS 2021, 8, 160–165.
19. Aune, F.R.; Mohn, K.; Osmundsen, P.; Rosendahl, K.E. Financial market pressure, tacit collusion and oil price formation. Energy Econ. 2010, 32, 389–398.
20. Shimizu, K. “Pricing Game” for Tacit Collusion and Passive Investment. In Proceedings of the CERC 2019; pp. 323–334. Available online: https://www.researchgate.net/publication/332820909_Pricing_Game_for_tacit_collusion_and_Passive_Investment (accessed on 20 April 2024).
21. Rock, E.B.; Rubinfeld, D.L. Common ownership and coordinated effects. Antitrust Law J. 2020, 83, 201–252. Available online: https://www.jstor.org/stable/27006859 (accessed on 20 April 2024).
22. Antón, M.; Ederer, F.; Giné, M.; Schmalz, M. Common ownership, competition, and top management incentives. J. Political Econ. 2023, 131, 1294–1355.
23. Allain, M.L.; Boyer, M.; Kotchoni, R.; Ponssard, J.P. Are cartel fines optimal? Theory and evidence from the European Union. Int. Rev. Law Econ. 2015, 42, 38–47.
24. Ezrachi, A.; Stucke, M.E. Sustainable and unchallenged algorithmic tacit collusion. Northwestern J. Technol. Intellect. Prop. 2019, 17, 217. Available online: https://scholarlycommons.law.northwestern.edu/njtip/vol17/iss2/2/ (accessed on 20 October 2024).
25. Gautier, A.; Ittoo, A.; Van Cleynenbreugel, P. AI algorithms, price discrimination and collusion: A technological, economic and legal perspective. Eur. J. Law Econ. 2020, 50, 405–435.
26. Mazumdar, A. Algorithmic collusion. Columbia Law Rev. 2022, 122, 449–488. Available online: https://www.jstor.org/stable/27114356 (accessed on 20 October 2024).
27. Epivent, A.; Lambin, X. On algorithmic collusion and reward–punishment schemes. Econ. Lett. 2024, 237, 111661.
28. Bernhardt, L.; Dewenter, R. Collusion by code or algorithmic collusion? When pricing algorithms take over. Eur. Compet. J. 2020, 16, 312–342.
29. Gata, J.E. Collusion between algorithms: A literature review and limits to enforcement. Eur. Rev. Bus. Econ. 2021, 1, 73–94.
30. Abada, I.; Lambin, X.; Tchakarov, N. Collusion by mistake: Does algorithmic sophistication drive supra-competitive profits? Eur. J. Oper. Res. 2024, 318, 927–953.
31. Evans, D.S. Basic principles for the design of antitrust analysis for multisided platforms. J. Antitrust Enforc. 2019, 7, 319–338.
32. Farrell, J.; Katz, M.L. Innovation, rent extraction, and integration in systems markets. J. Ind. Econ. 2000, 48, 413–432.
33. Xu, Z. From algorithmic governance to govern algorithm. AI Soc. 2024, 39, 1141–1150.
34. Spulber, D.F. Unlocking technology: Antitrust and innovation. J. Compet. Law Econ. 2008, 4, 915–966.
35. Yenipazarli, A. On the effects of antitrust policy intervention in pricing strategies in a distribution channel. Decis. Sci. 2023, 54, 64–84.
36. Liu, Z.; Ma, L.; Huang, T.; Tang, H. Collaborative governance for responsible innovation in the context of sharing economy: Studies on the shared bicycle sector in China. J. Open Innov. Technol. Mark. Complex. 2020, 6, 35.
37. Wachhaus, A. Platform governance: Developing collaborative democracy. Adm. Theory Prax. 2017, 39, 206–221.
38. McSweeny, T.; O’Dea, B. The implications of algorithmic pricing for coordinated effects analysis and price discrimination markets in antitrust enforcement. Antitrust 2017, 32, 75. Available online: https://heinonline.org/HOL/P?h=hein.journals/antitruma32&i=76 (accessed on 20 October 2024).
39. Balasingham, B. Hybrid restraints and hybrid tests under US antitrust and EU competition law. World Compet. 2020, 43, 261–282.
40. Yekkehkhany, A.; Murray, T.; Nagi, R. Stochastic superiority equilibrium in game theory. Decis. Anal. 2021, 18, 153–168.
41. Sun, L.; Li, X.; Su, C.; Wang, X.; Yuan, X. Analysis of dynamic strategies for decision-making on retrofitting carbon capture, utilization, and storage technology in coal-fired power plants. Appl. Therm. Eng. 2025, 264, 125371.
42. Su, C.; Zha, X.; Ma, J.; Li, B.; Wang, X. Dynamic optimal control strategy of CCUS technology innovation in coal power stations under environmental protection tax. Systems 2025, 13, 193.
Figure 1. Graphical summary.
Figure 2. Strategic interaction framework of the four-agent sequential game model.
Figure 3. ESS equilibrium 1.
Figure 4. ESS equilibrium 2.
Figure 5. ESS equilibrium 3.
Figure 6. Impact of $b$.
Figure 7. Impact of $S$.
Figure 8. Impact of $C_m$.