Article

Applying “Two Heads Are Better Than One” Human Intelligence to Develop Self-Adaptive Algorithms for Ridesharing Recommendation Systems

Department of Computer Science and Information Engineering, Chaoyang University of Technology, Taichung 413310, Taiwan
Electronics 2024, 13(12), 2241; https://doi.org/10.3390/electronics13122241
Submission received: 15 May 2024 / Revised: 3 June 2024 / Accepted: 5 June 2024 / Published: 7 June 2024

Abstract

Human beings have created numerous laws, sayings and proverbs that still influence people's behaviors and decision-making processes. Some of these laws, sayings and proverbs are used by people to understand phenomena that may take place in daily life. For example, Murphy's law states that “Anything that can go wrong will go wrong” and is helpful for project planning with the analysis and consideration of risk. Similar to Murphy's law, the old saying “Two heads are better than one” also influences how people choose ways to get jobs done effectively. Although the old saying “Two heads are better than one” has been extensively discussed in different contexts, there is a lack of studies on whether this saying is valid and applicable in evolutionary computation, an important optimization approach in artificial intelligence. In this paper, we attempt to study the validity of this saying in the context of an evolutionary computation approach to the decision making of ridesharing systems with trust constraints. We study its validity by developing a series of self-adaptive evolutionary algorithms, based on the saying, for solving the optimization problem of ridesharing systems with trust constraints, conducting several series of experiments and comparing the effectiveness of these self-adaptive evolutionary algorithms. The new finding is that the old saying “Two heads are better than one” is valid in most cases and hence can be applied to facilitate the development of effective self-adaptive evolutionary algorithms. Our new finding paves the way for developing a better evolutionary computation approach for ridesharing recommendation systems based on sayings created by human beings, or human intelligence.

1. Introduction

Artificial intelligence (AI) is an umbrella term for technologies in which machines simulate human intelligence to perform tasks that previously required it. These tasks include the recognition of speech, the identification of patterns, the recommendation of goods, the prediction of trends and the optimization of decision making. Traditionally, these tasks relied on human beings performing them manually based on human intelligence. With prevailing AI techniques, the above tasks can be performed more efficiently by AI-enabled software. For this reason, it is anticipated that AI may pose a threat to the job market and disrupt the livelihoods of workers in many sectors [1,2]. Despite this negative impact on the job market, AI may also create new kinds of work [3]. These divided opinions indicate that human beings and AI will co-exist, complement each other and get jobs done more cost-effectively. Therefore, many studies indicate that collaboration between AI and human beings is an important trend in an AI-prevailing era [4]. For example, a shared control interaction paradigm in which a human and an AI partner collaboratively control a system was studied in [5].
Many studies related to AI primarily concentrate on the use of AI tools to assist workers in performing tasks more effectively. For example, the early study of [4] showed researchers' aspiration to increase the impact of management science and operations research by incorporating AI tools in ill-structured, knowledge-rich and non-quantitative decision domains. With the progress of AI techniques such as ChatGPT [6] and Generative AI [7], many AI-related tools have been developed, including Microsoft Copilot [8] and Generative AI on Google Cloud [9]. These AI-enabled tools aim to make AI work for human beings and to make human beings work more efficiently than ever. The above developments focus on the perspective of using AI as a tool that works for human beings. In this paper, we study the reverse possibility: facilitating the development of evolutionary algorithms for AI-enabled applications based on the intelligence of human beings, or human intelligence. Evolutionary algorithms are a branch of evolutionary AI-based problem-solving approaches that mimic the behaviors of living things, such as reproduction, mutation and recombination, to solve problems. In this study, we examine how to apply human intelligence to develop effective evolutionary algorithms for solving optimization problems.
Human beings have created numerous laws, sayings and proverbs that still influence people's behaviors and decision-making processes. Some of these laws, sayings and proverbs are used by people to understand phenomena that may take place in daily life. For example, Murphy's law states that “Anything that can go wrong will go wrong”. Murphy's law is helpful for project planning with the analysis and consideration of risks. Similar to Murphy's law, the old saying “Two heads are better than one” also influences the ways people choose to get jobs done effectively. In this paper, we study the effectiveness of applying the old saying “Two heads are better than one” to develop effective evolutionary algorithms.
The old saying “Two heads are better than one” can be interpreted in different ways and has therefore been extensively discussed in different contexts with different meanings. One interpretation is that two people working as a pair perform better than one person working alone. A study on the effectiveness of applying “Two heads are better than one” to code development based on this interpretation can be found in [10], where a collaborative approach to code development was studied; the study indicated that pair programming leads to increased quality. Another interpretation is that, since there is only one truth, assigning two teams to perform experiments should lead to the same conclusion even if the two teams follow different paths to arrive at it. For example, in [11], two studies using different data collection processes and databases, NASS data and SART CORS data, with different data analyses demonstrate that the national trends detected are in general agreement. In [11], “Two heads are better than one” refers to the use of two different studies to confirm the trends. The old saying can also be interpreted as meaning that two systems perform tasks better than a single system. The study of [12] was based on this interpretation: dual systems were used in facial comparison instead of a single system, and the experimental results indicate that a dual-system model performs better than a single system. There is still another interpretation: solving a problem in two stages may be better than solving it in a single stage. This interpretation is largely based on the divide-and-conquer strategy used in solving complex problems. An example of applying “Two heads are better than one” based on this interpretation can be found in [13], where a complex problem is divided into multiple sub-problems to reduce complexity.
Although the old saying “Two heads are better than one” has been extensively discussed in different contexts, there is a lack of studies on whether this saying is valid and applicable in evolutionary computation. Evolutionary computation is an important optimization approach in artificial intelligence; Differential Evolution [14], Particle Swarm Optimization [15] and Firefly algorithms [16] are well-known evolutionary algorithms for solving problems. This study presents a proof of concept verifying the feasibility of applying the old saying “Two heads are better than one” to facilitate the development of evolutionary algorithms based on Differential Evolution. To illustrate the proof of concept, the optimization problem of ridesharing systems with trust constraints is selected as the target problem in this paper due to the importance of the ridesharing mode in shared mobility systems [17]. In the literature, many research issues in ridesharing systems have been studied [18]. We choose the ridesharing problem as the target problem because of the complexity of and challenges in solving its optimization problems [19,20].
Several solution methodologies developed in the literature are available for solving a variety of optimization problems. These include the rational decision theory [21] and the theory of stochastic global optimization [22] in the global optimization literature. In [22], Žilinskas and Zhigljavsky classify stochastic global optimization algorithms into three classes: (1) algorithms based on global random search, (2) algorithms based on stochastic assumptions about the objective function, and (3) algorithms based on heuristics or meta-heuristics. In the first class of stochastic global optimization algorithms, global random search algorithms choose the observation points based on random decisions. Several important properties of stochastic global optimization algorithms have been studied. For example, sufficient conditions for the convergence of global random search algorithms have been studied in [23,24,25,26]. The convergence theorem of global random search was proved in [27]. In the second class of stochastic global optimization algorithms, the objective functions are characterized by statistical models to capture uncertainty. For example, a stochastic function can be selected as a statistical model of the objective function. The algorithms in this class are based on rational decision-making theory and defined as sequences of decisions under uncertainty. Please refer to [28,29] for the algorithms based on the statistical models of objective functions. The third class refers to stochastic optimization algorithms in which heuristics or natural processes such as simulated annealing, genetic algorithms [30], the Particle Swarm Optimization approach and the Differential Evolution approach are used to create potential candidate solutions randomly in the solution-finding processes. In this study, we focus on a stochastic optimization algorithm based on the Differential Evolution approach. Therefore, the self-adaptive Differential Evolution-based algorithm proposed in this study belongs to this class.
In this paper, we attempt to study the validity of the saying “Two heads are better than one” in the context of the evolutionary computation approach to the decision making of ridesharing systems with trust constraints. We study the validity of the saying by developing a series of self-adaptive evolutionary algorithms, based on the saying, for solving the optimization problem of ridesharing systems with trust constraints, conducting several series of experiments and comparing the effectiveness of these self-adaptive evolutionary algorithms. The series of self-adaptive evolutionary algorithms for solving the trust-based ridesharing optimization problem are based on the Differential Evolution approach [14] and an extension of the algorithm proposed in [31]. The new finding is that the old saying “Two heads are better than one” is valid in most cases and hence can be applied to facilitate the development of effective self-adaptive evolutionary algorithms. Our new finding paves the way for developing better search strategies for the evolutionary computation approach based on sayings created by human beings, or human intelligence.
The contributions of this study are summarized as follows.
First, we illustrated the way human beings apply the old saying “Two heads are better than one” in problem solving using an application scenario and showed that problem-solving processes could be divided into two phases: the performance assessment phase and the optimization phase. The performance assessment phase aims to assess the performance of the two heads in solving a problem, whereas the optimization phase aims to optimize the solution based on the knowledge learned from the performance assessment phase.
Second, we mimicked the performance assessment phase and optimization phase to develop and implement a self-adaptive evolutionary algorithm with two mutation strategies for solving the optimization problem of ridesharing systems with trust constraints, where the two mutation strategies could be arbitrarily selected from a given set of strategies.
Third, we studied the validity of the saying “Two heads are better than one” by performing a series of experiments on a set of test cases for each 2-combination of the given set of strategies and compared the performance of “two heads” (two strategies) with that of “one head” (a single strategy alone) based on the experimental results. Our analysis of the experimental results shows that “Two heads are better than one” holds for self-adaptive evolutionary algorithms with two mutation strategies. This indicates that “Two heads are better than one” can be used as an effective guideline for developing effective self-adaptive algorithms for ridesharing recommendation systems.
Fourth, although the optimization problem of ridesharing systems with trust constraints was studied in [31], only the results for one combination of two mutation strategies in the proposed self-adaptive algorithm were presented there. This paper differs from [31] in that the effectiveness of different combinations of two mutation strategies is studied for the trust-based shared mobility problem.
The remainder of this paper is structured as follows. In Section 2 and Section 3, we review trust-based shared mobility systems and the problem formulation. In Section 4, we present how the old saying “Two heads are better than one” can be applied in problem solving by human beings, using flowcharts to illustrate the steps in detail. In Section 5, we develop Self-adaptive Neighborhood Search Differential Evolution algorithms based on the flowcharts presented in Section 4. The settings of the experiments conducted in this study and the results obtained in each experiment are presented in Section 6. We present the rules for effective combinations of mutation strategies based on an analysis of the experimental results in Section 7. This paper is concluded in Section 8.

2. A Review of Trust-Based Shared Mobility Systems

Due to its potential to reduce energy consumption, costs, greenhouse gas emissions and pollution, shared mobility is an important transport mode for achieving the sustainability of cities. Therefore, many research issues related to shared mobility systems have been studied [18]. These include challenges in optimizing ridesharing operations [20], models and algorithms for the optimization of ridesharing systems [19] such as the optimization of cost savings [32] and monetary incentives [33], barriers to shared mobility services [34], incentives for ridesharing [35,36], guarantees of a discount in ridesharing systems [37], factors that influence users' acceptance of and confidence in ridesharing systems [38,39,40], a comparison of different solution algorithms [41], stable matching in ridesharing systems [42], dynamic ridesharing [43] and studies of the effectiveness of different cost savings allocations on the acceptability of ridesharing systems [44,45].
The strong demand for emerging shared mobility services has led to the evolution of new practices in the transport industry [46]. However, there are barriers to the adoption of shared mobility services [34]. One of these barriers is the lack of trust between participants and users' lack of confidence in service quality. The factor of trust in shared mobility systems has become an interesting and important issue, as it directly influences users' acceptance of and confidence in ridesharing services [38,39,40]. For this reason, studies of the influence of the trust factor on shared mobility systems have been attracting researchers' attention. For example, shared mobility relies on initial trust for acceptance by users, where initial trust refers to the trust that determines whether an emerging technology will be accepted once it is available. The study reported in [47] examines the initial trust of shared mobility with shared autonomous vehicles and the effects of several factors, including personality, transfer and performance factors, on the initial trust in and adoption of shared mobility services. The study in [48] considered two types of mobility modes, car mobility and sidewalk mobility, and investigated how trust in one of these modes may impact trust in the other.
Users of shared mobility services are concerned about safety and trust. Social media provides a means to improve the quality of shared mobility services in terms of safety and trust [49]. For example, the study in [50] considered the problem of commuters in a social network sharing rides. In [51], Li et al. addressed the safety and trust barriers by proposing a method for social-aware ridesharing group queries. In the method proposed in [52], social connections and spatial distance were taken into consideration in retrieving a group of shared riders. In [53], Xia et al. proposed a method based on a social network and a route network for carpooling systems. To avoid uncomfortable or untrustworthy ridesharing experiences with strangers, Shim et al. [54] formulated a cohesive ridesharing group query problem in which spatial, social and temporal information is considered to retrieve a cohesive ridesharing group. In [31], Hsieh developed a trust-aware ridesharing recommender system considering the trust requirements of drivers and passengers, vehicle capacities, and timing and location factors. Due to the non-linearity of the problem and its discrete solution space, a self-adaptive neighborhood search method was combined with Differential Evolution (DE) to develop an algorithm to solve the problem, and Hsieh assessed the effectiveness of the proposed algorithm by comparing it with several other evolutionary computation approaches.
Although the study in [31] demonstrated the effectiveness of the proposed self-adaptive neighborhood search variant of the Differential Evolution algorithm in comparison with several other metaheuristic algorithms, the proposed algorithm was based on the use of two fixed mutation strategies in the process of self-adaptation; the effectiveness of other combinations of mutation strategies was not studied in [31] for the trust-based shared mobility problem. This raises two interesting questions: (1) does any combination of mutation strategies work effectively, and (2) which combinations of mutation strategies perform better for the trust-based shared mobility problem? These unexplored research issues motivate this study. The goal of this study is to evaluate the effectiveness of applying combinations of two mutation strategies in the Self-adaptive Neighborhood Search Differential Evolution algorithm and to provide a guideline for selecting mutation strategies that improve performance in solving the trust-based shared mobility problem.
In the DE approach, there are four steps to find a solution for an optimization problem through evolving the solutions: initialization, mutation, crossover and selection. The performance of DE-based algorithms is influenced by these steps. In particular, the method used to perform the mutation operations plays a pivotal role in the performance and efficiency of DE. In the literature, there are many studies of mutation strategies in Differential Evolution. For example, Opara and Arabas analyzed and compared mutation strategies in Differential Evolution in [54] by examining the expectation vectors and covariance matrices of the mutants' distribution. The analysis indicated that the main difference between DE operators is the mutation range, characterized by the generalized scaling factor defined in [55]. Although the analysis and results presented in [54] are interesting, they are based on standard DE for optimization problems without constraints and with a continuous solution space. Therefore, the results cannot be applied to the trust-based shared mobility problem, which has a large number of constraints and a discrete solution space. In [55], Ye et al. proposed a variant of the DE algorithm that alternates between the steady monopoly of one mutation strategy and the transient competition of mutation strategies. However, the proposed DE method is for optimization problems without constraints and with a continuous solution space. In [56], Wang et al. proposed an improved Differential Evolution for multimodal multi-objective optimization based on a novel distance indicator, a two-stage mutation strategy and a novel mutation strategy; again, however, the method targets optimization problems without constraints and with a continuous solution space.
Due to the difficulty in performing a theoretical analysis of mutation strategies in the DE approach for the trust-based shared mobility problem, we conducted experiments to study the effectiveness of applying different combinations of two mutation strategies in the Self-adaptive Neighborhood Search Differential Evolution algorithm. The experimental results and experiences will pave the way for developing the rules for selecting mutation strategies in the Self-adaptive Neighborhood Search Differential Evolution algorithm.

3. A Review of the Trust-Based Decision Model for Shared Mobility

In this section, we review the decision model for trust-based shared mobility. Table 1 summarizes the notation for the symbols, variables and parameters used in the decision model. A shared mobility system is referred to as a trust-based shared mobility system (TBSMS) if it takes the trust factors/requirements of drivers and passengers into account in determining the driver and passengers of each shared ride.
In a TBSMS, a driver may request to share a ride with a set of passengers only if they satisfy the driver’s minimal trust-level requirements. Also, in a TBSMS, a passenger may request to share a ride with a set of passengers and the driver only if they satisfy the passenger’s minimal trust-level requirements.
To represent the drivers' and passengers' trust requirements, we first introduce a graph-based model $S(V, E, \Theta)$ for the social network, where $V$ is the set of nodes associated with drivers and passengers, $E$ denotes the set of edges representing the connections between nodes in terms of trust, and $\Theta$ is the $|V| \times |V|$ matrix whose element $\Theta_{ij}$ specifies the level to which $v_i$ trusts $v_j$ for all $v_i, v_j \in V$. Specifically, there is a directed edge $e_{ij}$ connecting $v_i$ to $v_j$ with weight $\Theta_{ij}$ for each $v_i$ and $v_j$ in $V$. If $\Theta_{ij}$ is zero, $v_i$ does not trust $v_j$. The greater $\Theta_{ij}$ is, the more $v_i$ trusts $v_j$.
To represent the trust requirements of driver $d$ and passenger $p$, we use $\Gamma_d$ to denote the minimal trust level requested by driver $d$ and $\Lambda_p$ to denote the minimal trust level requested by passenger $p$. Based on the trust model $S(V, E, \Theta)$, the trust requirements of driver $d$ can be satisfied by passenger $p$ only if $\Theta_{dp} \ge \Gamma_d$. A passenger $p$ can be a rider with driver $d$ only if $\Theta_{pd} \ge \Lambda_p$. A passenger $p$ can be on a shared ride with another passenger $p'$ only if $\Theta_{pp'} \ge \Lambda_p$.
In addition to the trust requirements, all the spatial, timing, capacity and surplus constraints must be satisfied by a solution of the TBSMS. To formulate the decision problem, the operation of a TBSMS is described first. Each potential driver/passenger sends her/his request to the TBSMS to express the origin, destination, earliest departure time, latest arrival time, demanded number of seats $n_p$ and the requested minimal trust level. The information in the drivers' and passengers' requests is extracted and used in the decision model of the TBSMS.
Suppose the number of potential passengers is $P$ and the number of potential drivers is $D$ in the TBSMS. The set of all potential passengers is denoted by $\{1, 2, \ldots, P\}$ and the set of all potential drivers is denoted by $\{1, 2, \ldots, D\}$.
The request of passenger $p$ is denoted by $R_p = (L_{o_p}, L_{e_p}, \omega_p^e, \omega_p^l, n_p, \Lambda_p)$, where $L_{o_p}$ is the origin, $L_{e_p}$ is the destination, $\omega_p^e$ is the earliest departure time, $\omega_p^l$ is the latest arrival time, $n_p$ is the demand of seats and $\Lambda_p$ is the requested minimal trust level. The request of driver $d$ is represented by $R_d = (L_{o_d}, L_{e_d}, \omega_d^e, \omega_d^l, a_d, \bar{\tau}_d, \Gamma_d)$, where $L_{o_d}$ is the origin, $L_{e_d}$ is the destination, $\omega_d^e$ is the earliest departure time, $\omega_d^l$ is the latest arrival time, $a_d$ is the number of available seats, $\bar{\tau}_d$ is the maximum detour ratio and $\Gamma_d$ is the minimal trust level.
The locations used in the problem formulation can be divided into locations to pick passengers up and locations to drop passengers off. As there are $P$ passengers' requests, each with one pick-up location and one drop-off location, the total number of locations is $K = 2P$. Without loss of generality, it is assumed that the locations are numbered as follows. Locations in $\{1, 2, \ldots, P\}$ denote the pick-up locations: the location to pick up passenger $p$ is referred to as location $p$, where $p \in \{1, 2, \ldots, P\}$, and the location to drop off passenger $p$ is referred to as location $P + p$.
Without loss of generality, it is assumed that each passenger $p \in \{1, 2, \ldots, P\}$ submits only one request, $R_p$. For $R_p$, the TBSMS will invoke a procedure to generate the bid $P\_BID_p = (s_{p1}^1, s_{p2}^1, \ldots, s_{pK}^1, s_{p1}^2, s_{p2}^2, \ldots, s_{pK}^2, f_p, \Lambda_p)$ on behalf of passenger $p$, where $K$ is the number of locations, $s_{pk}^1$ is the demand of seats requested to pick up passengers at location $k$, $s_{pk}^2$ is the number of seats released after dropping passengers off at location $P + k$, $f_p$ is the bid price and $\Lambda_p$ is the requested minimal trust level.
It is assumed that each driver $d \in \{1, 2, \ldots, D\}$ submits only one request, $R_d$. For each $R_d$, the TBSMS will invoke a procedure to generate $J_d$ bids, which represent different ways to transport passengers by driver $d$. A bid generated for $R_d$ is represented by $D\_BID_{dj} = (q_{dj1}^1, q_{dj2}^1, \ldots, q_{djK}^1, q_{dj1}^2, q_{dj2}^2, \ldots, q_{djK}^2, \pi_{dj}, o_{dj}, c_{dj}, a_d, \Gamma_d)$, where $j \in \{1, 2, \ldots, J_d\}$ is the index of the $j$-th bid of driver $d$, $K$ is the number of locations, $q_{djk}^1$ is the number of seats for picking up passengers at location $k$, $q_{djk}^2$ is the number of seats released after dropping passengers off at location $P + k$, $o_{dj}$ is driver $d$'s original cost without ridesharing, $c_{dj}$ is driver $d$'s cost to transport the passengers in the bid, $a_d$ is the total number of available seats and $\Gamma_d$ is the requested minimal trust level.
To formulate the decision problem for a TBSMS, decision variables $x_{dj}$, $d \in \{1, 2, \ldots, D\}$, $j \in \{1, 2, \ldots, J_d\}$, are defined for the bids $D\_BID_{dj}$ submitted by drivers, and decision variables $y_p$, $p \in \{1, 2, \ldots, P\}$, are defined for the bids $P\_BID_p$ submitted by passengers.
The $j$-th bid $D\_BID_{dj}$ of driver $d$ is a winning bid if $x_{dj} = 1$ and is not a winning bid if $x_{dj} = 0$. If $y_p = 1$, the bid $P\_BID_p$ of passenger $p$ is a winning bid, and if $y_p = 0$, $P\_BID_p$ is not a winning bid.
The objective function, defined in (1), maximizes the total cost savings:

$$F(x, y) = \sum_{p=1}^{P} y_p f_p + \sum_{d=1}^{D} \sum_{j=1}^{J_d} x_{dj} (o_{dj} - c_{dj}) \quad (1)$$

Based on the discussions above, the decision problem for a TBSMS is formulated as follows:

$$\max_{x, y} \; F(x, y) \quad \text{s.t.} \quad (2)$$

$$\sum_{d=1}^{D} \sum_{j=1}^{J_d} x_{dj} q_{djk}^1 = y_p s_{pk}^1 \quad \forall p \in \{1, \ldots, P\}, \; \forall k \in \{1, \ldots, P\} \quad (3)$$

$$\sum_{d=1}^{D} \sum_{j=1}^{J_d} x_{dj} q_{djk}^2 = y_p s_{pk}^2 \quad \forall p \in \{1, \ldots, P\}, \; \forall k \in \{1, \ldots, P\} \quad (4)$$

$$\sum_{p=1}^{P} y_p f_p + \sum_{d=1}^{D} \sum_{j=1}^{J_d} x_{dj} o_{dj} \ge \sum_{d=1}^{D} \sum_{j=1}^{J_d} x_{dj} c_{dj} \quad (5)$$

$$\sum_{j=1}^{J_d} x_{dj} \le 1 \quad \forall d \in \{1, \ldots, D\} \quad (6)$$

$$\sum_{k=1}^{K} q_{djk}^1 s_{pk}^1 x_{dj} y_p (\Theta_{dp} x_{dj} y_p - \Gamma_d) \ge 0 \quad \forall d \in \{1, \ldots, D\}, \; \forall j \in \{1, \ldots, J_d\}, \; \forall p \in \{1, \ldots, P\} \quad (7)$$

$$\sum_{k=1}^{K} q_{djk}^2 s_{pk}^2 x_{dj} y_p (\Theta_{dp} x_{dj} y_p - \Lambda_p) \ge 0 \quad \forall d \in \{1, \ldots, D\}, \; \forall j \in \{1, \ldots, J_d\}, \; \forall p \in \{1, \ldots, P\} \quad (8)$$

$$x_{dj} \in \{0, 1\} \quad \forall d \in \{1, \ldots, D\}, \; \forall j \in \{1, \ldots, J_d\} \quad (9)$$

$$y_p \in \{0, 1\} \quad \forall p \in \{1, \ldots, P\} \quad (10)$$
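For concreteness, a minimal sketch of evaluating the objective (1) for given binary decision vectors is shown below; the flat list f (bid prices) and the nested lists x, o and c (indexed by driver d and bid j) are illustrative assumptions about the data layout, not the paper's representation.

```python
# A minimal sketch of evaluating the objective (1) for given binary
# decision vectors; the data layout is assumed, not taken from the paper.
def cost_savings(x, y, f, o, c):
    """F(x, y) = sum_p y_p * f_p + sum_d sum_j x_dj * (o_dj - c_dj)."""
    passenger_part = sum(y[p] * f[p] for p in range(len(y)))
    driver_part = sum(x[d][j] * (o[d][j] - c[d][j])
                      for d in range(len(x)) for j in range(len(x[d])))
    return passenger_part + driver_part
```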

4. Problem Solving with “Two Heads Are Better Than One” Human Intelligence

In this section, we briefly introduce how the old saying “Two heads are better than one” can be used as an approach to problem solving. “Two heads are better than one” is based on the intuition that one is more likely to come to a better decision by collaborating with another person than by working alone.
The rationale is that two people working together may outperform either one working alone because the capability of a single person may not be sufficient to reach a decision that is good enough. If the capability of one person complements that of the other, the two may produce a better decision. Conversely, if the two capabilities are identical, or if one person's capability is completely included in the other's, working together may not yield a better decision. In the real world, however, no two people are identical, and their capabilities usually differ, so two people working together are likely to reach a better decision. Even when one person's capability matches or subsumes the other's, it is still possible for the two people working together to generate a better decision. This situation may be due to the synergy effect of cooperation, which leads to the outcome that one plus one is greater than two.
The above reasoning provides an analysis of why the old saying “Two heads are better than one” works. Next, we briefly describe how the old saying can be applied in solving a problem. Consider a problem related to a project that is to be solved. The project manager, M, believes that the old saying “Two heads are better than one” can help find a better solution to the problem. Therefore, project manager M arbitrarily selects two people, X and Y, from the team members to work together and solve the problem. As project manager M does not know whether X or Y should be assigned to solve the problem first, he asks X and Y to flip a coin in each round to determine who is assigned. If the outcome is heads, X is assigned to solve the problem; if it is tails, Y is assigned. Although project manager M roughly knows the capabilities of X and Y, he does not know exactly how well X and Y will perform in solving this problem. So project manager M divides the problem-solving process into two phases: the performance assessment phase and the optimization phase. A flowchart of this problem-solving scheme with “Two heads are better than one” human intelligence is shown in Figure 1, with the performance assessment phase detailed in Figure 2 and the optimization phase in Figure 3. The performance assessment phase consists of L rounds to assess the performance of X and Y in solving the problem. The optimization phase consists of T rounds and aims to optimize the solution based on the knowledge learned from the performance assessment phase.
To assess the performance of X and Y in solving the problem, project manager M asks X and Y to iteratively solve the problem over L rounds; in each round, either X or Y is assigned to solve the problem once, and the outcome of the round is recorded. The outcome of each round is either a success or a failure. Here, success refers to the situation in which the solution is improved in that round, whereas failure refers to the situation in which the solution is not improved. That is, L rounds are used to collect the performance data of X and Y in solving the problem. To record the outcomes, project manager M asks X and Y to use four counters, Sx, Fx, Sy and Fy, to store the number of successes of X, the number of failures of X, the number of successes of Y and the number of failures of Y, respectively. These four counters are updated at the end of each round.
A coin is flipped to determine whether X or Y is assigned to solve the problem in each round. If X is assigned to solve the problem and the solution is improved, the success counter of X, $S_x$, is increased by one. Otherwise, the failure counter of X, $F_x$, is increased by one. If Y is assigned to solve the problem and the solution is improved, the success counter of Y, $S_y$, is increased by one. Otherwise, the failure counter of Y, $F_y$, is increased by one. Based on the values of the four counters $S_x$, $F_x$, $S_y$ and $F_y$ at the end of the L rounds in the performance assessment phase, project manager M calculates the success rates of X and Y as $\frac{S_x}{S_x + F_x}$ and $\frac{S_y}{S_y + F_y}$, respectively.
After the performance assessment phase, the optimization phase starts. Project manager M asks X and Y to solve the problem in the optimization phase of $T$ rounds with probabilities $P_X = \frac{S_x/(S_x + F_x)}{S_x/(S_x + F_x) + S_y/(S_y + F_y)}$ and $1 - P_X$, respectively. Each round of the optimization phase is performed by generating a random number $r$ from $[0, 1]$. If $r < P_X$, X is assigned to solve the problem; otherwise, Y is assigned to solve the problem.
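The following Python sketch summarizes this two-phase scheme under stated assumptions: `solve_X` and `solve_Y` are hypothetical stand-ins for the two solvers, `fitness` is maximized, and the counters are seeded at one (an assumption made here purely to avoid division by zero in the success rates).

```python
import random

# A minimal sketch of the two-phase scheme in Figures 1-3 (assessment
# phase of L rounds, then optimization phase of T rounds).
def two_heads(solve_X, solve_Y, solution, fitness, L=1000, T=9000):
    Sx = Fx = Sy = Fy = 1  # success/failure counters, seeded to avoid 0/0
    # Performance assessment phase: flip a coin in each of the L rounds.
    for _ in range(L):
        use_x = random.random() < 0.5          # heads: assign X; tails: Y
        candidate = (solve_X if use_x else solve_Y)(solution)
        improved = fitness(candidate) > fitness(solution)
        if improved:
            solution = candidate
        if use_x:
            Sx, Fx = Sx + improved, Fx + (not improved)
        else:
            Sy, Fy = Sy + improved, Fy + (not improved)
    # Optimization phase: assign X with probability proportional to its
    # success rate Sx/(Sx+Fx) relative to Y's success rate.
    rate_x, rate_y = Sx / (Sx + Fx), Sy / (Sy + Fy)
    p_x = rate_x / (rate_x + rate_y)
    for _ in range(T):
        candidate = (solve_X if random.random() < p_x else solve_Y)(solution)
        if fitness(candidate) > fitness(solution):
            solution = candidate
    return solution
```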
The problem that manager M attempts to solve is one that can be solved based on rational decision theory [21] or the theory of stochastic global optimization [22] in the global optimization literature, depending on the characteristics of the problem. As the target problem of this paper is the optimization problem of the TBSMS and the objective function is not a stochastic function, we focus on the development of stochastic optimization algorithms based on the Differential Evolution approach. As the decision problem for the TBSMS is formulated as an integer-programming problem with binary decision variables, we apply the flowcharts of “Two heads are better than one” to develop the self-adaptive Differential Evolution-based algorithm proposed in the next section.

5. Self-Adaptive Differential Evolution Algorithms Based on Two Mutation Strategies

To study the effectiveness of variants of self-adaptive Differential Evolution Algorithms that are based on two mutation strategies, we extended the self-adaptive Differential Evolution Algorithm with two fixed mutation strategies proposed in [31] to allow the use of any two of the four well-known strategies in the mutation operation.
Like other evolutionary algorithms, all the variants of the self-adaptive Differential Evolution algorithm with two mutation strategies developed in this study require a properly defined fitness function to deal with constraints, a set of mutation strategies, a mechanism to update the algorithmic parameters and a way to select the strategy for mutation. In this section, we present the details of all these points.

5.1. Fitness Function

The role of a fitness function is to provide a quantitative way to assess the quality of a solution in the solution search processes. To effectively find a good solution, the fitness function used in evolutionary algorithms must be designed properly. The value of a fitness function must provide useful information by jointly taking into account the objective function value and the constraint violations of a solution. One well-known concept for designing a fitness function is to account for constraint violations by introducing penalty functions. However, classical penalty methods usually suffer from the need to properly set the weighting parameters/coefficients of the penalty functions. For this reason, an alternative approach [57] to differentiating feasible solutions from infeasible ones was adopted in this study; its advantage is that it works effectively without such parameters/coefficients. To describe this approach, let $S_f$ denote the set of all feasible solutions in the current population. We define $S_f^{\min} = \min_{(x, y) \in S_f} F(x, y)$ to characterize the objective function value of the worst feasible solution in the current population. Based on the above notation, we define the fitness function $F_1(x, y)$ as follows:
$$F_1(x, y) = \begin{cases} F(x, y) & \text{if } (x, y) \in S_f \\ U(x, y) & \text{otherwise} \end{cases}$$

where

$$U(x, y) = S_f^{\min} + U_1(x, y) + U_2(x, y) + U_3(x, y) + U_4(x, y) + U_5(x, y),$$

$$U_1(x, y) = \sum_{p=1}^{P} \sum_{k=1}^{K} \left( \left( \sum_{d=1}^{D} \sum_{j=1}^{J_d} x_{dj} q_{djk}^1 - y_p s_{pk}^1 \right) + \left( \sum_{d=1}^{D} \sum_{j=1}^{J_d} x_{dj} q_{djk}^2 - y_p s_{pk}^2 \right) \right),$$

$$U_2(x, y) = \sum_{d=1}^{D} \left( 1 - \sum_{j=1}^{J_d} x_{dj} \right),$$

$$U_3(x, y) = \min\left( \sum_{p=1}^{P} y_p f_p + \sum_{d=1}^{D} \sum_{j=1}^{J_d} x_{dj} o_{dj} - \sum_{d=1}^{D} \sum_{j=1}^{J_d} x_{dj} c_{dj}, \; 0.0 \right),$$

$$U_4(x, y) = \sum_{d=1}^{D} \sum_{j=1}^{J_d} \sum_{p=1}^{P} \min\left( \sum_{k=1}^{K} q_{djk}^1 s_{pk}^1 x_{dj} y_p (\Theta_{dp} x_{dj} y_p - \Gamma_d), \; 0.0 \right),$$

and

$$U_5(x, y) = \sum_{d=1}^{D} \sum_{j=1}^{J_d} \sum_{p=1}^{P} \min\left( \sum_{k=1}^{K} q_{djk}^2 s_{pk}^2 x_{dj} y_p (\Theta_{dp} x_{dj} y_p - \Lambda_p), \; 0.0 \right).$$

Note that $U_1(x, y)$ is for constraints (3) and (4), $U_2(x, y)$ is for constraint (6), $U_3(x, y)$ is for constraint (5), $U_4(x, y)$ is for constraint (7) and $U_5(x, y)$ is for constraint (8). These functions are used to handle the constraints.
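A minimal sketch of this parameter-free fitness rule is given below; `objective`, `violations` (returning the values $U_1, \ldots, U_5$) and `is_feasible` are assumed helper functions, not code from the paper.

```python
# A minimal sketch of the parameter-free fitness F1 above: feasible
# solutions keep their objective value, while an infeasible solution's
# fitness is shifted relative to the worst feasible objective S_f^min.
def fitness_f1(candidate, population, objective, violations, is_feasible):
    if is_feasible(candidate):
        return objective(candidate)
    feasible_values = [objective(s) for s in population if is_feasible(s)]
    s_f_min = min(feasible_values) if feasible_values else 0.0  # S_f^min
    # violations(candidate) returns [U1, ..., U5] for the candidate.
    return s_f_min + sum(violations(candidate))
```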

5.2. Operation of Mutation Strategies in Standard DE

In the DE approach, there are four steps to find a solution for an optimization problem through evolving the solutions: initialization, mutation, crossover and selection. The performance and efficiency of DE-based algorithms are influenced by these steps. In particular, the method used to perform the mutation operation plays a pivotal role in the performance and efficiency of DE. Mutation strategies in the DE approach are applied to create potential solutions in the search processes. In the DE literature, several well-known mutation strategies have been developed and widely used in solving a variety of problems in different domains. In this study, we consider the four mutation strategies listed in Table 2. These mutation strategies combine either a randomly selected individual or the best individual in the current population with weighted terms such as the difference between two or more pairs of individuals randomly selected from the current population.
To explain the operation of a mutation strategy in DE, suppose the dimension of the problem to be solved is $N$. A standard DE algorithm starts by creating a population of individuals in the initialization step and updates the population in each generation. Supposing the number of individuals in the population is $NP$, we use $z_{ti} = (z_{tin})$ to denote the $i$-th individual in the population of the $t$-th generation, where $i \in \{1, 2, \ldots, NP\}$ and $n \in \{1, 2, \ldots, N\}$. In the mutation step, a mutant vector $v_{ti} = (v_{tin})$ is generated for the $i$-th individual in the $t$-th generation by applying a mutation strategy. For example, suppose Mutation Strategy 1 in Table 2 is used to create a mutant vector. In this case, the mutant vector created for the $i$-th individual is based on three randomly selected individuals, $r_1$, $r_2$ and $r_3$, from the current population and the calculation of the mutation function $\mu_1$: $v_{(t+1)in} = z_{t r_1 n} + F_i (z_{t r_2 n} - z_{t r_3 n})$, where $r_1$, $r_2$ and $r_3$ are three distinct random integers between 1 and $NP$. Similarly, if Mutation Strategy $s$ in Table 2 is used, the mutant vector created for the $i$-th individual is based on the calculation of the mutation function $\mu_s$.
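As an illustration, a minimal sketch of this rand/1 mutation (Strategy 1) might look as follows; the population is assumed to be a list of NumPy vectors.

```python
import numpy as np

# A minimal sketch of the rand/1 mutation (Strategy 1): the mutant for
# individual i combines three distinct population members chosen at
# random, scaled by the factor F_i.
def mutate_rand1(population, i, F_i):
    NP = len(population)
    candidates = [r for r in range(NP) if r != i]
    r1, r2, r3 = np.random.choice(candidates, size=3, replace=False)
    return population[r1] + F_i * (population[r2] - population[r3])
```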
Following the mutation step, the standard DE algorithm proceeds to the crossover operation to create a trial vector $u_{ti}$. The crossover operation selects each element from either the mutant vector $v_{ti}$ or the original individual $z_{ti}$ according to the crossover probability. After performing the crossover operation, the standard DE algorithm replaces $z_{ti}$ with $u_{ti}$ if $u_{ti}$ is better than $z_{ti}$.
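A minimal sketch of binomial crossover and greedy selection follows; the convention of always inheriting one randomly chosen dimension from the mutant is a common DE choice assumed here, and fitness is maximized as in the TBSMS formulation.

```python
import numpy as np

# A minimal sketch of binomial crossover followed by greedy selection.
def crossover_and_select(parent, mutant, CR, fitness):
    N = len(parent)
    mask = np.random.rand(N) < CR          # inherit from mutant where True
    mask[np.random.randint(N)] = True      # force at least one mutant gene
    trial = np.where(mask, mutant, parent)
    # Keep the trial vector only if it is at least as fit as the parent.
    return trial if fitness(trial) >= fitness(parent) else parent
```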

5.3. Mechanism to Update Algorithmic Parameters and Select the Mutation Strategy

The scale factor and crossover rate are two important parameters that jointly influence the performance and efficiency of DE algorithms. Adapting these two parameters effectively in the solution-searching processes is vital for the development of a self-adaptive DE algorithm. In the evolution processes of the DE approach, the success or failure of applying a mutation strategy provides clues and valuable information for updating these two parameters. The number of successes or failures of applying a specific mutation strategy can easily be recorded in a counter, and the value of the counter can be used to calculate the success rate of the applied mutation strategy. The success rate of each mutation strategy can then be used to calculate the probability of selecting a mutation strategy. For a self-adaptive DE algorithm with two mutation strategies, four counters are used to record the success and failure events of applying the two strategies. Supposing Mutation Strategies $s_1$ and $s_2$ are used in the self-adaptive DE algorithm, the four counters $S_{s_1}$, $S_{s_2}$, $F_{s_1}$ and $F_{s_2}$ record the success events of Strategy $s_1$, the success events of Strategy $s_2$, the failure events of Strategy $s_1$ and the failure events of Strategy $s_2$, respectively. Based on $S_{s_1}$, $S_{s_2}$, $F_{s_1}$ and $F_{s_2}$, we calculate the probability of selecting Mutation Strategy $s_1$ as

$$p_{s_1} = \frac{S_{s_1}(S_{s_2} + F_{s_2})}{S_{s_2}(S_{s_1} + F_{s_1}) + S_{s_1}(S_{s_2} + F_{s_2})}.$$
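Dividing the numerator and denominator by $(S_{s_1} + F_{s_1})(S_{s_2} + F_{s_2})$ shows that $p_{s_1}$ is simply Strategy $s_1$'s success rate normalized by the sum of both strategies' success rates. A minimal sketch, assuming the counters are positive (e.g., initialized to 1) so the rates are defined:

```python
# A minimal sketch of the strategy-selection rule for two strategies.
def prob_select_s1(S1, F1, S2, F2):
    # Equivalent to rate1 / (rate1 + rate2) with rate = S / (S + F).
    return S1 * (S2 + F2) / (S2 * (S1 + F1) + S1 * (S2 + F2))
```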
For the parameter that influences the generation of the crossover rate, the self-adaptive DE algorithm developed in this paper records the values of “good” historical crossover rates in $CR_{rec}$. The “good” historical crossover rate values stored in $CR_{rec}$ are used to adapt the parameter $CR_m$ by $CR_m = \frac{1}{|CR_{rec}|} \sum_{k=1}^{|CR_{rec}|} CR_{rec}(k)$. The crossover rate is then generated from the Gaussian distribution $N(CR_m, \sigma_2^2)$, with $CR_m$ as the mean.
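A minimal sketch of this adaptation follows; the standard deviation 0.1 stands in for $\sigma_2$, whose value is an assumption here, and the sampled rate is clamped to a valid probability.

```python
import random

# A minimal sketch of the crossover-rate adaptation: CR_m is the mean of
# the recorded "good" crossover rates, and each new CR is drawn from a
# Gaussian centred at CR_m.
def next_crossover_rate(cr_record, sigma2=0.1):
    cr_m = sum(cr_record) / len(cr_record) if cr_record else 0.5
    return min(max(random.gauss(cr_m, sigma2), 0.0), 1.0)  # clamp to [0, 1]
```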
As the original DE approach was proposed for problems with a continuous solution space, we modify it as follows to make it work for problems with a discrete solution space. To ensure that the values of the decision variables remain binary in the evolution processes, the function $BinaryTransform$ in Table 3 is invoked in our solution algorithm to transform the continuous values of the decision variables into binary values.
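Table 3 is not reproduced in this text, so the sketch below assumes a common choice for such a mapping, the sigmoid rule from binary PSO/DE: each continuous component is treated as a logit and a 0/1 value is sampled from it, using the $V_{\max}$ bound that appears in the parameter settings of Section 6 for clipping.

```python
import math
import random

# A minimal sketch of a binary transform (sigmoid rule assumed, not
# taken from Table 3 of the paper).
def binary_transform(z, v_max=4.0):
    z = max(-v_max, min(v_max, z))      # clip to [-V_max, V_max]
    p = 1.0 / (1.0 + math.exp(-z))      # sigmoid maps z into (0, 1)
    return 1 if random.random() < p else 0
```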
Given two strategies, $s_1$ and $s_2$, arbitrarily selected from a set $S$ of strategies, the algorithm in Table 4 is obtained from the flowcharts in Figure 1, Figure 2 and Figure 3 by applying “Two heads are better than one”.

6. Results

In this section, we study whether “Two heads are better than one” holds for the evolutionary computation approach based on Differential Evolution. We study the validity of the saying by developing a series of self-adaptive evolutionary algorithms, based on the saying, for solving the optimization problem of ridesharing systems with trust constraints, conducting several series of experiments and comparing the effectiveness of these algorithms.
The data for the test cases used in the experiments of this study are available for download from the following link: https://drive.google.com/drive/folders/1W4OdMQ6z8O0fX38TpSVFcVbsbzJkjHy3?usp=sharing (accessed on 8 May 2024).
We perform tests to determine whether the performance of the algorithm based on strategies $s_1$ and $s_2$ in Table 4 is better than the performance of applying only strategy $s_1$ or strategy $s_2$ alone, where $s_1$ and $s_2$ are arbitrarily selected from a set $S$ of distinct strategies. To validate whether “Two heads are better than one” holds for any two strategies selected from $S$, we perform tests on every 2-combination $(s_1, s_2)$ of $S$. The number of 2-combinations of the set $S$ is $\binom{|S|}{2}$, which grows with the number of elements in $S$. In this study, we consider four well-known strategies, DE1, DE2, DE3 and DE4, in the Differential Evolution approach. To concisely present the experiments and results, we refer to the IDs of the strategies DE1, DE2, DE3 and DE4 as 1, 2, 3 and 4, respectively, and define the set of strategies as $S$ = {1, 2, 3, 4}. The set of 2-combinations of $S$ considered in our experiments is Ω = {(1,2), (1,3), (1,4), (2,3), (2,4), (3,4)}. We conduct experiments by applying the proposed algorithm with each 2-combination $(s_1, s_2)$ in Ω, as well as the standard algorithm with the single strategy $s_1$ alone and the standard algorithm with the single strategy $s_2$ alone, to solve the problems of several test cases. We record the results and compare the performance of the proposed algorithm with each 2-combination $(s_1, s_2)$ in Ω with that of the standard algorithm with strategy $s_1$ or strategy $s_2$ alone.
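The experimental grid can be enumerated directly, as the following sketch shows:

```python
from itertools import combinations

# Every 2-combination of the strategy set S, matching Omega in the text.
S = [1, 2, 3, 4]
omega = list(combinations(S, 2))
print(omega)  # [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```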
For the standard algorithms with mutation strategies DE1, DE2, DE3 and DE4, the parameters used are as follows:
T = 10,000;
Population size NP = 30;
CR = 0.5;
F: a value arbitrarily selected from uniform(0, 2);
V_max = 4.
For the Self-adaptive Neighborhood Search Differential Evolution (SaNSDE) algorithm, the parameters used are as follows:
T = 9000;
Population size NP = 30;
CR = 0.5;
F_i = 0.3r + 0.5, where r is generated with a Gaussian distribution N(0, 1);
L = 1000;
V_max = 4.
Note that the total number of generations in SaNSDE is T + L = 10,000, which is the same as that of DE1, DE2, DE3 and DE4.
To concisely present the results of our experiments, we use SaNSDE($s_1$, $s_2$) to represent the proposed SaNSDE algorithm with the given 2-combination $(s_1, s_2)$ of strategies. For the 2-combination $(s_1, s_2)$ = (1,2) of strategies DE1 and DE2, the results obtained by applying SaNSDE(1,2), standard DE1 and standard DE2 to ten test cases are summarized in Table 5. The results are shown in the bar chart of Figure 4. The results indicate that SaNSDE(1,2) either outperforms or performs as well as DE1 and DE2 for all test cases.
For the 2-combination $(s_1, s_2)$ = (1,3) of strategies DE1 and DE3, the results obtained by applying SaNSDE(1,3), standard DE1 and standard DE3 to ten test cases are summarized in Table 6. The results are shown in the bar chart of Figure 5. The results indicate that SaNSDE(1,3) either outperforms or performs as well as DE1 and DE3 for all test cases.
For the 2-combination $(s_1, s_2)$ = (1,4) of strategies DE1 and DE4, the results obtained by applying SaNSDE(1,4), standard DE1 and standard DE4 to ten test cases are summarized in Table 7. The results are shown in the bar chart of Figure 6. The results indicate that SaNSDE(1,4) either outperforms or performs as well as DE1 and DE4 for all test cases.
For the 2-combination $(s_1, s_2)$ = (2,3) of strategies DE2 and DE3, the results obtained by applying SaNSDE(2,3), standard DE2 and standard DE3 to ten test cases are summarized in Table 8. The results are shown in the bar chart of Figure 7. The results indicate that SaNSDE(2,3) either outperforms or performs as well as DE2 and DE3 for all test cases.
For the 2-combination $(s_1, s_2)$ = (2,4) of strategies DE2 and DE4, the results obtained by applying SaNSDE(2,4), standard DE2 and standard DE4 to ten test cases are summarized in Table 9. The results are shown in the bar chart of Figure 8. The results indicate that SaNSDE(2,4) either outperforms or performs as well as DE2 and DE4 for all test cases.
For the 2-combination $(s_1, s_2)$ = (3,4) of strategies DE3 and DE4, the results obtained by applying SaNSDE(3,4), standard DE3 and standard DE4 to ten test cases are summarized in Table 10. The results are shown in the bar chart of Figure 9. The results indicate that SaNSDE(3,4) either outperforms or performs as well as DE3 and DE4 for all test cases.
The results of our experiments for the 2-combinations of strategies in Ω = {(1,2), (1,3), (1,4), (2,3), (2,4), (3,4)} are summarized in Table 11. The results are shown in the bar chart of Figure 10. The results indicate that the average fitness function values for all the 2-combinations of strategies in Ω are the same for Case 1 through Case 7 and very close for Case 8. The differences between the average fitness function values become significant in Case 9 and Case 10. The maximum difference between the average fitness function values across all the 2-combinations of strategies is about (139.4738 − 133.6144)/139.4738 = 4.201% for Case 9 and about (148.8646 − 137.7995)/148.8646 = 7.432% for Case 10. The variation in the average fitness function values across all the 2-combinations of strategies is acceptable.

7. Discussion

In this study, we designed and performed experiments to validate whether “Two heads are better than one” holds for any two strategies selected from a set of distinct strategies. In our experiments, we considered a set $S$ = {1, 2, 3, 4} of four distinct strategies. In each experiment, we first arbitrarily selected two strategies, $s_1$ and $s_2$, from $S$. We then performed tests to determine whether the performance of the algorithm based on the two strategies $s_1$ and $s_2$ was better than or equal to that of applying only strategy $s_1$ or strategy $s_2$ alone. The total number of 2-combinations $(s_1, s_2)$ of $S$ is $\binom{4}{2} = \frac{4!}{2! \, 2!} = 6$. Ten test cases were used in this study, and we performed six tests for each of them. The results of the experiments indicated that the average fitness value of each 2-combination $(s_1, s_2)$ of $S$ is better than or equal to that of applying only strategy $s_1$ or strategy $s_2$ alone. Note that this result holds for all test cases. Therefore, the new finding is that the old saying “Two heads are better than one” is valid. This implies that the simple rule of “Two heads are better than one” can be applied to facilitate the development of effective self-adaptive evolutionary algorithms. Our new finding paves the way for developing better search strategies for the evolutionary computation approach based on sayings created by human beings, or human intelligence. The results of this study are based on the set $S$ = {1, 2, 3, 4} of four distinct strategies. An interesting question is whether the saying “Two heads are better than one” is valid for a set of more than four strategies or a set of arbitrary strategies.
Note that the research goal of this study is not to propose the “best” self-adaptive algorithm but to illustrate that combining two strategies in a self-adaptive algorithm is better than using a single strategy. In this study, a self-adaptive algorithm is used to verify the validity of “Two heads are better than one” due to its effectiveness reported in [31]. However, adopting the fixed ratio obtained from the assessment phase tends to decrease flexibility in the evolution processes. An interesting research question is whether combining two strategies in a non-self-adaptive algorithm that randomly selects one of the two strategies is better than using a single strategy. If so, such an algorithm would provide another way to develop algorithms based on “Two heads are better than one”. A comparative study of the effectiveness of the two approaches, (1) combining two strategies in a self-adaptive algorithm and (2) combining two strategies in a non-self-adaptive algorithm, is also an interesting future research direction.
The experimental results presented in this study show that “Two heads are better than one” holds. This sparks related questions: does “Three heads are better than two”, or more generally “N heads are better than N−1 heads”, hold? Obviously, these interesting questions call for an extended study to find the answers.

8. Conclusions

Human beings have created a great number of ancient idioms and old sayings based on their life experiences, and much human wisdom is hidden in them. Many ancient idioms and old sayings have been used as rules of thumb to support decision making and the daily operations of businesses. Despite the fact that ancient idioms and old sayings are useful in our lives, they are rarely applied in the development of new solution methods in evolutionary computation. In this study, we illustrated that the old saying “Two heads are better than one” is valid for evolutionary computation approaches to solving an optimization problem in ridesharing systems. We designed and performed experiments to validate whether “Two heads are better than one” holds for any two strategies selected from a set of distinct strategies. We showed by experiments that the performance of the algorithm based on two strategies $s_1$ and $s_2$ is better than or equal to that of applying either strategy $s_1$ or strategy $s_2$ alone. This result enables the application of the old saying “Two heads are better than one” to develop new effective evolutionary algorithms by combining two strategies. Note that the validity of the old saying confirmed in this study is based on experiments with four well-known strategies in DE. An interesting future research direction is to study the validity of the old saying for more strategies. Note also that the results of this study are based on computational experience. This motivates another challenging research issue: proving that the performance obtained by arbitrarily combining two of the four DE mutation strategies in the self-adaptive algorithm is better than or equal to that of applying only one of the two strategies alone.

Funding

This research was supported in part by the National Science and Technology Council, Taiwan, under Grant NSTC 111-2410-H-324-003.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are available in a publicly accessible repository described in the article.

Acknowledgments

The author would like to thank the anonymous reviewers and the editors for their comments and suggestions, which are invaluable for the author to improve the presentation, clarity and quality of this paper.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. How AI Could Change the Highly-Skilled Job Market? Available online: https://www.bloomberg.com/news/articles/2019-11-20/how-ai-could-change-the-highly-skilled-job-market (accessed on 10 May 2024).
  2. Will Robots and AI Cause Mass Unemployment? Not Necessarily, but They Do Bring Other Threats. Available online: https://www.un.org/en/desa/will-robots-and-ai-cause-mass-unemployment-not-necessarily-they-do-bring-other (accessed on 10 May 2024).
  3. Artificial Intelligence will Create New Kinds of Work. Available online: https://www.economist.com/business/2017/08/26/artificial-intelligence-will-create-new-kinds-of-work (accessed on 10 May 2024).
  4. Simon, H.A. Two Heads Are Better than One: The Collaboration between AI and OR. Interfaces 1987, 17, 8–15. Available online: http://www.jstor.org/stable/25060980 (accessed on 10 May 2024).
  5. Cimolino, G.; Graham, T.N. Two heads are better than one: A dimension space for unifying human and artificial intelligence in shared control. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, New Orleans, LA, USA, 30 April–5 May 2022; pp. 1–21.
  6. ChatGPT. Available online: https://chatgpt.com/ (accessed on 10 May 2024).
  7. Generative Artificial Intelligence. Available online: https://en.wikipedia.org/wiki/Generative_artificial_intelligence (accessed on 10 May 2024).
  8. Microsoft Copilot. Available online: https://copilot.microsoft.com/ (accessed on 10 May 2024).
  9. Generative AI on Google Cloud. Available online: https://cloud.google.com/ai/generative-ai (accessed on 10 May 2024).
  10. Dyba, T.; Arisholm, E.; Sjoberg, D.; Hannay, J.; Shull, F. Are Two Heads Better than One? On the Effectiveness of Pair Programming. IEEE Softw. 2007, 24, 12–15.
  11. Jindal, S.; Sparks, A. Synergy at work: Two heads are better than one. J. Assist. Reprod. Genet. 2018, 35, 1227–1228.
  12. Li, Z.; Xie, L.; Song, H. Two heads are better than one: Dual systems obtain better performance in facial comparison. Forensic Sci. Int. 2023, 353, 111879.
  13. Li, A.; Liu, W.; Zheng, C.; Fan, C.; Li, X. Two Heads are Better Than One: A Two-Stage Complex Spectral Mapping Approach for Monaural Speech Enhancement. IEEE/ACM Trans. Audio Speech Lang. Process. 2021, 29, 1829–1843.
  14. Storn, R.; Price, K. Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359.
  15. Kennedy, J.; Eberhart, R.C. Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; pp. 1942–1948.
  16. Yang, X.S. Firefly algorithms for multimodal optimization. LNCS 2009, 5792, 169–178.
  17. Shared Mobility: Where It Stands, Where It’s Headed, 11 August 2021. Available online: https://www.mckinsey.com/industries/automotive-and-assembly/our-insights/shared-mobility-where-it-stands-where-its-headed (accessed on 10 May 2024).
  18. Furuhata, M.; Dessouky, M.; Ordóñez, F.; Brunet, M.; Wang, X.; Koenig, S. Ridesharing: The state-of-the-art and future directions. Transp. Res. Part B Methodol. 2013, 57, 28–46.
  19. Mourad, A.; Puchinger, J.; Chu, C. A survey of models and algorithms for optimizing shared mobility. Transp. Res. Part B Methodol. 2019, 123, 323–346.
  20. Martins, L.C.; Torre, R.; Corlu, C.G.; Juan, A.A.; Masmoudi, M.A. Optimizing ride-sharing operations in smart sustainable cities: Challenges and the need for agile algorithms. Comput. Ind. Eng. 2021, 153, 107080.
  21. Žilinskas, A.; Calvin, J. Bi-objective decision making in global optimization based on statistical models. J. Glob. Optim. 2019, 74, 599–609.
  22. Žilinskas, A.; Zhigljavsky, A. Stochastic Global Optimization: A Review on the Occasion of 25 Years of Informatica. Informatica 2016, 27, 229–256.
  23. Solis, F.J.; Wets, R.J.B. Minimization by random search techniques. Math. Oper. Res. 1981, 6, 19–30.
  24. Pintér, J.N. Convergence properties of stochastic optimization procedures. Optimization 1984, 15, 405–427.
  25. Auger, A.; Hansen, N. Theory of evolution strategies: A new perspective. In Theory of Randomized Search Heuristics: Foundations and Recent Developments; Auger, A., Doerr, B., Eds.; World Scientific Publishing: New York, NY, USA, 2010; pp. 289–325.
  26. Tempo, R.; Calafiore, G.; Dabbene, F. Randomized Algorithms for Analysis and Control of Uncertain Systems: With Applications; Springer Science & Business Media: Berlin, Germany, 2012.
  27. Zhigljavsky, A. Theory of Global Random Search; Kluwer: Dordrecht, The Netherlands, 1991.
  28. Žilinskas, A. Statistical models of multimodal functions and construction of algorithms for global optimization. Informatica 1990, 1, 141–155.
  29. Žilinskas, A. A review of statistical models for global optimization. J. Glob. Optim. 1992, 2, 144–153.
  30. Katoch, S.; Chauhan, S.S.; Kumar, V. A review on genetic algorithm: Past, present, and future. Multimed. Tools Appl. 2021, 80, 8091–8126.
  31. Hsieh, F.S. Trust-Based Recommendation for Shared Mobility Systems Based on a Discrete Self-Adaptive Neighborhood Search Differential Evolution Algorithm. Electronics 2022, 11, 776.
  32. Hsieh, F.S.; Zhan, F.; Guo, Y. A solution methodology for carpooling systems based on double auctions and cooperative coevolutionary particle swarms. Appl. Intell. 2019, 49, 741–763.
  33. Hsieh, F.S. A Comparative Study of Several Metaheuristic Algorithms to Optimize Monetary Incentive in Ridesharing Systems. ISPRS Int. J. Geo-Inf. 2020, 9, 590.
  34. Franckx, L. Barriers to the Uptake of Shared Mobility, 26 July 2017. Available online: https://mobilitybehaviour.eu/2017/07/26/barriers-to-the-uptake-of-shared-mobility/ (accessed on 10 May 2024).
  35. Van Der Waerden, P.; Lem, A.; Schaefer, W. Investigation of Factors that Stimulate Car Drivers to Change from Car to Carpooling in City Center Oriented Work Trips. Transp. Res. Procedia 2015, 10, 335–344.
  36. Shaheen, S.A.; Chan, N.D.; Gaynor, T. Casual carpooling in the San Francisco Bay Area: Understanding user characteristics, behaviors, and motivations. Transp. Policy 2016, 51, 165–173.
  37. Hsieh, F.-S. A Self-Adaptive Meta-Heuristic Algorithm Based on Success Rate and Differential Evolution for Improving the Performance of Ridesharing Systems with a Discount Guarantee. Algorithms 2024, 17, 9.
  38. The Mobility-as-a-Service (MaaS) Challenges: Trust between Players. Available online: https://www.ptolemus.com/insight/the-mobility-as-a-service-maas-challenges-trust-between-players/ (accessed on 10 May 2024).
  39. Hunter, J.G.; Ulwelling, E.; Konishi, M.; Michelini, N.; Modali, A.; Mendoza, A.; Snyder, J.; Mehrotra, S.; Zheng, Z.; Kumar, A.R.; et al. The future of mobility-as-a-service: Trust transfer across automated mobilities, from road to sidewalk. Front. Psychol. 2023, 14, 1129583.
  40. Driving Social Change through Shared Mobility. Available online: https://www.idnow.io/blog/change-social-and-shared-mobility/ (accessed on 10 May 2024).
  41. Hsieh, F.-S. Comparison of a Hybrid Firefly–Particle Swarm Optimization Algorithm with Six Hybrid Firefly–Differential Evolution Algorithms and an Effective Cost-Saving Allocation Method for Ridesharing Recommendation Systems. Electronics 2024, 13, 324.
  42. Wang, X.; Agatz, N.; Erera, A. Stable Matching for Dynamic Ride-Sharing Systems. Transp. Sci. 2018, 52, 850–867.
  43. Agatz, N.A.H.; Erera, A.L.; Savelsbergh, M.W.P.; Wang, X. Dynamic ride-sharing: A simulation study in metro Atlanta. Transp. Res. Part B Methodol. 2011, 45, 1450–1464.
  44. Hsieh, F.-S. Improving Acceptability of Cost Savings Allocation in Ridesharing Systems Based on Analysis of Proportional Methods. Systems 2023, 11, 187.
  45. Hsieh, F.-S. A Comparison of Three Ridesharing Cost Savings Allocation Schemes Based on the Number of Acceptable Shared Rides. Energies 2021, 14, 6931.
  46. Guyader, H.; Friman, M.; Olsson, L.E. Shared Mobility: Evolving Practices for Sustainability. Sustainability 2021, 13, 12148.
  47. Wu, M.; Yuen, K.F. Initial trust formation on shared autonomous vehicles: Exploring the effects of personality-, transfer- and performance-based stimuli. Transp. Res. Part A Policy Pract. 2023, 173, 103704.
  48. Mehrotra, S.; Hunter, J.; Konishi, M.; Akash, K.; Zheng, Z.; Misu, T.; Kumar, A.; Reid, T.; Jain, N. Trust in Shared Automated Vehicles—Study on Two Mobility Platforms. 2023. Available online: https://www.researchgate.net/publication/369299624_TRUST_IN_SHARED_AUTOMATED_VEHICLES_-_STUDY_ON_TWO_MOBILITY_PLATFORMS (accessed on 10 May 2024).
  49. Tang, L.; Duan, Z.; Zhao, Y. Toward using social media to support ridesharing services: Challenges and opportunities. Transp. Plan. Technol. 2019, 42, 355–379.
  50. Bistaffa, F.; Farinelli, A.; Ramchurn, S.D. Sharing rides with friends: A coalition formation algorithm for ridesharing. In Proceedings of the 29th AAAI Conference on Artificial Intelligence, Austin, TX, USA, 25–30 January 2015; pp. 608–614.
  51. Li, Y.; Chen, R.; Chen, L.; Xu, J. Towards Social-Aware Ridesharing Group Query Services. IEEE Trans. Serv. Comput. 2017, 10, 646–659.
  52. Xia, J.; Curtin, K.M.; Huang, J.; Wu, D.; Xiu, W.; Huang, Z. A carpool matching model with both social and route networks. Comput. Environ. Urban Syst. 2017, 75, 90–102.
  53. Shim, C.; Sim, G.; Chung, Y.D. Cohesive Ridesharing Group Queries in Geo-Social Networks. IEEE Access 2020, 8, 97418–97436.
  54. Opara, K.; Arabas, J. Comparison of mutation strategies in Differential Evolution—A probabilistic perspective. Swarm Evol. Comput. 2018, 39, 53–69.
  55. Ye, C.; Li, C.; Li, Y.; Sun, Y.; Yang, W.; Bai, M.; Zhu, X.; Hu, J.; Chi, T.; Zhu, H.; et al. Differential evolution with alternation between steady monopoly and transient competition of mutation strategies. Swarm Evol. Comput. 2023, 83, 101403.
  56. Wang, Y.; Liu, Z.; Wang, G.-G. Improved differential evolution using two-stage mutation strategy for multimodal multi-objective optimization. Swarm Evol. Comput. 2023, 78, 101232.
  57. Deb, K. An efficient constraint handling method for genetic algorithms. Comput. Methods Appl. Mech. Eng. 2000, 186, 311–338.
Figure 1. A flowchart to apply “Two heads are better than one” to solve a problem with two people.
Figure 2. A flowchart for the performance assessment phase in applying “Two heads are better than one” to solve a problem with two people.
Figure 3. A flowchart for the optimization phase in applying “Two heads are better than one” to solve a problem with two people.
Figure 4. Comparison of SaNSDE(1,2) with DE1 and DE2.
Figure 5. Comparison of SaNSDE(1,3) with DE1 and DE3.
Figure 6. Comparison of SaNSDE(1,4) with DE1 and DE4.
Figure 7. Comparison of SaNSDE(2,3) with DE2 and DE3.
Figure 8. Comparison of SaNSDE(2,4) with DE2 and DE4.
Figure 9. Comparison of SaNSDE(3,4) with DE3 and DE4.
Figure 10. Comparison of the results of our experiments for the two-strategy combinations in Ω = {(1,2), (1,3), (1,4), (2,3), (2,4), (3,4)}.
Table 1. Notations of symbols, variables and parameters.

| Variable | Meaning |
|---|---|
| $D$ | The number of drivers in the system. |
| $P$ | The number of passengers in the system. |
| $K$ | The number of locations for picking up or dropping off passengers. |
| $d$ | The index of a driver, where $d \in \{1, 2, 3, \dots, D\}$. |
| $p$ | The index of a passenger, where $p \in \{1, 2, 3, \dots, P\}$. |
| $k$ | The index of a location, $k \in \{1, 2, \dots, K\}$. |
| $J_d$ | The number of bids submitted by driver $d \in \{1, 2, \dots, D\}$. |
| $j$ | The index of the $j$-th bid submitted by driver $d$, where $j \in \{1, 2, \dots, J_d\}$. |
| $\Lambda_p$ | Passenger $p$'s request for the minimal trust level. |
| $\Gamma_d$ | Driver $d$'s request for the minimal trust level. |
| $V$ | The set of nodes in the social network model of the drivers and passengers in the system. |
| $v_i$ | The index of a node in $V$. |
| $E$ | The set of edges in the social network model of the drivers and passengers in the system. |
| $e_{ij}$ | A directed edge connecting node $v_i$ to node $v_j$, where $v_i, v_j \in V$ and $e_{ij} \in E$. |
| $\Theta$ | A $|V| \times |V|$ trust level matrix representing the trust level $\Theta_{ij}$ for all pairs of nodes $v_i, v_j \in V$; $\Theta_{ij}$ denotes the level to which $v_i$ trusts $v_j$. |
| $S(V, E, \Theta)$ | A graph-based social network model defined by the set of nodes of drivers and passengers, $V$, the set of edges connecting nodes, $E$, and the trust level matrix, $\Theta$. |
| $R_p$ | The request issued by passenger $p$, where $R_p = (Lo_p, Le_p, \omega_p^e, \omega_p^l, n_p, \Lambda_p)$ includes passenger $p$'s origin $Lo_p$, destination $Le_p$, earliest departure time $\omega_p^e$, latest arrival time $\omega_p^l$, the number of seats requested $n_p$ and the requested minimal trust level $\Lambda_p$. |
| $R_d$ | The request of driver $d$, where $R_d = (Lo_d, Le_d, \omega_d^e, \omega_d^l, a_d, \bar{\tau}_d, \Gamma_d)$ includes driver $d$'s origin $Lo_d$, destination $Le_d$, earliest departure time $\omega_d^e$, latest arrival time $\omega_d^l$, the number of available seats $a_d$, the maximum detour ratio $\bar{\tau}_d$ and the requested minimal trust level $\Gamma_d$. |
| $D\_BID_{dj}$ | The $j$-th bid of driver $d$, where $D\_BID_{dj} = (q_{dj1}^1, q_{dj2}^1, \dots, q_{djk}^1, \dots, q_{djK}^1, q_{dj1}^2, q_{dj2}^2, \dots, q_{djk}^2, \dots, q_{djK}^2, \pi_{dj}, o_{dj}, c_{dj}, a_d, \Gamma_d)$; $q_{djk}^1$ is the number of seats to pick up passengers at location $k$, $q_{djk}^2$ is the number of seats released after dropping passengers at location $P + k$, $o_{dj}$ is the original transport cost of driver $d$ without ridesharing, $c_{dj}$ is the cost for driver $d$ to transport the passengers in the bid, $a_d$ is the total number of seats and $\Gamma_d$ is the requested minimal trust level of the driver. |
| $P\_BID_p$ | The bid of passenger $p$, where $P\_BID_p = (s_{p1}^1, s_{p2}^1, s_{p3}^1, \dots, s_{pK}^1, s_{p1}^2, s_{p2}^2, s_{p3}^2, \dots, s_{pK}^2, f_p, \Lambda_p)$; $s_{pk}^1$ is the number of seats requested to pick up passengers at location $k$, $s_{pk}^2$ is the number of seats released after dropping passengers at location $P + k$, $f_p$ is the bid price and $\Lambda_p$ is the requested minimal trust level of the passenger. |
| $x_{dj}$ | A binary decision variable; $x_{dj} = 1$ if the $j$-th bid of driver $d$ is a winning bid and $x_{dj} = 0$ otherwise. |
| $y_p$ | A binary decision variable; $y_p = 1$ if the bid of passenger $p$ is a winning bid and $y_p = 0$ otherwise. |
| $F(x, y)$ | Overall cost savings, $F(x,y) = \sum_{p=1}^{P} y_p f_p + \sum_{d=1}^{D} \sum_{j=1}^{J_d} x_{dj} o_{dj} - \sum_{d=1}^{D} \sum_{j=1}^{J_d} x_{dj} c_{dj}$. |
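As an illustration of the objective defined in Table 1, the following minimal Python sketch computes $F(x, y)$ from the winning-bid decision variables; the list-based data layout is an assumption for illustration, not the paper's implementation.

```python
# A minimal sketch (not the paper's code) of the objective F(x, y) in
# Table 1, assuming plain Python lists: y[p] and x[d][j] are the 0/1
# decision variables, f[p] the passengers' bid prices, and o[d][j],
# c[d][j] the original and ridesharing transport costs of driver d's
# j-th bid.

def overall_cost_savings(x, y, f, o, c):
    """F(x, y) = sum_p y_p f_p + sum_{d,j} x_dj (o_dj - c_dj)."""
    savings = sum(y[p] * f[p] for p in range(len(y)))
    for d in range(len(x)):
        for j in range(len(x[d])):
            savings += x[d][j] * (o[d][j] - c[d][j])
    return savings
```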
Table 2. Four methods for mutation operations in DE.

| Mutation Strategy $s$ | Mutation Function |
|---|---|
| 1 | $\mu_1 = v_{(t+1)i}^n = z_{tr_1}^n + F_i (z_{tr_2}^n - z_{tr_3}^n)$ (11) |
| 2 | $\mu_2 = v_{(t+1)i}^n = z_{tb}^n + F_i (z_{tr_2}^n - z_{tr_3}^n)$ (12) |
| 3 | $\mu_3 = v_{(t+1)i}^n = z_{tr_1}^n + F_i (z_{tr_2}^n - z_{tr_3}^n) + F_i (z_{tr_4}^n - z_{tr_5}^n)$ (13) |
| 4 | $\mu_4 = v_{(t+1)i}^n = z_{tb}^n + F_i (z_{tr_1}^n - z_{tr_2}^n) + F_i (z_{tr_3}^n - z_{tr_4}^n)$ (14) |
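One way to implement the four mutation functions (11)–(14), which match the classical DE/rand/1, DE/best/1, DE/rand/2 and DE/best/2 forms, is sketched below in Python; the NumPy data layout and the helper's name are illustrative assumptions.

```python
import numpy as np

# A sketch of the mutation functions (11)-(14) in Table 2. Z is assumed
# to be an NP-by-N population matrix, b the index of the current best
# individual, F_i the scaling factor, and r1..r5 mutually distinct
# random indices different from i (so NP >= 6 is assumed).

def mutate(Z, b, i, F_i, strategy, rng):
    r1, r2, r3, r4, r5 = rng.choice(
        [k for k in range(len(Z)) if k != i], size=5, replace=False)
    if strategy == 1:   # Equation (11), the DE/rand/1 form
        return Z[r1] + F_i * (Z[r2] - Z[r3])
    if strategy == 2:   # Equation (12), the DE/best/1 form
        return Z[b] + F_i * (Z[r2] - Z[r3])
    if strategy == 3:   # Equation (13), the DE/rand/2 form
        return Z[r1] + F_i * (Z[r2] - Z[r3]) + F_i * (Z[r4] - Z[r5])
    # strategy == 4: Equation (14), the DE/best/2 form
    return Z[b] + F_i * (Z[r1] - Z[r2]) + F_i * (Z[r3] - Z[r4])
```

For example, `mutate(Z, b, i, 0.5, 1, np.random.default_rng())` would produce a mutant vector for individual $i$ under strategy 1.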
Table 3. The pseudo-code of function $BinaryTransform$.

Function $BinaryTransform$
   Input: $u$
   Output: $\bar{u}$
   Step 1: Calculate $s(v) = \frac{1}{1 + \exp(-v)}$, where $v = \begin{cases} V_{\max} & \text{if } u > V_{\max} \\ u & \text{if } -V_{\max} \le u \le V_{\max} \\ -V_{\max} & \text{if } u < -V_{\max} \end{cases}$
   Step 2: Calculate $\bar{u} = \begin{cases} 1 & \text{if } r_{sid} < s(v) \\ 0 & \text{otherwise} \end{cases}$, where $r_{sid}$ is a random variable with uniform distribution $U(0,1)$
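A direct Python rendering of Table 3 might look as follows; the function and variable names are illustrative, and the sigmoid $s(v) = 1/(1 + \exp(-v))$ follows the reconstruction above.

```python
import math
import random

# A sketch of BinaryTransform from Table 3; names are illustrative.
# v_max is the clamping bound used in Step 1.

def binary_transform(u, v_max):
    """Clamp u to [-v_max, v_max], squash it through the sigmoid s(v)
    and sample a bit: 1 with probability s(v), 0 otherwise."""
    v = max(-v_max, min(v_max, u))           # Step 1: clamp u
    s = 1.0 / (1.0 + math.exp(-v))           # Step 1: sigmoid s(v)
    return 1 if random.random() < s else 0   # Step 2: stochastic rounding
```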
Table 4. The pseudo-code of the SaNSDE algorithm with two mutation strategies $s_1$ and $s_2$ arbitrarily selected from a set ($S$) of strategies.

A Discrete Self-adaptive Neighborhood Search Differential Evolution (SaNSDE) Algorithm with Two Mutation Strategies $s_1$ and $s_2$
Step 1: Initialization of parameters for the performance assessment phase
   Set $L$, the number of rounds of the performance assessment phase
   Set $t$, the number of rounds that have been executed, to zero
   $CR_m = 0.5$
   $f_p = 0.5$
   Initialize a population with $NP$ individuals randomly
Step 2: Main loop of the performance assessment phase
   $t = 0$
   While $t < L$
Step 2.1:
      For $i = 1$ to $NP$
Step 2.1.1:
       Generate a random variable $r$ with uniform distribution $U(0,1)$
       If $r < f_p$
         $F_i = r_1$, where $r_1$ is a random variable generated based on Gaussian distribution $N(\mu, \sigma_1^2)$
       Else
         $F_i = r_2$, where $r_2$ is a random variable generated based on uniform distribution $U(0,1)$
       End If
Step 2.1.2:
       Randomly select a strategy $s_h$ from the set $\{s_1, s_2\}$, where $h \in \{1, 2\}$
       Calculate $v_{ti}$ according to strategy $s_h$
Step 2.1.3:
       Generate a random variable $cr_i$ with Gaussian distribution $N(CR_m, \sigma_2^2)$
       Create a trial vector $u_{ti}$
       For $n \in \{1, 2, \dots, N\}$
         $u_{ti}^n = \begin{cases} v_{ti}^n & \text{if } Rand(0,1) < cr_i \\ z_{ti}^n & \text{otherwise} \end{cases}$
         $\bar{u}_{ti}^n \leftarrow BinaryTransform(u_{ti}^n)$
       End For
Step 2.1.4: Select the trial vector if it outperforms individual $i$
        If $F_1(\bar{u}_{ti}) \ge F_1(z_{ti})$
           $z_{(t+1)i} = u_{ti}$
          Record $cr_i$ in $CR_{rec}$
         If $h = 1$
           $S_1 = S_1 + 1$
         Else
           $S_2 = S_2 + 1$
         End If
        Else
         If $h = 1$
           $F_1 = F_1 + 1$
         Else
           $F_2 = F_2 + 1$
         End If
        End If
      End For
      $t = t + 1$
   End While
Step 3: Initialization of parameters for the performance optimization phase
    Set $T$, the number of rounds of the performance optimization phase
    $f_p = \dfrac{S_1 (S_2 + F_2)}{S_2 (S_1 + F_1) + S_1 (S_2 + F_2)}$
    $CR_m = \dfrac{\sum_{k=1}^{|CR_{rec}|} CR_{rec}(k)}{|CR_{rec}|}$
Step 4: Main loop of the performance optimization phase
    $t = 0$
    While $t < T$
Step 4.1:
      For $i = 1$ to $NP$
Step 4.1.1:
        Generate a random variable $r$ with uniform distribution $U(0,1)$
        If $r < f_p$
           $F_i = r_1$, where $r_1$ is a random variable generated based on Gaussian distribution $N(\mu, \sigma_1^2)$
        Else
           $F_i = r_2$, where $r_2$ is a random variable generated based on uniform distribution $U(0,1)$
        End If
Step 4.1.2:
        Generate a random variable $rand_i$ with uniform distribution $U(0,1)$
        If $rand_i < f_p$
           $h = 1$
          Calculate $v_{ti}$ according to strategy $s_1$
        Else
           $h = 2$
          Calculate $v_{ti}$ according to strategy $s_2$
        End If
Step 4.1.3:
        Generate a random variable $cr_i$ with Gaussian distribution $N(CR_m, \sigma_2^2)$
        Create a trial vector $u_{ti}$
        For $n \in \{1, 2, \dots, N\}$
           $u_{ti}^n = \begin{cases} v_{ti}^n & \text{if } Rand(0,1) < cr_i \\ z_{ti}^n & \text{otherwise} \end{cases}$
           $\bar{u}_{ti}^n \leftarrow BinaryTransform(u_{ti}^n)$
        End For
Step 4.1.4: Select the trial vector if it outperforms individual $i$
      If $F_1(\bar{u}_{ti}) \ge F_1(z_{ti})$
       $z_{(t+1)i} = u_{ti}$
       Record $cr_i$ in $CR_{rec}$
      End If
      End For
        $t = t + 1$
End While
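The two-phase control flow of Table 4 can be condensed as in the following Python sketch. The callable `run_generation` is a placeholder for Steps 2.1.1–2.1.4 (mutation, crossover, BinaryTransform and selection), not the paper's implementation; the sketch only shows how the strategy-selection probability $f_p$ and the crossover-rate mean $CR_m$ are re-estimated between the two phases, and it assumes each strategy records at least one success during assessment so the denominators are nonzero.

```python
import random

# A condensed sketch of Table 4's two-phase scheme. run_generation is a
# user-supplied placeholder for Steps 2.1.1-2.1.4; it must update the
# success counters S and failure counters F of the two strategies and
# append every accepted crossover rate to cr_rec.

def sansde_two_phase(run_generation, L, T):
    S, F, cr_rec = [0, 0], [0, 0], []
    f_p, cr_m = 0.5, 0.5

    # Performance assessment phase: s1 or s2 uniformly at random (Step 2).
    for _ in range(L):
        run_generation(lambda: random.randrange(2), f_p, cr_m, S, F, cr_rec)

    # Step 3: success-rate-based probability of choosing s1, and the mean
    # of the recorded crossover rates.
    f_p = (S[0] * (S[1] + F[1])) / (
        S[1] * (S[0] + F[0]) + S[0] * (S[1] + F[1]))
    cr_m = sum(cr_rec) / len(cr_rec)

    # Performance optimization phase: s1 with fixed probability f_p (Step 4).
    for _ in range(T):
        run_generation(lambda: 0 if random.random() < f_p else 1,
                       f_p, cr_m, S, F, cr_rec)
```

Note that, as discussed in the future-work paragraph above, $f_p$ stays fixed throughout the optimization phase once it has been estimated.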
Table 5. Combination (1,2). Average fitness values and average number of generations for SaNSDE with combined strategy (1,2). Entries are shown as average fitness value/average number of generations.

| Test Case | Participant (D/P) | SaNSDE(1,2) | DE1 | DE2 |
|---|---|---|---|---|
| 1 | 4/10 | 18.305/2.6 | 18.305/14.3 | 18.305/20.9 |
| 2 | 5/11 | 23.518/7.2 | 23.518/54.6 | 21.1662/129.1 |
| 3 | 5/12 | 24.79/8 | 24.79/63.3 | 24.79/334.9 |
| 4 | 6/12 | 36.58/9.3 | 36.58/167.6 | 35.4095/419.5 |
| 5 | 7/13 | 30.063/12.1 | 29.2442/968.5 | 25.482/73.4 |
| 6 | 8/14 | 62.18/7.2 | 62.18/48.7 | 62.18/42.5 |
| 7 | 9/15 | 79.64/145.5 | 75.1808/860 | 70.481/165.5 |
| 8 | 20/20 | 23.644/2424.28 | 23.644/475 | 15.4555/9261.1 |
| 9 | 30/30 | 136.5409/7295.6 | 126.2855/26,507.9 | 81.1175/211.6 |
| 10 | 40/40 | 140.6477/14,509.8 | 120.0865/7556.1 | 79.7772/21,756.2 |
Table 6. Combination (1,3). Average fitness values and average number of generations for SaNSDE with combined strategy (1,3). Entries are shown as average fitness value/average number of generations.

| Test Case | Participant (D/P) | SaNSDE(1,3) | DE1 | DE3 |
|---|---|---|---|---|
| 1 | 4/10 | 18.305/2.3 | 18.305/14.3 | 18.305/19.2 |
| 2 | 5/11 | 23.518/10.6 | 23.518/54.6 | 22.7885/514.3 |
| 3 | 5/12 | 24.79/8.7 | 24.79/63.3 | 24.3962/69.6 |
| 4 | 6/12 | 36.58/21.1 | 36.58/167.6 | 36.58/214.2 |
| 5 | 7/13 | 30.063/19.6 | 29.2442/968.5 | 29.0007/181 |
| 6 | 8/14 | 62.18/8.6 | 62.18/48.7 | 62.18/51.8 |
| 7 | 9/15 | 79.64/95.7 | 75.1808/860 | 78.1356/932.7 |
| 8 | 20/20 | 23.644/1628.5 | 23.644/475 | 23.644/1098.6 |
| 9 | 30/30 | 139.1137/25,838 | 126.2855/26,507.9 | 133.3518/20,689.3 |
| 10 | 40/40 | 145.0836/18,991.4 | 120.0865/7556.1 | 122.8838/22,660.2 |
Table 7. Combination (1,4). Average fitness values and average number of generations for SaNSDE with combined strategy (1,4). Entries are shown as average fitness value/average number of generations.

| Test Case | Participant (D/P) | SaNSDE(1,4) | DE1 | DE4 |
|---|---|---|---|---|
| 1 | 4/10 | 18.305/3.4 | 18.305/14.3 | 18.305/35.9 |
| 2 | 5/11 | 23.518/5.6 | 23.518/54.6 | 23.518/577.1 |
| 3 | 5/12 | 24.79/6.3 | 24.79/63.3 | 24.3962/171.8 |
| 4 | 6/12 | 36.58/8.2 | 36.58/167.6 | 35.9755/212.9 |
| 5 | 7/13 | 30.063/10.3 | 29.2442/968.5 | 26.2379/139.1 |
| 6 | 8/14 | 62.18/7 | 62.18/48.7 | 60.7055/1056.4 |
| 7 | 9/15 | 79.64/47.6 | 75.1808/860 | 76.2821/1139.3 |
| 8 | 20/20 | 23.644/1101.2 | 23.644/475 | 16.1718/5027 |
| 9 | 30/30 | 137.1057/21,353.7 | 126.2855/26,507.9 | 108.7805/19,178 |
| 10 | 40/40 | 146.8872/21,980.7 | 120.0865/7556.1 | 84.8821/18,684.8 |
Table 8. Combination (2,3). Average fitness values and average number of generations for SaNSDE with combined strategy (2,3). Entries are shown as average fitness value/average number of generations.

| Test Case | Participant (D/P) | SaNSDE(2,3) | DE2 | DE3 |
|---|---|---|---|---|
| 1 | 4/10 | 18.305/2.7 | 18.305/20.9 | 18.305/19.2 |
| 2 | 5/11 | 23.518/6.1 | 21.1662/129.1 | 22.7885/514.3 |
| 3 | 5/12 | 24.79/8 | 24.79/334.9 | 24.3962/69.6 |
| 4 | 6/12 | 36.58/7.8 | 35.4095/419.5 | 36.58/214.2 |
| 5 | 7/13 | 30.063/9.8 | 25.482/73.4 | 29.0007/181 |
| 6 | 8/14 | 62.18/7.5 | 62.18/42.5 | 62.18/51.8 |
| 7 | 9/15 | 79.64/25 | 70.481/165.5 | 78.1356/932.7 |
| 8 | 20/20 | 23.644/5433.6 | 15.4555/9261.1 | 23.644/1098.6 |
| 9 | 30/30 | 139.4738/24,507.1 | 81.1175/211.6 | 133.3518/20,689.3 |
| 10 | 40/40 | 148.8646/26,449.7 | 79.7772/21,756.2 | 122.8838/22,660.2 |
Table 9. Combination (2,4). Average fitness values and average number of generations for SaNSDE with combined strategy (2,4). Entries are shown as average fitness value/average number of generations.

| Test Case | Participant (D/P) | SaNSDE(2,4) | DE2 | DE4 |
|---|---|---|---|---|
| 1 | 4/10 | 18.305/1.9 | 18.305/20.9 | 18.305/35.9 |
| 2 | 5/11 | 23.518/5 | 21.1662/129.1 | 23.518/577.1 |
| 3 | 5/12 | 24.79/7.7 | 24.79/334.9 | 24.3962/171.8 |
| 4 | 6/12 | 36.58/10.4 | 35.4095/419.5 | 35.9755/212.9 |
| 5 | 7/13 | 30.063/7.6 | 25.482/73.4 | 26.2379/139.1 |
| 6 | 8/14 | 62.18/6.5 | 62.18/42.5 | 60.7055/1056.4 |
| 7 | 9/15 | 79.64/172.9 | 70.481/165.5 | 76.2821/1139.3 |
| 8 | 20/20 | 23.3851/2649.7 | 15.4555/9261.1 | 16.1718/5027 |
| 9 | 30/30 | 134.7833/13,317.1 | 81.1175/211.6 | 108.7805/19,178 |
| 10 | 40/40 | 137.7995/29,949.8 | 79.7772/21,756.2 | 84.8821/18,684.8 |
Table 10. Combination (3,4). Average fitness values and average number of generations for SaNSDE with combined strategy (3,4). Entries are shown as average fitness value/average number of generations.

| Test Case | Participant (D/P) | SaNSDE(3,4) | DE3 | DE4 |
|---|---|---|---|---|
| 1 | 4/10 | 18.305/2.7 | 18.305/19.2 | 18.305/35.9 |
| 2 | 5/11 | 23.518/5.3 | 22.7885/514.3 | 23.518/577.1 |
| 3 | 5/12 | 24.79/8 | 24.3962/69.6 | 24.3962/171.8 |
| 4 | 6/12 | 36.58/12.2 | 36.58/214.2 | 35.9755/212.9 |
| 5 | 7/13 | 30.063/11.2 | 29.0007/181 | 26.2379/139.1 |
| 6 | 8/14 | 62.18/8.8 | 62.18/51.8 | 60.7055/1056.4 |
| 7 | 9/15 | 79.64/79.9 | 78.1356/932.7 | 76.2821/1139.3 |
| 8 | 20/20 | 23.644/1428.4 | 23.644/1098.6 | 16.1718/5027 |
| 9 | 30/30 | 133.6144/13,742.6 | 133.3518/20,689.3 | 108.7805/19,178 |
| 10 | 40/40 | 146.1111/18,187.1 | 122.8838/22,660.2 | 84.8821/18,684.8 |
Table 11. Average fitness values and average number of generations for SaNSDE with the two-strategy combinations in Ω = {(1,2), (1,3), (1,4), (2,3), (2,4), (3,4)}. Entries are shown as average fitness value/average number of generations.

| Test Case | Participant (D/P) | SaNSDE(1,2) | SaNSDE(1,3) | SaNSDE(1,4) | SaNSDE(2,3) | SaNSDE(2,4) | SaNSDE(3,4) |
|---|---|---|---|---|---|---|---|
| 1 | 4/10 | 18.305/2.6 | 18.305/2.3 | 18.305/3.4 | 18.305/2.7 | 18.305/1.9 | 18.305/2.7 |
| 2 | 5/11 | 23.518/7.2 | 23.518/10.6 | 23.518/5.6 | 23.518/6.1 | 23.518/5 | 23.518/5.3 |
| 3 | 5/12 | 24.79/8 | 24.79/8.7 | 24.79/6.3 | 24.79/8 | 24.79/7.7 | 24.79/8 |
| 4 | 6/12 | 36.58/9.3 | 36.58/21.1 | 36.58/8.2 | 36.58/7.8 | 36.58/10.4 | 36.58/12.2 |
| 5 | 7/13 | 30.063/12.1 | 30.063/19.6 | 30.063/10.3 | 30.063/9.8 | 30.063/7.6 | 30.063/11.2 |
| 6 | 8/14 | 62.18/7.2 | 62.18/8.6 | 62.18/7 | 62.18/7.5 | 62.18/6.5 | 62.18/8.8 |
| 7 | 9/15 | 79.64/145.5 | 79.64/95.7 | 79.64/47.6 | 79.64/25 | 79.64/172.9 | 79.64/79.9 |
| 8 | 20/20 | 23.644/2424.28 | 23.644/1628.5 | 23.644/1101.2 | 23.644/5433.6 | 23.3851/2649.7 | 23.644/1428.4 |
| 9 | 30/30 | 136.5409/7295.6 | 139.1137/25,838 | 137.1057/21,353.7 | 139.4738/24,507.1 | 134.7833/13,317.1 | 133.6144/13,742.6 |
| 10 | 40/40 | 140.6477/14,509.8 | 145.0836/18,991.4 | 146.8872/21,980.7 | 148.8646/26,449.7 | 137.7995/29,949.8 | 146.1111/18,187.1 |