1. Introduction
Over the years, the world has experienced several industrial revolutions: from the medieval era of handcraft through simple mechanization (first revolution) around the 1780s, to the technical revolution (second revolution), which saw many mechanical, chemical and electrical inventions. In the 20th century, digitalization (third revolution), driven by the growth of system automation as well as microelectronics and computer development, preceded the fourth industrial revolution (Industry 4.0), characterized by information system autonomy enabled through the interconnection of systems. These revolutions relied on the technology of their time and increased job creation as well as provided better working conditions, in agreement with Maslow's hierarchy of needs [1] (Figure 1).
However, a key drawback of Industry 4.0 and the revolutions preceding it is increasing production wastage and reduced collaboration between humans and smart systems. Sustainable development seeks to meet the needs of the present without compromising the ability of future generations to meet their own needs through effective human–machine interaction (Industry 5.0) [2]. Reliability evaluations of eco-friendly products, such as electric vehicles or solar system inverters, often require extensive testing to ensure end-user satisfaction. An accelerated life test (ALT) plan is typically used to optimize the number of test samples and the test conditions during product reliability evaluation, and ALT is becoming a mandatory requirement for product reliability estimation [3]. The driving force for ALT adoption is the ability to use data or life models derived from higher stress levels to predict life metrics at normal operating conditions [4]. Traditional ALT based on a single stress factor is incapable of simulating the actual stress environments experienced by many products, where multiple stresses coexist. Hence, numerous ALT-based studies use multi-stress life models to capture these multiple stresses [5,6,7]. Notably, tests like the highly accelerated life test (HALT) and highly accelerated stress test (HAST) have been used to create aggressive stress environments, but their life modeling has remained elusive. Accelerated multi-stress life models can be broadly classified into three types: the proportional hazards model (PHM), the polynomial acceleration model (PAM) and the generalized linear logarithmic acceleration model (GLGAM).
The PHM uses a time-dependent baseline failure rate function together with a time-independent positive function to model the failure rate of a product; Elsayed and Zhang [8] implemented a proportional hazards model in their ALT study. PAM is often used when a linear relationship between the transformed stresses and the logarithm of the lifetime characteristic is lacking [9]. In GLGAM, the lifetime characteristic of a product is taken as a function of its stress factors; it is the most widely used of the three models. In [10], GLGAM was used to evaluate the reliable lifetime of a smart electricity meter. Similarly, GLGAM was used for the optimal design of step-stress accelerated degradation tests with multiple stresses [11,12].
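The GLGAM idea can be illustrated with a minimal sketch, assuming a generic log-linear life–stress form with an optional interaction term between two transformed stresses; the function name and all coefficient and stress values below are hypothetical and are not taken from the cited studies.

```python
import math

def glgam_life(stresses, coeffs, interaction=0.0):
    """Generalized log-linear life-stress sketch:
    ln(L) = b0 + b1*s1 + b2*s2 + b12*s1*s2, where s1, s2 are
    transformed stresses (e.g. 1/T in kelvin, ln of relative humidity)."""
    b0, b1, b2 = coeffs
    s1, s2 = stresses
    log_life = b0 + b1 * s1 + b2 * s2 + interaction * s1 * s2
    return math.exp(log_life)

# Hypothetical coefficients: compare life at a use condition
# (25 C, 50% RH) with life at an accelerated condition (85 C, 85% RH).
coeffs = (-5.0, 4000.0, -1.5)
use = glgam_life((1 / 298.0, math.log(0.50)), coeffs)
acc = glgam_life((1 / 358.0, math.log(0.85)), coeffs)
acceleration_factor = use / acc  # > 1: life is shorter under stress
```

Setting `interaction` to a nonzero value couples the two stresses, which is exactly the modeling gap this paper addresses.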
The standard way of developing an ALT plan is to formulate the likelihood function and then derive the Fisher information matrix (FIM) for test planning. The derived FIM is then used with an efficient optimization criterion such as D-optimal design [4]. Other criteria include minimization of the asymptotic variance of the expected product lifetime at the product's use condition; minimization of the average prediction variance over the design space (I-optimality); minimization of the maximum entry in the diagonal of the hat matrix (G-optimality); minimization of the trace of the inverse FIM (A-optimality); and minimization of the average prediction variance over a set of m specific points (V-optimality) [13]. The choice of criterion depends on the optimization goal. When a precise estimate of the model parameters is required, D-optimality is preferred: its objective is to maximize the determinant of the FIM. The larger the determinant, the lower the variance and hence the higher the joint precision of the estimated parameters.
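The D-optimality idea can be made concrete with a toy comparison, assuming a simple linear life–stress model with unit error variance; the two candidate plans and their stress levels are illustrative numbers, not values from this study.

```python
import numpy as np

def fim_linear(stress_levels, allocations):
    """Fisher information for a toy model E[log life] = b0 + b1*s
    with unit error variance: F = sum_i n_i * x_i x_i^T, x_i = (1, s_i)."""
    F = np.zeros((2, 2))
    for s, n in zip(stress_levels, allocations):
        x = np.array([1.0, s])
        F += n * np.outer(x, x)
    return F

# Two candidate two-point plans, 100 test units each (illustrative).
plan_a = fim_linear([0.20, 0.80], [50, 50])  # widely spread stress levels
plan_b = fim_linear([0.45, 0.55], [50, 50])  # narrowly spread levels
det_a, det_b = np.linalg.det(plan_a), np.linalg.det(plan_b)
# D-optimality selects the plan with the larger determinant (plan_a),
# i.e. spreading the stress levels sharpens the parameter estimates.
```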
The literature on optimal ALT designs is vast, and a comprehensive review of ALT planning can be found in [14]. Some notable works in this area are concisely described here. In [15], maximum likelihood theory for designing optimal ALT plans was presented under the assumption that product lifetime follows a Weibull or smallest extreme value distribution. In [16], ALT with a Bayesian design criterion was implemented using a sequential design approach: tests were first run at a higher stress condition based on a single stress factor, and a test plan was then sequentially determined at a lower stress condition with an additional stress factor. In [17], a three-iterative-step optimization algorithm based on a quasi-likelihood approach was used to develop a D-optimal ALT plan that accounts for test chamber effects. Weaver et al. used a random-effects model to evaluate test plans for degradation studies with unit-to-unit variability [18]. In [19,20], optimal progressive censoring plans were determined from the expected FIM; the asymptotic variance–covariance matrix of the maximum likelihood estimates was computed directly from a progressively type-II censored sample based on the Weibull distribution. Tse, Ding and Yang [19] provided optimal ALT designs under interval censoring with random removals. Other studies, such as [21,22], implemented ALT plans based on s-independent competing risks.
A careful evaluation of the ALT studies described above shows that little effort has been channeled towards the effect of stress interactions on ALT plan optimization. Furthermore, many of the studies are based on assumed model parameters. For instance, in [8], model parameters that do not represent the stress and test conditions were used to generate ALT plans. Similarly, in [6,17], model parameters were used without consideration of the interaction that exists between stress factors. Although an assumed predictor life model capturing the interaction of two stress factors was used in [13], it does not appear to represent the true interaction model for the example considered in that work. Artificial intelligence techniques, such as genetic algorithms and particle swarm optimization (PSO) algorithms, popularly used for model parameter estimation as reported in [4,23,24], could facilitate fast and accurate determination of an ALT plan. Furthermore, a test plan generated without consideration of the stress factor interactions or coupling common to multi-stress environments is unlikely to be optimal when stress interactions exist. To address this gap, in this paper, accelerated life test models with and without interaction among stress factors are developed and used to optimize an ALT plan through a two-stage implementation of the particle swarm optimization algorithm. A case study, involving an experiment designed to understand electromigration failure mechanisms in solder joints, was used to validate the study. Since the optimization goal of this paper is a precise estimate of the model parameters as well as optimal stress levels for each factor, the D-optimality criterion was adopted. The rest of this paper is organized as follows. Section 2 provides a brief description of PSO. Section 3 details the study methodology. Section 4 describes the results obtained and the inferences drawn from them. Section 5 concludes the study.
2. Particle Swarm Optimization
PSO, initially introduced by Kennedy and Eberhart [25] in 1995, was inspired by the social behavior of flocking birds. In nature, a swarm of birds flies through a space following a leader whose position is closest to food. PSO has attracted researchers mainly because of its simplicity, ease of implementation and small number of control parameters.
In PSO, a swarm of particles flies through a $D$-dimensional search space seeking an optimal solution. Each particle $i$ possesses a velocity vector $v_i = (v_{i1}, \ldots, v_{iD})$ and a position vector $x_i = (x_{i1}, \ldots, x_{iD})$, where $D$ is the number of dimensions. PSO starts by randomly initializing $v_i$ and $x_i$. Then, after every iteration, the best position found so far by particle $i$, $pbest_i = \{pbest_{i1}, \ldots, pbest_{iD}\}$, and the global best position found by the entire swarm, $gbest = \{gbest_1, \ldots, gbest_D\}$, direct particle $i$ to update its velocity and position using (1) and (2), respectively:

$v_{id}(t+1) = w\,v_{id}(t) + c_1 r_1 \left(pbest_{id} - x_{id}(t)\right) + c_2 r_2 \left(gbest_d - x_{id}(t)\right)$, (1)

$x_{id}(t+1) = x_{id}(t) + v_{id}(t+1)$, (2)

where $t$ is the iteration, $w$ is the inertia weight, $c_1$ and $c_2$ are the cognitive and social acceleration coefficients, and $r_1$ and $r_2$ are two uniform random values generated within the [0, 1] interval. Since its first introduction, several PSO variants have been studied for binary and continuous problems. In [28], an essential binary particle swarm optimization (EPSO) was proposed based on the idea of omitting the velocity component of PSO, so there is no need to limit the velocity; applying the queen-informant concept from ant colony optimization (ACO) yielded a modified form of EPSO denoted EPSOq. In [29], a continuous PSO called multi-swarm self-adaptive CPSO (MSCPSO) was proposed, in which the population is split into four sub-swarms that share information; cooperative, diversity and self-adaptive strategies prevent the algorithm from being stuck in local optima and yield better solutions. Hybrid PSOs, which combine the strengths of PSO and other meta-heuristic algorithms, have also been studied. In [30], gray wolf optimization and particle swarm optimization were combined to solve binary and continuous problems. In another study, PSO was combined with a genetic algorithm (GA) for field development optimization; the resulting hybrid, called genetical swarm optimization (GSO) [31], splits the population into two portions that are reconstructed by GA and PSO operations in every iteration.
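The canonical velocity and position updates in (1) and (2), combined with a star-topology global best, can be sketched in Python as follows; the function name, parameter defaults and test function are illustrative, not the settings used in this study.

```python
import random

def pso_minimize(f, dim, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Canonical PSO: velocity update (1) and position update (2)."""
    lo, hi = bounds
    x = [[random.uniform(lo, hi) for _ in range(dim)]
         for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]                 # personal best positions
    pbest_val = [f(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))   # eq. (1)
                x[i][d] = x[i][d] + v[i][d]                    # eq. (2)
            val = f(x[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = x[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = x[i][:], val
    return gbest, gbest_val

# Usage: minimize the sphere function; the optimum is 0 at the origin.
random.seed(42)
best, best_val = pso_minimize(lambda p: sum(t * t for t in p), dim=3)
```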
In [32], PSO topology (the particle connection or interaction pattern) was shown to influence its behavior and performance, with experimental results revealing that some topologies perform better than others. Common PSO topologies include the star topology, which allows particles to move towards the global optimum [25]; the ring topology, which allows particles to move towards a local optimum [32]; the Von Neumann topology, which uses a rectangular matrix to connect each particle to its adjacent particles [33]; and dynamic topologies [34]. Other topologies have also been studied: for instance, in [35], a unified topology combining the star and ring topologies was proposed, and cluster, pyramid and wheel topologies have been investigated. In [36], a comprehensive review of PSO with emphasis on the different topologies is presented. Notwithstanding PSO's numerous benefits, studies have shown that it suffers from premature and slow convergence. To address this problem, in [37], a multiple-scale self-adaptive cooperative mutation strategy-based particle swarm optimization algorithm was implemented, which uses multi-scale Gaussian mutations with different standard deviations to improve the capacity to search the whole solution space sufficiently. The memory concept has also been used to prevent premature convergence: in [38], memory was used to store promising historical values, which are later used to avoid premature convergence.
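The difference between the star and ring topologies reduces to which neighborhood a particle consults when picking its attractor; a minimal sketch, with hypothetical personal-best values:

```python
def star_best(pbest_vals):
    """Star topology: every particle sees the single global best."""
    return min(range(len(pbest_vals)), key=lambda j: pbest_vals[j])

def ring_best(pbest_vals, i):
    """Ring topology: particle i sees only itself and its two
    immediate neighbors (indices wrap around the ring)."""
    n = len(pbest_vals)
    neighborhood = [(i - 1) % n, i, (i + 1) % n]
    return min(neighborhood, key=lambda j: pbest_vals[j])

vals = [3.0, 1.0, 4.0, 0.5, 2.0]   # hypothetical personal-best fitnesses
g = star_best(vals)        # every particle is pulled towards index 3
l0 = ring_best(vals, 0)    # particle 0 only sees {4, 0, 1} -> index 1
```

The slower information flow in the ring keeps the swarm diverse longer, which is why ring-style topologies are often preferred on multimodal problems.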
PSO has been applied to solve many optimization problems. In [39], PSO was used to segment medical images to detect brain tumors. In [40], PSO was used for beamforming optimization in intelligent reflecting surfaces (IRSs) to minimize the transmission power subject to the signal-to-noise ratio (SNR) not falling below a given threshold. PSO has been widely applied to optimize the performance of electrical power systems, including economic dispatch [41], state estimation [42] and power system controllers [43]. Recently, Shuai [44] developed an improved particle swarm optimization algorithm with a specific particle initialization approach (PIPSO) to solve the global reliability allocation problem (gRRAP). Failure parameter estimation is essential in maintainability and reliability evaluation; in [23,24], PSO was used to estimate Weibull and maintainability parameters. Both studies used a static inertia weight. However, PSO requires extensive global search (exploration) in the early part of the process and a focus on local search (exploitation) in the latter part, a requirement a static weight does not meet. In this study, a time-varying inertia weight based on the model presented in [45] was used in a double PSO implementation: in the first stage, the model parameters are optimized, while in the second stage, a multi-stress ALT plan is obtained.
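A common time-varying scheme decreases the inertia weight linearly over the run; the sketch below illustrates the idea, with $w_{max} = 0.9$ and $w_{min} = 0.4$ as widely used illustrative bounds that are not necessarily the values of the model in [45].

```python
def inertia_weight(t, t_max, w_max=0.9, w_min=0.4):
    """Linearly decreasing inertia weight: a large w early in the run
    favors exploration; a small w late in the run favors exploitation."""
    return w_max - (w_max - w_min) * t / t_max

# Early iterations explore, late iterations exploit.
w_start = inertia_weight(0, 200)    # 0.9 at the first iteration
w_end = inertia_weight(200, 200)    # 0.4 at the last iteration
```

The returned value replaces the static $w$ in the velocity update (1) at each iteration $t$.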