Article

Neural Network-Based Approach for Failure and Life Prediction of Electronic Components under Accelerated Life Stress

1 School of Integrated Circuit Science and Engineering, University of Electronic Science and Technology of China (UESTC), Chengdu 611731, China
2 Institute of Guizhou Aerospace Measuring and Testing Technology, Guiyang 550009, China
3 State Key Laboratory of Electronic Thin Films and Integrated Devices, University of Electronic Science and Technology of China (UESTC), Chengdu 611731, China
4 Chongqing Institute of Microelectronics Industry Technology, University of Electronic Science and Technology of China (UESTC), Chongqing 401331, China
5 Shenzhen Institute for Advanced Study, University of Electronic Science and Technology of China (UESTC), Shenzhen 518110, China
* Author to whom correspondence should be addressed.
Electronics 2024, 13(8), 1512; https://doi.org/10.3390/electronics13081512
Submission received: 14 March 2024 / Revised: 10 April 2024 / Accepted: 12 April 2024 / Published: 16 April 2024

Abstract

Researchers worldwide have been focusing on accurately predicting the remaining useful life of electronic devices to ensure reliability in various industries. This has been made possible by advancements in artificial intelligence (AI), machine learning, and Internet of Things (IoT) technologies. However, accurately forecasting device life with minimal data sets, especially in industrial applications, remains a challenge. This paper aims to address this challenge by utilizing machine learning algorithms, specifically BP, XGBOOST, and KNN, to predict device reliability with limited data. The remaining life dataset of electronic components is obtained through simulation for training and testing the algorithms, and the experimental results show that the algorithms achieve a certain level of accuracy, with the error rates being as follows: BP algorithm, 0.01–0.02%; XGBOOST algorithm, 0.01–0.02%; and KNN algorithm, 0–0.07%. By benchmarking these algorithms, the study demonstrates the feasibility of deploying machine learning models for device life prediction with acceptable accuracy loss, and highlights the potential of AI algorithms in predicting the reliability of electronic devices.

1. Introduction

In recent years, electronic devices have become an indispensable factor in the development of modern technology [1]; their wide application has driven technological progress and innovation, and they are omnipresent in our lives. From smartphones [2], televisions [3], and computers to automobiles [4], aerospace technology [5], medical equipment [6], and industrial machinery, the reliability of electronic devices has always been an important issue in electronic engineering [7]. Among electronic devices, power electronic devices have numerous and crucial applications across various fields. Within power electronics, multilayer ceramic capacitors (MLCCs) are versatile components widely used in many electronic applications, but their adoption in power electronics remains limited compared with technologies such as aluminum electrolytic capacitors (AECs). The AEC, in particular, is widely used in power electronics because of its reliability and cost-effectiveness; however, AECs are also susceptible to issues such as aging, temperature variations, and voltage instabilities. Ensuring the reliability of power electronic devices is therefore crucial to keeping equipment operating normally and avoiding losses. Reliability issues involve the performance and lifespan of electronic devices under both normal operation and extreme conditions [8]. In the design, production, and use of electronic devices, many factors can lead to reliability problems, including physical performance, material quality, process technology, and environmental conditions [9]. Meanwhile, artificial intelligence (AI) technology has advanced rapidly in recent years, transforming numerous fields through its capabilities in automation, prediction, and decision-making, with applications ranging from healthcare and finance to agriculture and transportation. Recently, researchers have begun to explore applications of machine learning in engineering, one of which is predicting the lifetime of electronic devices [10]. However, many technical challenges remain in predicting the lifetime of electronic devices.
During operation, electronic devices are subject to various environmental stresses, such as temperature, electrical power, and humidity, which collectively influence the product's lifespan [11]. Accurately predicting the life cycle of devices makes it easier to address faults in systems, circuits, and equipment. However, predicting the lifespan of capacitors is challenging because lifespan data collection is labor-intensive, consumes significant time, and cannot yield large amounts of data [12]. Lifespan prediction for capacitors therefore faces challenges related to data quality and availability, feature selection, and feature engineering [13]. In recent years, the use of machine learning to predict the performance of electrical devices and materials has attracted increasing attention from researchers worldwide because of its advantages in prediction accuracy [14], time efficiency, and cost-effectiveness. Wang [15] developed two physics-guided machine learning frameworks that combine physics-based models and ML algorithms to improve lifespan prediction. Liu [16] developed a machine learning-based fatigue life prediction method for very-high-cycle fatigue (VHCF): 173 sets of VHCF experimental data for high-strength steels were collected to train the ML model, and sensitivity analysis showed that inclusion size and maximum stress were the parameters most strongly correlated with fatigue life, so they were selected as input features for the final model. The resulting ML model predicted S-N curves very close to the experimental ones, and among the three models considered (random forest, XGBoost, and gradient boosting), the gradient boosting model performed best and achieved the highest accuracy in predicting the VHCF life of high-strength steels. Zhang [17] proposed a new machine learning prediction method in which the training database contains ultra-high-cycle fatigue lives of different metallic materials obtained from fatigue tests, and two fatigue life prediction models were constructed based on gradient boosting and random forest algorithms. In contrast, deep learning models commonly suffer from the vanishing gradient problem, in which the gradients of the loss function become extremely small as they propagate back through the layers of a neural network during training; this can hinder convergence and result in slower training or even a complete failure to learn.
Compared with deep learning, traditional machine learning has the advantages of lower data requirements [18,19,20,21], lower computing resource consumption, and good generalization capability [22,23,24,25]. Among such methods, the BP (back propagation) algorithm [26,27], the KNN (K-nearest neighbor) algorithm [28], and the XGBOOST algorithm [29,30,31] are popular machine learning algorithms used for different purposes. First, the BP algorithm is an artificial neural network algorithm commonly used for supervised learning tasks such as classification and regression; it minimizes the overall error by backpropagating errors through the network and adjusting the weights of the connections between neurons [32]. Second, the KNN algorithm is an instance-based, non-parametric algorithm commonly used for classification and regression; it finds the k nearest neighbors of a given query point and predicts a label or value based on the majority vote or average of those neighbors. Finally, the XGBOOST algorithm is a gradient boosting algorithm that has performed well in various machine learning competitions [33]; it combines multiple weak predictive models (usually decision trees) into a strong predictive model by iteratively adding new models to minimize the overall error, and it is known for its excellent predictive performance and the flexibility offered by its numerous hyperparameters [34]. The advantages of using the BP neural network, XGBOOST, and KNN algorithms to predict equipment life are as follows: these algorithms can effectively extract features from input data [35], handle non-linear relationships, and capture complex patterns in data sets to achieve accurate life predictions [36]. In addition, they can handle both numeric and categorical variables, ensuring robustness and flexibility in the forecasting process [37,38].
In view of the problems that urgently need to be solved in electronic component failure and life prediction, this paper studies the failure and life prediction of electronic components under complex environmental factors. The focus of the research is to use three machine learning models, BP, KNN, and XGBOOST, to predict the life of electronic components, and to conduct ablation experiments on the three models. Specifically, the lifespan of CAK45A-series solid tantalum capacitors is predicted (a regression problem). The experimental results show that the error between the predicted and simulated values was controlled within 0.01–0.02% for the BP model, within 0.01–0.02% for the XGBOOST model, and within 0.01–0.07% for the KNN model. Compared with the KNN model, BP and XGBOOST therefore predict the life of these electronic components more accurately. In short, by predicting the life of electronic components, they can be maintained or replaced in a timely manner to avoid losses caused by capacitor failure; by predicting the life of a capacitor, appropriate measures can be taken before the end of its life to extend the reliability and service life of the equipment.

2. Design and Analysis

In this experiment, we studied the typical failure mechanisms of capacitive electronic components and analyzed and compared the principles and applicable conditions of three techniques: the BP multi-layer feedforward neural network, the KNN algorithm, and the XGBOOST algorithm. Based on the simulation errors, we determined which network prediction model to use as the life prediction model. For the simulation of the electronic components, we tried two software tools, Comsol 6.0 and Icepak, and ultimately chose Icepak. Comsol is an excellent multiphysics simulation package with versatile capabilities; however, ANSYS 2022 Icepak provides professional multi-physics coupling, accounting simultaneously for heat conduction, convection, radiation, and heat transfer media. This allows engineers to conduct comprehensive thermal analyses of complex systems and to predict the temperature distribution and heat dissipation performance of devices more accurately. We then tested the actual lifespan of the electronic components and saved the results of each test to create a self-constructed dataset. Finally, the study used this self-built dataset as training data to construct the three corresponding machine learning prediction models and make predictions. The methods used in the study and the construction process of the dataset are shown in Figure 1.
Common failure modes of capacitors include breakdown, open circuit, changes in electrical parameters (such as excessive capacitance, increased dissipation factor, decreased insulation performance, or increased leakage current), leakage, corrupted or broken leads, and cracked or arced insulation. Capacitor failure can occur for various reasons, including differences in materials, structures, manufacturing processes, performance, and operating environments. In this study, we focus on changes in electrical parameters, specifically the decrease in capacitance and the increase in dissipation and leakage current.
In the case of electrolytic capacitors, the capacitance decreases slowly in the early stages of operation; this is attributed to the continuous repair and thickening of the anodic oxide film by the working electrolyte under load. In the later stages of use, however, electrolytic capacitors experience a significant increase in dissipation due to the depletion of the electrolyte, which thickens the solution. The increased viscosity raises the equivalent series resistance of the working electrolyte, causing noticeable capacitor loss. Additionally, the high viscosity of the electrolyte makes it difficult for the oxide film layer to fully contact the uneven surfaces left by the corrosion treatment, resulting in a decrease in effective plate area and a sharp drop in capacitance. These changes indicate that the capacitor is approaching the end of its service life. Furthermore, excessive viscosity of the working electrolyte at low temperatures can also lead to increased loss and a rapid decrease in capacitance. Figure 2 shows the main problems that currently affect the life of electronic components.
Factors such as low manufacturing process levels, inadequate formation of the oxide film, outdated slicing processes, significant damage to and contamination of the oxide film, poor formulation of the working electrolyte, low raw material purity, and difficulty in the long-term stability of the electrolyte’s chemical and electrochemical properties can all contribute to excessive leakage current and eventual failure. The severe contamination of chloride ions in electrolytic capacitors can cause decomposition of the oxide film, leading to perforation and further increasing the leakage current. Additionally, a high impurity content facilitates current conduction. The presence of copper and silicon impurities affects the transformation of aluminum oxide to a crystalline structure. In summary, metal impurities can increase leakage current in electrolytic capacitors, reducing their lifespan.
This study used Icepak simulation to predict the life of the components and compared it with the life of the actual tested components; the error between the simulated results and the real measurements was no more than 5%. Figure 3 shows the problems that occurred in the electronic components after the actual test. To examine whether there were any anomalies in the sample tantalum core, the tantalum capacitor core was solid-sealed and ground, and a metallographic section was made for observation. The metallographic section is shown in Figure 3a. Upon examination, a breakdown point was identified at a single corner of the specimen slice, as depicted in the right part of Figure 3a. This breakdown point is situated on the surface of the tantalum core corner, with no notable irregularities present in other regions of the sample. For another failed tantalum capacitor device, opening the sample revealed the internal overall appearance, with local discoloration and a small pinhole observed through scanning electron microscopy, as shown in Figure 3b. After magnifying the discolored areas, obvious surface cracks can be observed, as shown in Figure 3c.

3. Results

This study utilized three models for predicting the lifespan of electronic components: the BP (back propagation) model, the XGBoost model, and the KNN (K-nearest neighbors) model. The sensitive parameter data, life data, and failure data were input into the prediction models for training. The sensitive parameters included the capacitance value, loss angle, and leakage current. The sensitive variables input into the models consisted of raw data collected from a self-constructed dataset, and the dependent variable was the predicted life expectancy. The dataset was divided into training and testing sets in an 8:2 ratio for regression prediction of electronic component lifespan. PyCharm 2022.2.4 was employed as the development environment.
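As a minimal sketch of this data preparation step (the self-constructed dataset is not public, so the file name and column names below are hypothetical), the 8:2 split can be reproduced with scikit-learn:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical file and column names; the paper's self-built CAK45A dataset is not public.
data = pd.read_csv("cak45a_lifetime.csv")
X = data[["capacitance", "loss_angle", "leakage_current"]]  # sensitive parameters
y = data["lifetime"]                                         # target: remaining life

# 50 samples split 8:2 -> 40 for training, 10 for testing, as described in the paper.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
```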
The first model is a nonlinear regression model based on the BP neural network (Figure 4a). The network has three layers: an input (first dense) layer with 20 neurons and a ReLU activation function, a hidden layer with 10 neurons and a ReLU activation function, and an output layer with 1 neuron representing the predicted value. The optimizer is RMSprop with a learning rate of 0.001. RMSprop can be computationally more efficient than Adam because it does not maintain additional momentum terms, which matters when computational resources are limited, and in some cases it converges faster than SGD; RMSprop was therefore chosen over Adam and SGD because it fit the characteristics of the dataset and performed better in our experiments. The loss function of the model was the mean squared error (MSE), and the evaluation metrics were the MSE and the mean absolute error (MAE).
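A minimal sketch of this architecture in Keras is given below; the framework choice (TensorFlow/Keras) and the three-feature input shape are assumptions, since the paper does not name the library used:

```python
from tensorflow import keras
from tensorflow.keras import layers

# Assumed input: three features (capacitance, loss angle, leakage current).
bp_model = keras.Sequential([
    layers.Dense(20, activation="relu", input_shape=(3,)),  # 20-neuron first layer, ReLU
    layers.Dense(10, activation="relu"),                    # 10-neuron hidden layer, ReLU
    layers.Dense(1),                                        # single predicted lifetime value
])

bp_model.compile(
    optimizer=keras.optimizers.RMSprop(learning_rate=0.001),  # RMSprop, lr = 0.001
    loss="mse",                                               # mean squared error loss
    metrics=["mae", "mse"],                                   # metrics tracked during training
)
```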
The maximum number of training epochs was set to 10,000. During training, the framework returns a history object that stores the loss, MAE, and MSE at each epoch; we monitored the training by checking the mae, val_mae, mse, and val_mse values in this history and by inspecting the predictions on the test set, and the output comprised all results on the test set. Figure 4 shows the results of the BP algorithm. The CAK45 solid tantalum capacitor dataset has a total of 50 data points; we used 40 randomly selected data points as the training set and the remaining 10 as the test set for model training and testing. Figure 4b shows the histogram of the errors between the true and predicted values, Figure 4c shows the regression analysis of the true versus predicted values, and Figure 4d shows the error values between the true and predicted values. The maximum error among the 10 test points was 0.02%. As seen in Figure 4e,f, the mean absolute error and mean squared error of the training and validation data decreased gradually as training progressed and eventually approached zero.
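Continuing the sketch above, a hedged example of the training loop and of reading the history object (the 10,000-epoch budget matches the paper; the 20% validation split is an assumption) might look like this:

```python
# Train for up to 10,000 epochs; a validation split is assumed for the val_mae/val_mse curves.
history = bp_model.fit(
    X_train, y_train,
    epochs=10000,
    validation_split=0.2,
    verbose=0,
)

# history.history holds per-epoch 'loss', 'mae', 'mse', 'val_loss', 'val_mae', and 'val_mse'.
print(min(history.history["val_mae"]), min(history.history["val_mse"]))

# Predictions on the held-out test set.
y_pred_bp = bp_model.predict(X_test).ravel()
```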
The second model is a nonlinear regression model based on the XGBOOST algorithm (Figure 5a), trained with the mean squared error (MSE) loss function for the regression task. The model built an ensemble of decision trees over 10,000 boosting iterations. To guard against overfitting, the model's complexity was controlled: the maximum depth of each decision tree was limited to 5, yielding shallower trees that are less prone to memorizing noise in the training data and better at capturing the underlying patterns. A random sampling ratio of 0.8 was used, meaning that 80% of the training data were used to train each decision tree; this reduces variance and strengthens the model's generalization ability on unseen data. The gamma parameter, set to 0.1, controls the growth of individual trees by regulating the minimum loss reduction required for further splits at the leaf nodes; larger gamma values constrain tree expansion and thus further reduce the risk of overfitting to spurious correlations in the training data. During training, the mean absolute error (MAE) and MSE were recorded at each iteration to monitor convergence. The CAK45 tantalum capacitor dataset again comprised 50 data points, of which 40 were used for training and 10 were reserved for testing; the test set was used to assess generalization. Figure 5b shows the error histogram between the true and predicted values, Figure 5c shows the regression analysis plot between the true and predicted values, and Figure 5d shows the error values between the true and predicted values. The maximum error among the 10 test points was 0.02%.
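A minimal sketch of this configuration with the xgboost Python package is given below; the squared-error objective string and treating the 10,000 iterations as n_estimators are assumptions consistent with, but not spelled out in, the text:

```python
import xgboost as xgb

# Hyperparameters from the text: max_depth=5, subsample=0.8, gamma=0.1,
# 10,000 boosting iterations; the objective string is an assumption (MSE loss).
xgb_model = xgb.XGBRegressor(
    objective="reg:squarederror",
    n_estimators=10000,
    max_depth=5,
    subsample=0.8,
    gamma=0.1,
)

xgb_model.fit(X_train, y_train)
y_pred_xgb = xgb_model.predict(X_test)
```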
The third model is a nonlinear regression model based on the K-nearest neighbors (KNN) algorithm (Figure 6a), designed for regression tasks that predict continuous values. The key parameter, n_neighbors, was set to 5, which is the number of neighboring data points considered by the KNN model. During training, the mean absolute error (MAE) and mean squared error (MSE) were recorded, and the training progress was monitored through the mae, val_mae, mse, and val_mse values logged in the history; the model's predictions were then evaluated on a dedicated test set. The CAK45 solid tantalum capacitor dataset comprises 50 data points, of which 40 were used for training and the remaining 10 were reserved for testing. Figure 6b shows the error histogram between the true and predicted values, Figure 6c shows the regression analysis of the true versus predicted values, and Figure 6d shows the errors between the true and predicted values. Among the 10 test points, the maximum error was −0.07%, demonstrating the model's precision in capturing the underlying patterns in the dataset.
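A corresponding minimal sketch with scikit-learn, assuming the same train/test split as above, is:

```python
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error

# n_neighbors=5 as stated in the text.
knn_model = KNeighborsRegressor(n_neighbors=5)
knn_model.fit(X_train, y_train)

y_pred_knn = knn_model.predict(X_test)
print("MAE:", mean_absolute_error(y_test, y_pred_knn))
print("MSE:", mean_squared_error(y_test, y_pred_knn))
```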

4. Discussion

To compare the models, the study collected three sets of predicted life outputs from the machine learning models, as shown in Figure 7, which highlights the performance differences among the backpropagation (BP), XGBOOST, and K-nearest neighbors (KNN) algorithms. Both the BP algorithm and the XGBOOST algorithm achieved an error range of 0.01% to 0.02%. This narrow margin shows how closely these models approximate capacitor life expectancy, and their consistent, precise predictions support their use in real-world applications where accuracy is paramount. The KNN algorithm exhibited a slightly broader error range of 0.01% to 0.07%; while still within an acceptable range, this variance indicates somewhat lower consistency than the other two models. Nevertheless, KNN remains a viable option, especially in scenarios where interpretability and simplicity are prioritized over pinpoint accuracy. In summary, although all three algorithms yield predictions within the acceptable range, the XGBOOST and BP algorithms are the more accurate choices for capacitor life prediction. However, the trade-offs between accuracy, interpretability, and computational complexity should be considered when selecting the optimal algorithm for a given task.
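As an illustration of how such a comparison can be computed (the prediction arrays y_pred_bp, y_pred_xgb, and y_pred_knn come from the sketches above and are assumptions, not the authors' exact pipeline), the per-sample percentage error and its range can be obtained as follows:

```python
import numpy as np

def relative_error_pct(y_true, y_pred):
    """Signed percentage error of each prediction relative to the true (simulated) lifetime."""
    y_true = np.asarray(y_true, dtype=float)
    return 100.0 * (np.asarray(y_pred, dtype=float) - y_true) / y_true

for name, y_pred in [("BP", y_pred_bp), ("XGBOOST", y_pred_xgb), ("KNN", y_pred_knn)]:
    err = relative_error_pct(y_test, y_pred)
    print(f"{name}: error range {err.min():+.3f}% to {err.max():+.3f}%")
```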

5. Conclusions

This research addressed the challenge of predicting electronic component failures and estimating their lifespans under complex environmental conditions. Using machine learning, the study predicted component lifetimes from real-world data on capacitor lifespans collected into a self-constructed dataset. Three machine learning models were central to this work: the backpropagation (BP) algorithm, the XGBOOST algorithm, and the K-nearest neighbors (KNN) algorithm. Each model was tailored to estimate the lifespan of electronic components under specific environmental parameters, and the models were compared to determine the most dependable and accurate predictor of component lifetimes. The experimental findings show that all three models perform well, yielding predictions within reasonable margins of error, which supports the application of machine learning to improve the safety and reliability of electronic components. By forecasting component lifespans with high accuracy, machine learning models can help preempt potential faults and enable timely interventions that strengthen equipment safety and operational reliability. Moreover, integrating machine learning into decision-making reduces the influence of human subjective bias and improves the objectivity and efficiency of decision-making processes. By analyzing large amounts of capacitor operational data and building robust prediction models, this research provides a basis for more scientific and effective equipment management, operation, and maintenance, pointing toward improved equipment performance and longevity and reduced operational uncertainty.

Author Contributions

Conceptualization, Y.Q. and Z.L.; methodology, Y.Q.; software, Y.Q.; validation, Y.Q.; formal analysis, Y.Q.; investigation, Y.Q.; resources, Z.L.; data curation, Z.L.; writing—original draft preparation, Y.Q.; writing—review and editing, Z.L.; visualization, Y.Q.; supervision, Z.L.; project administration, Z.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Young, G.O. Synthetic structure of industrial plastics. In Plastics, 2nd ed.; Peters, J., Ed.; McGraw-Hill: New York, NY, USA, 1964; Volume 3, pp. 15–64. [Google Scholar]
  2. Chen, W.-K. Linear Networks and Systems; Wadsworth: Belmont, CA, USA, 1993; pp. 123–135. [Google Scholar]
  3. Duncombe, J.U. Infrared navigation—Part I: An assessment of feasibility. IEEE Trans. Electron Devices 1959, 11, 34–39. [Google Scholar]
  4. Wigner, E.P. Theory of traveling-wave optical laser. Phys. Rev. 1965, 134, A635–A646. [Google Scholar]
  5. Miller, E.H. A note on reflector arrays. IEEE Trans. Antennas Propagat. 1967, 15, 692–693. [Google Scholar]
  6. Reber, E.E.; Michell, R.L.; Carter, C.J. Oxygen Absorption in the Earth’s Atmosphere; Technical Report TR-0200 (4230-46)-3; Aerospace Corp.: Los Angeles, CA, USA, 1988. [Google Scholar]
  7. Davis, J.H.; Cogdell, J.R. Calibration Program for the 16-Foot Antenna; Technical Memo. NGL-006-69-3; Electrical Engineering Research Laboratory, University of Texas at Austin: Austin, TX, USA, 1987. [Google Scholar]
  8. Transmission Systems for Communications, 3rd ed.; Western Electric Co.: Winston-Salem, NC, USA, 1985; pp. 44–60.
  9. Motorola Semiconductor Data Manual; Motorola Semiconductor Products Inc.: Phoenix, AZ, USA, 1989.
  10. Sawant, V.; Deshmukh, R.; Awati, C. Machine learning techniques for prediction of capacitance and remaining useful life of supercapacitors: A comprehensive review. J. Energy Chem. 2023, 77, 438–451. [Google Scholar] [CrossRef]
  11. Cordella, M.; Alfieri, F.; Clemm, C.; Berwald, A. Durability of smartphones: A technical analysis of reliability and repairability aspects. J. Clean. Prod. 2021, 286, 125388. [Google Scholar] [CrossRef] [PubMed]
  12. Riddle, K. Remembering Past Media Use: Toward the Development of a Lifetime Television Exposure Scale. Commun. Methods Meas. 2010, 4, 241–255. [Google Scholar] [CrossRef]
  13. Oda, H.; Noguchi, H.; Fuse, M. Review of life cycle assessment for automobiles: A meta-analysis-based approach. Renew. Sustain. Energy Rev. 2022, 159, 112214. [Google Scholar] [CrossRef]
  14. Banerjee, S.; Sharma, A.; Schmerling, E.; Spolaor, M.; Nemerouf, M.; Pavone, M. Data Lifecycle Management in Evolving Input Distributions for Learning-Based Aerospace Applications. In Proceedings of the 2023 IEEE Aerospace Conference, Big Sky, MT, USA, 4–11 March 2023. [Google Scholar] [CrossRef]
  15. Wang, L.; Zhu, S.-P.; Luo, C.; Liao, D.; Wang, Q. Physics-guided machine learning frameworks for fatigue life prediction of AM materials. Int. J. Fatigue 2023, 172, 107658. [Google Scholar] [CrossRef]
  16. Liu, X.; Zhang, S.; Cong, T.; Zeng, F.; Wang, X.; Wang, W. Very high-cycle fatigue life prediction of high-strength steel based on machine learning. Fatigue Fract. Eng. Mater. Struct. 2024, 47, 1024–1035. [Google Scholar] [CrossRef]
  17. Zhang, X.; Liu, F.; Shen, M.; Han, D.; Wang, Z.; Yan, N. Ultra-High-Cycle Fatigue Life Prediction of Metallic Materials Based on Machine Learning. Appl. Sci. 2023, 13, 2524. [Google Scholar] [CrossRef]
  18. Gong, Z.; Tong, Q.; Lu, F.; Feng, Z.; Wan, Q.; An, G.; Cao, J.; Guo, T. Life Prediction of Rolling Bearing Based on Bidirectional GRU. In Proceedings of the 3rd International Symposium on New Energy and Electrical Technology. ISNEET 2022, Anyang, China, 25–27 August 2022; Lecture Notes in Electrical Engineering. Cao, W., Hu, C., Chen, X., Eds.; Springer: Singapore, 2023; Volume 2017. [Google Scholar] [CrossRef]
  19. Qiu, H.; Wang, J.; Wang, D.; Yin, Y. Service-oriented multi-skilled technician routing and scheduling problem for medical equipment maintenance with sudden breakdown. Adv. Eng. Inform. 2023, 57, 102090. [Google Scholar] [CrossRef]
  20. Park, S.-O.; Jeong, H.; Park, J.; Bae, J.; Choi, S. Experimental demonstration of highly reliable dynamic memristor for artificial neuron and neuromorphic computing. Nat. Commun. 2022, 13, 2888. [Google Scholar] [CrossRef] [PubMed]
  21. Satpathy, P.R.; Bhowmik, P.; Babu, T.S.; Sain, C.; Sharma, R.; Alhelou, H.H. Performance and Reliability Improvement of Partially Shaded PV Arrays by One-Time Electrical Reconfiguration. IEEE Access 2022, 10, 46911–46935. [Google Scholar] [CrossRef]
  22. Lv, D.; Zhang, C.; Fei, H.; Zhao, W.; Dong, C.; Pang, Y. Life Prediction of Wind Turbine Based on Attention-BiGRU. In Proceedings of the TEPEN 2022. TEPEN 2022. Mechanisms and Machine Science; Zhang, H., Ji, Y., Liu, T., Sun, X., Ball, A.D., Eds.; Springer: Cham, Switzerland, 2023; Volume 129. [Google Scholar] [CrossRef]
  23. Zhang, J.; Zhang, C.; Xu, S.; Liu, G.; Fei, H.; Wu, L. Remaining Life Prediction of Bearings Based on Improved IF-SCINet. IEEE Access 2024, 12, 19598–19611. [Google Scholar] [CrossRef]
  24. Zhang, H.; Su, Y.; Altaf, F.; Wik, T.; Gros, S. Interpretable Battery Cycle Life Range Prediction Using Early Cell Degradation Data. IEEE Trans. Transp. Electrif. 2023, 9, 2669–2682. [Google Scholar] [CrossRef]
  25. Yang, Z.; Deng, S.; Zhang, J. Storage Life Prediction of Carbon Fiber Composites Based on Electrical Conductivity. Fibers Polym. 2024, 25, 347–355. [Google Scholar] [CrossRef]
  26. Li, Y.; Wei, P.; Xiang, G.; Jia, C.; Liu, H. Gear contact fatigue life prediction based on transfer learning. Int. J. Fatigue 2023, 173, 107686. [Google Scholar] [CrossRef]
  27. He, G.; Zhao, Y.; Yan, C. Uncertainty quantification in multiaxial fatigue life prediction using Bayesian neural networks. Eng. Fract. Mech. 2024, 298, 109961. [Google Scholar] [CrossRef]
  28. Shterev, V.; Momchev, E.; Asenov, V. Prediction of Life Expectancy of Electronic Components Estimated by Neural Network. In Proceedings of the 2023 58th International Scientific Conference on Information, Communication and Energy Systems and Technologies (ICEST), Nis, Serbia, 29 June–1 July 2023; pp. 215–218. [Google Scholar] [CrossRef]
  29. Xiao, W.; Chen, Y.; Guo, S.; Chen, K. Bearing Remaining Useful Life Prediction Using 2D Attention Residual Network. IEICE Trans. Inf. Syst. 2023, 106, 818–820. [Google Scholar] [CrossRef]
  30. Barzkar, A.; Ghassemi, M. Components of Electrical Power Systems in More and All-Electric Aircraft: A Review. IEEE Trans. Transp. Electrific. 2022, 8, 4037–4053. [Google Scholar] [CrossRef]
  31. Xu, Y.; Kohtz, S.; Boakye, J.; Gardoni, P.; Wang, P. Physics-informed machine learning for reliability and systems safety applications: State of the art and challenges. Reliab. Eng. Syst. Saf. 2022, 230, 108900. [Google Scholar] [CrossRef]
  32. Wojtowytsch, S. Stochastic Gradient Descent with Noise of Machine Learning Type Part I: Discrete Time Analysis. J. Nonlinear Sci. 2023, 33, 2023. [Google Scholar] [CrossRef]
  33. Alesemi, M.; Iqbal, N.; Botmart, T. Novel Analysis of the Fractional-Order System of Non-Linear Partial Differential Equations with the Exponential-Decay Kernel. Mathematics 2022, 10, 615. [Google Scholar] [CrossRef]
  34. Ma, Y.; Yao, M.; Liu, H.; Tang, Z. State of Health estimation and Remaining Useful Life prediction for lithium-ion batteries by Improved Particle Swarm Optimization-Back Propagation Neural Network. J. Energy Storage 2022, 52, 104750. [Google Scholar] [CrossRef]
  35. Ferreira, C.; Gonçalves, G. Remaining Useful Life prediction and challenges: A literature review on the use of Machine Learning Methods. J. Manuf. Syst. 2022, 63, 550–562. [Google Scholar] [CrossRef]
  36. Hanif, A.; Yu, Y.; DeVoto, D.; Khan, F. A Comprehensive Review Toward the State-of-the-Art in Failure and Lifetime Predictions of Power Electronic Devices. IEEE Trans. Power Electron. 2019, 34, 4729–4746. [Google Scholar] [CrossRef]
  37. Yousefian, P.; Sepehrinezhad, A.; van Duin, A.C.T.; Randall, C.A. Improved prediction for failure time of multilayer ceramic capacitors (MLCCs): A physics-based machine learning approach. APL Mach. Learn. 2023, 1, 036107. [Google Scholar] [CrossRef]
  38. Gao, J.; Heng, F.; Yuan, Y.; Liu, Y. A novel machine learning method for multiaxial fatigue life prediction: Improved adaptive neuro-fuzzy inference system. Int. J. Fatigue 2024, 178, 108007. [Google Scholar] [CrossRef]
Figure 1. Research methods and workflow diagram.
Figure 2. (a) Macroscopic defects on package; (b) X-ray morphology.
Figure 3. Defect test analysis. (a) Material breakdown and burnout; (b) pin hole with an optical microscope; and (c) micro crack at the defect point.
Figure 4. Lifetime prediction of the CAK45 tantalum capacitor using the BP algorithm. (a) Structure diagram of the BP algorithm. (b) Error histogram. (c) Prediction results (the red squares represent the predicted values). (d) Prediction error. (e) MAE curves for the training and validation sets during training. (f) MSE curves for the training and validation sets during training.
Figure 5. (a) XGBOOST algorithm structure diagram. (b) Error histogram. (c) Prediction results (the blue circles represent the predicted values). (d) Prediction error.
Figure 6. (a) KNN algorithm structure diagram. (b) Error histogram. (c) Prediction results (the orange triangle represents the predicted value). (d) Prediction error.
Figure 7. Comparison among the BP, KNN, and XGBOOST algorithms.