Article

Deep-Learning Software Reliability Model Using SRGM as Activation Function

1 Department of Computer Science and Statistics, Chosun University, 309 Pilmun-daero, Dong-gu, Gwangju 61452, Republic of Korea
2 Department of Industrial and Systems Engineering, Rutgers University, 96 Frelinghuysen Road, Piscataway, NJ 08855-8018, USA
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(19), 10836; https://doi.org/10.3390/app131910836
Submission received: 3 September 2023 / Revised: 25 September 2023 / Accepted: 27 September 2023 / Published: 29 September 2023

Abstract

Software is used in virtually every field, from the smallest components to entire systems, and reliance on software is accelerating as artificial intelligence and big data become more widespread. Evaluating software reliability is therefore extremely important because of the extensive damage that a software failure can cause. Previous software reliability models were developed on mathematical and statistical grounds, which made immediate response to new failure data difficult. In this study, a data-dependent software reliability model was developed using deep learning, in which the activation function conventionally used in deep learning is replaced with the proposed software reliability model. Because the sigmoid function has a shape similar to that of a software reliability model, we propose a deep-learning software reliability model that replaces the sigmoid activation function with the software reliability function. Two datasets were compared and analyzed using 10 criteria, demonstrating the superiority of the proposed deep-learning software reliability model. In addition, the results were compared after changing the parameter of the proposed model by −10%, −5%, 5%, and 10%, and it was found that the larger the parameter, the smaller the change.

1. Introduction

Software is widely used in many fields, from the smallest components to entire systems. Currently, the fields of artificial intelligence and big data have gained considerable attention; they are based on software together with high-performance hardware and require a great deal of computation. Therefore, to keep calculations as simple as possible and to learn quickly, the code, structure, and configuration of the software are extremely important. As the artificial intelligence and big data industries develop, they rely heavily on software. If part of the developed software fails owing to a code error or an error in the calculation process, many problems can result: a computational error may produce unexpected results, or an error may cause a malfunction in a machine operating in real time. Research to improve software reliability has therefore been conducted steadily [1,2,3,4].
There is a substantial body of research on system reliability that extends beyond software reliability, examining how to ensure reliability and service in systems more complex than software alone. Information about system failures and errors has been shown to be important for increasing the reliability of a system and for troubleshooting. In addition, sustainable development is becoming a very important topic in system reliability. Sustainable development means operating production processes and developing products in a way that is environmentally friendly and considers both the environment and the economic circumstances of workers. To create a sustainable production environment, breakdowns and energy waste must be minimized or eliminated, and maintenance is essential. Sustainable development requires technologies equipped with intelligent, self-diagnosing, and self-adaptive capabilities that monitor, control, and process data [5,6].
Software reliability is a field that measures how well software works, and software reliability growth models (SRGMs) are developed from software failure data. Because software failure data follow a Poisson distribution describing the number of failures occurring during a period of time, the SRGMs currently being studied are based on the assumption of a nonhomogeneous Poisson process (NHPP) [7]. NHPP SRGMs, which began with Goel-Okumoto [8], have been extended to reflect various assumptions, such as independent failures, uncertain operating environments, and dependent failures. From a parametric perspective, SRGMs are developed by estimating optimal parameters. In addition, software reliability models from a nonparametric, data-dependent perspective have been studied continuously, based on deep learning or other machine learning algorithms such as deep neural networks and recurrent neural networks [9,10]. However, the typical nonlinear activation functions used in deep learning can cause training difficulties owing to saturation behavior, so new activation functions, such as noisy or self-regularized non-monotonic activation functions, are being studied [11,12].
In this study, motivated by this problem with activation functions, we propose a deep-learning software reliability model that combines a software reliability model from a parametric perspective with a deep-learning software reliability model from a nonparametric perspective. Because the structure of the existing software reliability model is similar to the sigmoid function, the proposed model exploits this shape and is applied during learning in place of the usual deep-learning activation function. The result is a deep-learning software reliability model that uses an activation function with mathematical assumptions, addressing the problem of deep-learning models that depend only on data. The superiority of the model is demonstrated by comparing 10 criteria on two datasets. Furthermore, the performance of the proposed deep-learning software reliability model is compared when its parameter is changed. This paper is organized as follows: Section 2 introduces related research. Section 3 provides the theoretical background of software reliability models and introduces the proposed model. Section 4 presents numerical examples. Section 5 presents the results of the sensitivity analysis. Discussion and conclusions are given in Section 6 and Section 7.

2. Related Research

The study of software reliability is essential for reducing software failures and managing them better. Past SRGMs were developed based on mathematical and statistical assumptions. Early SRGMs were studied by Goel and Okumoto [8] under the assumption that independent software failures occur continuously. Building on this, SRGMs were proposed that assume software failures occur in an S-shape [13,14,15]. Zhang et al. [16] proposed a software reliability model that assumes imperfect debugging with a functional form of the fault detection rate. Yamada et al. [17] developed an SRGM with a constant defect detection rate under the assumption of imperfect debugging, in which defects detected during the testing phase are not corrected or removed. Imperfect debugging with a functional defect detection rate has also been studied [18,19,20]. Roy et al. [21] presented a software reliability model that considers imperfect debugging and error detection in the testing phase, with a sharp increase in the early stages followed by a constant defect detection rate. Because the operating environment differs from one piece of software to another, direct comparison is difficult; therefore, SRGMs that consider an uncertain operating environment have been developed [22,23,24]. Chang et al. [25] proposed a software reliability model with a fault detection rate function that considers the uncertainty of the operating environment and a new test coverage, and Song et al. [26] proposed a software reliability model for uncertain operating environments using a fault detection rate function with three parameters.
In addition, SRGMs that assume software failures occur dependently have been studied, because developed software is complex in structure [27,28]. Dependent SRGMs were developed under the assumption that past errors or faults, if not corrected, continue to contribute to software failures [29,30]. Furthermore, Lee et al. [31] proposed a dependent SRGM that assumes an uncertain operating environment.
SRGMs with mathematical and statistical assumptions make predictions based on failure trends. However, because it is not known when and how failures will occur, software reliability continues to be studied from a nonparametric, data-driven perspective [32]. Miyamoto [33] developed a software reliability model for failures occurring in open-source software using deep neural networks (DNNs). In addition, because software failures have time-series characteristics, software reliability models using long short-term memory (LSTM) have been proposed [34,35]. Wu et al. [36] proposed a structure in which the weights of the software reliability model are learned through the weights of a DNN. Batool et al. [37] proposed a model that predicts software errors and defects by combining deep learning with various other machine learning techniques. Sreekanth et al. [38] improved current methodologies such as RE and MRE by evaluating the estimation effort required for software development. Bhuyan et al. [39] demonstrated the superiority of a feedforward neural network through comparison with existing parametric SRGMs. Mittelman [40] proposed a convolutional neural network-based time-series model utilizing software failure data with time-series characteristics. Pan et al. [41] proposed a convolutional neural network model that addresses problems such as limited data size and an insufficient number of experiments; to do so, they built and utilized the PROMISE Source Code (PSC) dataset, an extension of the original dataset.

3. Software Reliability Model

3.1. NHPP Software Reliability Growth Model

Software reliability is the ability of software to successfully perform its intended function for the intended duration under given conditions of use. The reliability function for evaluating software reliability, shown in Equation (1), is the probability that the software operates for at least $t$ hours.
$$ R(t) = P(T > t) = \int_t^{\infty} f(u)\, du \quad (1) $$
The probability density function $f(t)$ treats the time to software failure as a random variable $T$. To evaluate the reliability function $R(t)$, the mean number of failures is assumed to follow an exponential distribution with parameter $\lambda$, which follows an NHPP and changes over time. The model is given by Equation (2), where $N(t)$ is the number of failures that have occurred by time $t$ and $n$ is the number of failures.
$$ \Pr\{N(t) = n\} = \frac{\{m(t)\}^n}{n!}\, e^{-m(t)}, \quad n = 0, 1, 2, \ldots, \; t \ge 0 \quad (2) $$
where the mean value function $m(t)$ is the integral of the intensity function $\lambda(t)$ from time 0 to time $t$, as in Equation (3). The intensity function $\lambda(t)$ gives the instantaneous number of failures at time $t$.
$$ m(t) = \int_0^t \lambda(s)\, ds \quad (3) $$
The $m(t)$ is used to develop a software reliability model through the differential equation in Equation (4), the most basic SRGM, where $a(t)$ represents the number of failures and $b(t)$ the failure detection rate.
$$ \frac{dm(t)}{dt} = b(t)\,[a(t) - m(t)] \quad (4) $$
Multiplying the right-hand side of Equation (4) by $\eta$ defines an SRGM with the added assumption of an uncertain operating environment; multiplying by $m(t)$ once more defines an SRGM that assumes software failures occur dependently. Solving each differential equation for $m(t)$ yields the SRGM satisfying each condition. The $a(t)$, $b(t)$, and $\eta$ take various functional forms. These functions contain parameters representing quantities such as the number of software failures, the expected number of failures, and the failure detection rate; the same parameter can carry different meanings in different models, so the reader is referred to the references for each model. The software reliability models developed on this basis are as follows: Models 1–9 are SRGMs obtained from the basic form of the differential equation, Models 10–12 assume an uncertain operating environment, Models 13 and 14 assume dependent failures, and Model 15 assumes dependent failures in an uncertain operating environment. Table 1 describes the 15 models.

3.2. Software Reliability Model Using Deep Learning

3.2.1. Deep Neural Network

An artificial neural network is modeled after the biological system of human thinking and is organized through nonlinear connections. Based on neurons (nodes), the most basic units of information, an artificial neural network comprises input, output, and hidden layers. The input layer takes in the information, the output layer makes judgments based on the information received, and the hidden layers transmit information between the input and output layers. They do not merely pass information along but transform it through nonlinear combinations: the weights and biases of each layer are combined with the incoming information through sums and products, and the result is derived through a nonlinear transformation using the activation function. A structure with many hidden layers is a DNN, as shown in Figure 1. The circles in the figure represent neurons. Data fed to the input layer are passed to the next hidden layer through a combination of sums and products with weights, biases, and activation functions, and this process is repeated until the output layer [42,43].
When information is transferred from layer to layer, the sigmoid function, hyperbolic tangent function, or rectified linear unit (ReLU) function is used as the nonlinear transformation. The result transmitted to the output layer uses a sigmoid or softmax function for categorical variables and an identity function for continuous variables. In the field of software reliability, the sigmoid function is used because software failure counts grow in a sigmoid form, and the identity function is used because the dependent variable is continuous. Once the final predicted value $y$ is obtained through the above process, a loss function representing the difference between the actual and predicted values is minimized using the partial derivative with respect to each parameter, as in Equation (5), where $\theta$ represents the weight and bias parameters and $\alpha$ the learning rate.
$$ \theta_{t+1} = \theta_t - \alpha \frac{\partial \mathrm{Loss}(\theta)}{\partial \theta} \quad (5) $$
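Equation (5) is the standard gradient descent update. A minimal sketch on a toy quadratic loss, where the loss function and learning rate are illustrative:

```python
def gd_step(theta, grad, lr):
    """One update theta_{t+1} = theta_t - alpha * dLoss/dtheta (Equation (5))."""
    return [p - lr * g for p, g in zip(theta, grad)]

# Toy example: Loss(theta) = theta^2, so dLoss/dtheta = 2 * theta.
theta = [4.0]
for _ in range(100):
    theta = gd_step(theta, [2.0 * theta[0]], lr=0.1)
print(theta[0])  # close to 0, the minimizer of the toy loss
```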

3.2.2. Recurrent Neural Network

A recurrent neural network (RNN) is a model with cells that remember information from previous points in time, and it is specialized for learning sequential data. It is designed so that information from past time points can influence future time points. Figure 2a shows the structure of the RNN model, represented by Equation (6), where $h_{t-1}$ is the information from the previous step, $x_t$ is the input at the current step, $\theta_h$ is the weight and bias for the previous step, and $\theta_x$ is the weight and bias for the current input. The activation function used is the hyperbolic tangent (tanh) [44].
$$ h_t = \tanh(\theta_h h_{t-1} + \theta_x x_t) \quad (6) $$
However, as the time horizon becomes longer, an RNN suffers from the problem that information from farther back in time is not reflected well. To solve this problem, LSTM, a model with the ability to learn long-term dependence, was developed. An LSTM has a memory cell $c$, which is managed by a forget gate ($f_t$), an input gate ($i_t$), a new memory candidate ($g_t$), and an output gate ($o_t$). The $f_t$ determines how much previous information is retained, the $i_t$ determines how much new information is admitted, the $g_t$ holds the new candidate information, and the $o_t$ determines how much information is passed to the next layer. The $\theta_{x,f}$, $\theta_{x,g}$, $\theta_{x,i}$, and $\theta_{x,o}$ are the weights of each gate for the input data, and the $\theta_{h,f}$, $\theta_{h,g}$, $\theta_{h,i}$, and $\theta_{h,o}$ are the weights of each gate for the previous layer. The $c_t$ carries the long-term state by combining previous and current information, and the $h_t$ carries the short-term state. The activation functions used are the sigmoid function ($\sigma$) and tanh. The memory cell resolves the vanishing-gradient problem. The structure of the LSTM follows Equation (7); Figure 2b shows the structure of the LSTM model [45,46].
$$
\begin{aligned}
f_t &= \sigma(x_t \theta_{x,f} + h_{t-1} \theta_{h,f}) \\
g_t &= \tanh(x_t \theta_{x,g} + h_{t-1} \theta_{h,g}) \\
i_t &= \sigma(x_t \theta_{x,i} + h_{t-1} \theta_{h,i}) \\
o_t &= \sigma(x_t \theta_{x,o} + h_{t-1} \theta_{h,o}) \\
c_t &= f_t \odot c_{t-1} + g_t \odot i_t \\
h_t &= o_t \odot \tanh(c_t)
\end{aligned} \quad (7)
$$
LSTM solves the vanishing-gradient problem of the RNN. However, the larger the data, the more computation is required because of the larger number of parameters, resulting in inefficiency. To improve this, the gated recurrent unit (GRU) model was proposed. It reduces the four gates of the LSTM to three: an update gate ($z_t$) combines the $f_t$ and $i_t$ of the LSTM into one, and a reset gate ($r_t$) replaces the $o_t$. The $r_t$ and $z_t$ determine how much previous information is forgotten and how much is retained. The $g_t$ is the gate that determines the candidate information at time $t$. The $\theta_{x,r}$, $\theta_{x,z}$, and $\theta_{x,g}$ are the weights of each gate for the input data, and the $\theta_{h,r}$, $\theta_{h,z}$, and $\theta_{h,g}$ are the weights of each gate for the previous layer. The $h_t$, which integrates the $c_t$ and $h_t$ of the LSTM, carries information from the previous and current steps together and is passed to the next step through Equation (8). Figure 2c shows the structure of the GRU model [47,48].
$$
\begin{aligned}
r_t &= \sigma(x_t \theta_{x,r} + h_{t-1} \theta_{h,r}) \\
z_t &= \sigma(x_t \theta_{x,z} + h_{t-1} \theta_{h,z}) \\
g_t &= \tanh(x_t \theta_{x,g} + (r_t \odot h_{t-1}) \theta_{h,g}) \\
h_t &= (1 - z_t) \odot g_t + z_t \odot h_{t-1}
\end{aligned} \quad (8)
$$
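One step of the GRU in Equation (8) can be sketched in NumPy as follows; the weight shapes and random initialization are illustrative, and bias terms are omitted for brevity:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x_t, h_prev, W_x, W_h):
    """One GRU step following Equation (8). W_x and W_h hold the input
    and recurrent weights for the r (reset), z (update), and g gates."""
    r = sigmoid(x_t @ W_x["r"] + h_prev @ W_h["r"])
    z = sigmoid(x_t @ W_x["z"] + h_prev @ W_h["z"])
    g = np.tanh(x_t @ W_x["g"] + (r * h_prev) @ W_h["g"])
    return (1.0 - z) * g + z * h_prev  # h_t

# Illustrative dimensions: 3 input features, hidden state of size 4.
rng = np.random.default_rng(0)
d_in, d_h = 3, 4
W_x = {k: rng.normal(size=(d_in, d_h)) for k in "rzg"}
W_h = {k: rng.normal(size=(d_h, d_h)) for k in "rzg"}
h = gru_cell(rng.normal(size=(1, d_in)), np.zeros((1, d_h)), W_x, W_h)
print(h.shape)  # (1, 4)
```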
The structure of the deep-learning models used in this study is as follows: the DNN consists of four hidden layers, the activation function is the sigmoid function, and the number of nodes in each layer equals the training size of the dataset, so the parameter size is determined by the data size. All of the recurrent neural network types take the first-order difference of the series as input. All deep-learning models are trained for 100 epochs to minimize overfitting, and the optimization method is Adam, which simultaneously considers the momentum of previous training steps and a learning rate that is not constant but changes with each update.

3.2.3. Proposed Software Reliability Model

Typical failure occurrences follow either an S shape, in which failures occur in increasing numbers and then taper off past a certain point, or a concave shape, in which many failures occur at the beginning and the rate then declines. The SRGMs developed thus far are based on these assumed shapes. The S-shaped form resembles the sigmoid function among deep-learning activation functions: the sigmoid function has an S-shaped curve that equals $1/2$ at $x = 0$, as in Equation (9) [49,50,51].
$$ f(x) = \frac{1}{1 + \exp(-x)} \quad (9) $$
In this study, the sigmoid function was used as the basis for the activation function because it most closely resembles the shape of the software reliability model, and a generalized software reliability model was developed whose input is not a simple index but the failure detection rate of the software reliability model. The failure detection rate function is $b(t) = \frac{b^2 t}{bt + 1}$, where the parameter $b$ is a shape parameter, giving a structure in which the failure probability changes with time. The final model follows Equation (10).
$$ m(t) = \frac{1}{1 + \exp\left(-\frac{b^2 t}{bt + 1}\right)} \quad (10) $$
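A minimal sketch of the proposed activation in Equation (10), assuming the standard sigmoid sign convention; the default b = 0.52 is the Dataset 1 estimate reported in Section 4.3 and is used here only for illustration:

```python
import math

def proposed_activation(t, b=0.52):
    """Equation (10): sigmoid applied to the fault detection rate
    b(t) = b^2 * t / (b * t + 1) instead of a raw input value."""
    rate = (b * b * t) / (b * t + 1.0)
    return 1.0 / (1.0 + math.exp(-rate))

print(proposed_activation(0.0))  # 0.5, like the sigmoid at x = 0
```
Because $b(t)$ is nonnegative and increasing in $t$, the output rises monotonically from 0.5 toward the sigmoid of the limiting rate $b$.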
As mentioned previously, this model is a modified sigmoid form. Instead of the well-known sigmoid or ReLU activation function, deep learning is performed with the activation function converted into the proposed software reliability model. This is the proposed deep-learning software reliability model, which applies mathematical and statistical assumptions instead of depending only on data; its structure is the same as the model defined in Section 3.2.1. There are four hidden layers, and the number of nodes in each hidden layer equals the number of data points; we do not use more nodes than data points, as this can lead to overfitting. The activation function is the proposed software reliability model, which is similar in form to the sigmoid function. The optimizer is Adam, which simultaneously considers momentum and an adaptive learning rate [52]. On this basis, we construct the deep-learning software reliability model and train it for 100 epochs. Table 2 describes the software reliability models using deep learning and the proposed deep-learning software reliability model. In total, 20 models (the four software reliability models using deep learning introduced in Section 3.2.1 and Section 3.2.2, the proposed deep-learning software reliability model introduced in Section 3.2.3, and the 15 models introduced in Section 3.1) are compared.

4. Numerical Examples

4.1. Data Information

Two datasets were used. Dataset 1 contains failure data for critical, major, and minor errors, representing 231 faults over a period of 38 weeks [53]. Dataset 2 contains failure data generated by AVRO, an Apache open-source project. AVRO provides communication with external modules and serialization functions to support external storage and input/output; a total of five releases from version 1.0.0 to version 1.7.7 were considered. Monthly data were collected on bugs, new features, improved features, and other fixed issues in each AVRO version over five years, from July 2009 to July 2014, with 949 cumulative failures [54]. Both datasets show that the number of failures remains more or less constant as time increases, rather than decreasing significantly. For Datasets 1 and 2, 90% of the data were used for training and parameter estimation, and the remaining 10% were used to compare the predictions of the trained and estimated models.

4.2. Criteria

The twenty models (SRGMs, software reliability models using deep learning, and the proposed deep-learning software reliability model) were compared using 10 criteria to determine the superior model. These criteria are based on the difference between the actual values ($y_i$) and predicted values ($\hat{m}(t_i)$). The mean squared error (MSE) is the sum of the squares of the differences between the predicted and actual numbers of failures [55]. The mean absolute error (MAE) measures the difference between the predicted number of failures and the observed value as a sum of absolute values, accounting for the number of observations [56]. The predictive ratio risk (PRR) divides the distance from the predicted to the actual number of failures by the predicted value [57]. The predictive power (PP) divides the distance from the actual to the predicted number of failures by the actual value [58]. R2 measures how well the fit explains the variation in the data [59]. The root mean square prediction error (RMSPE) estimates the closeness of the model to the predicted observations [60]. The mean error of prediction (MEOP) sums the absolute deviations between the observations and the estimated curve [61]. The Theil statistic (TS) is the ratio of the average deviation over all periods to the actual observations [62]. Pham's criterion (PC) considers the tradeoff between the uncertainty of the model and the number of its parameters [63]. The prediction sum of squared errors (preSSE) is the sum of squared differences between predicted and actual values in the prediction period [64]. Table 3 describes the 10 criteria.
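Several of the criteria can be sketched directly from their verbal descriptions; the exact normalizations used here (for example, the plain 1/n factor) are assumptions and should be checked against Table 3:

```python
def criteria(y, m_hat):
    """Sketch of four of the ten criteria; formulas follow the verbal
    descriptions in the text with a plain 1/n normalization (an assumption)."""
    n = len(y)
    mse = sum((a - p) ** 2 for a, p in zip(y, m_hat)) / n
    mae = sum(abs(a - p) for a, p in zip(y, m_hat)) / n
    prr = sum(((p - a) / p) ** 2 for a, p in zip(y, m_hat))  # normalized by prediction
    pp = sum(((p - a) / a) ** 2 for a, p in zip(y, m_hat))   # normalized by actual
    return {"MSE": mse, "MAE": mae, "PRR": prr, "PP": pp}

print(criteria([10.0, 20.0], [11.0, 19.0]))
```
A perfect fit drives all four values to zero, matching the rule that smaller is better for every criterion except R2.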
Based on the above criteria, the existing SRGMs and the software reliability models using deep learning are compared with the proposed deep-learning software reliability model. An R2 value closer to 1, and values of the other nine criteria closer to 0, indicate a better fit. Using R and MATLAB, we estimated the parameters of each model by the least-squares estimation (LSE) method, calculated the goodness of fit, and compared performance. The LSE method estimates the parameters that minimize the sum of squared errors, as in Equation (11) [65].
$$ \mathrm{LSE} = \sum_{i=1}^{n} \left( y_i - \hat{m}(t_i) \right)^2 \quad (11) $$
The proposed deep-learning software reliability model selects the parameter $b$ by the LSE method, while the deep-learning parameters are trained from the given data. The model is trained 100 times for each selected parameter, and the best results are presented.
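The selection of $b$ by LSE can be sketched with a simple grid search over candidate values, a stand-in for whatever optimizer the authors used in R and MATLAB; the synthetic data below are illustrative only:

```python
import math

def m_hat(t, b):
    """Proposed mean value function, Equation (10)."""
    return 1.0 / (1.0 + math.exp(-(b * b * t) / (b * t + 1.0)))

def fit_b(times, y, grid):
    """Choose the b in `grid` minimizing Equation (11), the sum of squared errors."""
    def sse(b):
        return sum((yi - m_hat(t, b)) ** 2 for t, yi in zip(times, y))
    return min(grid, key=sse)

# Synthetic check: data generated with b = 0.5 should be recovered exactly.
times = list(range(1, 39))  # 38 time points, mirroring Dataset 1's length
y = [m_hat(t, 0.5) for t in times]
b_est = fit_b(times, y, [i / 100 for i in range(10, 101)])
print(b_est)  # 0.5
```
Note that the proposed $m(t)$ lies in (0.5, 1), so in practice cumulative failure counts would presumably be normalized before fitting; the synthetic data sidestep that detail.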

4.3. Results of Dataset 1

Table 4 lists the estimated parameter values of the SRGMs assuming an NHPP for Dataset 1 and the structure of the software reliability models using deep learning, along with the estimated parameter value and structure of the proposed deep-learning software reliability model. The parameter $b$ of the proposed model is estimated to be 0.52. Based on this estimate, the proposed deep-learning software reliability model has four hidden layers, uses the Adam optimization method, and has a learning rate of 0.000001. Figure 3 shows the estimated value of $m(t)$ for each model and deep-learning structure.
Table 5 presents the results for the 10 criteria based on the estimated values. The proposed deep-learning software reliability model had the best results for MSE, PRR, PP, R2, RMSPE, MAE, MEOP, TS, and PC, with values of 1.5764, 0.0173, 0.0191, 0.9995, 1.255, 1.255, 1.2197, 0.9968, and 8.7077, respectively. The recurrent neural network types also showed good results, particularly the LSTM. Among the previous SRGMs, PZ showed the best results for MSE, R2, RMSPE, MAE, MEOP, TS, and PC, with 3.5923, 0.9977, 2.6764, 2.1584, 2.0967, 2.0933, and 33.9348, respectively, while Vtub showed good results for PRR and PP (0.0578 and 0.0507, respectively). In terms of prediction, the proposed deep-learning software reliability model had the best preSSE with 4.728, followed by LSTM with 12.845; the best model among the NHPP SRGMs was TC with 52.177.

4.4. Results of Dataset 2

Table 6 lists the estimated parameter values of the SRGMs assuming an NHPP for Dataset 2 and the structure of the software reliability models using deep learning, along with the estimated parameter value and structure of the proposed deep-learning software reliability model. The parameter $b$ of the proposed model was estimated to be 0.76, and the structure is the same as that for Dataset 1. Figure 4 shows the computed estimates of $m(t)$ for each model and deep-learning structure.
Table 7 presents the results for the 10 criteria based on the estimated values. The proposed deep-learning software reliability model had the best results for seven of the ten criteria, with values of 34.9817, 0.9995, 5.9145, 5.9145, 5.8070, 0.9571, and 96.9617 for MSE, R2, RMSPE, MAE, MEOP, TS, and PC, respectively. Among the recurrent neural networks, LSTM showed the second-best results, with 265.2696, 1.6538, 0.9958, 16.2892, 16.1755, 15.8759, 2.6661, and 148.8709 for MSE, PP, R2, RMSPE, MAE, MEOP, TS, and PC, while the RNN showed the best result for PRR. In addition, among the previously proposed SRGMs, PNZ exhibited the best results for all criteria. For prediction, RMD yielded the best preSSE with 133.128, followed by GO with 146.460 and the proposed model with 174.911.

5. Sensitivity Analysis

Results of Variation for Changes in Parameters

The software reliability model estimates the number of failures $m(t)$ by estimating parameters that carry mathematical assumptions. Whenever the parameters change, the $m(t)$ of the software reliability model also changes; it is therefore extremely sensitive to small changes in the estimated parameters [66,67]. To evaluate this sensitivity, the parameter of the proposed model was varied by −10%, −5%, 5%, and 10%. Figure 5a shows the estimates under the varied parameter for Dataset 1. From −10% to 10%, the change in parameter $b$ did not cause much change in $m(t)$, so the last eight time points are presented in a zoomed-in graph. Figure 5b shows the corresponding estimates for Dataset 2, with the last nine time points zoomed in because the results are similar to those for Dataset 1.
Table 8 and Figure 6 compare the MSE values based on the changed estimates for Datasets 1 and 2 against the MSE of the proposed deep-learning software reliability model; the comparison criterion is the ratio of the MSE under the changed parameter to the MSE of the proposed model. For Dataset 1, the MSE increases by factors of 3.378 (−10%) and 2.459 (−5%) as $b$ becomes smaller, and by factors of 1.670 (5%) and 2.054 (10%) as $b$ becomes larger, compared with the originally estimated value. For Dataset 2, the MSE increases by factors of 1.587 (−10%) and 1.258 (−5%) as $b$ becomes smaller, and by factors of 1.119 (5%) and 1.350 (10%) as $b$ becomes larger. Thus, the MSE increases more when the estimated fault detection rate $b$ is decreased than when it is increased, indicating a poorer estimate. This is because Datasets 1 and 2 have a high percentage of failures that recur or do not decrease in number even as time increases.
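The parameter-variation procedure can be sketched as follows, using the proposed $m(t)$ of Equation (10) on synthetic data; the fitted value b = 0.52 and the mismatch with the generating value are illustrative only:

```python
import math

def m_hat(t, b):
    """Proposed mean value function, Equation (10)."""
    return 1.0 / (1.0 + math.exp(-(b * b * t) / (b * t + 1.0)))

def mse(times, y, b):
    return sum((yi - m_hat(t, b)) ** 2 for t, yi in zip(times, y)) / len(y)

def sensitivity(times, y, b_hat, shifts=(-0.10, -0.05, 0.05, 0.10)):
    """Ratio of the MSE under a shifted b to the MSE under the fitted b,
    mirroring the comparison reported in Table 8."""
    base = mse(times, y, b_hat)
    return {s: mse(times, y, b_hat * (1 + s)) / base for s in shifts}

# Synthetic data with a small fit mismatch so the baseline MSE is nonzero.
times = list(range(1, 39))
y = [m_hat(t, 0.5) for t in times]
ratios = sensitivity(times, y, b_hat=0.52)
print(ratios)
```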

6. Discussions

A deep-learning software reliability model was developed by combining an existing software reliability model, developed from a parametric perspective, with a software reliability model using deep learning, developed from a nonparametric perspective. We compared it with traditional software reliability models and demonstrated its superiority on two datasets. However, because the proposed model depends only on the parameter $b$, we believe additional assumptions, such as the uncertain operating environments or dependent failures described in Section 3.1, are needed; this would allow software failures to be managed in more general situations.

7. Conclusions

In this study, we developed a deep-learning software reliability model that replaces the sigmoid activation function with a software reliability function, exploiting the similarity in shape between software failure curves and the sigmoid function, and compared it with traditional SRGMs and existing deep-learning software reliability models. The proposed model achieved the best results on all 10 criteria for the first dataset, and on all criteria except PRR, PP, and preSSE for the second. Unlike the existing software reliability models, its estimates did not drift from the actual values as the time points grew, and it fit better than the existing deep-learning software reliability model. We also examined how the estimates change with the parameter b : the smaller the fault detection rate b , the worse the estimates, whereas larger values of b degraded the estimates less. We believe this is because, given the characteristics of the data used, the number of failures did not decrease.
This study is meaningful because, whereas previously developed deep-learning software reliability models rely solely on data and provide no mathematical basis, the proposed model is both data-driven and mathematically grounded. Further research is therefore planned on models that transform software reliability models with multiple parameters, rather than a single one, into activation functions based on mathematical and statistical assumptions.

Author Contributions

Conceptualization, H.P.; funding acquisition, I.H.C.; software and simulation, Y.S.K.; sensitivity analysis, I.H.C.; writing—original draft, Y.S.K.; writing—review and editing, I.H.C. and H.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Basic Science Research Program of the National Research Foundation of Korea (NRF), funded by the Ministry of Education (NRF-2021R1F1A1048592).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are available in a publicly accessible repository.

Acknowledgments

This study was supported by the National Research Foundation of Korea (NRF).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hong, Y.; Lian, J.; Xu, L.; Min, J.; Wang, Y.; Freeman, L.J.; Deng, X. Statistical perspectives on reliability of artificial intelligence systems. Qual. Eng. 2023, 35, 56–78. [Google Scholar] [CrossRef]
  2. Bastani, F.; Chen, I.R. Assessment of the reliability of AI programs. Int. J. Artif. Intell. Tools 1993, 2, 235–247. [Google Scholar] [CrossRef]
  3. Sheptunov, S.A.; Larionov, M.V.; Suhanova, N.V.; Il’ya, S.K.; Alshynbaeva, D.A. Optimization of the complex software reliability of control systems. In Proceedings of the 2016 IEEE Conference on Quality Management, Transport and Information Security, Information Technologies (IT&MQ&IS), Nalchik, Russia, 4–11 October 2016; pp. 189–192. [Google Scholar]
  4. Ryan, M. In AI we trust: Ethics, artificial intelligence, and reliability. Sci. Eng. Ethics 2020, 26, 2749–2767. [Google Scholar] [CrossRef]
  5. Martyushev, N.V.; Malozyomov, B.V.; Sorokova, S.N.; Efremenkov, E.A.; Valuev, D.V.; Qi, M. Review Models and Methods for Determining and Predicting the Reliability of Technical Systems and Transport. Mathematics 2023, 11, 3317. [Google Scholar] [CrossRef]
  6. Antosz, K.; Machado, J.; Mazurkiewicz, D.; Antonelli, D.; Soares, F. Systems Engineering: Availability and Reliability. Appl. Sci. 2022, 12, 2504. [Google Scholar] [CrossRef]
  7. Jelinski, Z.; Moranda, P.B.; Freiberger, W. Statistical computer performance evaluation. Softw. Reliab. Res. 1972, 465–484. [Google Scholar]
  8. Goel, A.L.; Okumoto, K. Time-dependent error-detection rate model for software reliability and other performance measures. IEEE Trans. Reliab. 1979, 28, 206–211. [Google Scholar] [CrossRef]
  9. Kumar, P.; Singh, Y. An empirical study of software reliability prediction using machine learning techniques. Int. J. Syst. Assur. Eng. Manag. 2012, 3, 194–208. [Google Scholar] [CrossRef]
  10. Jaiswal, A.; Malhotra, R. Software reliability prediction using machine learning techniques. Int. J. Syst. Assur. Eng. Manag. 2018, 9, 230–244. [Google Scholar] [CrossRef]
  11. Misra, D. Mish: A Self Regularized Non-Monotonic Neural Activation Function. arXiv 2019, arXiv:1908.08681. [Google Scholar]
  12. Gulcehre, C.; Moczulski, M.; Denil, M.; Bengio, Y. Noisy activation functions. In Proceedings of the International Conference on Machine Learning, New York, NY, USA, 19–24 June 2016; pp. 3059–3068. [Google Scholar]
  13. Hossain, S.A.; Dahiya, R.C. Estimating the parameters of a non-homogeneous Poisson-process model for software reliability. IEEE Trans. Reliab. 1993, 42, 604–612. [Google Scholar] [CrossRef]
  14. Yamada, S.; Ohba, M.; Osaki, S. S-shaped reliability growth modeling for software fault detection. IEEE Trans. Reliab. 1983, 32, 475–484. [Google Scholar] [CrossRef]
  15. Ohba, M. Inflexion S-shaped software reliability growth models. In Stochastic Models in Reliability Theory; Osaki, S., Hatoyama, Y., Eds.; Springer: Berlin, Germany, 1984; pp. 144–162. [Google Scholar]
  16. Zhang, X.M.; Teng, X.L.; Pham, H. Considering fault removal efficiency in software reliability assessment. IEEE Trans. Syst. Man Cybern. A Syst. Hum. 2003, 33, 114–120. [Google Scholar] [CrossRef]
  17. Yamada, S.; Tokuno, K.; Osaki, S. Imperfect debugging models with fault introduction rate for software reliability assessment. Int. J. Syst. Sci. 1992, 23, 2241–2252. [Google Scholar] [CrossRef]
  18. Pham, H.; Zhang, X. An NHPP software reliability models and its comparison. Int. J. Reliab. Qual. Saf. Eng. 1997, 4, 269–282. [Google Scholar] [CrossRef]
  19. Pham, H.; Nordmann, L.; Zhang, X. A general imperfect software debugging model with S-shaped fault detection rate. IEEE Trans. Reliab. 1999, 48, 169–175. [Google Scholar] [CrossRef]
  20. Teng, X.; Pham, H. A new methodology for predicting software reliability in the random field environments. IEEE Trans. Reliab. 2006, 55, 458–468. [Google Scholar] [CrossRef]
  21. Roy, P.; Mahapatra, G.S.; Dey, K.N. An NHPP software reliability growth model with imperfect debugging and error generation. Int. J. Reliab. Qual. Saf. Eng. 2014, 21, 1–3. [Google Scholar] [CrossRef]
  22. Yang, B.; Xie, M. A study of operational and testing reliability in software reliability analysis. Reliab. Eng. Syst. Saf. 2000, 70, 323–329. [Google Scholar] [CrossRef]
  23. Huang, C.Y.; Kuo, S.Y.; Lyu, M.R.; Lo, J.H. Quantitative software reliability modeling from testing to operation. In Proceedings of the International Symposium on Software Reliability Engineering, IEEE, Los Alamitos, CA, USA, 8–11 October 2000; pp. 72–82. [Google Scholar]
  24. Pham, H. A new software reliability model with Vtub-Shaped fault detection rate and the uncertainty of operating environments. Optimization 2014, 63, 1481–1490. [Google Scholar] [CrossRef]
  25. Chang, I.H.; Pham, H.; Lee, S.W.; Song, K.Y. A testing-coverage software reliability model with the uncertainty of operation environments. Int. J. Syst. Sci. Oper. Logist. 2014, 1, 220–227. [Google Scholar]
  26. Song, K.Y.; Chang, I.H.; Pham, H. A Three-parameter fault-detection software reliability model with the uncertainty of operating environments. J. Syst. Sci. Syst. Eng. 2017, 26, 121–132. [Google Scholar] [CrossRef]
  27. Huang, C.Y.; Kuo, S.Y.; Lyu, M.R. An assessment of testing-effort dependent software reliability growth models. IEEE Trans. Reliab. 2007, 56, 198–211. [Google Scholar] [CrossRef]
  28. Ahmad, N.; Khan, M.G.; Rafi, L.S. A study of testing-effort dependent inflection S-shaped software reliability growth models with imperfect debugging. Int. J. Qual. Reliab. Manag. 2010, 27, 89–110. [Google Scholar] [CrossRef]
  29. Kim, Y.S.; Song, K.Y.; Pham, H.; Chang, I.H. A software reliability model with dependent failure and optimal release time. Symmetry 2022, 14, 343. [Google Scholar] [CrossRef]
  30. Lee, D.H.; Chang, I.H.; Pham, H. Software reliability model with dependent failures and SPRT. Mathematics 2020, 8, 1366. [Google Scholar] [CrossRef]
  31. Lee, D.H.; Chang, I.H.; Pham, H. Software reliability growth model with dependent failures and uncertain operating environments. Appl. Sci. 2022, 12, 12383. [Google Scholar] [CrossRef]
  32. Cai, K.Y.; Cai, L.; Wang, W.D.; Yu, Z.Y.; Zhang, D. On the neural network approach in software reliability modeling. J. Syst. Softw. 2001, 58, 47–62. [Google Scholar] [CrossRef]
  33. Miyamoto, S.; Tamura, Y.; Yamada, S. Reliability assessment tool based on deep learning and data preprocessing for OSS. Amer. J. Oper. Res. 2022, 12, 111–125. [Google Scholar] [CrossRef]
  34. Oveisi, S.; Moeini, A.; Mirzaei, S.; Farsi, M.A. LSTM encoder-decoder dropout model in software reliability prediction. Int. J. Reliab. Risk Saf. Theory Appl. 2021, 4, 1–12. [Google Scholar] [CrossRef]
  35. Raamesh, L.; Jothi, S.; Radhika, S. Enhancing software reliability and fault detection using hybrid brainstorm optimization-based LSTM model. IETE J. Res. 2022, 1–15. [Google Scholar] [CrossRef]
  36. Wu, C.Y.; Huang, C.Y. A study of incorporation of deep learning into software reliability modeling and assessment. IEEE Trans. Reliab. 2021, 70, 1621–1640. [Google Scholar] [CrossRef]
  37. Batool, I.; Khan, T.A. Software fault prediction using data mining, machine learning and deep learning techniques: A systematic literature review. Comput. Electr. Eng. 2022, 100, 107886. [Google Scholar] [CrossRef]
  38. Sreekanth, N.; Rama, D.J.; Shukla, A.; Mohanty, D.K.; Srinivas, A.; Rao, G.N.; Alam, A.; Gupta, A. Evaluation of estimation in software development using deep learning-modified neural network. Appl. Nanosci. 2023, 13, 2405–2417. [Google Scholar] [CrossRef]
  39. Bhuyan, M.K.; Mohapatra, D.P.; Sethi, S. Software Reliability Prediction using Fuzzy Min-Max Algorithm and Recurrent Neural Network Approach. Int. J. Electr. Comput. Eng. 2016, 6, 1929–1938. [Google Scholar]
  40. Mittelman, R. Time-series modeling with undecimated fully convolutional neural networks. arXiv 2015, arXiv:1508.00317. [Google Scholar]
  41. Pan, C.; Lu, M.; Xu, B.; Gao, H. An improved CNN model for within-project software defect prediction. Appl. Sci. 2019, 9, 2138. [Google Scholar] [CrossRef]
  42. Karunanithi, N.; Whitley, D.; Malaiya, Y.K. Using neural networks in reliability prediction. IEEE Softw. 1992, 9, 53–59. [Google Scholar] [CrossRef]
  43. Tamura, Y.; Yamada, S. Software reliability model selection based on deep learning with application to the optimal release problem. J. Ind. Eng. Manag. Sci. 2016, 1, 43–58. [Google Scholar] [CrossRef]
  44. Sutskever, I.; Vinyals, O.; Le, Q.V. Sequence to sequence learning with neural networks. Adv. Neural Inf. Process. Syst. 2014, 27, 3104–3112. [Google Scholar]
  45. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar]
  46. Chen, L.; Zheng, J.; Okamura, H.; Dohi, T. Software reliability prediction through encoder-decoder recurrent neural networks. Int. J. Math. Eng. Manag. Sci. 2022, 7, 325. [Google Scholar]
  47. Che, Z.; Purushotham, S.; Cho, K.; Sontag, D.; Liu, Y. Recurrent neural networks for multivariate time series with missing values. Sci. Rep. 2018, 8, 6085. [Google Scholar] [CrossRef] [PubMed]
  48. Munir, H.S.; Ren, S.; Mustafa, M.; Siddique, C.N.; Qayyum, S. Attention based GRU-LSTM for software defect prediction. PLoS ONE 2021, 16, e0247444. [Google Scholar] [CrossRef]
  49. Jónás, T. Sigmoid functions in reliability based management. Period. Polytech. Soc. Manag. Sci. 2007, 15, 67–72. [Google Scholar] [CrossRef]
  50. Kyurkchiev, N. A note on a hypothetical piecewise smooth sigmoidal growth function: Reaction network analysis, applications. Int. J. Differ. Equat. Appl. 2022, 21, 1–17. [Google Scholar]
  51. Lu, K.; Ma, Z. A modified whale optimization algorithm for parameter estimation of software reliability growth models. J. Algorithms Comput. Technol. 2021, 15, 17483026211034442. [Google Scholar] [CrossRef]
  52. Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
  53. Misra, P.N. Software reliability analysis. IBM Syst. J. 1983, 22, 262–270. [Google Scholar] [CrossRef]
  54. Wang, J. Model of open source software reliability with fault introduction obeying the generalized pareto distribution. Arab. J. Sci. Eng. 2021, 46, 3981–4000. [Google Scholar] [CrossRef]
  55. Inoue, S.; Yamada, S. Discrete software reliability assessment with discretized NHPP models. Comput. Math. Appl. 2006, 51, 161–170. [Google Scholar] [CrossRef]
  56. Chiu, K.C.; Huang, Y.S.; Lee, T.Z. A study of software reliability growth from the perspective of learning effects. Reliab. Eng. Syst. Saf. 2008, 93, 1410–1421. [Google Scholar] [CrossRef]
  57. Haque, M.A.; Ahmad, N. An effective software reliability growth model. Saf. Reliab. 2021, 40, 1–12. [Google Scholar] [CrossRef]
  58. Zhao, J.; Liu, H.W.; Cui, G.; Yang, X.Z. Software reliability growth model with change-point and environmental function. J. Syst. Softw. 2006, 79, 1578–1587. [Google Scholar] [CrossRef]
  59. Cameron, A.C.; Windmeijer, F.A. An R-squared measure of goodness of fit for some common nonlinear regression models. J. Econom. 1997, 77, 329–342. [Google Scholar] [CrossRef]
  60. Pillai, K.; Nair, V.S. A model for software development effort and cost estimation. IEEE Trans. Softw. Eng. 1997, 23, 485–497. [Google Scholar] [CrossRef]
  61. Anjum, M.; Haque, M.A.; Ahmad, N. Analysis and ranking of software reliability models based on weighted criteria value. Int. J. Inform. Tech. Comp. Sci. 2013, 2, 1–14. [Google Scholar] [CrossRef]
  62. Sharma, K.; Garg, R.; Nagpal, C.K.; Garg, R.K. Selection of optimal software reliability growth models using a distance based approach. IEEE Trans. Reliab. 2010, 59, 266–276. [Google Scholar] [CrossRef]
  63. Selvakumar, K.; Lokesh, S. Retracted Article: The prediction of the lifetime of the new coronavirus in the USA using mathematical models. Soft Comput. 2021, 25, 10575–10594. [Google Scholar] [CrossRef]
  64. Dhaka, R.; Pachauri, B.; Jain, A. Two-dimensional software reliability model with considering the uncertainty in operating environment and predictive analysis. In Data Engineering for Smart Systems; Springer: Singapore, 2022; pp. 57–69. [Google Scholar]
  65. Musa, J.D.; Iannino, A.; Okumoto, K. Software Reliability: Measurement, Prediction, Application; McGraw-Hill: New York, NY, USA, 1987. [Google Scholar]
  66. Lo, J.H.; Huang, C.Y.; Chen, Y.; Kuo, S.Y.; Lyu, M.R. Reliability assessment and sensitivity analysis of software reliability growth modeling based on software module structure. J. Syst. Softw. 2005, 76, 3–13. [Google Scholar] [CrossRef]
  67. Li, X.; Xie, M.; Ng, S.H. Sensitivity analysis of release time of software reliability models incorporating testing effort with multiple change-points. Appl. Math. Model. 2010, 34, 3560–3570. [Google Scholar] [CrossRef]
Figure 1. Structure of DNN model.
Figure 2. (a) Structure of RNN model, (b) structure of LSTM model, and (c) structure of GRU model.
Figure 3. Prediction of all models for Dataset 1.
Figure 4. Prediction of all models for Dataset 2.
Figure 5. (a) Predicted change in parameter b for Dataset 1 and (b) predicted change in parameter b for Dataset 2.
Figure 6. Comparison of MSE by parameter b change.
Table 1. Software reliability growth models.
No. | Model | Mean Value Function | Note
1 | Goel–Okumoto (GO) [8] | m(t) = a(1 − e^{−bt}) | Concave
2 | Ohba (IS) [15] | m(t) = a(1 − e^{−bt}) / (1 + βe^{−bt}) | S-shaped
3 | Zhang et al. (ZFR) [16] | m(t) = (a/(p − β)) [1 − ((1 + α)e^{−bt} / (1 + αe^{−bt}))^{(c/b)(p − β)}] | S-shaped
4 | Yamada et al. (YID 1) [17] | m(t) = (ab/(α + b)) (e^{αt} − e^{−bt}) | Concave
5 | Yamada et al. (YID 2) [17] | m(t) = a(1 − e^{−bt})(1 − α/b) + αat | Concave
6 | Pham–Zhang (PZ) [18] | m(t) = [(c + a)(1 − e^{−bt}) − (ab/(b − α))(e^{−αt} − e^{−bt})] / (1 + βe^{−bt}) | Both
7 | Pham et al. (PNZ) [19] | m(t) = [a(1 − e^{−bt})(1 − α/b) + αat] / (1 + βe^{−bt}) | Both
8 | Teng–Pham (TP) [20] | m(t) = (a/(p − q)) {1 − [β / (β + (p − q) ln((c + e^{bt})/(c + 1)))]^{α}} | S-shaped
9 | Roy et al. (RMD) [21] | m(t) = aα(1 − e^{−bt}) − (ab/(b − β))(e^{−βt} − e^{−bt}) | Concave
10 | Pham (Vtub) [24] | m(t) = N [1 − (β/(β + a^{t^b} − 1))^{α}] | S-shaped
11 | Chang et al. (TC) [25] | m(t) = N [1 − (β/(β + (at)^b))^{α}] | Both
12 | Song et al. (3P) [26] | m(t) = N [1 − β / (β − (a/b) ln((1 + c)e^{−bt} / (1 + ce^{−bt})))] | S-shaped
13 | Kim et al. (DPF1) [29] | m(t) = a / [1 + ah ((1 + c)/(c + e^{bt}))^{a}] | S-shaped, dependent
14 | Lee et al. (DPF2) [30] | m(t) = a / [1 + ah ((b + c)/(c + be^{bt}))^{a/b}] | S-shaped, dependent
15 | Lee et al. (UDPF) [31] | m(t) = N [1 − (β/(α + bt − ln(bt + 1)))^{α}] | S-shaped, dependent
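As a concrete example of the first entry in the table, the Goel–Okumoto mean value function can be evaluated and its two parameters fitted by least squares. The data and the crude grid search below are illustrative assumptions, not the paper's datasets or its estimation method:

```python
import math

def go_mean_value(t, a, b):
    """Goel-Okumoto (GO) mean value function: m(t) = a(1 - e^{-bt})."""
    return a * (1.0 - math.exp(-b * t))

# Hypothetical cumulative failure counts (t, y) -- illustrative only
data = [(1, 27), (2, 43), (3, 54), (4, 64), (5, 71), (6, 77), (7, 81)]

def sse(a, b):
    """Sum of squared errors between the fitted curve and the observations."""
    return sum((go_mean_value(t, a, b) - y) ** 2 for t, y in data)

# Crude grid search; in practice MLE or a nonlinear least-squares solver is used
a_hat, b_hat = min(((a, b / 100) for a in range(80, 161, 2)
                    for b in range(10, 60)),
                   key=lambda p: sse(*p))
print(f"a_hat = {a_hat}, b_hat = {b_hat}")
```

The same template applies to the other concave models in the table by swapping the mean value function.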
Table 2. Software reliability models using deep learning and the proposed deep-learning software reliability model.
No. | Model | Mean Value Function and Structure
16 | DNN | Hidden layer = 4, Activation function = Sigmoid, Optimizer = Adam
17 | RNN | Difference = 1, Optimizer = Adam
18 | LSTM | Difference = 1, Optimizer = Adam
19 | GRU | Difference = 1, Optimizer = Adam
20 | Proposed model | m(t) = 1 / (1 + e^{−b²t/(bt + 1)})
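The idea in row 20, replacing the sigmoid with an SRGM-shaped curve, can be illustrated with a tiny forward pass in pure Python. The GO-shaped unit below is only a shape-compatible stand-in for the paper's activation, and the weights and input are made up:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def srgm_activation(x, b=0.52):
    """GO-shaped unit 1 - e^{-b x} on the non-negative part of the input.
    A stand-in with the same saturating shape as an SRGM mean value curve;
    the proposed model derives its activation from the SRGM itself."""
    return 1.0 - math.exp(-b * max(x, 0.0))

def dense(inputs, weights, act):
    """One fully connected layer: weights[j][i] maps input i to unit j."""
    return [act(sum(w_i * x_i for w_i, x_i in zip(row, inputs)))
            for row in weights]

random.seed(0)
W1 = [[random.gauss(0, 1) for _ in range(3)] for _ in range(4)]  # 3 -> 4
x = [0.5, -1.2, 0.8]  # one hypothetical sample with three features

hidden_sigmoid = dense(x, W1, sigmoid)       # conventional hidden layer
hidden_srgm = dense(x, W1, srgm_activation)  # SRGM-shaped hidden layer
print(hidden_sigmoid)
print(hidden_srgm)
```

Because both functions saturate toward 1, the SRGM-shaped unit slots into the same position as the sigmoid without changing the layer's structure.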
Table 3. Criteria for model comparisons.
No. | Criteria | Formula
1 | MSE | Σ_{i=1}^{n} (m̂(t_i) − y_i)² / n
2 | MAE | Σ_{i=1}^{n} |m̂(t_i) − y_i| / n
3 | PRR | Σ_{i=1}^{n} [(m̂(t_i) − y_i) / m̂(t_i)]²
4 | PP | Σ_{i=1}^{n} [(m̂(t_i) − y_i) / y_i]²
5 | R² | 1 − Σ_{i=1}^{n} (m̂(t_i) − y_i)² / Σ_{i=1}^{n} (y_i − ȳ)²
6 | RMSPE | √(Variance² + Bias²)
7 | MEOP | Σ_{i=1}^{n} |m̂(t_i) − y_i| / (n + 1)
8 | TS | 100 √(Σ_{i=1}^{n} (y_i − m̂(t_i))² / Σ_{i=1}^{n} y_i²) %
9 | PC | (n/2) ln(Σ_{i=1}^{n} (m̂(t_i) − y_i)² / n) + n(1 − 1/n)
10 | preSSE | Σ_{i=k+1}^{n} (m̂(t_i) − y_i)²
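Several of the criteria in Table 3 are direct to compute. The sketch below implements MSE, MAE, PRR, PP, R², and TS for illustrative fitted and observed values, not the paper's results:

```python
import math

def criteria(pred, actual):
    """Subset of the Table 3 comparison criteria (lower is better, except R^2).

    pred   -- fitted values m_hat(t_i)
    actual -- observed cumulative failure counts y_i
    """
    n = len(actual)
    resid = [p - y for p, y in zip(pred, actual)]
    sse = sum(r * r for r in resid)
    ybar = sum(actual) / n
    return {
        "MSE": sse / n,
        "MAE": sum(abs(r) for r in resid) / n,
        "PRR": sum(((p - y) / p) ** 2 for p, y in zip(pred, actual)),
        "PP":  sum(((p - y) / y) ** 2 for p, y in zip(pred, actual)),
        "R2":  1.0 - sse / sum((y - ybar) ** 2 for y in actual),
        "TS":  100.0 * math.sqrt(sse / sum(y * y for y in actual)),
    }

# Hypothetical fitted values against observed failure counts
out = criteria([26.0, 44.0, 55.0, 63.0], [27.0, 43.0, 54.0, 64.0])
print(out)
```

RMSPE, MEOP, PC, and preSSE follow the same pattern once the bias/variance terms and the prediction split point k are defined.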
Table 4. Parameter estimation and structure of model from Dataset 1.
No. | Model | Estimation and Structure
1 | GO | â = 405.3473, b̂ = 0.01966
2 | IS | â = 398.5632, b̂ = 0.02049, β̂ = 0.02548
3 | ZFR | â = 48.7172, b̂ = 0.00231, α̂ = 0.04331, β̂ = 0.23542, ĉ = 0.17070, p̂ = 0.35612
4 | YID 1 | â = 205.0374, b̂ = 0.04056, α̂ = 0.01222
5 | YID 2 | â = 98.4888, b̂ = 0.08904, α̂ = 0.04632
6 | PZ | â = 27.1595, b̂ = 0.06871, α̂ = 0.02132, β̂ = 2.16867, ĉ = 232.0570
7 | PNZ | â = 40.2796, b̂ = 0.29103, α̂ = 0.13231, β̂ = 0.05003
8 | TP | â = 17840.283, b̂ = 0.02703, α̂ = 0.00375, β̂ = 0.01292, ĉ = 15.38103, p̂ = 0.60263, q̂ = 0.10827
9 | RMD | â = 42.38821, b̂ = 0.02091, α̂ = 10.00201, β̂ = 0.00959
10 | Vtub | â = 1.82256, b̂ = 0.50401, α̂ = 0.52648, β̂ = 12.54998, N̂ = 399.1401
11 | TC | â = 0.04460, b̂ = 0.84140, α̂ = 12.34453, β̂ = 220.2973, N̂ = 2627.4388
12 | 3P | â = 0.09480, b̂ = 0.000000315, β̂ = 8.18127, ĉ = 0.01147, N̂ = 706.1525
13 | DPF 1 | â = 209.7896, b̂ = 0.00754, ĉ = 12.9492, ĥ = 31.45712
14 | DPF 2 | â = 206.8508, b̂ = 0.00316, ĉ = 5.22432, ĥ = 28.1848
15 | UDPF | b̂ = 0.01562, α̂ = 0.49790, β̂ = 0.49731, N̂ = 472.0649
16 | DNN | Hidden layer = 4, Optimizer = Adam, Epoch = 100
17 | RNN | Difference = 1, Optimizer = Adam, Epoch = 100
18 | LSTM | Difference = 1, Optimizer = Adam, Epoch = 100
19 | GRU | Difference = 1, Optimizer = Adam, Epoch = 100
20 | Proposed model | b̂ = 0.52, Hidden layer = 4, Optimizer = Adam, Epoch = 100
Table 5. Comparison of all criteria from Dataset 1.
No. | Model | MSE | MAE | PRR | PP | R² | RMSPE | MEOP | TS | PC | preSSE
1 | GO | 13.9920 | 3.1187 | 1.0978 | 0.4134 | 0.9954 | 3.7951 | 3.0296 | 2.9696 | 45.8249 | 138.994
2 | IS | 14.0172 | 3.1232 | 1.1019 | 0.4145 | 0.9954 | 3.7983 | 3.0339 | 2.9723 | 45.8555 | 145.058
3 | ZFR | 13.9948 | 3.1191 | 1.0976 | 0.4133 | 0.9954 | 3.7955 | 3.0300 | 2.9699 | 45.8282 | 140.045
4 | YID 1 | 13.6219 | 3.1393 | 0.9056 | 0.3590 | 0.9955 | 3.7452 | 3.0496 | 2.9301 | 45.3691 | 73.479
5 | YID 2 | 13.3450 | 3.1495 | 0.7134 | 0.3016 | 0.9956 | 3.7073 | 3.0595 | 2.9002 | 45.0200 | 52.222
6 | PZ | 6.9523 | 2.1584 | 0.0684 | 0.0789 | 0.9977 | 2.6764 | 2.0967 | 2.0933 | 33.9348 | 232.611
7 | PNZ | 14.2497 | 3.1105 | 0.2259 | 0.1308 | 0.9953 | 3.8316 | 3.0216 | 2.9969 | 46.1351 | 70.068
8 | TP | 13.1042 | 3.0831 | 0.8088 | 0.3311 | 0.9957 | 3.6736 | 2.9950 | 2.8739 | 44.7105 | 79.662
9 | RMD | 13.9770 | 3.1179 | 1.0934 | 0.4122 | 0.9954 | 3.7931 | 3.0288 | 2.9681 | 45.8067 | 137.644
10 | Vtub | 7.9408 | 2.3265 | 0.0578 | 0.0507 | 0.9974 | 2.8602 | 2.2601 | 2.2372 | 36.1948 | 109.486
11 | TC | 10.9547 | 2.8162 | 0.2133 | 0.1326 | 0.9964 | 3.3595 | 2.7357 | 2.6276 | 41.6646 | 52.177
12 | 3P | 13.7490 | 3.1007 | 1.0277 | 0.3939 | 0.9955 | 3.7623 | 3.0121 | 2.9437 | 45.5270 | 120.537
13 | DPF 1 | 20.3656 | 3.0736 | 0.4568 | 1.4588 | 0.9933 | 4.5774 | 2.9857 | 3.5827 | 52.2060 | 672.499
14 | DPF 2 | 17.4659 | 3.2987 | 0.3703 | 0.9978 | 0.9942 | 4.2419 | 3.2044 | 3.3179 | 49.5949 | 886.313
15 | UDPF | 8.7018 | 2.4885 | 0.0855 | 0.0947 | 0.9971 | 2.9942 | 2.4174 | 2.3419 | 37.7505 | 113.299
16 | DNN | 5.9825 | 2.4459 | 0.0600 | 0.0726 | 0.9980 | 2.4459 | 2.3760 | 1.9418 | 31.3809 | 17.932
17 | RNN | 5.6534 | 2.3462 | 0.0606 | 0.0734 | 0.9981 | 2.3787 | 2.2772 | 1.9312 | 29.5519 | 36.121
18 | LSTM | 2.5088 | 1.5576 | 0.0285 | 0.0324 | 0.9991 | 1.5847 | 1.5118 | 1.2865 | 16.1464 | 12.845
19 | GRU | 3.2422 | 1.7118 | 0.0364 | 0.0418 | 0.9989 | 1.8033 | 1.6614 | 1.4625 | 20.3779 | 17.703
20 | Proposed model | 1.5764 | 1.2555 | 0.0173 | 0.0191 | 0.9995 | 1.2555 | 1.2197 | 0.9968 | 8.7077 | 4.728
Table 6. Parameter estimation and structure of models from Dataset 2.
No. | Model | Estimation and Structure
1 | GO | â = 1200.6573, b̂ = 0.02594
2 | IS | â = 1090.2373, b̂ = 0.03435, β̂ = 0.21312
3 | ZFR | â = 395.6997, b̂ = 0.01072, α̂ = 0.13188, β̂ = 1.29491, ĉ = 1.07014, p̂ = 1.06829
4 | YID 1 | â = 735.9191, b̂ = 0.04609, α̂ = 0.00821
5 | YID 2 | â = 560.7125, b̂ = 0.06328, α̂ = 0.01755
6 | PZ | â = 0.02306, b̂ = 0.00172, α̂ = 0.00172, β̂ = 0.02247, ĉ = 0.92125
7 | PNZ | â = 249.3358, b̂ = 0.44707, α̂ = 0.05410, β̂ = 9.86286
8 | TP | â = 16.13260, b̂ = 0.01055, α̂ = 0.07844, β̂ = 0.00020, ĉ = 0.86459, p̂ = 1.07014, q̂ = 1.06829
9 | RMD | â = 53.30543, b̂ = 0.02660, α̂ = 22.98703, β̂ = 0.01071
10 | Vtub | â = 8.59529, b̂ = 0.50025, α̂ = 0.00184, β̂ = 17.01916, N̂ = 39,113.5827
11 | TC | â = 0.07169, b̂ = 1.02720, α̂ = 0.32387, β̂ = 2.09164, N̂ = 3111.1052
12 | 3P | â = 1.44644, b̂ = 0.00468, β̂ = 2.31208, ĉ = 30.67715, N̂ = 1663.2977
13 | DPF 1 | â = 912.3438, b̂ = 0.00016, ĉ = 0.56449, ĥ = 145.4555
14 | DPF 2 | â = 955.6838, b̂ = 0.00280, ĉ = 34.39205, ĥ = 177.6015
15 | UDPF | b̂ = 2.75584, α̂ = 27.11226, β̂ = 3.15487, N̂ = 1490.17003
16 | DNN | Hidden layer = 4, Optimizer = Adam, Epoch = 100
17 | RNN | Difference = 1, Optimizer = Adam, Epoch = 100
18 | LSTM | Difference = 1, Optimizer = Adam, Epoch = 100
19 | GRU | Difference = 1, Optimizer = Adam, Epoch = 100
20 | Proposed model | b̂ = 0.76, Hidden layer = 4, Optimizer = Adam, Epoch = 100
Table 7. Comparison of all criteria from Dataset 2.
No. | Model | MSE | PRR | PP | R² | RMSPE | MAE | MEOP | TS | PC | preSSE
1 | GO | 417.7167 | 1.0046 | 3.0998 | 0.9936 | 20.6300 | 15.4403 | 15.1595 | 3.3075 | 163.9212 | 146.460
2 | IS | 473.8190 | 0.9943 | 3.0358 | 0.9927 | 21.9712 | 17.7702 | 17.4471 | 3.5226 | 167.3238 | 2358.03
3 | ZFR | 423.4813 | 1.0000 | 3.0371 | 0.9935 | 20.7719 | 15.5361 | 15.2536 | 3.3302 | 164.2912 | 190.859
4 | YID 1 | 369.1211 | 1.0917 | 4.1090 | 0.9943 | 19.3920 | 14.9690 | 14.6969 | 3.1091 | 160.5819 | 2702.19
5 | YID 2 | 369.8950 | 1.1528 | 4.7333 | 0.9943 | 19.4118 | 14.9997 | 14.7270 | 3.1124 | 160.6384 | 3432.36
6 | PZ | 753.2823 | 1.0465 | 3.3918 | 0.9884 | 27.6128 | 24.8414 | 24.3897 | 4.4415 | 179.8414 | 2233.55
7 | PNZ | 301.2450 | 0.1147 | 0.1115 | 0.9954 | 17.5194 | 14.3911 | 14.1294 | 2.8087 | 155.0954 | 17,417.7
8 | TP | 365.2466 | 1.1372 | 4.5168 | 0.9944 | 19.2898 | 14.4809 | 14.2176 | 3.0928 | 160.2969 | 1390.88
9 | RMD | 416.1812 | 1.0054 | 3.1091 | 0.9936 | 20.5921 | 15.3974 | 15.1175 | 3.3014 | 163.8217 | 133.128
10 | Vtub | 306.2931 | 0.8128 | 2.1844 | 0.9953 | 17.6654 | 13.7406 | 13.4908 | 2.8322 | 155.5441 | 3147.84
11 | TC | 367.9481 | 1.0283 | 3.4991 | 0.9943 | 19.3617 | 14.7116 | 14.4441 | 3.1042 | 160.4959 | 487.299
12 | 3P | 380.4131 | 1.0555 | 3.6816 | 0.9941 | 19.6872 | 14.9196 | 14.6484 | 3.1563 | 161.3954 | 245.064
13 | DPF 1 | 1686.1575 | 2.6213 | 86.7665 | 0.9740 | 41.4468 | 30.7013 | 30.1431 | 6.6451 | 201.5971 | 10,674.2
14 | DPF 2 | 1947.4844 | 2.7591 | 125.063 | 0.9700 | 44.5237 | 29.2403 | 28.7087 | 7.1415 | 205.4874 | 1217.06
15 | UDPF | 618.8834 | 1.5780 | 14.0278 | 0.9905 | 25.1109 | 18.8242 | 18.4820 | 4.0258 | 174.5352 | 577.008
16 | DNN | 521.7809 | 0.8215 | 3.3469 | 0.9920 | 22.8425 | 22.8425 | 22.4272 | 3.6966 | 169.9272 | 2606.87
17 | RNN | 266.8270 | 0.5646 | 1.7034 | 0.9958 | 16.3355 | 16.3008 | 15.9989 | 2.6739 | 149.0260 | 1192.65
18 | LSTM | 265.2696 | 0.5696 | 1.6538 | 0.9958 | 16.2892 | 16.1755 | 15.8759 | 2.6661 | 148.8709 | 1034.38
19 | GRU | 298.8604 | 0.6227 | 1.8510 | 0.9953 | 17.2917 | 17.0695 | 16.7534 | 2.8299 | 152.0305 | 978.347
20 | Proposed model | 34.9817 | 0.1325 | 0.2242 | 0.9995 | 5.9145 | 5.9145 | 5.807 | 0.9571 | 96.9617 | 174.911
Table 8. Comparison of MSE by parameter b change.
Data | −10% | −5% | 5% | 10%
Dataset 1 | 3.3781 | 2.4595 | 1.6995 | 2.0537
Dataset 2 | 1.5870 | 1.2583 | 1.1185 | 1.3501
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
