Article

A Software Reliability Model with a Weibull Fault Detection Rate Function Subject to Operating Environments

1 Department of Computer Science and Statistics, Chosun University, 309 Pilmun-daero Dong-gu, Gwangju 61452, Korea
2 Department of Industrial and Systems Engineering, Rutgers University, 96 Frelinghuysen Road, Piscataway, NJ 08855-8018, USA
* Author to whom correspondence should be addressed.
Appl. Sci. 2017, 7(10), 983; https://doi.org/10.3390/app7100983
Submission received: 21 August 2017 / Revised: 20 September 2017 / Accepted: 22 September 2017 / Published: 25 September 2017

Featured Application

This study introduces a new software reliability model with the Weibull fault detection rate function that takes into account the uncertainty of operating environments.

Abstract

When software systems are released, they may operate in field environments that are the same as or close to the development-testing environment, but they may also be deployed in many locations whose conditions differ from those under which the software was developed and tested. This makes it difficult to improve software reliability, because factors such as the operating environment and the location of bugs in the code are not fully known in advance. In this paper, we propose a new software reliability model that takes into account the uncertainty of operating environments. The explicit mean value function solution for the proposed model is presented. Examples illustrate the goodness of fit of the proposed model and several existing non-homogeneous Poisson process (NHPP) models, together with the confidence intervals of all models, based on two sets of failure data collected from software applications. The results show that the proposed model fits the data significantly more closely than the other NHPP models considered.

1. Introduction

Software systems have become an essential part of our lives. They are important because their reliability and stability make it possible to provide high-quality services to customers. However, software development is a difficult and complex process, so the main focus of software companies is on improving the reliability and stability of their software systems. This has prompted research in software reliability engineering, and many software reliability growth models (SRGM) have been proposed over the past decades. Many existing non-homogeneous Poisson process (NHPP) software reliability models have been developed, through the fault intensity rate function and the mean value function $m(t)$, within a controlled testing environment, to estimate reliability metrics such as the number of residual faults, the failure rate, and the reliability of the software. Generally, reliability increases quickly at first and the improvement then slows down. Software reliability models are used to estimate and predict the reliability, number of remaining faults, failure intensity, total development cost, and so forth, of software, and various models and application studies have been developed to date. Estimating the confidence intervals of software reliability is an active topic in the field because it can support software release decisions and help control the related expenditures for software testing [1]. Yamada and Osaki [2] first showed that maximum likelihood estimates of the confidence interval of the mean value function can be obtained. Yin and Trivedi [3] presented confidence bounds for the model parameters via a Bayesian approach. Huang [4] also presented a graph illustrating the confidence interval of the mean value function. Gonzalez et al. [5] presented a general NHPP-based methodology for the analysis of repairable systems, applied to a power distribution test system, that incorporates the effect of weather conditions and the aging of components on the system reliability indexes and can handle several such conditions at the same time. Nagaraju and Fiondella [6] presented an adaptive expectation-maximization algorithm for NHPP software reliability growth models and illustrated its steps through a detailed example, demonstrating improved flexibility over the standard expectation-maximization (EM) algorithm. Srivastava and Mondal [7] proposed a predictive maintenance model for an N-component repairable system that integrates NHPP models with the concept of system availability, so that the use of costly predictive maintenance technology is minimized. Kim et al. [8] described the application of a software reliability model to a target system to increase its software reliability, and presented analytical methods along with prediction and estimation results.
Chatterjee and Singh [9] proposed an NHPP-based software reliability model that incorporates a logistic-exponential testing coverage function with imperfect debugging. In addition, Chatterjee and Shukla [10] developed a software reliability model that considers different types of faults and incorporates both imperfect debugging and a change point. Yamada et al. [11] developed a software reliability growth model incorporating the amount of test effort expended during the software testing phase. Joh et al. [12] proposed a new Weibull-distribution-based vulnerability discovery model. Sagar et al. [13] presented a software reliability growth model combining features of both the Weibull distribution and the inflection S-shaped SRGM to estimate the defects of a software system, intended to help researchers and the software industry develop highly reliable software products.
Generally, existing models are applied to software testing data and then used to make predictions about software failures and reliability in the field. The important point here is that the test environment and the operational environment differ from each other. Once software systems are released, some are used in field environments that are the same as or close to the development-testing environment, but the systems may also be used in many different locations. Several researchers have therefore begun to incorporate the factor of operational environments. Yang and Xie [14], Huang et al. [15], and Zhang et al. [16] proposed methods of predicting the fault detection rate to reflect changes in operating environments, modifying the software reliability model for the operating environment by introducing a calibration factor. Teng and Pham [17] discussed a generalized model that captures the uncertainty of the environment and its effects upon the software failure rate. Pham [18,19] and Chang et al. [20] developed software reliability models incorporating the uncertainty of the system fault detection rate per unit of time subject to the operating environment. Honda et al. [21] proposed a generalized software reliability model (GSRM) based on a stochastic process and simulated developments that include uncertainties and dynamics. Pham [22] recently presented a new generalized software reliability model subject to the uncertainty of operating environments. Song et al. [23] also presented a model with a three-parameter fault detection rate in the software development process, relating it to the error detection rate function while considering the uncertainty of operating environments.
In this paper, we discuss a new model that uses a Weibull fault detection rate function in the software development process and relates it to the error detection rate function while considering the uncertainty of operating environments. We examine the goodness of fit of the proposed model and of other existing NHPP models based on several sets of software testing data. The explicit solution of the mean value function for the new model is derived in Section 2. Criteria for model comparison and the confidence intervals used to select the best model are discussed in Section 3. Model analysis and results are discussed in Section 4. Section 5 presents the conclusions and remarks.

2. A New Software Reliability Model

In this section, we propose a new NHPP software reliability model. We first describe the NHPP software reliability framework, and then derive the new mean value function by substituting the new fault detection rate function into the generalized NHPP software reliability model, which incorporates the uncertainty of the fault detection rate per unit of time in the operating environments.

2.1. Non-Homogeneous Poisson Process Model

The software fault detection process has been widely formulated by using a counting process. A counting process $\{N(t),\ t \ge 0\}$ is said to be a non-homogeneous Poisson process with intensity function $\lambda(t)$ if $N(t)$ follows a Poisson distribution with mean value function $m(t)$, namely,
$$\Pr\{N(t) = n\} = \frac{\{m(t)\}^n}{n!}\exp\{-m(t)\}, \quad n = 0, 1, 2, \ldots$$
The mean value function $m(t)$, which is the expected number of faults detected by time $t$, can be expressed as
$$m(t) = \int_0^t \lambda(s)\,ds$$
where $\lambda(t)$ represents the failure intensity.
A general framework for NHPP-based SRGM has been proposed by Pham et al. [24]. They modeled $m(t)$ using the differential equation
$$\frac{d\,m(t)}{dt} = b(t)\left[a(t) - m(t)\right] \quad (1)$$
Solving Equation (1) with different choices of $a(t)$ and $b(t)$, which reflect various assumptions about the software testing process, yields different forms of $m(t)$.
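To make the framework concrete, the following Python sketch (ours, not the authors' code) evaluates $m(t)$ by numerically integrating an intensity function and then uses it in the Poisson probability above. The G-O intensity $\lambda(t) = abe^{-bt}$ (model 1 in Table 1) and the parameter values are illustrative assumptions only.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import poisson

# Hypothetical parameter values, chosen only for illustration.
a, b = 100.0, 0.3

def intensity(t):
    """G-O failure intensity lambda(t) = a*b*exp(-b*t) (model 1 in Table 1)."""
    return a * b * np.exp(-b * t)

def mean_value(t):
    """m(t) = integral of lambda(s) from 0 to t, evaluated numerically."""
    value, _ = quad(intensity, 0.0, t)
    return value

t = 5.0
m_t = mean_value(t)              # expected number of faults detected by time t
p_10 = poisson.pmf(10, m_t)      # Pr{N(t) = 10} from the Poisson pmf above
print(m_t, p_10)
```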

2.2. Weibull Fault Detection Rate Function Model

A generalized NHPP model incorporating the uncertainty of operating environments can be formulated as follows [19]:
$$\frac{d\,m(t)}{dt} = \eta\,b(t)\left[N - m(t)\right] \quad (2)$$
where $\eta$ is a random variable that represents the uncertainty of the system fault detection rate in the operating environments, with probability density function $g$; $N$ is the expected number of faults that exist in the software before testing; $b(t)$ is the fault detection rate function, which also represents the average failure rate of a fault; and $m(t)$ is the expected number of errors detected by time $t$ (the mean value function). We propose an NHPP software reliability model incorporating the uncertainty of the operating environment, using Equation (2) and the following assumptions [19,23]:
(a) The occurrence of software failures follows an NHPP.
(b) Software can fail during execution, caused by faults in the software.
(c) The software failure detection rate at any time is proportional to the number of remaining faults in the software at that time.
(d) When a software failure occurs, a debugging effort removes the faults immediately.
(e) For each debugging effort, regardless of whether the faults are successfully removed, some new faults may be introduced into the software system.
(f) The environment affects the unit failure detection rate, $b(t)$, by multiplying it by a factor $\eta$.
The solution for the mean value function $m(t)$, with the initial condition $m(0) = 0$, is given by [19]:
$$m(t) = \int_{\eta} N\left(1 - e^{-\eta \int_0^t b(x)\,dx}\right)\,dg(\eta) \quad (3)$$
Pham [22] recently developed a generalized software reliability model incorporating the uncertainty of the fault detection rate per unit of time in the operating environments, in which the random variable $\eta$ has a generalized probability density function $g$ with two parameters, $\alpha \ge 0$ and $\beta \ge 0$; the mean value function from Equation (3) is then given by:
$$m(t) = N\left(1 - \frac{\beta}{\beta + \int_0^t b(s)\,ds}\right)^{\alpha} \quad (4)$$
where $b(t)$ is the fault detection rate per fault per unit of time.
The Weibull distribution is one of the most commonly used distributions for modeling irregular data; it is easy to interpret and very useful. It can be used in place of the normal distribution for skewed data, and it is a standard lifetime distribution in reliability engineering, with applications in reliability, quality control, duration, and failure time modeling. It can be used widely and effectively in reliability applications because its density and failure rate functions take a wide variety of shapes, making it suitable for fitting many types of data. Software development has often been described by Weibull-type curves, and the discrete Weibull distribution can flexibly describe the stochastic behavior of failure occurrence times. A Weibull-based method has been shown to be significantly better than Laplacian-based rate prediction, and both the logistic and Weibull distributions yield an S-shaped cumulative distribution function for the lifetime of a software product [25,26].
In this paper, we consider the following Weibull fault detection rate function $b(t)$:
$$b(t) = a^b\,b\,t^{b-1}, \quad a, b > 0 \quad (5)$$
where $a$ and $b$ are known as the scale and shape parameters, respectively. The Weibull fault detection rate function $b(t)$ is decreasing for $b < 1$, increasing for $b > 1$, and constant when $b = 1$. Substituting this $b(t)$ into Equation (4), we obtain a new NHPP software reliability model subject to the uncertainty of the environments, whose mean value function $m(t)$ gives the expected number of software failures detected by time $t$:
$$m(t) = N\left(1 - \frac{\beta}{\beta + (at)^b}\right)^{\alpha} \quad (6)$$
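As a quick sanity check on Equations (4)–(6), the following Python sketch (ours, illustrative only) evaluates the closed-form $m(t)$ and compares it with Equation (4) evaluated by numerically integrating $b(t)$; the parameter values are the Dataset #1 estimates reported in Table 4 for the proposed model.

```python
import numpy as np
from scipy.integrate import quad

# Parameter values taken from the Dataset #1 estimates in Table 4 (NEW model row).
a, b, alpha, beta, N = 0.095, 15.606, 0.085, 1.855, 116.551

def b_weibull(t):
    """Weibull fault detection rate, Equation (5): b(t) = a^b * b * t^(b-1)."""
    return (a ** b) * b * t ** (b - 1.0)

def m_closed(t):
    """Closed-form mean value function, Equation (6)."""
    return N * (1.0 - beta / (beta + (a * t) ** b)) ** alpha

def m_numeric(t):
    """Equation (4) with the cumulative detection rate integrated numerically."""
    cum, _ = quad(b_weibull, 0.0, t)
    return N * (1.0 - beta / (beta + cum)) ** alpha

for t in (1.0, 5.0, 13.0):
    print(t, m_closed(t), m_numeric(t))   # the two columns should agree closely
```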

3. Model Comparisons

In this section, we present a set of comparison criteria for best model selection, quantitatively compare the models using these comparison criteria, and obtain the confidence intervals of the NHPP software reliability model.

3.1. Criteria for Model Comparisons

Once the analytical expression for the mean value function $m(t)$ is derived, the model parameters in the mean value function can be estimated with the help of a Matlab program developed using the least-squares estimation (LSE) method. Five common criteria [27,28], namely the mean squared error (MSE), the sum absolute error (SAE), the predictive ratio risk (PRR), the predictive power (PP), and Akaike's information criterion (AIC), will be used to assess the goodness of fit of the models and to compare the proposed model with the existing models listed in Table 1. Table 1 summarizes the proposed model and several existing well-known NHPP models with different mean value functions. Note that models 9 and 10 in Table 1 also consider environmental uncertainty.
The mean squared error is given by
$$\text{MSE} = \frac{\sum_{i=1}^{n}\left(\hat{m}(t_i) - y_i\right)^2}{n - m}$$
The sum absolute error is given by
$$\text{SAE} = \sum_{i=1}^{n}\left|\hat{m}(t_i) - y_i\right|$$
The predictive ratio risk and the predictive power are given as follows:
$$\text{PRR} = \sum_{i=1}^{n}\left(\frac{\hat{m}(t_i) - y_i}{\hat{m}(t_i)}\right)^2, \qquad \text{PP} = \sum_{i=1}^{n}\left(\frac{\hat{m}(t_i) - y_i}{y_i}\right)^2$$
To compare the models' ability in terms of maximizing the likelihood function (MLF) while considering the degrees of freedom, Akaike's information criterion (AIC) is applied:
$$\text{AIC} = -2\log(\text{MLF}) + 2m$$
where $y_i$ is the total number of failures observed at time $t_i$; $m$ is the number of unknown parameters in the model; and $\hat{m}(t_i)$ is the estimated cumulative number of failures at $t_i$ for $i = 1, 2, \ldots, n$.
The mean squared error measures the distance of a model estimate from the actual data, taking into account the number of observations, $n$, and the number of unknown parameters in the model, $m$. The sum absolute error is similar to the sum squared error, but it measures the deviation using absolute values, summing the absolute deviation between the actual data and the estimated curve. The predictive ratio risk measures the distance of the model estimates from the actual data relative to the model estimate, while the predictive power measures it relative to the actual data. MSE, SAE, PRR, and PP are criteria that measure the difference between the actual and predicted values. AIC measures the goodness of fit of an estimated statistical model, can be used to rank models, and penalizes models with a larger number of parameters. For all five criteria (MSE, SAE, PRR, PP, and AIC), the smaller the value, the closer the model fits relative to other models run on the same data set.
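The four LSE-based criteria are straightforward to compute; the sketch below (ours, not the authors' Matlab program) shows one way in Python, with AIC omitted since it requires the maximized likelihood. Here `p` plays the role of $m$ in the text, and the toy inputs are hypothetical.

```python
import numpy as np

def criteria(m_hat, y, p):
    """MSE, SAE, PRR, and PP for fitted values m_hat against observations y.

    p is the number of model parameters (called m in the text).
    """
    m_hat = np.asarray(m_hat, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(y)
    resid = m_hat - y
    mse = np.sum(resid ** 2) / (n - p)      # penalizes extra parameters via n - p
    sae = np.sum(np.abs(resid))
    prr = np.sum((resid / m_hat) ** 2)      # deviation relative to the estimates
    pp = np.sum((resid / y) ** 2)           # deviation relative to the data
    return mse, sae, prr, pp

# Toy illustration with made-up fitted values and observations:
print(criteria([8.2, 23.9, 33.1], [7, 24, 32], p=2))
```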

3.2. Estimation of the Confidence Intervals

In this section, we use Equation (7) to obtain the confidence intervals [27] of the software reliability models in Table 1. The confidence interval is given by
$$\hat{m}(t) \pm Z_{\alpha/2}\sqrt{\hat{m}(t)} \quad (7)$$
where $Z_{\alpha/2}$ is the $100(1 - \alpha/2)$ percentile of the standard normal distribution.
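A minimal Python sketch of Equation (7) (ours, illustrative): given fitted values of $\hat{m}(t)$, it returns the lower and upper confidence limits (LCL, UCL) of the kind tabulated in Appendix A.

```python
import numpy as np
from scipy.stats import norm

def confidence_band(m_hat_values, alpha=0.05):
    """Equation (7): normal-approximation band m_hat(t) +/- z * sqrt(m_hat(t))."""
    z = norm.ppf(1.0 - alpha / 2.0)         # 1.96 when alpha = 0.05
    m = np.asarray(m_hat_values, dtype=float)
    half_width = z * np.sqrt(m)
    return m - half_width, m + half_width   # (LCL, UCL)

# Toy fitted values; real use would pass m(t) evaluated at the observation times.
lcl, ucl = confidence_band([5.0, 20.0, 60.0])
print(lcl, ucl)
```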

4. Numerical Examples

Wireless base stations provide the interface between mobile phone users and the conventional telephone network. It can take hundreds of wireless base stations to provide adequate coverage for users within a moderately sized metropolitan area, so controlling the cost of an individual base station is an important objective. On the other hand, the availability of a base station is also an important consideration, since wireless users expect system availability comparable to the high availability they experience with the conventional telephone network. The software in this numerical example runs on an element within a wireless network switching center; its main functions include routing voice channels and signaling messages to the relevant radio resources and processing entities [35]. Dataset #1, the field failure data for Release 1 listed in Table 2, was reported by Jeske and Zhang [35]. Release 1 included Year 2000 compatibility modifications, an operating system upgrade, and some new features pertaining to signaling message processing. Release 1 had a life cycle of 13 months in the field. The cumulative field exposure time of the software was 167,900 system days, and a total of 115 failures were observed in the field. Table 2 shows the field failure data for Release 1 for each of the 13 months. Dataset #2, the test data for Release 2 listed in Table 3, was also reported by Jeske and Zhang [35]. The test data are the failures observed during a combination of feature testing and load testing. The test interval used in this analysis was a 36-week period. At times, as many as 11 different base station controller frames (BCFs) were being used in parallel to test the software; thus, to obtain an overall number of days spent testing the software, we aggregated the number of days spent testing on each frame. The 36 weeks of Release 2 testing accumulated 1001 days of exposure time. Dataset #2 also shows the cumulative software failures and the cumulative exposure time of the software on a weekly basis during the test interval. Table 4 and Table 5 summarize the estimated parameters of all 11 models in Table 1, obtained using the least-squares estimation (LSE) technique, and the values of the five common criteria (MSE, SAE, PRR, PP, and AIC).
We computed the five criteria at $t = 1, 2, \ldots, 13$ for Dataset #1 (Table 2) and over the exposure time (cumulative system days) for Dataset #2 (Table 3). As can be seen from Table 4, the MSE, SAE, PRR, PP, and AIC values for the proposed model are the lowest of all the models: its MSE, SAE, and AIC are 11.2281, 26.5568, and 79.3459, respectively, which are significantly smaller than the values for the other software reliability models, and its PRR and PP are 0.2042 and 0.1558, respectively. As can be seen from Table 5, the MSE, SAE, and PRR values for the proposed model are the lowest, and its PP and AIC values are the second lowest, compared to all the models: its MSE and SAE are 9.8789 and 90.3633, respectively, again significantly smaller than those of the other models, and its PRR, PP, and AIC are 0.2944, 0.5159, and 187.4204, respectively. These results show that the difference between the actual and predicted values is smaller for the new model than for the other models, and that its AIC, which measures the goodness of fit of an estimated statistical model, is much smaller than that of the other software reliability models.
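For readers who wish to reproduce this kind of fit, the sketch below shows one way to apply LSE to the proposed model on Dataset #1 (Table 2) in Python. This is our illustration, not the authors' Matlab program; the starting values are ad hoc guesses, and the resulting estimates may differ from Table 4 depending on the optimizer and starting point.

```python
import numpy as np
from scipy.optimize import curve_fit

# Dataset #1 (Table 2): month index and cumulative failures.
t = np.arange(1, 14)
y = np.array([7, 10, 24, 32, 43, 51, 58, 77, 94, 100, 111, 115, 115], dtype=float)

def m_new(t, a, b, alpha, beta, N):
    """Proposed model, Equation (6)."""
    return N * (1.0 - beta / (beta + (a * t) ** b)) ** alpha

# Ad hoc starting values; the fit can be sensitive to them.
p0 = [0.1, 10.0, 0.1, 2.0, 120.0]
popt, _ = curve_fit(m_new, t, y, p0=p0, bounds=(0.0, np.inf), max_nfev=20000)
print(dict(zip(["a", "b", "alpha", "beta", "N"], popt)))
```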
Figure 1 shows the mean value functions of all 11 models for Datasets #1 and #2, respectively. Figure 2 and Figure 3 show that the relative error of the proposed software reliability model approaches zero more quickly than that of the other models, confirming its ability to provide more accurate predictions. Figure 4 shows the mean value function and confidence intervals of the proposed new model for Datasets #1 and #2, respectively. Refer to Appendix A for the confidence intervals of the other software reliability models.

5. Conclusions

Generally, existing models are applied to software testing data and then used to make predictions about software failures and reliability in the field. The important point here is that the test environment and the operational environment differ from each other, and we do not know in which operating environment the software will be used. Therefore, we need to develop software reliability models that consider the uncertainty of the operating environment. In this paper, we discussed a new software reliability model, subject to the uncertainty of operating environments, based on a Weibull fault detection rate function, the Weibull distribution being among the most commonly used distributions for modeling irregular data. Table 4 and Table 5 summarized the estimated parameters of all 11 models in Table 1, obtained using the LSE technique, and the values of the five common criteria (MSE, SAE, PRR, PP, and AIC) for the two data sets. As can be seen from Table 4, the MSE, SAE, PRR, PP, and AIC values for the proposed model are the lowest of all the models; in Table 5, its MSE, SAE, and PRR values are the lowest. The results show that the difference between the actual and predicted values is smaller for the new model than for the other models, and that its AIC, which measures the goodness of fit of an estimated statistical model, is much smaller than that of the other models. Finally, we presented the confidence intervals of all 11 models for Datasets #1 and #2. Estimating these confidence intervals helps identify the optimal software reliability model at different confidence levels. Future work will involve broader validation of these conclusions on more recent data sets.

Acknowledgments

This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2015R1D1A1A01060050).

Author Contributions

The three authors equally contributed to the paper.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Figure A1. Confidence intervals of all 11 models for Dataset #1: (a) GO Model; (b) Delayed S-shaped SRGM; (c) Inflection S-shaped SRGM; (d) Yamada Imperfect Debugging Model; (e) PNZ Model; (f) Pham-Zhang Model; (g) Dependent-Parameter Model 1; (h) Dependent-Parameter Model 2; (i) Testing Coverage Model; (j) Three-parameter Model.
Figure A2. Confidence intervals of all 11 models for Dataset #2: (a) GO Model; (b) Delayed S-shaped SRGM; (c) Inflection S-shaped SRGM; (d) Yamada Imperfect Debugging Model; (e) PNZ Model; (f) Pham-Zhang Model; (g) Dependent-Parameter Model 1; (h) Dependent-Parameter Model 2; (i) Testing Coverage Model; (j) Three-parameter Model.
Table A1. Confidence interval of all 11 models from Dataset #1 (α = 0.05).
Time Index       1      2      3      4      5      6      7      8      9     10     11     12     13
GO     LCL     3.4   10.3   17.8   25.6   33.6   41.8   50.0   58.3   66.7   75.1   83.6   92.2  100.7
       UCL    15.4   27.3   38.7   49.7   60.5   71.2   81.8   92.3  102.8  113.2  123.5  133.8  144.1
DS     LCL    −0.5    3.7   11.0   20.0   30.0   40.3   50.6   60.4   69.6   78.1   86.0   93.0   99.4
       UCL     6.1   16.1   28.3   41.8   55.7   69.4   82.5   94.9  106.4  116.9  126.4  134.9  142.5
IS     LCL     0.7    5.1   11.3   19.1   28.5   38.9   49.9   60.7   70.7   79.6   87.1   93.1   97.9
       UCL     9.7   18.6   28.8   40.6   53.6   67.5   81.7   95.3  107.7  118.6  127.7  135.0  140.7
YID    LCL     0.5    6.2   13.7   21.9   30.5   39.4   48.4   57.5   66.6   75.9   85.2   94.6  104.0
       UCL     9.2   20.5   32.5   44.5   56.4   68.1   79.7   91.3  102.7  114.1  125.4  136.7  147.9
PNZ    LCL     0.7    5.1   11.3   19.1   28.5   38.9   49.9   60.7   70.7   79.6   87.1   93.1   97.9
       UCL     9.7   18.6   28.8   40.6   53.6   67.5   81.6   95.3  107.7  118.6  127.7  135.0  140.7
PZ     LCL     0.7    5.1   11.3   19.1   28.5   38.9   49.9   60.7   70.7   79.6   87.1   93.1   97.9
       UCL     9.7   18.6   28.8   40.6   53.6   67.5   81.7   95.3  107.7  118.6  127.7  135.0  140.7
DP1    LCL    −1.0   −0.2    2.4    6.7   12.8   20.6   30.2   41.6   54.7   69.5   86.2  104.6  124.7
       UCL     2.7    7.2   13.4   21.4   31.2   42.7   55.9   71.0   87.8  106.3  126.6  148.7  172.5
DP2    LCL    14.5   15.8   18.1   21.5   26.1   32.0   39.2   47.9   58.1   69.9   83.3   98.4  115.3
       UCL    33.7   35.7   39.1   44.0   50.4   58.3   67.9   79.2   92.1  106.7  123.1  141.4  161.4
TC     LCL    −0.3    4.0   11.0   19.8   29.7   40.1   50.5   60.5   69.9   78.5   86.2   93.0   98.9
       UCL     6.8   16.5   28.3   41.5   55.3   69.1   82.5   95.1  106.8  117.3  126.7  134.9  141.9
3PDF   LCL     0.7    5.1   11.3   19.2   28.5   38.9   49.9   60.7   70.7   79.6   87.1   93.1   97.9
       UCL     9.7   18.6   28.8   40.6   53.6   67.5   81.7   95.3  107.7  118.6  127.7  135.0  140.7
NEW    LCL     0.7    5.4   12.0   19.8   28.6   38.3   48.6   59.6   71.0   81.7   89.6   93.5   94.8
       UCL     9.6   19.1   29.9   41.5   53.8   66.7   80.1   94.0  108.0  121.2  130.8  135.4  137.0
Table A2. Confidence interval of all 11 models from Dataset #2 (α = 0.05).
Time Index       5      9     13     18     28     33     43     63     88    123    153    178
GO     LCL    −0.9   −0.6    0.0    0.7    2.5    3.4    5.4    9.5   14.9   22.4   28.8   34.1
       UCL     3.8    5.8    7.6    9.7   13.6   15.5   19.2   26.1   34.3   45.2   54.0   61.1
DS     LCL    −0.4   −0.7   −0.9   −1.0   −0.8   −0.6    0.2    2.8    7.6   16.3   24.8   32.2
       UCL     0.6    1.1    1.7    2.6    4.7    5.8    8.4   14.3   23.0   36.4   48.5   58.7
IS     LCL    −0.8   −0.2    0.5    1.6    4.0    5.3    7.9   13.2   19.9   29.1   36.6   42.7
       UCL     4.6    7.0    9.2   11.8   16.6   18.9   23.4   31.8   41.6   54.4   64.5   72.4
YID    LCL    −0.6    0.4    1.5    3.1    6.3    8.0   11.3   17.7   25.3   34.9   42.2   47.8
       UCL     5.7    8.7   11.5   14.8   20.7   23.5   28.9   38.5   49.2   62.2   71.8   79.0
PNZ    LCL    −0.4    0.8    2.1    4.0    7.7    9.5   13.2   20.1   27.8   37.1   44.0   49.2
       UCL     6.3    9.8   12.9   16.5   23.1   26.1   31.8   41.9   52.6   65.2   74.2   80.8
PZ     LCL    −0.8   −0.2    0.5    1.6    4.0    5.3    7.9   13.2   19.9   29.1   36.6   42.7
       UCL     4.6    7.0    9.2   11.8   16.6   18.9   23.4   31.8   41.7   54.4   64.5   72.5
DP1    LCL    −0.1   −0.2   −0.3   −0.5   −0.7   −0.7   −0.9   −1.0   −0.8   −0.2    0.8    1.9
       UCL     0.2    0.3    0.4    0.6    1.0    1.2    1.7    2.7    4.3    7.0    9.8   12.4
DP2    LCL    36.1   36.1   36.0   36.0   35.8   35.7   35.5   34.8   33.9   32.6   31.6   30.9
       UCL    63.8   63.8   63.7   63.7   63.4   63.3   63.0   62.1   60.9   59.2   57.8   56.9
TC     LCL     1.3    3.2    4.9    6.8   10.3   12.0   15.0   20.6   26.8   34.6   40.8   45.6
       UCL    11.1   15.0   18.2   21.6   27.4   29.9   34.5   42.6   51.3   61.9   69.9   76.2
3PDF   LCL    −0.8   −0.1    0.8    2.0    4.6    5.9    8.7   14.3   21.2   30.5   38.0   43.9
       UCL     4.9    7.4    9.8   12.5   17.6   20.1   24.7   33.5   43.5   56.3   66.3   74.0
NEW    LCL     1.5    3.4    5.1    7.1   10.7   12.4   15.5   21.1   27.5   35.4   41.7   46.6
       UCL    11.5   15.4   18.7   22.1   27.9   30.5   35.2   43.4   52.2   62.9   71.1   77.5

Time Index     208    238    263    288    318    348    383    418    467    519    570    619
GO     LCL    40.3   46.4   51.4   56.3   62.0   67.6   74.0   80.1   88.4   96.8  104.7  111.9
       UCL    69.3   77.2   83.6   89.8   97.0  103.9  111.7  119.3  129.3  139.4  148.8  157.4
DS     LCL    41.3   50.3   57.6   64.6   72.5   79.8   87.7   94.8  103.5  111.3  117.5  122.5
       UCL    70.7   82.2   91.4  100.1  109.9  118.9  128.5  137.0  147.4  156.6  164.0  169.9
IS     LCL    49.7   56.4   61.7   66.8   72.6   78.1   84.2   89.9   97.4  104.5  110.9  116.5
       UCL    81.4   89.9   96.5  102.9  110.1  116.8  124.2  131.2  140.1  148.7  156.2  162.9
YID    LCL    54.0   59.7   64.1   68.2   72.9   77.3   82.2   86.9   93.1   99.4  105.3  110.9
       UCL    86.9   94.0   99.5  104.7  110.5  115.9  121.8  127.5  135.0  142.5  149.6  156.2
PNZ    LCL    54.8   59.9   63.8   67.6   71.9   75.9   80.5   85.0   91.0   97.3  103.4  109.2
       UCL    87.9   94.3   99.2  103.9  109.1  114.1  119.7  125.1  132.5  140.0  147.3  154.2
PZ     LCL    49.7   56.4   61.7   66.8   72.6   78.1   84.2   90.0   97.4  104.6  110.9  116.5
       UCL    81.5   89.9   96.6  102.9  110.1  116.9  124.3  131.2  140.1  148.7  156.3  162.9
DP1    LCL     3.6    5.8    7.8   10.2   13.4   17.0   21.7   27.0   35.3   45.3   56.3   68.0
       UCL    15.9   19.7   23.3   27.1   32.1   37.5   44.3   51.6   62.8   75.9   89.9  104.4
DP2    LCL    30.4   30.2   30.4   30.8   31.8   33.3   35.7   38.7   44.2   51.5   60.2   70.0
       UCL    56.2   55.9   56.2   56.8   58.1   60.1   63.2   67.3   74.4   83.8   94.8  106.9
TC     LCL    51.0   56.2   60.4   64.4   69.0   73.5   78.5   83.4   89.9   96.6  102.9  108.7
       UCL    83.1   89.7   94.9   99.9  105.6  111.1  117.3  123.2  131.1  139.2  146.7  153.6
3PDF   LCL    50.7   57.1   62.2   67.0   72.5   77.8   83.5   89.0   96.1  103.0  109.3  115.0
       UCL    82.7   90.8   97.1  103.2  110.0  116.4  123.4  130.0  138.6  146.9  154.4  161.0
NEW    LCL    52.2   57.5   61.8   65.9   70.7   75.3   80.4   85.5   92.2   99.1  105.5  111.5
       UCL    84.6   91.3   96.7  101.8  107.7  113.3  119.7  125.7  133.9  142.1  149.8  156.9

Time Index     657    699    733    775    798    845    892    934    955    977    999   1001
GO     LCL   117.3  123.0  127.5  132.8  135.6  141.2  146.5  151.0  153.2  155.5  157.7  157.9
       UCL   163.8  170.5  175.7  181.9  185.2  191.7  197.9  203.2  205.8  208.4  210.9  211.2
DS     LCL   125.7  128.7  130.8  133.0  134.0  135.8  137.3  138.3  138.8  139.2  139.6  139.7
       UCL   173.6  177.2  179.6  182.2  183.4  185.5  187.2  188.4  189.0  189.5  189.9  190.0
IS     LCL   120.5  124.6  127.7  131.2  133.1  136.5  139.7  142.3  143.5  144.8  145.9  146.0
       UCL   167.6  172.4  176.0  180.1  182.3  186.3  190.0  193.1  194.5  195.9  197.3  197.4
YID    LCL   115.1  119.8  123.5  128.1  130.6  135.6  140.7  145.2  147.5  149.8  152.2  152.4
       UCL   161.2  166.7  171.1  176.5  179.4  185.3  191.2  196.4  199.1  201.8  204.5  204.8
PNZ    LCL   113.7  118.7  122.7  127.6  130.3  135.9  141.4  146.4  148.9  151.5  154.1  154.3
       UCL   159.5  165.4  170.1  175.9  179.1  185.6  192.0  197.8  200.7  203.7  206.7  207.0
PZ     LCL   120.5  124.6  127.7  131.2  133.0  136.5  139.7  142.3  143.5  144.7  145.9  146.0
       UCL   167.6  172.4  176.0  180.1  182.3  186.3  190.0  193.0  194.5  195.9  197.3  197.4
DP1    LCL    77.8   89.4   99.3  112.4  119.8  135.8  152.8  168.8  177.1  186.0  195.2  196.0
       UCL   116.4  130.5  142.4  157.9  166.7  185.5  205.2  223.7  233.3  243.5  253.9  254.9
DP2    LCL    78.5   88.9   98.0  110.0  117.0  132.2  148.5  164.1  172.1  180.9  189.8  190.6
       UCL   117.3  129.9  140.8  155.2  163.5  181.3  200.3  218.2  227.6  237.5  247.8  248.7
TC     LCL   113.1  117.8  121.6  126.1  128.5  133.4  138.2  142.3  144.4  146.5  148.6  148.8
       UCL   158.8  164.4  168.8  174.1  177.0  182.7  188.2  193.1  195.5  198.0  200.4  200.6
3PDF   LCL   119.1  123.3  126.6  130.5  132.5  136.4  140.2  143.3  144.8  146.4  147.9  148.0
       UCL   165.8  170.9  174.7  179.2  181.6  186.2  190.6  194.3  196.0  197.8  199.6  199.7
NEW    LCL   116.0  120.8  124.6  129.2  131.7  136.6  141.4  145.6  147.7  149.8  151.9  152.1
       UCL   162.2  167.9  172.4  177.8  180.7  186.4  192.1  196.9  199.3  201.8  204.2  204.5

References

1. Fang, C.C.; Yeh, C.W. Confidence interval estimation of software reliability growth models derived from stochastic differential equations. In Proceedings of the IEEE International Conference on Industrial Engineering and Engineering Management (IEEM), Singapore, 6–9 December 2011; pp. 1843–1847.
2. Yamada, S.; Osaki, S. Software reliability growth modeling: Models and applications. IEEE Trans. Softw. Eng. 1985, 11, 1431–1437.
3. Yin, L.; Trivedi, K.S. Confidence interval estimation of NHPP-based software reliability models. In Proceedings of the 10th International Symposium on Software Reliability Engineering, Boca Raton, FL, USA, 1–4 November 1999; pp. 6–11.
4. Huang, C.Y. Performance analysis of software reliability growth models with testing effort and change-point. J. Syst. Softw. 2005, 76, 181–194.
5. Gonzalez, C.A.; Torres, A.; Rios, M.A. Reliability assessment of distribution power repairable systems using NHPP. In Proceedings of the 2014 IEEE PES Transmission & Distribution Conference and Exposition-Latin America (PES T&D-LA), Medellin, Colombia, 10–13 September 2014; pp. 1–6.
6. Nagaraju, V.; Fiondella, L. An adaptive EM algorithm for NHPP software reliability models. In Proceedings of the 2015 Annual Reliability and Maintainability Symposium (RAMS), Palm Harbor, FL, USA, 26–29 January 2015; pp. 1–6.
7. Srivastava, N.K.; Mondal, S. Development of predictive maintenance model for N-component repairable system using NHPP models and system availability concept. Glob. Bus. Rev. 2016, 17, 105–115.
8. Kim, K.C.; Kim, Y.H.; Shin, J.H.; Han, K.J. A case study on application for software reliability model to improve reliability of the weapon system. J. KIISE 2011, 38, 405–418.
9. Chatterjee, S.; Singh, J.B. A NHPP based software reliability model and optimal release policy with logistic-exponential test coverage under imperfect debugging. Int. J. Syst. Assur. Eng. Manag. 2014, 5, 399–406.
10. Chatterjee, S.; Shukla, A. Software reliability modeling with different type of faults incorporating both imperfect debugging and change point. In Proceedings of the 4th International Conference on Reliability, Infocom Technologies and Optimization, Noida, India, 2–4 September 2015; pp. 1–5.
11. Yamada, S.; Hishitani, J.; Osaki, S. Software-reliability growth with a Weibull test-effort: A model and application. IEEE Trans. Reliab. 1993, 42, 100–106.
12. Joh, H.C.; Kim, J.; Malaiya, Y.K. Fast abstract: Vulnerability discovery modelling using Weibull distribution. In Proceedings of the 19th International Symposium on Software Reliability Engineering, Seattle, WA, USA, 10–14 November 2008; pp. 299–300.
13. Sagar, B.B.; Saket, R.K.; Singh, C.G. Exponentiated Weibull distribution approach based inflection S-shaped software reliability growth model. Ain Shams Eng. J. 2016, 7, 973–991.
14. Yang, B.; Xie, M. A study of operational and testing reliability in software reliability analysis. Reliab. Eng. Syst. Saf. 2000, 70, 323–329.
15. Huang, C.Y.; Kuo, S.Y.; Lyu, M.R.; Lo, J.H. Quantitative software reliability modeling from testing to operation. In Proceedings of the International Symposium on Software Reliability Engineering, Los Alamitos, CA, USA, 8–11 October 2000; pp. 72–82.
16. Zhang, X.; Jeske, D.; Pham, H. Calibrating software reliability models when the test environment does not match the user environment. Appl. Stoch. Models Bus. Ind. 2002, 18, 87–99.
17. Teng, X.; Pham, H. A new methodology for predicting software reliability in the random field environments. IEEE Trans. Reliab. 2006, 55, 458–468.
18. Pham, H. A software reliability model with Vtub-shaped fault-detection rate subject to operating environments. In Proceedings of the 19th ISSAT International Conference on Reliability and Quality in Design, Honolulu, HI, USA, 5–7 August 2013; pp. 33–37.
19. Pham, H. A new software reliability model with Vtub-shaped fault detection rate and the uncertainty of operating environments. Optimization 2014, 63, 1481–1490.
20. Chang, I.H.; Pham, H.; Lee, S.W.; Song, K.Y. A testing-coverage software reliability model with the uncertainty of operation environments. Int. J. Syst. Sci. Oper. Logist. 2014, 1, 220–227.
21. Honda, K.; Nakai, H.; Washizaki, H.; Fukazawa, Y. Predicting time range of development based on generalized software reliability model. In Proceedings of the 21st Asia-Pacific Software Engineering Conference, Jeju, Korea, 1–4 December 2014; pp. 351–358.
22. Pham, H. A generalized fault-detection software reliability model subject to random operating environments. Vietnam J. Comp. Sci. 2016, 3, 145–150.
23. Song, K.Y.; Chang, I.H.; Pham, H. A three-parameter fault-detection software reliability model with the uncertainty of operating environments. J. Syst. Sci. Syst. Eng. 2017, 26, 121–132.
24. Pham, H.; Nordmann, L.; Zhang, X. A general imperfect software debugging model with S-shaped fault detection rate. IEEE Trans. Reliab. 1999, 48, 169–175.
25. Gupta, R.D.; Kundu, D. Exponentiated exponential family: An alternative to gamma and Weibull distributions. Biom. J. 2001, 43, 117–130.
26. Huo, Y.; Jing, T.; Li, S. Dual parameter Weibull distribution-based rate distortion model. In Proceedings of the International Conference on Computational Intelligence and Software Engineering, Wuhan, China, 11–13 December 2009; pp. 1–4.
27. Pham, H. System Software Reliability; Springer: London, UK, 2006.
28. Akaike, H. A new look at statistical model identification. IEEE Trans. Autom. Control 1974, 19, 716–719.
29. Goel, A.L.; Okumoto, K. Time dependent error detection rate model for software reliability and other performance measures. IEEE Trans. Reliab. 1979, 28, 206–211.
30. Yamada, S.; Ohba, M.; Osaki, S. S-shaped reliability growth modeling for software fault detection. IEEE Trans. Reliab. 1983, 32, 475–484.
31. Ohba, M. Inflexion S-shaped software reliability growth models. In Stochastic Models in Reliability Theory; Osaki, S., Hatoyama, Y., Eds.; Springer: Berlin, Germany, 1984; pp. 144–162.
32. Yamada, S.; Tokuno, K.; Osaki, S. Imperfect debugging models with fault introduction rate for software reliability assessment. Int. J. Syst. Sci. 1992, 23, 2241–2252.
33. Pham, H.; Zhang, X. An NHPP software reliability model and its comparison. Int. J. Reliab. Qual. Saf. Eng. 1997, 4, 269–282.
34. Pham, H. Software reliability models with time dependent hazard function based on Bayesian approach. Int. J. Autom. Comput. 2007, 4, 325–328.
35. Jeske, D.R.; Zhang, X. Some successful approaches to software reliability modeling in industry. J. Syst. Softw. 2005, 74, 85–99.
Figure 1. Mean value function of all 11 models; (a) Dataset #1; (b) Dataset #2.
Figure 2. Relative error value of 11 models in Table 1 for Dataset #1.
Figure 3. Relative error value of 11 models in Table 1 for Dataset #2.
Figure 4. Confidence intervals of the proposed new model; (a) Dataset #1; (b) Dataset #2.
Table 1. Software reliability models. SRGM: software reliability growth model.

1. G-O Model [29]: $m(t) = a\left(1 - e^{-bt}\right)$
2. Delayed S-shaped SRGM [30]: $m(t) = a\left(1 - (1 + bt)e^{-bt}\right)$
3. Inflection S-shaped SRGM [31]: $m(t) = \dfrac{a\left(1 - e^{-bt}\right)}{1 + \beta e^{-bt}}$
4. Yamada Imperfect Debugging Model [32]: $m(t) = a\left[1 - e^{-bt}\right]\left[1 - \dfrac{\alpha}{b}\right] + \alpha a t$
5. PNZ Model [24]: $m(t) = \dfrac{a\left[1 - e^{-bt}\right]\left[1 - \dfrac{\alpha}{b}\right] + \alpha a t}{1 + \beta e^{-bt}}$
6. Pham-Zhang Model [33]: $m(t) = \dfrac{(c + a)\left[1 - e^{-bt}\right] - \dfrac{ab}{b - \alpha}\left(e^{-\alpha t} - e^{-bt}\right)}{1 + \beta e^{-bt}}$
7. Dependent-Parameter Model 1 [34]: $m(t) = \alpha(1 + \gamma t)\left(\gamma t + e^{-\gamma t} - 1\right)$
8. Dependent-Parameter Model 2 [34]: $m(t) = m_0\left(\dfrac{\gamma t + 1}{\gamma t_0 + 1}\right)e^{-\gamma(t - t_0)} + \alpha(\gamma t + 1)\left(\gamma t - 1 + (1 - \gamma t_0)e^{-\gamma(t - t_0)}\right)$
9. Testing Coverage Model [20]: $m(t) = N\left[1 - \left(\dfrac{\beta}{\beta + (at)^b}\right)^{\alpha}\right]$
10. Three-parameter Model [23]: $m(t) = N\left[1 - \left(\dfrac{\beta}{\beta - \frac{a}{b}\ln\left(\frac{(1 + c)e^{-bt}}{1 + ce^{-bt}}\right)}\right)^{\alpha}\right]$
11. Proposed New Model: $m(t) = N\left(1 - \dfrac{\beta}{\beta + (at)^b}\right)^{\alpha}$
Table 2. Field failure data for Release 1—Dataset #1.

Month Index   System Days (Days)   System Days (Cumulative)   Failures   Cumulative Failures
1                    961                   961                    7               7
2                  4,170                 5,131                    3              10
3                  8,789                13,920                   14              24
4                 11,858                25,778                    8              32
5                 13,110                38,888                   11              43
6                 14,198                53,086                    8              51
7                 14,265                67,351                    7              58
8                 15,175                82,526                   19              77
9                 15,376                97,902                   17              94
10                15,704               113,606                    6             100
11                18,182               131,788                   11             111
12                17,760               149,548                    4             115
13                18,352               167,900                    0             115
Table 3. Test data for Release 2—Dataset #2.

Week Index   System Days (Cumulative)   Cumulative Failures     Week Index   System Days (Cumulative)   Cumulative Failures
 1                    5                         5                19                 383                        105
 2                    9                         6                20                 418                        110
 3                   13                        13                21                 467                        117
 4                   18                        13                22                 519                        123
 5                   28                        22                23                 570                        128
 6                   33                        24                24                 619                        130
 7                   43                        29                25                 657                        136
 8                   63                        34                26                 699                        141
 9                   88                        40                27                 733                        148
10                  123                        46                28                 775                        156
11                  153                        53                29                 798                        156
12                  178                        63                30                 845                        164
13                  203                        70                31                 892                        166
14                  238                        71                32                 934                        169
15                  263                        74                33                 955                        170
16                  288                        78                34                 977                        176
17                  318                        90                35                 999                        180
18                  348                        98                36               1,001                        181
Table 4. Model parameter estimation and comparison criteria from Dataset #1. Least-squares estimate (LSE); mean squared error (MSE); sum absolute error (SAE); predictive ratio risk (PRR); predictive power (PP); Akaike's information criterion (AIC).

Model   LSE's                                                          MSE        SAE       PRR       PP       AIC
GO      â = 2354138, b̂ = 0.000004                                      43.6400    72.2548   0.3879    1.0239   98.7606
DS      â = 168.009, b̂ = 0.195                                         20.7414    43.2510   2.3107    0.4295   92.2587
IS      â = 134.540, b̂ = 0.336, β̂ = 8.939                              15.3196    37.2090   0.2120    0.1587   85.3000
YID     â = 1.130, b̂ = 1.110, α̂ = 9.129                                33.3890    51.0913   0.3027    0.2495   100.7378
PNZ     â = 134.549, b̂ = 0.3359, α̂ = 0.0, β̂ = 8.940                    17.0223    37.2442   0.2124    0.1588   87.3098
PZ      â = 51.455, b̂ = 0.336, α̂ = 289998.1, β̂ = 8.939, ĉ = 83.085     19.1495    37.2091   0.2120    0.1587   89.3019
DP1     α̂ = 0.0088, γ̂ = 9.996                                          370.8651   207.3750  60.5062   2.6446   164.5728
DP2     α̂ = 672.637, γ̂ = 0.04, t₀ = 0.027, m₀ = 23.541                 215.7784   133.2294  1.1037    8.6260   168.846
TC      â = 0.242, b̂ = 1.701, α̂ = 17.967, β̂ = 73.604, N̂ = 149.410      25.9244    41.8087   1.4473    0.3601   95.5655
3DP     â = 2.980, b̂ = 0.336, β̂ = 0.080, ĉ = 1105.772, N̂ = 135.142     19.1517    37.2107   0.2119    0.1588   89.3053
NEW     â = 0.095, b̂ = 15.606, α̂ = 0.085, β̂ = 1.855, N̂ = 116.551       11.2281    26.5568   0.2042    0.1558   79.3459
Table 5. Model parameter estimation and comparison criteria from Dataset #2.

Model   LSE's                                                          MSE         SAE        PRR           PP        AIC
GO      â = 291.768, b̂ = 0.001                                         95.3796     299.7160   24.7924       3.4879    198.5419
DS      â = 168.568, b̂ = 0.0057                                        178.4899    387.7724   7368.5885     7.4923    317.8791
IS      â = 200.110, b̂ = 0.002, β̂ = 0.059                              43.2888     182.9709   10.4336       2.1725    202.0752
YID     â = 81.999, b̂ = 0.0063, α̂ = 0.0014                             18.9651     119.1208   3.1804        1.0871    187.7564
PNZ     â = 67.132, b̂ = 0.009, α̂ = 0.0019, β̂ = 0.0001                  18.2406     119.7722   1.5566        0.6869    188.9438
PZ      â = 200.057, b̂ = 0.002, α̂ = 9999.433, β̂ = 0.058, ĉ = 0.001     46.0819     183.0449   10.4090       2.1698    206.0887
DP1     α̂ = 0.0003, γ̂ = 0.866                                          2075.6677   1411.8412  1,165,906.40  17.1338   554.6335
DP2     α̂ = 9.035, γ̂ = 0.005, t₀ = 48.975, m₀ = 49.004                 1379.2331   1134.6843  13.0318       156.8519  572.8343
TC      â = 0.002, b̂ = 0.646, α̂ = 0.137, β̂ = 8.920, N̂ = 7973.501       16.5529     116.0937   0.3033        0.4499    187.4100
3DP     â = 0.011, b̂ = 0.707, β̂ = 8.029, ĉ = 0.000001, N̂ = 300.684     34.5762     154.1593   7.7768        1.8500    199.3282
NEW     â = 0.004, b̂ = 1.471, α̂ = 0.430, β̂ = 78.738, N̂ = 504.403       9.8789      90.3633    0.2944        0.5159    187.4204
