Article

Spare Parts Forecasting and Lumpiness Classification Using Neural Network Model and Its Impact on Aviation Safety

by Imran Shafi 1, Amir Sohail 2, Jamil Ahmad 3, Julio César Martínez Espinosa 4,5,6,*, Luis Alonso Dzul López 4,5,7, Ernesto Bautista Thompson 4,5,8 and Imran Ashraf 9,*

1 College of Electrical and Mechanical Engineering, National University of Sciences and Technology (NUST), Islamabad 44000, Pakistan
2 National Centre for Robotics and Automation (NCRA), National University of Sciences and Technology (NUST), Islamabad 44000, Pakistan
3 Abasyn University Islamabad Campus, Islamabad 44000, Pakistan
4 Higher Polytechnic School, Universidad Europea del Atlántico, Isabel Torres 21, 39011 Santander, Spain
5 Department of Project Management, Universidad Internacional Iberoamericana, Campeche 24560, Mexico
6 Fundación Universitaria Internacional de Colombia, Bogota 11131, Colombia
7 Universidade Internacional do Cuanza, Cuito EN250, Bié, Angola
8 Project Management, Universidad Internacional Iberoamericana, Arecibo, PR 00613, USA
9 Department of Information and Communication Engineering, Yeungnam University, Gyeongsan 38541, Republic of Korea
* Authors to whom correspondence should be addressed.
Appl. Sci. 2023, 13(9), 5475; https://doi.org/10.3390/app13095475
Submission received: 5 February 2023 / Revised: 16 April 2023 / Accepted: 24 April 2023 / Published: 27 April 2023
(This article belongs to the Special Issue Research on Aviation Safety)

Abstract:
Safety-critical spare parts hold special importance for aviation organizations. However, accurate forecasting of such parts becomes challenging when the demand data are lumpy or intermittent. This paper proposes an artificial neural network (ANN) model that observes recent trends of the error surface and responds efficiently to the local gradient, yielding precise predictions for spares whose demand is marked by lumpiness. The introduction of a momentum term allows the proposed ANN model to ignore small variations in the error surface, behaving like a low-pass filter and thus avoiding local minima. Using the whole collection of aviation spare parts with the highest demand activity, an ANN model is built to predict the failure of aircraft-installed parts. The proposed model is first optimized for its topology and is later trained and validated with known historical demand datasets. The testing phase introduces input vectors comprising the influential factors that dictate sporadic demand. When evaluated against other state-of-the-art models from the literature using related benchmark performance criteria, the proposed approach provides superior results owing to its simple architecture and fast-converging training algorithm. The experimental results demonstrate the effectiveness of the proposed approach. Accurate prediction of cost-heavy and critical spare parts is expected to yield large cost savings, reduce downtime, and improve the operational readiness of drones, fixed-wing aircraft, and helicopters. It also resolves the dead-inventory issue that arises when fast-moving spares are wrongly demanded due to human error.

1. Introduction

Safety is the most desired feature in aviation, and safety-critical spare parts hold special importance for aviation organizations. Forecasting future consumption of safety-related spare parts is the most critical part of inventory management, as inaccurate prediction poses a serious challenge for the organizations responsible for the maintenance of aviation fleets [1]. Keeping efficient spare parts management and a decreasing maintenance budget in view, possession of a reasonable inventory level is critical in the aviation industry, where lead time does not always satisfy actual demand, causing spare parts to pile up in the warehouse. Figure 1 provides an example of a sample depot that is responsible for the storage of spares and for the maintenance of helicopters and fixed-wing aircraft from four different origins. The depot stores around 97,693 spares, including 18,930 time change components (TBOs) that are categorized as slow-moving spares. These spares are expensive and are replaced based on either operational time or calendar years. The depot maintains two types of spares: fast-moving spares (the selective stock list (SSL) and expendable spares) and slow-moving spares (TBOs and the non-selective stock list (nonSSL)). The SSL items are stocked for one quarter based on the demand/consumption of the previous four quarters. The TBOs, being critical and costly items, are stored for the next five years based on the last two years of consumption data from maintenance setups. NonSSL and expendable items are stored for four quarters based on projections.
The problem of modeling future consumption is further aggravated by a lumpy spare part demand pattern [2,3], marked by periods with no demand along with sporadic demand [4,5], as shown in Figure 2.
The continuous inspections and preventive maintenance render the aircraft unserviceable for operational requirements, which costs the maintenance organizations. Therefore, the topmost priority for small and medium enterprises is to make the right spare parts available at the need of the hour and at the required location. The increasing downtime of the aircraft can be managed with efficient forecasting to have better operational fleet performance, which is a less researched domain in the aviation sector and warrants investigation. In this connection, the system experts [6,7] have utilized the statistical techniques of exponential smoothing and regression analysis [8,9], but such approaches are found to perform inaccurately when a lumpy demand pattern is processed [10]. Another interesting approach presents an innovative model for a single site to take advantage of a distribution, the zero-inflated Poisson. The authors demonstrate the model effectiveness, confirming that their approach outperforms the traditional Poisson-based approach [11].
Different methods, with varying degrees of success, have been used to forecast lumpy and intermittent demand data. These techniques comprise a variety of models, such as Holt–Winters [12] as a statistical method. Similarly, machine learning methods such as support vector regression (SVR) [13] are used, while long short-term memory (LSTM) networks [14] are used as a deep learning method. However, it is still debatable which technique or meta-level approach predicts lumpy and intermittent demand data most accurately, owing to a lack of historical research on intermittent and lumpy time series data [15]. In recent years, machine learning and deep learning research has also advanced rapidly.
Among these, artificial neural networks (ANNs) are considered the most recognized artificial intelligence (AI) method [16] for handling lumpy demand patterns, because they have outperformed traditional techniques in several fields [17,18,19,20]. Such models perform well owing to their ability to learn, in a manner loosely analogous to the human brain, from historical demand data. They then infer future demand through nonlinear pattern recognition and by establishing correlations between the predictor variables and the outcomes. This way of learning has attracted many system managers in the field of aviation seeking to tame uncertainty and forecast future spare parts demand. On the other hand, a limitation to the use of ANNs remains: it is difficult to find the best forecasting model because of the sensitivity analysis required on the learning rate, momentum coefficient, and number of hidden neurons. This work develops an ANN-based network that forecasts the demand for aviation service parts with lumpy patterns. It is motivated by the fact that ANNs do not require any parametric assumptions about the data. In particular, the feedforward multilayer perceptrons utilized here have been demonstrated to be universal approximators; therefore, they should be able to capture the data-generating process of intermittent demand time series. ANNs are adaptive models that do not require human professionals to provide a predetermined architecture. The proposed ANN technique enables interaction between demand quantities and the inter-demand intervals of demand events, or their lags, if such interactions can be discovered from the data without requiring expert input. This versatility makes ANNs naturally suitable for accommodating the irregular nature of demand.
We demonstrate that the proposed method is superior to others in terms of forecasting and inventory performance based on a comprehensive comparison to the other techniques when measured using the MASE metric. The performance evaluation based on the MAE metric is comparable to other approaches.
The remaining sections are organized as follows: Section 2 of this paper is a review of the relevant literature. The proposed methodology is comprehensively described in Section 3. Section 4 presents the findings and comparative analysis of the various existing approaches. Section 5 provides a summary of the paper based on our analysis of the results and discusses possible future research directions.

2. Related Work

John Croston was the first to introduce a method designed specifically for intermittent data [21]. Croston proposed separating the data into two series: one for arrival times and another for positive demand. Croston noted that time series data for intermittent demand differ significantly from conventional time series data: the former contain multiple zero-demand intervals. He presented an alternative technique to forecast demand from intermittent time series data. Croston stated that his method presupposes independence between demand size and demand intervals. However, Willemain et al. [22] cast doubt on this independence assumption. The assumption was nonetheless maintained in subsequent work that improved Croston’s original method, such as that of Syntetos and Boylan [23]. Syntetos and Boylan investigated Croston’s method and found it to be biased. Later, they presented a modified version of Croston’s method to resolve the bias issue [24].
Levén and Segerstedt proposed a modification of Croston’s technique that attempts to eliminate its inherent bias [8]. Nevertheless, the Levén and Segerstedt method is more biased than the Croston method, particularly for highly intermittent series. Willemain et al. identified patterns of intermittent demand in several other scenarios, such as heavy machinery, electronics, and maritime spare parts [25]. Similarly, Syntetos and Boylan studied intermittency in automotive spare parts [26]. Ghobbar and Friend analyzed the demand for costly aircraft maintenance parts [1]. They observed that businesses were holding too much inventory due to inaccurate demand forecasts, resulting in subpar service levels. Demand that is too large or too small, or that arrives at the wrong time, can lead to mistakes; therefore, accurate demand projections are necessary to support inventory holding and replenishment decisions. In addition, as a result of shortened life cycles, technology migrations, lengthy production cycles, and protracted lead times for capacity expansion, electronics companies face complexity in inventory management and the risk of excess supply and shortfalls of important components [27].
The work of [28] performs intermittent prediction on data collected from the Internet of Things (IoT) using a recurrent neural network (RNN), with accuracy as the performance evaluation metric. In this specific prediction task, RNNs outperformed ANNs. As an alternative to the Croston method, a new method based on stochastic simulation is investigated in [29]. Different evaluation metrics are used, such as mean error (ME), mean absolute deviation (MAD), MASE, and a metric D proposed by the authors. In conclusion, the proposed method did not outperform the existing standard. In [30], the ATA technique was compared with the Croston technique using data from the M4 competition. The ATA and exponential smoothing methods are similar, but their respective emphases differ. The study predicted six future time steps, using both the mean squared error and the standardized mean absolute percentage error as evaluation metrics. On out-of-sample evaluation, the ATA method is superior to the Croston method. In [31], a novel method, the modified SBA, is presented for intermittent forecasting; in conclusion, the proposed method cannot compete with the current method. The authors of [32] employ a deep neural network (DNN) to predict sensor data, surpassing the ARIMA and generalized autoregressive conditional heteroskedasticity (GARCH) techniques. This study utilized simulations that require a well-considered parameter design. Ref. [33] provides an example of a simulation design that investigates multiple parameter combinations, and the study results demonstrate promising outcomes.
A study in [34] introduces a seasonal adjustment method and a dynamic neural network as the primary seasonal forecasting model. Zero demand is counteracted by preprocessing the initial input data and by adding input nodes to the neural network. This study proposed a revised error measurement method for evaluating performance. The proposed framework for forecasting outperforms competing models for intermittent demand. Two ANN models, each using 36 observations as training data, were proposed in [35]. The proposed model was able to achieve competitive inventory performance relative to Croston despite low forecasting accuracy. In [36], an ANN and RNN are trained and compared for intermittent demand forecasting. The results show that ANN performance is superior to that of RNN in long-term demand series. Another work in [37] conducts an empirical analysis utilizing 5133 SKUs from an airline and acquired forecast performance and inventory performance outcomes for different methodologies. The findings of this study indicate that proposed ANN approaches are less biased as compared to other evaluated methods.
Table 1 summarizes different datasets with different patterns, including intermittent, lumpy, and smooth, and shows the industry related to each dataset. The forecasting methods used in the referenced articles and their evaluation metrics are also included. From the table, it can be observed that the performance of the various methods varies with the input datasets.

3. Materials and Methods

In this section, different forecasting approaches are discussed in detail. The data procured from Central Aviation Spares Depot (CASD) require categorization based on the proposed lumpiness classification; then, the ANN architecture is discussed in detail.

3.1. Proposed Lumpiness Classification

The lumpiness factor provides a measure of the relative variability between the stochastic distributions of demand transactions and the intervals between transactions. Three statistical aspects of the historical demand data are
  • Intervals between the demands;
  • Demand size;
  • Relationship between intervals and sizes.
The proposed lumpiness classification is based on the coefficient of variation (CV) concept, which is a useful approach when variability between point estimates is compared [44,45].

3.2. Croston Method

This technique was first developed by [22] to predict intermittent and lumpy data. It estimates the forecast value by treating the amount of demand and the occurrence of demand separately; that is, the likelihood of a demand occurrence does not depend on the amount of demand at a given moment. Let $V_t$ and $Z_t$ be the estimated average interval between occurrences of non-zero demand and the estimated mean of non-zero demand, respectively, at time $t$. The actual demand value at time $t$ is represented as $X_t$, and the number of consecutive periods of zero demand is indicated as $q$. Then, $Y_t$ denotes the average estimate of the amount of demand. Mathematically, if demand occurs in period $t$ ($X_t \neq 0$),
$$Z_{t+1} = \alpha X_t + (1-\alpha) Z_t, \qquad V_{t+1} = \alpha q + (1-\alpha) V_t, \qquad Y_{t+1} = \frac{Z_{t+1}}{V_{t+1}}$$
otherwise ($X_t = 0$), the estimates are simply carried forward:
$$Z_{t+1} = Z_t, \qquad V_{t+1} = V_t, \qquad Y_{t+1} = Y_t$$
However, other authors have observed that the Croston approach has a positive bias [46]. The authors of [47] first demonstrated this bias and then corrected it by applying a multiplier of the form $(1-\alpha/2)$ to the forecast built from $Z_t$ and $V_t$. In this work, the results obtained using this modified Croston approach are also compared.
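To make the update rules concrete, here is a minimal pure-Python sketch of Croston's method with the $(1-\alpha/2)$ (SBA) correction. The function name, the default smoothing constant, and the initialization from the first non-zero demand are choices made for this sketch, not prescribed by the paper:

```python
def croston(demand, alpha=0.1):
    """Croston's method with the Syntetos-Boylan (SBA) bias correction.

    demand : list of non-negative demand values X_t.
    Returns two lists aligned with `demand`: the Croston forecasts Y_t
    and the SBA-adjusted forecasts, where forecast[t] predicts demand[t].
    """
    # Initialize Z (demand size) and V (inter-demand interval) from the
    # first non-zero demand; earlier periods get no forecast (None).
    first = next((t for t, x in enumerate(demand) if x > 0), None)
    if first is None:
        return [None] * len(demand), [None] * len(demand)
    z, v, q = demand[first], 1.0, 1
    croston_f, sba_f = [None] * (first + 1), [None] * (first + 1)
    for x in demand[first + 1:]:
        croston_f.append(z / v)
        sba_f.append((1 - alpha / 2) * z / v)  # SBA correction factor
        if x > 0:  # update the estimates only when demand occurs
            z = alpha * x + (1 - alpha) * z
            v = alpha * q + (1 - alpha) * v
            q = 1
        else:      # zero-demand period: carry the estimates forward
            q += 1
    return croston_f, sba_f
```

For a lumpy series such as `[5, 0, 0, 3, 0, 2]`, the forecast stays flat through the zero-demand periods and only changes after a demand occurrence, as the update rules require.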

3.3. Auto-ARIMA

ARIMA is the popular name for the Box–Jenkins prediction method, which was created in 1970. It is essentially an extrapolation strategy that forecasts the underlying variable using existing time series data. Estimating an ARIMA model consists of three stages, namely, identification, estimation, and diagnosis.
According to this, the upcoming value of any variable is observed as a linear function of its previous errors and previous values, which are stated mathematically as
$$Y_t = \beta_0 + \beta_1 Y_{t-1} + \beta_2 Y_{t-2} + \cdots + \beta_p Y_{t-p} + \varepsilon_t + \theta_1 \varepsilon_{t-1} + \cdots + \theta_q \varepsilon_{t-q}$$
where $Y_t$ is the estimated value of the variable, expressed as a function of its own lagged values in prior periods.
The amount of random error in this period is denoted by $\varepsilon_t$. Here, $\beta_i$ and $\theta_j$ are constants. The moving average and autoregression lags are represented by $q$ and $p$, respectively. The first part of the right-hand side of Equation (3), from $\beta_0$ to $\beta_p Y_{t-p}$, is the autoregression (AR) part, while the remaining part of the equation is the moving average (MA) part; this is why the equation as a whole is referred to as ARMA. The degree of differencing at which the dependent variable $Y_t$ attains stationarity is represented by $I(d)$. Since ARIMA models account for the lags of the dependent variable, the random error of the estimate, and the order at which the variables become stationary, the models are written as ARIMA($p, d, q$). The relative values of $p$ and $q$ are determined by plotting the autocorrelation function (ACF) and partial autocorrelation function (PACF) [24].

3.4. Simple Exponential Smoothing

One of the basic methods for predicting a time series is simple exponential smoothing [48]. This model’s fundamental presumption is that the future will mostly resemble the recent past, so the amount of demand is the key trend the algorithm detects from historical data [49]. At each interval, the model learns a little from the current demand observation and retains a little of the previous forecast. This is noteworthy because the most recent prediction already contains some information from prior demand estimates; in other words, the prior forecast carries the demand pattern insights the model has previously obtained. The smoothing parameter alpha decides how much weight is placed on the current demand observation. Mathematically, it can be written as
$$f_t = \alpha\, d_{t-1} + (1-\alpha)\, f_{t-1}, \qquad 0 < \alpha \le 1$$
where $\alpha$ is the ratio that indicates how much weight the model gives to the current observation compared to the weight it gives to the demand history.
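As a small illustration of this recursion, the following pure-Python sketch applies simple exponential smoothing; the function name and the default initialization of the first forecast from the first observation are assumptions of this sketch:

```python
def simple_exp_smoothing(demand, alpha=0.3, f0=None):
    """Simple exponential smoothing: f_t = alpha*d_{t-1} + (1-alpha)*f_{t-1}.

    demand : observed series d_0 .. d_{n-1}.
    f0     : initial forecast; defaults to the first observation.
    Returns the forecasts f_1 .. f_n (the last entry is the prediction
    for the next, unobserved period).
    """
    f = f0 if f0 is not None else demand[0]
    forecasts = []
    for d in demand:
        # blend the newest observation with everything learned so far
        f = alpha * d + (1 - alpha) * f
        forecasts.append(f)
    return forecasts
```

Note how each forecast is a weighted blend of the latest demand and the previous forecast, so older observations decay geometrically rather than being discarded.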

3.5. Artificial Neural Network Modeling

The prime concern for a neural network trained by example is how well it generalizes out-of-sample. If the system memorizes the training set, it is likely to perform well in-sample and badly out-of-sample. For time series analysis, this translates to whether the trained network can make good forecasts [47,50,51]. Several parameters of the neural network need to be finalized during its design, listed as
  • Number of input neurons;
  • Number of hidden neurons;
  • Number of output neurons.
The values of the above-mentioned parameters demand experimentation or trial and error until the best forecasting results are achieved. A three-layered multilayer perceptron (MLP) network, containing input, hidden, and output layers, is designed to explore the correlation between the input and output datasets [52].
The number of input neurons represents the number of influential or high-risk factors that contribute to the behavior of the neural network model. It becomes a real challenge to decide predictor variables, as too many will complicate the network, and the solution will diverge instead of converging.
In backpropagation networks (BPN), how well a network can learn from historical demand data depends upon the number of hidden neurons. If too many are used, the network does not generalize but rather memorizes the spare parts’ past usage history, and vice versa [21]. Thus, finding the right combination of hidden and input neurons is a matter of trial and error, but an approximation can be obtained from the following equation [53]
$$n_1 = \begin{cases} n + 0.168\,(n - m), & n \ge m \\ m - 0.168\,(m - n), & n < m \end{cases}$$
where $n$ and $m$ denote the numbers of input and output neurons, respectively.
The output neuron in the output layer of the proposed ANN model signifies the future demand value or outcome that is one parameter to be evaluated.

3.6. Recurrent Neural Network Modeling

A recurrent neural network (RNN) is another form of a conventional artificial neural network (ANN), which was developed by [31]. The feedforward or multilayer perceptron designs are significantly different from RNN architectures. RNN is a dynamic system that represents the temporal state and has strong computational capabilities that can be applied to a wide range of temporal processes. RNN is frequently used to explore time-series data, including text, photos, and other sequential data. A straightforward RNN design with a feedback process is shown in Figure 3.
Due to this architecture, the output received at stage $t$ provides information for state $t+1$; this information may be represented by weights and parameters. This one-delay-step procedure takes place in the hidden layer and is commonly used. The hidden layer and output layer comprise the activation functions of the RNN. Mathematically, the RNN can be represented as
$$h_t = f_1\!\left(w_{hh}\,h_{t-1} + \sum_{i=1}^{m} w^{xh}_{i}\,x_{it} + b_h\right), \qquad y_t = f_2\!\left(w_{yh}\,h_t + b_y\right)$$
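To make the state update concrete, the following is a minimal scalar-state sketch of a single recurrent step, taking $f_1 = \tanh$ and a linear readout for $f_2$. These activation choices, the function name, and the argument names are assumptions of this sketch:

```python
import math

def rnn_step(x_t, h_prev, w_xh, w_hh, b_h, w_yh, b_y):
    """One step of a simple (Elman-style) recurrent cell with scalar state.

    x_t    : list of inputs at time t
    h_prev : previous hidden state (scalar)
    w_xh   : list of input-to-hidden weights, one per input
    Implements h_t = tanh(w_hh*h_{t-1} + sum_i w_xh[i]*x_t[i] + b_h)
    and a linear readout y_t = w_yh*h_t + b_y.
    """
    h_t = math.tanh(w_hh * h_prev + sum(w * x for w, x in zip(w_xh, x_t)) + b_h)
    y_t = w_yh * h_t + b_y  # linear readout in place of f2
    return h_t, y_t
```

Feeding the returned `h_t` back in as `h_prev` on the next call realizes the one-delay-step feedback loop described above.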

3.7. Long Short-Term Memory Modeling

The LSTM neural network, proposed in [32], is a kind of recurrent neural network that is especially suitable for modeling long-range dependencies. In contrast to simple recurrent networks, the LSTM architecture consists of memory blocks rather than plain hidden units. A memory block contains one or more memory cells regulated by nonlinear sigmoidal gates that are applied multiplicatively. Memory cells share the same gates in order to reduce the number of parameters. These gates control whether the values at the gates are retained or discarded; thus, the network is able to leverage long-term temporal context [33]. The unit activations can be found with the following equations
$$\begin{aligned}
i_t &= \sigma\left(W_{xi}x_t + W_{hi}h_{t-1} + W_{ci}c_{t-1} + b_i\right)\\
f_t &= \sigma\left(W_{xf}x_t + W_{hf}h_{t-1} + W_{cf}c_{t-1} + b_f\right)\\
c_t &= f_t\,c_{t-1} + i_t\tanh\left(W_{xc}x_t + W_{hc}h_{t-1} + b_c\right)\\
o_t &= \sigma\left(W_{xo}x_t + W_{ho}h_{t-1} + W_{co}c_t + b_o\right)\\
h_t &= o_t\tanh\left(c_t\right)
\end{aligned}$$
The logistic sigmoid function is denoted by $\sigma$, cell activation vectors by $c$, input gates by $i$, forget gates by $f$, and output gates by $o$. The hidden vector $h$ has the same size as the other vectors. The $W$ terms represent the cell-to-gate weight matrices. The activation function at the output is $\tanh$ [54,55].
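A scalar, single-cell sketch of these gate equations may help; all weights are scalars here, and the dictionary keys are naming conventions of this sketch only, not of any library:

```python
import math

def sigmoid(z):
    """Logistic sigmoid, the gate nonlinearity."""
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W):
    """One scalar LSTM step with peephole connections.

    W is a dict of scalar weights and biases: W['xi'], W['hi'], W['ci'],
    W['bi'] for the input gate, and similarly for the forget ('f'),
    cell ('c'), and output ('o') components. Returns (h_t, c_t).
    """
    i_t = sigmoid(W['xi'] * x_t + W['hi'] * h_prev + W['ci'] * c_prev + W['bi'])
    f_t = sigmoid(W['xf'] * x_t + W['hf'] * h_prev + W['cf'] * c_prev + W['bf'])
    # new cell state: forget part of the old state, write a gated update
    c_t = f_t * c_prev + i_t * math.tanh(W['xc'] * x_t + W['hc'] * h_prev + W['bc'])
    o_t = sigmoid(W['xo'] * x_t + W['ho'] * h_prev + W['co'] * c_t + W['bo'])
    h_t = o_t * math.tanh(c_t)
    return h_t, c_t
```

With all weights at zero, every gate sits at 0.5, so half of the previous cell state is carried forward at each step, which illustrates how the forget gate mediates long-term memory.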

3.8. Calibration

Calibration optimizes the network by computing the mean squared error between the actual and forecast demand values [5,56]. As training continues, the average error keeps decreasing, as shown in Figure 4, until it becomes fairly flat with respect to the epoch axis. Beyond a certain point, the average error slowly begins to grow again [57]. That point is the optimum; past it, training ceases to make any progress. Calibration saves the network at this optimal point.

3.9. Implementation Methodology

This section describes the proposed method used in this research to obtain the results, comprising the following stages [8,25]
  • Filtering the critical spare parts (CSP) from the whole collection of aviation spare parts having the highest demand activity and characterized by lumpiness [58];
  • Defining the influential factors that signified variables most strongly predictive of an outcome [41];
  • Utilizing the neural network model to forecast the unknown demand data values of future consumption.

3.9.1. Lumpiness Factor Calculation

The lumpiness factor was evaluated by the following equation
$$\gamma = \frac{CV_s}{CV_I} = \frac{\sigma_s/\mu_s}{\sigma_I/\mu_I} = \frac{\text{coefficient of variation of demand size}}{\text{coefficient of variation of demand interval}}$$
Note that lumpy demand implies $\sigma_I/\mu_I > 0$.

3.9.2. Calculation of Size Information

The coefficient of variation of demand size was found by taking the ratio of the standard deviation of the demand size to its mean, using the following equation [58]
$$CV_s = \frac{\sigma_s}{\mu_s} = \frac{\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(s_i - \mu_s\right)^2}}{\mu_s}, \qquad \mu_s = \frac{1}{N}\sum_{i=1}^{N} s_i$$
Here, $N$ represents the total number of observations, and $s_i$ represents the demand size observed at time $i$.

3.9.3. Calculation of Interval Information

The coefficient of variation of the inter-demand intervals was found by taking the ratio of their standard deviation to their mean (the average number of periods between two successive non-zero demands), i.e.,
$$CV_I = \frac{\sigma_I}{\mu_I} = \frac{\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(t_i - \mu_I\right)^2}}{\mu_I}, \qquad \mu_I = \frac{1}{N}\sum_{i=1}^{N} t_i$$
where $t_i$ denotes the $i$-th inter-demand interval.
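Putting the size and interval statistics together, a pure-Python sketch of the lumpiness factor computation follows. It uses population statistics (matching the $N$ denominators above); the function name and the classification of demand timestamps from the series indices are assumptions of this sketch:

```python
import statistics

def lumpiness_factor(demand):
    """Compute gamma = CV_s / CV_I for a demand history.

    CV_s: coefficient of variation of the non-zero demand sizes.
    CV_I: coefficient of variation of the intervals (in periods)
          between successive non-zero demands.
    demand must contain at least two non-zero values with irregular
    spacing, otherwise CV_I is zero and gamma is undefined.
    """
    sizes = [x for x in demand if x > 0]
    times = [t for t, x in enumerate(demand) if x > 0]
    intervals = [b - a for a, b in zip(times, times[1:])]
    cv_s = statistics.pstdev(sizes) / statistics.mean(sizes)
    cv_i = statistics.pstdev(intervals) / statistics.mean(intervals)
    return cv_s / cv_i
```

For the irregular series `[4, 0, 0, 6, 0, 2, 0, 0, 0, 8]`, the demand sizes vary more than the inter-demand intervals, so the factor comes out above 1.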

3.9.4. Neural Network Training

The historical demand data were divided into three categories i.e.,
  • Training set (80%);
  • Validation set (20% of the training set decided based on literature review);
  • Testing set (20%).
The research process flowchart, given in Figure 5, shows the steps that were utilized to evaluate the best forecasting results.
Dividing the dataset into training, validation, and test sets is an essential step in developing an efficient machine learning model, which requires sufficient observations. In our case, the data are randomly shuffled to ensure that samples are ordered randomly. Initially, they are divided into two parts: 80% of the data are kept in the training set, and 20% are kept in the test set to evaluate the final model. The training set is used to train the model, with a further 20% of the training data selected as the validation set. The validation set is used to monitor the model’s performance during training, to avoid overfitting while tuning the hyperparameters. The dataset was collected from a spare depot that stocks fixed-wing and helicopter spare parts and deals with their demand and supply process on a yearly basis, in line with the operation shown in Figure 2. The sample size is around 23,600 observations, including 3770 lumpy and the rest regular demand pattern spares, drawn from a list of 97,693 items related to flying machines from four different origins.
Figure 2 is drawn for a single spare part number 8AT-1250-00-02 with nomenclature vibration damper assembly that is demanded in 24 quarters (6 years, 2009–2015), and it shows no regular pattern. The attention is also drawn toward Table 2, where ten high-frequency spare parts that are lumpy in nature have been selected, and the necessary details are given. For lumpiness classification, neural network models can be used to identify which spare parts have irregular demand patterns. This is conducted by training the model on a labeled dataset that consists of spare parts with known demand patterns. The input to the model can include demand data for each spare part, and the output can be a binary classification, indicating whether a spare part has a lumpy or regular demand pattern.
The ANN-based demand forecasting procedure is as follows. The first step is input variable selection. For all networks, the number of factors is the same, denoted $x_1$ through $x_3$:
  • x 1 —Average of difference between the demand and its mean of four quarters over three years;
  • x 2 —Average of four quarters demand over three years;
  • x 3 —Average of four quarter interdemand interval over three years.
Data pre-processing is the second step. Leading and trailing zeros were truncated in both the known input and output dataset values. In the third step, scaling is performed. The inputs and target values were scaled into the range [0.1, 0.9] according to the following equation [58]
$$T = 0.8\,\frac{x - x_{min}}{x_{max} - x_{min}} + 0.1$$
Training is the fourth step of the methodology. The network training algorithm ’traingdm’ (gradient descent with momentum) was used. The activation function in the hidden layer was set to ’tan-sigmoid’, whereas the output layer contained the ’logsig’ activation function [41]. The learning rate was finalized as 0.05, and the network was trained for 500 epochs. The architecture of the proposed ANN model is given in Figure 6.
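’traingdm’ is MATLAB’s gradient descent with momentum. As an illustration of the momentum idea described in the abstract (and not of the exact MATLAB implementation), here is a minimal sketch of one such weight update; the function name and defaults are assumptions of this sketch:

```python
def momentum_update(w, grad, velocity, lr=0.05, momentum=0.9):
    """One gradient-descent-with-momentum step for a single weight.

    The velocity accumulates an exponentially weighted history of past
    gradients, so small fluctuations of the error surface are smoothed
    out (a low-pass filter effect) and shallow local minima can be
    coasted over. Returns the updated (weight, velocity) pair.
    """
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity
```

Notice that even when the current gradient is zero, the retained velocity keeps moving the weight in the previous direction, which is exactly the low-pass filtering behavior the paper attributes to the momentum term.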
The outcome was simulated with the trained and validated ANN model [59]. After that, post-processing was carried out: the outcome was descaled using the inverse transformation equation [58]
$$x = x_{min} + \frac{(T - 0.1)\left(x_{max} - x_{min}\right)}{0.8}$$
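The scaling and descaling pair can be sketched as follows, assuming the 0.8 range factor that appears in the inverse transformation; the function names are choices of this sketch:

```python
def scale(x, x_min, x_max):
    """Map x from [x_min, x_max] into [0.1, 0.9] for network training."""
    return 0.8 * (x - x_min) / (x_max - x_min) + 0.1

def descale(T, x_min, x_max):
    """Inverse transformation: map a network output T back to demand units."""
    return x_min + (T - 0.1) * (x_max - x_min) / 0.8
```

Keeping the targets inside [0.1, 0.9] rather than [0, 1] avoids pushing the sigmoidal output units into their saturated tails, where gradients vanish.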
The mean absolute percentage error (MAPE) is a popular metric for measuring the accuracy of a predictive model. Lolli et al. [36] use MAPE as a criterion to evaluate the accuracy of different forecasting models. The MAPE is calculated as the average of the absolute percentage errors over all forecasted periods. The formula for calculating MAPE is as follows
$$MAPE = \frac{1}{n}\sum_{t=1}^{n}\left|\frac{A_t - F_t}{A_t}\right|$$
where $F_t$ is the forecasted value at time period $t$, $A_t$ is the actual value at time period $t$, and $n$ is the total number of time periods; the absolute value is taken to ensure that the errors are positive and relative to the actual value.
This criterion of MAPE is used to compare the forecasting performance of different models, including ARIMA, seasonal decomposition of time series, exponential smoothing, and neural networks, on various demand patterns for aircraft spare parts [36]. There are other metrics such as MAE, MASE and root mean square error (RMSE) that are also found to be used for similar purposes in the literature [60,61,62,63].
MAE is a commonly used metric in time series forecasting to evaluate the performance of a forecasting model. It measures the average magnitude of the errors between actual and predicted values, i.e., the average absolute difference between them. The formula for calculating MAE is
$$MAE = \frac{1}{N}\sum_{i=1}^{N}\left|y_i - \hat{y}_i\right|$$
where $N$ is the total number of observations, $y_i$ is the actual value of the $i$-th observation, and $\hat{y}_i$ is the corresponding predicted value.
In other words, it computes the absolute difference between each actual value and its prediction, sums these differences, and divides by the total number of observations. The MAE is a useful metric because it gives a sense of the magnitude of the errors in predictions, regardless of their direction. It is also relatively easy to interpret, as it is expressed in the same units as the variable being predicted.
MASE is another common metric used to evaluate the accuracy of a time series forecasting model. It measures the forecast’s accuracy relative to a naive forecast and allows for comparisons of different forecasting methods across different time series. A value of 1 indicates that the forecasting method is no better than the naive forecast, while a value less than 1 indicates that the method is better than the naive forecast. MASE is calculated as the MAE of the forecasted values divided by the MAE of a naive forecast, where the naive forecast is typically the simple persistence forecast (i.e., using the last observed value as the forecast for the next period). The formula for MASE is as follows
\mathrm{MASE} = \frac{\operatorname{mean}\left( |A_t - F_t| \right)}{\frac{1}{n-1} \sum_{t=2}^{n} |A_t - A_{t-1}|}
where A_t is the actual value at time t, F_t is the forecasted value at time t, and n is the number of observations in the dataset.
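The MASE computation above, with the naive persistence forecast in the denominator, can be sketched as follows (sample data are illustrative):

```python
def mase(actual, forecast):
    """MASE: forecast MAE scaled by the in-sample MAE of the naive
    (persistence) forecast, which uses A_{t-1} as the forecast for period t."""
    n = len(actual)
    mae_forecast = sum(abs(a - f) for a, f in zip(actual, forecast)) / n
    mae_naive = sum(abs(actual[t] - actual[t - 1]) for t in range(1, n)) / (n - 1)
    return mae_forecast / mae_naive

# values below 1 mean the forecast beats the naive last-value forecast
print(mase([10, 12, 9, 11], [11, 11, 10, 10]))  # 1 / (7/3) ≈ 0.4286
```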

4. Results and Discussion

This section explores the spare parts demand forecast produced by the proposed artificial neural network using the mathematical tools and techniques discussed earlier. The most critical part of the research was discovering the predictor variables, or influential factors, dictating the historical demand pattern, which ultimately formed part of the inputs to the artificial neural network model. Historical data procured from CASD showed lumpiness, making forecasting a real challenge. The spare parts were then prioritized based on demand activity and lumpiness factor, as shown in Table 2.
The mean square error (MSE) dynamics during neural network training are shown in Figure 7. The MSE training curve reaches a minimum value of 0.074371 after training for 500 epochs. Table 3 shows the computation for the testing dataset.
The trained model was further validated with the normalized (p2, t2) validation dataset. Finally, the normalized test dataset p3’ was presented to the trained and validated model to obtain the normalized output dataset t3’. The forecast required descaling (inverse transformation) to recover real-valued data, as shown in Table 4 below (the tabulated data are for the tail rotor blade).
The inverse transformation formula requires the maximum and minimum demand values for the year 2014–2015, which were unavailable; the unknown extrema were therefore approximated by averaging the corresponding quarterly maximum and minimum values over the five years 2009–2014.
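The approximation described above can be sketched as follows; the yearly extrema below are hypothetical placeholders, not the CASD values:

```python
# Hypothetical per-year demand extrema for 2009-2014 (illustrative values only)
yearly_max = [3.0, 4.0, 2.0, 5.0, 4.0]
yearly_min = [0.0, 1.0, 0.0, 1.0, 0.0]

# Approximate the unknown 2014-2015 extrema by the 5-year averages
x_max = sum(yearly_max) / len(yearly_max)  # 3.6
x_min = sum(yearly_min) / len(yearly_min)  # 0.4

def descale(t, lo, hi):
    """Inverse of the [0.1, 0.9] min-max scaling."""
    return lo + (t - 0.1) * (hi - lo) / 0.8

# Descale a hypothetical normalized network output
forecast_raw = descale(0.75, x_min, x_max)  # 3.0 demand units
```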

4.1. Comparison with Other State-of-the-Art Models

For time series prediction problems, RNNs are among the most preferred models, and they have performed particularly well in natural language processing. Like ANNs, RNNs are universal approximators; unlike ANNs, the feedback loops of recurrent cells naturally handle the temporal dependencies and temporal order of sequences [64]. In [65], the authors proposed an RNN model to forecast intermittent and lumpy time series data. To compare our model with the RNN model, results on lumpy data from [66] are considered here. The R package ’tsintermittent’ is used for the simulation process. A two-layer RNN model is used, consisting of a 64-unit recurrent layer and a single-node dense layer, with sigmoid and rectified linear unit (ReLU) activation functions. MAE is used for performance evaluation; since MAE measures the average error between predictions and targets, lower values are better. The results show that the RNN performs well in this forecasting task with an MAE of 0.6, while the simple ANN model achieves an MAE of 0.4. The RNN model was also compared with the ANN, Croston, ANN with the Levenberg–Marquardt training algorithm, SVM, adaptive univariate SVM [39], and single exponential smoothing (SES) models. The results show that the proposed approach, based on gradient descent with a momentum term, outperforms the other methods in the case of lumpy data. MAE values obtained for the different methods are listed in Table 5; a lower MAE indicates better performance.
In [67], various approaches to time series forecasting are compared, including classical, machine learning, and deep learning methods. That study examined four distinct classes of data, labeled A, B, C, and D, distinguished by the degree of lumpiness and intermittency they exhibit. Class C data are lumpy, with characteristics very comparable to the data used in our work; we therefore compare our prediction results with the class C forecasting results obtained with various models. The referenced study selected the Croston, Holt–Winters, and Auto-ARIMA techniques from the classical category; RF, XGBoost, and Auto-SVR from the machine learning category; and MLP and LSTM from the deep learning category for evaluation and comparison.
The LSTM displays the best performance with a MASE of 0.97, followed by Auto-ARIMA with 1.04. The Croston technique performs worst, with a MASE greater than 2, and MLP performs second worst at 1.68, as listed in Table 6. We also included several additional approaches, namely ANN with the traditional Levenberg–Marquardt algorithm, SVM, and adaptive univariate SVM; the results indicate improved performance of the proposed method over the existing approaches.

4.2. Implications

The results of this study suggest that the ANN is the best forecasting technique for handling and inferring future lumpy demand. Previous studies considered data feature extraction alongside recorded data based on flying hours and operational intensity, and hybrid neural networks combining traditional techniques with neural networks have also been used [68]. However, all these approaches led to magnified forecasting error due to the lack of data on fleet performance and its approximation.
During the research, quantity demanded, rather than quantity issued, was taken into consideration. Future consumption was forecast under the prior assumption that no spare parts were available in the warehouse. Furthermore, a detailed survey was performed to choose CSPs from the whole collection of spare parts, a challenge solved by demand categorization based on the extent of lumpiness, as shown in Table 2. Spare parts showing variability in both demand interval and demand size were sorted and selected; hence, lumpy demand was forecast by planning for the worst-case scenario.
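The lumpiness factors themselves are tabulated in Table 2 but not derived in this excerpt. As an illustrative sketch, the widely used Syntetos–Boylan categorization classifies a series as lumpy when the average inter-demand interval (ADI) exceeds 1.32 and the squared coefficient of variation (CV²) of the nonzero demand sizes exceeds 0.49; the paper's own lumpiness factor may be defined differently:

```python
def categorize(demand, adi_cut=1.32, cv2_cut=0.49):
    """Classify a demand series by ADI and CV^2 of nonzero demand sizes.
    Cut-offs follow the common Syntetos-Boylan scheme (an assumption here,
    not necessarily the paper's lumpiness factor)."""
    nonzero = [d for d in demand if d > 0]
    adi = len(demand) / len(nonzero)          # average inter-demand interval
    mean = sum(nonzero) / len(nonzero)
    var = sum((d - mean) ** 2 for d in nonzero) / len(nonzero)
    cv2 = var / mean ** 2                     # squared coefficient of variation
    if adi > adi_cut:
        return "lumpy" if cv2 > cv2_cut else "intermittent"
    return "erratic" if cv2 > cv2_cut else "smooth"

# many zero periods plus highly variable demand sizes -> lumpy
print(categorize([0, 1, 0, 0, 10, 0, 1, 0]))  # lumpy
```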
The training MSE of the ANN model decreased steadily, indicating effective training. The trained and validated model was then presented with a new set of inputs, yielding a forecast value of 7.989 as shown in Table 4, which compares well with the nine units actually demanded in 2014–2015.

5. Conclusions

This research holds significant potential for forecasting the demand for aviation spare parts, as it presents a new and more efficient way to forecast demand. The problem of accurately harnessing the uncertainties will persist for a long time to come; however, an attempt is made here to keep the forecasting error to a minimum so that the approach can be effectively and practically utilized in the field of aviation. Ongoing work includes applying the designed ANN model to the demand forecasts for the remaining spare parts. A comparative analysis of the forecast and actual demand values was carried out. Furthermore, this work presents a detailed comparison of the proposed model with several state-of-the-art models and observes that the proposed method outperforms them. The current model can serve as a starting point for further advancements in forecasting the demand for aviation spare parts. This will increase the operational availability of aircraft and improve safety by reducing the number of flights with inoperative components.

Author Contributions

Conceptualization, I.S. and A.S.; Data curation, A.S., J.A. and J.C.M.E.; Formal analysis, I.S. and J.A.; Funding acquisition, L.A.D.L.; Investigation, J.C.M.E. and E.B.T.; Methodology, J.A.; Project administration, L.A.D.L.; Resources, L.A.D.L.; Software, J.C.M.E. and E.B.T.; Supervision, I.A.; Validation, I.A.; Visualization, E.B.T.; Writing—original draft, I.S. and A.S.; Writing—review and editing, I.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the European University of the Atlantic.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Ghobbar, A.A.; Friend, C.H. Evaluation of forecasting methods for intermittent parts demand in the field of aviation: A predictive model. Comput. Oper. Res. 2003, 30, 2097–2114. [Google Scholar] [CrossRef]
  2. Nasiri Pour, A.; Rostami-Tabar, B.; Rahimzadeh, A. A hybrid neural network and traditional approach for forecasting lumpy demand. Proc. World Acad. Sci. Eng. Technol. 2008, 2, 1028–1034. [Google Scholar]
  3. Amin-Naseri, M.R.; Tabar, B.R. Neural network approach to lumpy demand forecasting for spare parts in process industries. In Proceedings of the 2008 International Conference on Computer and Communication Engineering, Kuala Lumpur, Malaysia, 13–15 May 2008; pp. 1378–1382. [Google Scholar]
  4. Hua, Z.; Zhang, B.; Yang, J.; Tan, D. A new approach of forecasting intermittent demand for spare parts inventories in the process industries. J. Oper. Res. Soc. 2007, 58, 52–61. [Google Scholar] [CrossRef]
  5. Wang, W.; Syntetos, A.A. Spare parts demand: Linking forecasting to equipment maintenance. Transp. Res. Part E Logist. Transp. Rev. 2011, 47, 1194–1209. [Google Scholar] [CrossRef]
  6. Wang, L.; Zeng, Y.; Chen, T. Back propagation neural network with adaptive differential evolution algorithm for time series forecasting. Expert Syst. Appl. 2015, 42, 855–863. [Google Scholar] [CrossRef]
  7. Zhang, G.; Patuwo, B.E.; Hu, M.Y. Forecasting with artificial neural networks: The state of the art. Int. J. Forecast. 1998, 14, 35–62. [Google Scholar] [CrossRef]
  8. Syntetos, A.A.; Boylan, J.E. The accuracy of intermittent demand estimates. Int. J. Forecast. 2005, 21, 303–314. [Google Scholar] [CrossRef]
  9. Vaitkus, V.; Zylius, G.; Maskeliunas, R. Electrical spare parts demand forecasting. Elektron. Ir Elektrotechnika 2014, 20, 7–10. [Google Scholar] [CrossRef]
  10. Bacchetti, A.; Saccani, N. Spare parts classification and demand forecasting for stock control: Investigating the gap between research and practice. Omega 2012, 40, 722–737. [Google Scholar] [CrossRef]
  11. Costantino, F.; Di Gravio, G.; Patriarca, R.; Petrella, L. Spare parts management for irregular demand items. Omega 2018, 81, 57–66. [Google Scholar] [CrossRef]
  12. Gamberini, R.; Lolli, F.; Rimini, B.; Sgarbossa, F. Forecasting of sporadic demand patterns with seasonality and trend components: An empirical comparison between Holt-Winters and (S) ARIMA methods. Math. Probl. Eng. 2010, 2010, 579010. [Google Scholar] [CrossRef]
  13. Pai, P.F.; Lin, K.P.; Lin, C.S.; Chang, P.T. Time series forecasting by a seasonal support vector regression model. Expert Syst. Appl. 2010, 37, 4261–4265. [Google Scholar] [CrossRef]
  14. Fu, W.; Chien, C.F.; Lin, Z.H. A hybrid forecasting framework with neural network and time-series method for intermittent demand in semiconductor supply chain. In Proceedings of the IFIP International Conference on Advances in Production Management Systems, Seoul, Republic of Korea, 26–30 August 2018; pp. 65–72. [Google Scholar]
  15. Nikolopoulos, K. We need to talk about intermittent demand forecasting. Eur. J. Oper. Res. 2021, 291, 549–559. [Google Scholar] [CrossRef]
  16. Rosienkiewicz, M. Artificial intelligence methods in spare parts demand forecasting. Logist. Transp. 2013, 18, 41–50. [Google Scholar]
  17. Hua, Z.; Zhang, B. A hybrid support vector machines and logistic regression approach for forecasting intermittent demand of spare parts. Appl. Math. Comput. 2006, 181, 1035–1048. [Google Scholar] [CrossRef]
  18. Rustam, F.; Siddique, M.A.; Siddiqui, H.U.R.; Ullah, S.; Mehmood, A.; Ashraf, I.; Choi, G.S. Wireless capsule endoscopy bleeding images classification using CNN based model. IEEE Access 2021, 9, 33675–33688. [Google Scholar] [CrossRef]
  19. Siddiqui, H.U.R.; Shahzad, H.F.; Saleem, A.A.; Khan Khakwani, A.B.; Rustam, F.; Lee, E.; Ashraf, I.; Dudley, S. Respiration Based Non-Invasive Approach for Emotion Recognition Using Impulse Radio Ultra Wide Band Radar and Machine Learning. Sensors 2021, 21, 8336. [Google Scholar] [CrossRef] [PubMed]
  20. Shafi, I.; Hussain, I.; Ahmad, J.; Kim, P.W.; Choi, G.S.; Ashraf, I.; Din, S. License plate identification and recognition in a non-standard environment using neural pattern matching. Complex Intell. Syst. 2022, 8, 3627–3639. [Google Scholar] [CrossRef]
  21. Gutierrez, R.S.; Solis, A.O.; Mukhopadhyay, S. Lumpy demand forecasting using neural networks. Int. J. Prod. Econ. 2008, 111, 409–420. [Google Scholar] [CrossRef]
  22. Croston, J.D. Forecasting and stock control for intermittent demands. J. Oper. Res. Soc. 1972, 23, 289–303. [Google Scholar] [CrossRef]
  23. Willemain, T.R.; Smart, C.N.; Shockor, J.H.; DeSautels, P.A. Forecasting intermittent demand in manufacturing: A comparative evaluation of Croston’s method. Int. J. Forecast. 1994, 10, 529–538. [Google Scholar] [CrossRef]
  24. Syntetos, A.A.; Boylan, J.E. On the bias of intermittent demand estimates. Int. J. Prod. Econ. 2001, 71, 457–466. [Google Scholar] [CrossRef]
  25. Willemain, T.R.; Smart, C.N.; Schwarz, H.F. A new approach to forecasting intermittent demand for service parts inventories. Int. J. Forecast. 2004, 20, 375–387. [Google Scholar] [CrossRef]
  26. Levén, E.; Segerstedt, A. Inventory control with a modified Croston procedure and Erlang distribution. Int. J. Prod. Econ. 2004, 90, 361–367. [Google Scholar] [CrossRef]
  27. Fu, W.; Chien, C.F. UNISON data-driven intermittent demand forecast framework to empower supply chain resilience and an empirical study in electronics distribution. Comput. Ind. Eng. 2019, 135, 940–949. [Google Scholar] [CrossRef]
  28. Lee, K.; Kang, D.; Choi, H.; Park, B.; Cho, M.; Kim, D. Intermittent demand forecasting with a recurrent neural network model using IoT data. Int. J. Control Autom. 2018, 11, 153–168. [Google Scholar] [CrossRef]
  29. Doszyń, M. New forecasting technique for intermittent demand, based on stochastic simulation. An alternative to Croston’s method. Acta Univ. Lodz. Folia Oeconomica 2018, 5, 41–55. [Google Scholar] [CrossRef]
  30. Yilmaz, T.E.; Yapar, G.; Yavuz, İ. Comparison of Ata method and croston based methods on forecasting of intermittent demand. Mugla J. Sci. Technol. 2019, 5, 49–55. [Google Scholar]
  31. Babai, M.Z.; Dallery, Y.; Boubaker, S.; Kalai, R. A new method to forecast intermittent demand in the presence of inventory obsolescence. Int. J. Prod. Econ. 2019, 209, 30–41. [Google Scholar] [CrossRef]
  32. Aggarwal, A.; Rani, A.; Sharma, P.; Kumar, M.; Shankar, A.; Alazab, M. Prediction of landsliding using univariate forecasting models. Internet Technol. Lett. 2022, 5, e209. [Google Scholar] [CrossRef]
  33. Huifeng, W.; Shankar, A.; Vivekananda, G. Modelling and simulation of sprinters’ health promotion strategy based on sports biomechanics. Connect. Sci. 2021, 33, 1028–1046. [Google Scholar] [CrossRef]
  34. Liu, P. Intermittent demand forecasting for medical consumables with short life cycle using a dynamic neural network during the COVID-19 epidemic. Health Inform. J. 2020, 26, 3106–3122. [Google Scholar] [CrossRef]
  35. Kourentzes, N. Intermittent demand forecasts with neural networks. Int. J. Prod. Econ. 2013, 143, 198–206. [Google Scholar] [CrossRef]
  36. Lolli, F.; Gamberini, R.; Regattieri, A.; Balugani, E.; Gatos, T.; Gucci, S. Single-hidden layer neural networks for forecasting intermittent demand. Int. J. Prod. Econ. 2017, 183, 116–128. [Google Scholar] [CrossRef]
  37. Babai, M.Z.; Tsadiras, A.; Papadopoulos, C. On the empirical performance of some new neural network methods for forecasting intermittent demand. IMA J. Manag. Math. 2020, 31, 281–305. [Google Scholar] [CrossRef]
  38. Tian, X.; Wang, H.; Erjiang, E. Forecasting intermittent demand for inventory management by retailers: A new approach. J. Retail. Consum. Serv. 2021, 62, 102662. [Google Scholar] [CrossRef]
  39. Jiang, P.; Huang, Y.; Liu, X. Intermittent demand forecasting for spare parts in the heavy-duty vehicle industry: A support vector machine model. Int. J. Prod. Res. 2021, 59, 7423–7440. [Google Scholar] [CrossRef]
  40. Pennings, C.L.; Van Dalen, J.; van der Laan, E.A. Exploiting elapsed time for managing intermittent demand for spare parts. Eur. J. Oper. Res. 2017, 258, 958–969. [Google Scholar] [CrossRef]
  41. Regattieri, A.; Gamberi, M.; Gamberini, R.; Manzini, R. Managing lumpy demand for aircraft spare parts. J. Air Transp. Manag. 2005, 11, 426–431. [Google Scholar] [CrossRef]
  42. Abbasimehr, H.; Shabani, M.; Yousefi, M. An optimized model using LSTM network for demand forecasting. Comput. Ind. Eng. 2020, 143, 106435. [Google Scholar] [CrossRef]
  43. Dudek, G.; Pełka, P.; Smyl, S. A hybrid residual dilated LSTM and exponential smoothing model for midterm electric load forecasting. IEEE Trans. Neural Netw. Learn. Syst. 2021, 33, 2879–2891. [Google Scholar] [CrossRef]
  44. Syntetos, A.A. A note on managing lumpy demand for aircraft spare parts. J. Air Transp. Manag. 2007, 13, 166–167. [Google Scholar] [CrossRef]
  45. Li, S.; Kuo, X. The inventory management system for automobile spare parts in a central warehouse. Expert Syst. Appl. 2008, 34, 1144–1153. [Google Scholar] [CrossRef]
  46. Teunter, R.; Sani, B. On the bias of Croston’s forecasting method. Eur. J. Oper. Res. 2009, 194, 177–183. [Google Scholar] [CrossRef]
  47. Solis, A.; Longo, F.; Mukhopadhyay, S.; Nicoletti, L.; Brasacchio, V. Approximate and exact corrections of the bias in Croston’s method when forecasting lumpy demand: Empirical evaluation. In Proceedings of the 13th International Conference on Modeling and Applied Simulation, MAS, Bordeaux, France, 10–12 September 2014; p. 205. [Google Scholar]
  48. Yermal, L.; Balasubramanian, P. Application of auto arima model for forecasting returns on minute wise amalgamated data in nse. In Proceedings of the 2017 IEEE International Conference on Computational Intelligence and Computing Research (ICCIC), Coimbatore, TN, India, 14–16 December 2017; pp. 1–5. [Google Scholar]
  49. Ostertagová, E.; Ostertag, O. The simple exponential smoothing model. In Proceedings of the 4th International Conference on Modelling of Mechanical and Mechatronic Systems, Herľany, Slovak Republic, 20–22 September 2011; Technical University of Košice: Košice, Slovak Republic, 2011; pp. 380–384. [Google Scholar]
  50. Ostertagova, E.; Ostertag, O. Forecasting using simple exponential smoothing method. Acta Electrotech. Et Inform. 2012, 12, 62. [Google Scholar] [CrossRef]
  51. Truong, N.K.V.; Sangmun, S.; Nha, V.T.; Ichon, F. Intermittent Demand forecasting by using Neural Network with simulated data. In Proceedings of the 2011 International Conference on Industrial Engineering and Operations Management, Kuala Lumpur, Malaysia, 22–24 January 2011. [Google Scholar]
  52. Kozik, P.; Sęp, J. Aircraft engine overhaul demand forecasting using ANN. Manag. Prod. Eng. Rev. 2012, 3, 21–26. [Google Scholar]
  53. Huang, Y.; Sun, D.; Xing, G.; Chang, H. Criticality evaluation for spare parts based on BP neural network. In Proceedings of the 2010 International Conference on Artificial Intelligence and Computational Intelligence, Sanya, China, 23–24 October 2010; Volume 1, pp. 204–206. [Google Scholar]
  54. Mitrea, C.; Lee, C.; Wu, Z. A comparison between neural networks and traditional forecasting methods: A case study. Int. J. Eng. Bus. Manag. 2009, 1, 11. [Google Scholar] [CrossRef]
  55. Hu, Y.; Liu, Y.; Wang, Z.; Wen, J.; Li, J.; Lu, J. A two-stage dynamic capacity planning approach for agricultural machinery maintenance service with demand uncertainty. Biosyst. Eng. 2020, 190, 201–217. [Google Scholar] [CrossRef]
  56. Tseng, F.M.; Yu, H.C.; Tzeng, G.H. Combining neural network model with seasonal time series ARIMA model. Technol. Forecast. Soc. Chang. 2002, 69, 71–87. [Google Scholar] [CrossRef]
  57. Chen, F.L.; Chen, Y.C. An investigation of forecasting critical spare parts requirement. In Proceedings of the 2009 WRI World Congress on Computer Science and Information Engineering, Los Angeles, CA, USA, 31 March–2 April 2009; Volume 4, pp. 225–230. [Google Scholar]
  58. Ren, J.; Xiao, M.; Zhou, Z.; Zhang, F. Based on improved bp neural network to forecast demand for spare parts. In Proceedings of the 2009 Fifth International Joint Conference on INC, IMS and IDC, Seoul, Republic of Korea, 25–27 August 2009; pp. 1811–1814. [Google Scholar]
  59. Ying, Z.; Hanbin, X. Study on the model of demand forecasting based on artificial neural network. In Proceedings of the 2010 Ninth International Symposium on Distributed Computing and Applications to Business, Engineering and Science, Hong Kong, China, 10–12 August 2010; pp. 382–386. [Google Scholar]
  60. Tofallis, C. A better measure of relative prediction accuracy for model selection and model estimation. J. Oper. Res. Soc. 2015, 66, 1352–1362. [Google Scholar] [CrossRef]
  61. Hyndman, R.J.; Koehler, A.B. Another look at measures of forecast accuracy. Int. J. Forecast. 2006, 22, 679–688. [Google Scholar] [CrossRef]
  62. Kim, S.; Kim, H. A new metric of absolute percentage error for intermittent demand forecasts. Int. J. Forecast. 2016, 32, 669–679. [Google Scholar] [CrossRef]
  63. Makridakis, S. Accuracy measures: Theoretical and practical concerns. Int. J. Forecast. 1993, 9, 527–529. [Google Scholar] [CrossRef]
  64. Qu, S.; Sun, Z.; Fan, H.; Li, K. BP neural network for the prediction of urban building energy consumption based on Matlab and its application. In Proceedings of the 2010 Second International Conference on Computer Modeling and Simulation, Darmstadt, Germany, 15–18 November 2010; Volume 2, pp. 263–267. [Google Scholar]
  65. Muhaimin, A.; Prastyo, D.D.; Lu, H.H.S. Forecasting with recurrent neural network in intermittent demand data. In Proceedings of the 2021 11th International Conference on Cloud Computing, Data Science & Engineering (Confluence), Noida, India, 28–29 January 2021; pp. 802–809. [Google Scholar]
  66. Hewamalage, H.; Bergmeir, C.; Bandara, K. Recurrent neural networks for time series forecasting: Current status and future directions. Int. J. Forecast. 2021, 37, 388–427. [Google Scholar] [CrossRef]
  67. Kiefer, D.; Grimm, F.; Bauer, M.; Van Dinther, C. Demand forecasting intermittent and lumpy time series: Comparing statistical, machine learning and deep learning methods. In Proceedings of the 54th Annual Hawaii International Conference on System Sciences, Kauai, HI, USA, 5–8 January 2021. [Google Scholar]
  68. Song, H.; Zhang, C.; Liu, G.; Zhao, W. Equipment spare parts demand forecasting model based on grey neural network. In Proceedings of the 2012 International Conference on Quality, Reliability, Risk, Maintenance, and Safety Engineering, Chengdu, China, 15–18 June 2012; pp. 1274–1277. [Google Scholar]
Figure 1. A sample spare depot storage template.
Figure 2. Lumpy demand forecasting graphical representation.
Figure 3. Schematic architecture of RNN.
Figure 4. Neural network calibrations.
Figure 5. ANN-based research process flowchart.
Figure 6. Architecture of proposed ANN.
Figure 7. Neural network performance plot.
Table 1. Datasets having intermittent lumpy and smooth patterns, their forecasting methods, and accuracies on different metrics. The ’×’ indicates that the value for this parameter is not reported.
| Series Type | Ref. | Application | Forecasting Method | MAE | MAPE | RMSE | MAD | MASE |
|---|---|---|---|---|---|---|---|---|
| Intermittent | 2021 [38] | Retail industry | SBA | × | × | 0.632 | × | × |
| | | | SES | × | × | 0.617 | × | × |
| | | | Croston | × | × | 0.630 | × | × |
| | | | Markov-combined method | 0.328 | 0.406 | 0.576 | × | × |
| | 2019 [27] | Electronics distribution | ARIMA | 12.39 | × | 16.90 | × | 1.108 |
| | | | Croston | 13.24 | × | 17.31 | × | 1.197 |
| | | | SVM | 10.11 | × | 16.72 | × | 0.901 |
| | | | RNN | 9.29 | × | 16.61 | × | 0.792 |
| | | | UNISON data driven | 9.19 | × | 15.13 | × | 0.768 |
| | 2021 [39] | Vehicle industry | SVM | × | × | × | × | 0.830 |
| | | | ANN | × | × | × | × | 0.954 |
| | | | RNN | × | × | × | × | 0.999 |
| | 2017 [40] | Naval industry | SES | × | × | × | × | 1.796 |
| | | | SBA | × | × | × | × | 1.678 |
| | | | TSB | × | × | × | × | 1.700 |
| | | | Bootstrap | × | × | × | × | 1.981 |
| Lumpy | 2021 [39] | Vehicle industry | SVM | × | × | × | × | 0.821 |
| | | | ANN | × | × | × | × | 1.042 |
| | | | RNN | × | × | × | × | 1.069 |
| | 2005 [41] | Space industry | SES | × | × | × | 9.26 | × |
| | | | Croston | × | × | × | 8.77 | × |
| | 2008 [21] | Electronics | SBA | × | 138.36 | × | × | × |
| | | | NN | × | 128.20 | × | × | × |
| Smooth | 2021 [39] | Vehicle industry | SVM | × | × | × | × | 0.877 |
| | | | ANN | × | × | × | × | 0.930 |
| | | | RNN | × | × | × | × | 0.987 |
| | 2020 [42] | Furniture industry | ARIMA | × | 0.1616 | 3531 | × | × |
| | | | KNN | × | 0.1374 | 2913 | × | × |
| | | | RNN | × | 0.1333 | 2788 | × | × |
| | | | ANN | × | 0.1011 | 2115 | × | × |
| | | | SVM | × | 0.1101 | 1967 | × | × |
| | | | LSTM | × | 0.1072 | 2267 | × | × |
| | | | Dilated LSTM | × | 0.1023 | 2172 | × | × |
| | 2021 [43] | Electric power | ANN | × | 5.27 | 378 | × | × |
| | | | ARIMA | × | 5.65 | 463 | × | × |
| | | | GRNN | × | 5.01 | 350 | × | × |
| | | | LSTM | × | 6.11 | 431 | × | × |
| | | | ETS+RD-LSTM | × | 4.46 | 351 | × | × |
Table 2. Spare parts demand categorization. Summary July 2009–June 2015.
| Part No. | Nomenclature | Quantity Demanded | Lumpiness Factor | Demand Categorization |
|---|---|---|---|---|
| 246-3904-000 | Tail rotor hub | 48 | 1.230240223 | Lumpy |
| 246-3925-00 | Tail rotor blade | 44 | 1.391716506 | Lumpy |
| LOPR-15875-2300-1 | Tail rotor chain | 41 | 1.543305508 | Lumpy |
| KAY-115AM | Hydraulic booster | 32 | 1.404591553 | Erratic |
| HP-3BM | Fuel control unit | 31 | 1.17969417 | Lumpy |
| TB3-117BM | Engine assembly | 30 | 2.228030758 | Lumpy |
| AK50T-1 | Air compressor | 27 | 1.404358296 | Lumpy |
| 8AT-1250-00-02 | Vibration damper assembly | 24 | 1.334541546 | Lumpy |
| 8AT-2710-000 | Main rotor blade (set of 5) | 24 | 1.723303051 | Lumpy |
| 8-1930-00 | Main rotor hub | 22 | 1.423085356 | Lumpy |
| Grand Total | | 323 * | | |

* Selected entries are shown here from the full sample space of lumpy spares held at an aviation spare depot.
Table 3. Neural network training dataset computation (2009–2015).
Testing dataset values:

| Quarter | X1 | X2 | X3 | Actual Output |
|---|---|---|---|---|
| 1 | 2.8464 | 1 | 0.333333 | 3 |
| 2 | 1.833066667 | 3 | 1.666667 | 2 |
| 3 | 2.8464 | 1 | 0.333333 | 1 |
| 4 | 0.393066667 | 2.3333333 | 1.333333 | 3 |

Normalized dataset values (actual total demand = 9):

| Quarter | X1 | X2 | X3 |
|---|---|---|---|
| 1 | 0.9 | 0.1 | 0.1 |
| 2 | 0.569565217 | 0.9 | 0.9 |
| 3 | 0.9 | 0.1 | 0.1 |
| 4 | 0.1 | 0.6333333 | 0.7 |
Table 4. Forecasted set of values.
| Quarter | Normalized | Descaled |
|---|---|---|
| 1 | 0.497869174 | 1.790411283 |
| 2 | 0.752308738 | 2.93538932 |
| 3 | 0.497869174 | 1.790411283 |
| 4 | 0.427485871 | 1.473686419 |
| Forecasted (total) | | 7.989898306 |
Table 5. Comparison of different methods based on MAE.
| Method | MAE |
|---|---|
| RNN | 0.6 |
| Simple ANN | 0.4 |
| Croston | 0.7 |
| SES | 0.9 |
| ANN with Levenberg–Marquardt training algorithm | 0.36 |
| SVM | 0.52 |
| Adaptive univariate SVM [39] | 0.43 |
| Proposed | 0.11 |
Table 6. Model comparison based on MASE.
| S. No. | Model | MASE |
|---|---|---|
| 1 | Croston | 2.18 |
| 2 | Holt–Winters | 1.07 |
| 3 | Auto-ARIMA | 1.04 |
| 4 | Random Forest | 1.15 |
| 5 | XGBoost | 1.10 |
| 6 | Auto-SVR | 1.06 |
| 7 | MLP | 1.68 |
| 8 | LSTM | 0.96 |
| 9 | ANN | 0.821 |
| 10 | SVM | 1.042 |
| 11 | Adaptive univariate SVM [39] | 1.069 |
| 12 | Proposed approach | 0.613 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
