Article

The Short-Term Prediction of Length of Day Using 1D Convolutional Neural Networks (1D CNN)

1 UAVAC, Department of Applied Mathematics, Universidad de Alicante, Carretera San Vicente del Raspeig s/n, 03690 San Vicente del Raspeig, Alicante, Spain
2 Department of Geodesy, Federal Agency for Cartography and Geodesy (BKG), 60322 Frankfurt am Main, Germany
3 GFZ German Research Centre for Geosciences, 14473 Potsdam, Germany
4 Institute for Geodesy and Geoinformation Science, Technische Universität Berlin, 10623 Berlin, Germany
5 Indian Institute of Technology Kanpur, Kanpur 208 016, Uttar Pradesh, India
* Author to whom correspondence should be addressed.
Sensors 2022, 22(23), 9517; https://doi.org/10.3390/s22239517
Submission received: 22 September 2022 / Revised: 4 November 2022 / Accepted: 29 November 2022 / Published: 6 December 2022
(This article belongs to the Special Issue Monitoring and Understanding the Earth’s Change by Geodetic Methods)

Abstract
Accurate Earth orientation parameter (EOP) predictions are needed for many applications, e.g., for the tracking and navigation of interplanetary spacecraft missions. One of the most difficult parameters to forecast is the length of day (LOD), which represents the variation in the Earth’s rotation rate, since it is primarily affected by the torques associated with changes in atmospheric circulation. In this study, a new-generation time-series prediction algorithm is developed. The one-dimensional convolutional neural network (1D CNN), one of the deep learning methods, is introduced to model and predict the LOD using the IERS EOP 14 C04 series and the axial Z component of the atmospheric angular momentum (AAM) provided by the German Research Centre for Geosciences (GFZ), which is strongly correlated with LOD changes. The prediction procedure operates as follows: first, we detrend the LOD and Z-component series using the least-squares (LS) method; then, we obtain the residual series of each to be used in the 1D CNN prediction algorithm. Finally, we analyze the results before and after introducing the AAM function. The results prove the potential of the proposed method as an optimal algorithm to successfully reconstruct and predict the LOD for up to 7 days.

1. Introduction

The Earth’s orientation in space can be expressed through three independent angles (e.g., the Euler angles). However, five Earth orientation parameters (EOP) are generally used: the polar motion (PM) coordinates x and y; the diurnal rotation (e.g., ERA = Earth rotation angle, or UT1 − UTC); and the celestial pole offsets dX and dY, which describe the adjustments to the conventional IAU precession-nutation models. These parameters provide the rotation from the International Terrestrial Reference System (ITRS) to the Geocentric Celestial Reference System (GCRS) as a function of time.
The EOP can be accurately observed with modern high-precision space geodetic techniques (i.e., very-long-baseline interferometry (VLBI), satellite laser ranging (SLR), global navigation satellite systems (GNSS)) [1,2,3]. Highly accurate celestial pole offsets (CPO) and UT1 are exclusively obtained using VLBI, whereas satellite-based techniques can provide polar motion with high accuracy (up to 50–100 μas) and temporal resolution [4]. The complexity of the data processing of advanced geodetic techniques makes it difficult to determine the EOP in real time. Additionally, short-term EOP predictions are needed for many applications, including the precise tracking and navigation of interplanetary spacecraft missions, laser ranging to satellites and the Moon, and climate forecasting. Consequently, accurate and novel EOP prediction methods combining existing geophysical phenomena information are of critical importance and required by the scientific community [5].
The LOD is the difference between the duration of the day measured using space geodesy and the nominal day of 86,400 s and is defined as LOD = −(d/dt)(UT1 − UTC) [6]. Advanced geodetic techniques can estimate the LOD with a high accuracy of up to 5–10 μs [4]. As with the previous parameters, short-term LOD predictions have to be provided for many real-time applications, such as interplanetary navigation and precise orbit determination, because of the LOD’s coupling with the orbit node.
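This definition can be illustrated numerically. The sketch below (synthetic UT1 − UTC values, not IERS data) approximates the derivative with daily finite differences; the function name and units are illustrative only:

```python
import numpy as np

def lod_from_ut1_utc(ut1_utc_ms, step_days=1.0):
    """Approximate LOD as the negative time derivative of UT1-UTC.

    ut1_utc_ms : daily UT1-UTC values in milliseconds (synthetic data here).
    Returns the excess LOD in ms per day, one value per daily interval.
    """
    return -np.diff(ut1_utc_ms) / step_days

# If UT1-UTC decreases by 1 ms per day, each day is 1 ms longer than 86,400 s.
lod = lod_from_ut1_utc(np.array([0.0, -1.0, -2.0, -3.0]))
```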
The LOD is the most challenging parameter to forecast since the greatest difficulties come from the occurrence of extreme events such as El Niño, which manifest themselves in LOD signals [7,8,9]. Changes in the LOD can be of tidal or non-tidal origin. Since tidal variations in the LOD can be precisely modeled, they can be removed from the LOD data, and the tidal term can later be accounted for in the methodology of calculating the LOD prediction. In addition, non-tidal changes in the LOD with periods of five years or less are predominantly driven by the exchange of angular momentum between the solid Earth and the global atmosphere [4].
Various algorithms have been developed to improve the accuracy of LOD predictions, such as the auto-covariance (AC) [10,11], wavelet decomposition [12], neural network (NN) [4,13,14,15], combination of least squares and autoregression (LS+AR), and autoregressive moving average (ARMA) algorithms [11,16,17], among others. In addition to these examples, other approaches use a direct combination of LOD data and the axial component of the effective angular momentum (EAMz) [6,18,19,20,21,22,23]. Ref. [6] showed that the use of atmospheric angular momentum (AAM) wind terms in the Kalman filter technique could improve short-term LOD predictions. A comparison of LOD predictions computed using a combination of Copula-based analysis and singular spectrum analysis (SSA) showed small prediction errors over 10 days [5]. In [19], the authors used UT1-like observations determined from the AAM in the UT1 − UTC combination solution to predict UT1, which yielded smaller prediction errors compared to the previous forecast strategy [24]. In addition, the prediction method proposed by [23] used 6-day predicted EAM values for the PM and UT1 − UTC prediction using LS extrapolation and the AR model.
The most significant difficulties in time-series prediction are due to the temporal dependence between the observations. Therefore, specialized handling of the data is required when fitting and evaluating models. In addition, LOD data contain complex nonlinear factors that vary considerably. In this work, we introduce one of the deep learning methods to improve short-term LOD predictions. One-dimensional convolutional neural networks, hereafter called 1D CNNs, provide powerful capabilities for accurate LOD forecasting. This method is robust to different types of noise present in the input datasets and the mapping function, providing accurate predictions and learning linear and nonlinear relationships despite the presence of missing values [25]. In this study, we use a combination of LS and a 1D CNN as a deterministic-stochastic tool for LOD prediction, where the deterministic part is estimated using LS and the 1D CNN is used for modeling the stochastic part. We introduce the AAM into the 1D CNN model since it strongly correlates with the LOD. First, we detrend the LOD and Z-component series using the LS method. Then, we obtain the residual series of each to be used in the 1D CNN prediction algorithm. Multiple sets of ultra-short-term LOD predictions (up to 7 days) are computed based on the IERS EOP 14 C04 time series. The results show that the proposed approach can efficiently predict the LOD. The remainder of the paper is structured as follows. Section 2 describes the theoretical framework of the algorithm. Section 3 provides a description of the dataset and the prediction methodology. The results of the combined method are presented and evaluated in Section 4 in terms of accuracy. Conclusions and future work are presented in Section 5.

2. Methods

2.1. One-Dimensional Convolutional Neural Networks (1D CNN)

A convolutional neural network (CNN) is a type of neural network that is designed to handle image data efficiently. It is a deep feed-forward network that extends a “classic” artificial neural network by adding more layers (deep learning) including the introduction of convolution blocks. The term “convolutional” comes from the implementation of convolution blocks in the network. The first appearance of this type of neural network dates back to 1980, when Kunihiko Fukushima [26] introduced the concept of convolution and downsampling layers, and to 1990, when LeCun et al. published a paper presenting the principles of the CNN modern framework [27,28]. The ability of CNNs to learn and automatically extract features from raw input data can be applied to time-series forecasting problems. A sequence of observations can be treated like one-dimensional image data that a CNN model can read and distill into the most salient elements. CNNs have the benefits of multilayer perceptrons for time-series forecasting, namely support for multivariate inputs and outputs and the ability to learn arbitrarily complex functional relationships, but they do not require that the model learns directly from lag observations. Instead, the model can learn a representation from a large input sequence that is most relevant to the prediction problem.

2.1.1. The Use of 1D CNNs

The input data for CNNs can have a 1D, 2D, or 3D format. For many years, 2D CNNs have been used for image data. Their use has dramatically improved image processing (including image classification, image semantic segmentation, and object detection) [29]. Nevertheless, the number of published articles on 1D CNNs is rather small compared with that on 2D CNNs. 1D CNNs have made various successful contributions with 1D signals such as electrocardiograms (ECG), electroencephalograms (EEG), and electromyograms (EMG) [30,31], as well as in other healthcare applications [32]. A 1D CNN has also been used for streamflow prediction [33].

2.1.2. The Architecture of 1D CNN Model

The typical structure of a 1D CNN is composed of three important layers: 1D convolutional layers, pooling layers, and fully connected layers (see Figure 1) [34]. In addition to these three layers, there are two more important elements: the dropout layer and the activation function.
  • One-dimensional convolutional layer [35]: This is the layer that can be used to detect features in a vector. The raw one-dimensional input (vector) x_n, where n = 0, 1, …, N−1, is given as input to the first layer of the CNN architecture. The layer utilizes the following parameters:
    1. Filters or kernels: The feature maps are the outputs of one filter applied to the previous layer. The filters/kernels produce the feature maps by performing convolutions with the input data. The number and size of the kernels are crucial for adequately capturing the relevant features from the input data. Let κ_n denote the convolution kernel of size ϑ; then, the convolution output ζ_n can be calculated as
    ζ_n = x_n ∗ κ_n = Σ_{m=0}^{ϑ−1} κ_m · x_{n−m},  n = 0, 1, …, N−1
    where ‘∗’ denotes the convolution operation. In general, the convolved features at the output of the l-th layer can be written as
    ζ_i^l = σ( b_i^l + Σ_j ζ_j^{l−1} ∗ κ_{ij}^l )
    where ζ_i^l represents the i-th feature in the l-th layer, ζ_j^{l−1} denotes the j-th feature in the (l−1)-th layer, κ_{ij}^l represents the kernel linking the j-th features to the i-th, b_i^l denotes the bias for these features, and σ represents the activation function.
    2. Activation function R [35]: One of the most important components of the CNN model is the activation function, which is used to learn and approximate any kind of continuous and complex relationship between the variables of the network. There are several activation functions, such as the ReLU, softmax, and sigmoid functions. In this work, we use the exponential linear unit (ELU), an activation function based on the ReLU with an extra constant α > 0 that defines the function’s smoothness for negative inputs. It tends to converge the cost to zero faster and produces more accurate results. Its formula is
    R(ζ) = ζ if ζ > 0, and R(ζ) = α(exp(ζ) − 1) if ζ ≤ 0
  • Stride [36]: The stride value defines how the kernel moves over the input data. The most common value is 1, meaning that the kernel shifts by one column of the input data at each step.
  • Pooling layer [35]: This type of layer is often placed after the convolutional layer. The aim of this layer is to decrease the size of the convolved feature map to reduce computational costs. There are several types of pooling operations (max pooling, average pooling, sum pooling) [36]. In this work, we used 1D max pooling, which consists of running over the input with a defined spatial neighborhood (the specified pool size and stride) and taking the maximum value from the considered region. Its operation can be represented by
    ζ_h^l = max_{p ∈ r_h} ζ_p^{l−1}
    where r_h denotes the pooling region with index h.
  • Flatten layer and dropout layer [35]: The flatten layer transforms the input data into a one-dimensional vector to be fed to the fully connected/dense layer. When all the features are connected to the dense layer, overfitting on the training dataset can occur. To overcome this problem, a dropout layer is added after the flatten layer, wherein a few neurons are randomly dropped from the neural network during the training process, resulting in a reduced model size.
  • Dense fully connected layer [35]: The flattened output is given as input to the next layer, i.e., the dense fully connected layer, which produces the output. The activation function is one of its parameters; in this work, we use the ELU function described above.
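The convolution, ELU, and max-pooling operations above can be sketched end-to-end with plain NumPy. The weights and input below are arbitrary toy values, and, as in deep learning frameworks, the convolution is implemented as cross-correlation (no kernel flip), which is equivalent up to a reversal of the kernel:

```python
import numpy as np

def elu(z, alpha=1.0):
    # ELU activation: z for z > 0, alpha*(exp(z) - 1) otherwise.
    return np.where(z > 0, z, alpha * (np.exp(z) - 1.0))

def conv1d(x, kernel, bias=0.0):
    # 'Valid' 1D cross-correlation with stride 1, as used by DL frameworks;
    # equivalent to the convolution above up to a reversal of the kernel.
    n, k = len(x), len(kernel)
    return np.array([np.dot(x[i:i + k], kernel) for i in range(n - k + 1)]) + bias

def max_pool1d(x, pool=2):
    # Keep the maximum of each non-overlapping window of length `pool`.
    return np.array([x[i:i + pool].max() for i in range(0, len(x) - pool + 1, pool)])

# Toy forward pass: conv -> ELU -> max pool -> flatten -> dense.
x = np.array([0.1, 0.5, -0.2, 0.8, 0.3, -0.1, 0.4])   # 7-day input window
feat = elu(conv1d(x, kernel=np.array([0.5, -0.25, 0.1])))
pooled = max_pool1d(feat, pool=2)
w_dense = np.ones_like(pooled)                         # arbitrary dense weights
output = float(np.dot(pooled.flatten(), w_dense))
```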

2.2. Error Analysis

The mean absolute error (MAE) is used to evaluate the performance of the prediction accuracy. We use the MAE instead of the root mean squared error (RMSE) because the MAE is the parameter used by most authors when addressing EOP predictions [37], at least since the first EOP PCC, and it is also the parameter of choice in the EOP PCC2, which can be accessed at http://eoppcc.cbk.waw.pl/ (accessed on 2 September 2022). In addition, several papers have proved that using the MAE is a more natural measure of the average error [38,39]. The MAE is calculated for the k t h day in the future as follows:
MAE = (1/k) Σ_{i=1}^{k} |P_i − O_i|
where P i is the predicted value of the i t h prediction day, O i is the corresponding observed value, and k is the total prediction number.
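A minimal implementation of the MAE, with hypothetical prediction and observation values in ms:

```python
import numpy as np

def mae(predicted, observed):
    """Mean absolute error: the mean of |P_i - O_i| over the prediction days."""
    return float(np.mean(np.abs(np.asarray(predicted) - np.asarray(observed))))

# Hypothetical 3-day LOD predictions vs. observations, in ms.
err = mae([0.40, 0.42, 0.45], [0.41, 0.40, 0.48])
```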

3. Calculation and Analysis

3.1. Dataset Description

3.1.1. Length of Day (LOD)

In this process, daily LOD time-series data were taken from the combined EOP solution 14 C04 of the International Earth Rotation and Reference Systems Service (IERS). This product can be accessed at https://hpiers.obspm.fr/iers/series/opa/eopc04R_IAU2000_dialy (accessed on 2 September 2022). Since older data have much higher uncertainty than more recent data, we excluded them and used data from 1999 to 2020.

3.1.2. Atmospheric Angular Momentum (AAM) Function

The mass and motion terms of the AAM models describe the non-tidal geophysical excitation of the Earth’s rotation due to atmospheric mass redistribution. The AAM data consist of three main components, X, Y, and Z. The X and Y components are associated with the excitation of polar motion, whereas the Z component plays an important role in interannual LOD variations [23,40,41,42]. Consequently, the latter component, taken from the available model-based EAM for the atmosphere provided by the German Research Centre for Geosciences (GFZ), was introduced into the proposed prediction since it is strongly correlated with the LOD change.

3.2. Introducing AAM to LOD Prediction Using 1D CNN

In this paper, we define a workflow algorithm to predict the time-varying behavior of the LOD change (Figure 2). Changes in the LOD can be of tidal or non-tidal origin. Tidal variations, such as the influence of the solid Earth tides with periods from 5 days up to 18.6 years and the diurnal and semi-diurnal variations due to ocean tides, can be accurately modeled and therefore removed from the LOD [43]. These tidal variations were first removed from the observed LOD measurements using the zonal and ocean tide models recommended in the IERS Conventions 2010 [43]. The time series obtained after removing these tides is denoted as LODR.

3.2.1. Detrending of LODR

The LODR still includes a linear part and some seasonal variations such as annual and semi-annual oscillations (Figure 3), which are evidenced by the spectral analysis of the LODR illustrated in Figure 4. The parameters of the linear term and seasonal variations are estimated using the LS method with the following model:
f_LODR(t) = α_0 + α_1 t + A_1 sin(w_1 t + φ_1) + A_2 sin(w_2 t + φ_2) + A_3 sin(w_3 t + φ_3)
where α_0 is the bias, α_1 is the drift of the linear term, A_i and φ_i are the amplitudes and phases of the periodic terms, and w_1 = 2π/365.24, w_2 = 2π/182.62, and w_3 = 2π · 1.19 × 10⁻⁴.
The LS model is used for two purposes: to obtain stochastic residuals (the difference between the LODR data and LS model) and predict the deterministic components of the signal [4]. The linear and harmonic terms are removed from the LODR in order to avoid the error coming from the extrapolation problem (see Figure 3). Thus, the final prediction of the LODR is the sum of the prediction of the LODR residual series and the extrapolated trend terms. In Figure 3, it can be seen that the LODR residual series still includes some periodic terms, which are modeled later using the 1D CNN.
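The LS fit above can be sketched by linearizing each term A sin(wt + φ) as a sin(wt) + b cos(wt) and solving an ordinary least-squares problem. This is a generic reconstruction, not the authors' code, and the synthetic series below stands in for the LODR:

```python
import numpy as np

# Angular frequencies of the LS model: annual, semi-annual, and long-period terms.
W = [2 * np.pi / 365.24, 2 * np.pi / 182.62, 2 * np.pi * 1.19e-4]

def ls_design(t):
    # Each A*sin(w t + phi) is linearized as a*sin(w t) + b*cos(w t),
    # so the fit reduces to an ordinary linear least-squares problem.
    cols = [np.ones_like(t), t]
    for w in W:
        cols += [np.sin(w * t), np.cos(w * t)]
    return np.column_stack(cols)

def detrend(t, series):
    """Fit the bias + drift + harmonics model; return (coefficients, residuals)."""
    A = ls_design(t)
    coef, *_ = np.linalg.lstsq(A, series, rcond=None)
    return coef, series - A @ coef

# Synthetic check: a drift plus an annual sine should be removed almost exactly.
t = np.arange(3000.0)
y = 0.5 + 1e-4 * t + 0.3 * np.sin(W[0] * t + 1.0)
coef, resid = detrend(t, y)
```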

3.2.2. Detrending of AAM Z-Component Series

Similar to the LODR (previous section), the linear and periodic terms of the AAM Z component were also approximated using the LS method in the same temporal domain (Figure 5). The LS model can be written as
f_AAM(t) = A + β t + d_1 sin(w_1 t + θ_1) + d_2 sin(w_2 t + θ_2)
where A is the bias, β is the drift of the linear term, d_i and θ_i are the amplitudes and phases of the periodic terms, and w_1 = 2π/365.24, w_2 = 2π/182.62.

3.2.3. LODR Prediction Using 1D CNN

Convolutional neural network models, or CNNs, can be used for multistep time-series forecasting. CNNs can be used in either a recursive forecast strategy, where the model makes one-step predictions whose outputs are fed back as inputs for subsequent predictions, or a direct strategy, where one model is developed for each time step to be predicted. Alternately, CNNs can predict the entire output sequence in one step, as a single vector. This is a general benefit of feed-forward neural networks. An important secondary benefit of using CNNs is that they can support multiple 1D inputs in order to make a prediction. This is useful if the multistep output sequence is a function of more than one input sequence. This can be achieved using two different model configurations [25]:
  • Multiple input channels: each input sequence is read as a separate channel, like the different channels of an image (red, green, and blue).
  • Multiple input heads: each input sequence is read by a different CNN submodel, and the internal representations are combined before being interpreted and used to make a prediction [25].
In this contribution, we developed a convolutional neural network for multistep time-series forecasting using only the univariate sequence of daily LODR residuals. The number of prior days used as input defines the one-dimensional (1D) subsequence of data. The CNN reads and learns to extract features. The closer the observational data to the day to be forecasted, the greater the impact on the prediction. In addition, it turns out that for near-term predictions, the values from the past few days are the most essential [4]. Consequently, a more sophisticated strategy is to utilize previous values as inputs of the 1D CNN: seven days, two weeks, and one month (28 days).
As shown in Figure 3, the training interval used to feed the 1D CNN was from 1999 to 2017 and the data from 2017 to 2020 were used as the testing set. In the first step, we used the prior seven days.
A 1D CNN model expects data to have the shape [samples, timesteps, features]; one sample comprises seven timesteps with one feature for the seven days of daily LODR residuals. Thus, the training dataset was x = [939, 7, 1], meaning that we had 939 weeks of data. The data in this format permitted the model to use the prior standard week to predict the next standard week. A way to create more training data is to change the problem during training to predict the next seven days given the previous seven days, regardless of the standard week. Thus, we iterated over the timesteps and divided the data into overlapping windows; each iteration moved along one timestep and predicted the subsequent seven days (see Figure 6). Therefore, we transformed the 939 weeks into 6559 overlapping samples, and the transformed dataset had the shape x = [6559, 7, 1]. To test our prediction method, we predicted 3 years, i.e., 52 weeks per year, for a total of 156 points for each day of the week.
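The overlapping-window transformation can be sketched as follows; exact sample counts depend on the bookkeeping at the series boundaries, so this generic version may differ marginally from the authors' implementation:

```python
import numpy as np

def overlapping_windows(series, n_in=7, n_out=7):
    """Split a daily series into overlapping (input, target) pairs.

    Each sample uses n_in prior days to predict the following n_out days,
    moving the window forward one day at a time.
    """
    X, y = [], []
    for i in range(len(series) - n_in - n_out + 1):
        X.append(series[i:i + n_in])
        y.append(series[i + n_in:i + n_in + n_out])
    X = np.array(X)[..., np.newaxis]   # shape [samples, timesteps, features]
    return X, np.array(y)

# A 100-day toy series yields 87 overlapping (7-day input, 7-day target) samples.
X, y = overlapping_windows(np.arange(100.0))
```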
This multistep time-series forecasting problem is an autoregression, where the next seven days are some function of the observations of the previous seven days. We used a model with one convolutional layer with 64 filters and a kernel size of 4, so the input sequence of seven days was read with a convolutional operation of four time steps at a time, and this operation was performed 64 times. The output layer predicted the next seven days in the sequence. We used the mean squared error loss function and the efficient Adam implementation of stochastic gradient descent and fit the model for 1000 epochs with a batch size of 32. We made the predictions using only the true observations: the model was required to make a one-week prediction, and then the actual data for that week were used as the basis for the prediction of the subsequent week.
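The week-by-week evaluation scheme described above can be sketched as a walk-forward loop. A naive persistence model stands in here for the trained 1D CNN (whose Keras training details are omitted), so the forecasts themselves are only placeholders:

```python
import numpy as np

def walk_forward(series, predict_fn, n_in=7, n_out=7):
    """Weekly walk-forward evaluation.

    predict_fn maps the last n_in true observations to an n_out-day forecast;
    after each forecast, the actual week is appended to the history used for
    the next forecast (true observations, never the model's own outputs).
    """
    history = list(series[:n_in])
    forecasts, actuals = [], []
    for start in range(n_in, len(series) - n_out + 1, n_out):
        forecasts.append(predict_fn(np.array(history[-n_in:])))
        actual = series[start:start + n_out]
        actuals.append(actual)
        history.extend(actual)
    return np.array(forecasts), np.array(actuals)

# Placeholder 'model': repeat the last observed value for the next 7 days.
persistence = lambda window: np.repeat(window[-1], 7)
fc, ac = walk_forward(np.arange(35.0), persistence)
```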

3.2.4. Introducing AAM Z-Component Series to LODR Using 1D CNN

In this approach, we combined the AAM Z component with the LODR residual series. In order to achieve this, we developed a multiheaded CNN model to use each of the two time-series variables to predict the next standard week of the LODR. We provided each one-dimensional time series to the model as a separate sub-CNN model (one head for each input variable), and the output of each of these sub-models was merged into one long vector, which was interpreted before the prediction was made for the output sequence, representing the prediction of the LODR residuals. We obtained the final prediction of the LODR by adding the extrapolated LODR trend series (Section 3.2.1). In order to understand the prediction performance using different prior days, several prediction models were estimated using the prior 7, 14, and 28 days. The flow chart in Figure 2 shows how we introduced the AAM to the LODR prediction using the 1D CNN.
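The multiheaded idea can be illustrated with untrained NumPy stand-ins: each series passes through its own convolution-plus-ELU "head", the flattened outputs are concatenated, and a dense layer maps the merged vector to a 7-day forecast. All weights and inputs below are arbitrary illustrative values, not the trained model:

```python
import numpy as np

def head(x, kernel):
    # One sub-model 'head': valid 1D cross-correlation followed by ELU.
    n, k = len(x), len(kernel)
    z = np.array([np.dot(x[i:i + k], kernel) for i in range(n - k + 1)])
    return np.where(z > 0, z, np.exp(z) - 1.0)

# Hypothetical 7-day input windows for the two series (toy values).
lodr_window = np.array([0.10, 0.20, 0.15, 0.30, 0.25, 0.20, 0.10])
aamz_window = np.array([0.05, 0.10, 0.08, 0.12, 0.11, 0.09, 0.07])

# Each series is read by its own head; the flattened outputs are merged into
# one long vector, then a dense layer maps it to the 7-day output sequence.
merged = np.concatenate([head(lodr_window, np.array([0.5, -0.5])),
                         head(aamz_window, np.array([1.0, -1.0]))])
rng = np.random.default_rng(0)
W_dense = rng.normal(size=(7, merged.size)) * 0.1   # untrained dense weights
forecast = W_dense @ merged
```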

4. Results

Discussion of the Results

To validate our LOD predictions, we included an error analysis of the differences between the predictions and the final results obtained from the observational data. This analysis revealed that the prediction errors increased gradually with the input size. Furthermore, by comparing the LODR + AAM with the LODR alone, similar patterns and behaviors became apparent for all the test cases evaluated, with the prediction accuracy increasing after introducing the AAM. Table 1 shows the MAE of the ultra-short-term prediction using 7, 14, and 28 days as input before and after introducing the AAM.
The presented technique showed an MAE larger than 0.1 ms after the 6-day prediction using 7 and 14 prior days as inputs, whereas the use of 28 prior days reached the aforementioned error one day earlier. A visual inspection of the differences between the observed and predicted data between December 2017 and December 2020 for all the input sizes revealed that the use of 7 prior days produced the best prediction results for the first and second days with MAEs of 0.027 ms/day and 0.051 ms/day, respectively (Figure 7 and Figure 8).
In addition, the proposed algorithm was also capable of capturing the periodic terms in the LODR residual time series. This was confirmed by the representations of the seven time series and the observed data with 7, 14, and 28 prior days before and after introducing the AAM using the 1D CNN, as seen in Figure 9 and Figure 10, where each plot represents, respectively, the first and the second to the seventh predicted days using different input sizes before and after introducing the AAM. The number of predicted weeks covers 3 years (2017, 2018, 2019).
The first Earth orientation parameter prediction comparison campaign, the EOP PCC, gave us the chance to compare our results with the largest number of existing EOP prediction methods. The results of the first EOP PCC were kindly provided by Kalarus [37] to one of the authors and are the same as those used in [5]. Although we tried our best to meet the same conditions as those of the first EOP PCC to show the presented method’s effectiveness, the EOP PCC participants used the last available values (predictions), which have lower precision compared to the a posteriori data (“finals”). In any case, in a situation of real competition such as the current EOP PCC2, any prediction method applied to the LOD is subject to the same limitations and faces the same problem with using predicted values. Note that the WRMS values of the EOP PCC were obtained when EOP 05 C04, which was discontinued in 2012, was the conventional IERS daily solution. However, this fact has no significant impact on the comparison of the LOD predictions, as shown, e.g., in [39]. Taking into account this limitation, we compared the prediction results with the existing techniques of the first EOP PCC, showing that the proposed approach can efficiently predict the LOD (Table 2). We only predicted 7 days into the future because we introduced the Z-AAM from the GFZ, which is predicted every 7 days. As can be seen in Figure 11, ours was the most accurate technique for the first prediction day, independent of the input size used (with an MAE of about 0.030 ms). It is important to note that the use of the AAM and 7 prior days as input yielded the best error for the second-day prediction (0.051 ms), which makes these results better than the Copula Archimedean 12+SSA and Kalman filter.
The comparison with the rest of the prediction techniques (i.e., wavelet, LS extrapolation, LS+AR, adaptive transformation, AR, LS, NN, and HE (harmonic and exponential method of approximation)) also showed smaller errors for our approach between the first and seventh prediction days. Lastly, the Copula + SSA models in [5] showed performance comparable with our technique, which achieved a smaller MAE than the Joe + SSA approach for the first 4 days in the future and was better than the Frank + SSA approach for the first 3 days.

5. Conclusions

In this study, a combination of stochastic and deterministic methods was studied for LOD prediction. LS was used as a deterministic technique to obtain stochastic residuals (the difference between the observed data and the LS model). Subsequently, a one-dimensional convolutional neural network (1D CNN) was used to predict the time-varying behavior of the LOD change using different input sizes (i.e., 7, 14, and 28 days). Considering that the axial AAM function is strongly correlated with the LOD change, we also introduced it into the length-of-day data with the tide model removed (LODR) to predict the LOD change. The results showed that after introducing the AAM, the prediction accuracy improved in all the tested cases, especially using 28 prior days as the input size, with an improvement of about 41% for the first prediction day. The comparison of the results after introducing the AAM with different input sizes showed that using 7 prior days yielded the best performance, with low errors on the first and second days (0.027 and 0.051 ms, respectively). The comparison with respect to the first EOP PCC indicated that the 1D CNN can precisely predict the LOD for ultra-short horizons (1 to 7 days), providing comparable accuracy, and is even better than the Kalman filter and Copula + SSA methods for the first and second days. Regarding the potential application of the proposed method to provide operational LOD predictions, we would have to account for the latency associated with the availability of the VLBI LOD estimates (approximately one month). This drawback could be solved by feeding our method with official and available predictions, for instance, the USNO finals.daily series. This strategy was typically utilized in the first/second EOP PCC.
Based on these results, the LOD forecast can be considerably improved using the 1D CNN technique. Although we set up this technique with a single output, much more remains to be done. Greater efforts are needed to build models using a multi-output strategy to predict more than one parameter. In addition, new variables should be identified to include as a priori data in 1D CNN models; such variables could improve the results for short-, mid-, and long-term predictions. Finally, the values of the effective angular momentum (EAM) functions could be included in 1D CNN models. In this way, each input series can be handled by a separate CNN, and the output of each of these submodels can be combined before a prediction is made for the output sequence.

Author Contributions

S.G. conducted most of the data analysis, the writing of the manuscript, and the 1D CNN. S.B. wrote part of the manuscript and conceived and designed the study. J.M.F. provided the supervision. S.M., H.S., R.H., S.R. and S.D. participated in the design of the study and helped to improve the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

S.B. was partially supported by Generalitat Valenciana (SEJIGENT/2021/001) and the European Union—NextGenerationEU (ZAMBRANO 21-04). J.M. was partially supported by Spanish Projects PID2020-119383GB-I00 funded by MCIN/AEI/10.13039/501100011033 and PROMETEO/2021/030 (Generalitat Valenciana).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are available upon request from the corresponding author.

Acknowledgments

We are grateful to the International Earth Rotation and Reference System Service (IERS) for providing the length-of-day data and the German Research Center for Geosciences (GFZ) for providing the atmospheric angular momentum data.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Tapley, S.; Bryden, M. A group test for the assessment of performance between the hands. Neuropsychologia 1985, 23, 215–221. [Google Scholar] [CrossRef] [PubMed]
  2. Schuh, H.; Schmitz-Hübsch, H. Short period variations in earth rotation as seen by VLBI. Surv. Geophys. 2000, 21, 499–520. [Google Scholar] [CrossRef]
  3. Lichten, S.M.; Marcus, S.L.; Dickey, J.O. Sub-daily resolution of Earth rotation variations with global positioning system measurements. Geophys. Res. Lett. 1992, 19, 537–540. [Google Scholar] [CrossRef]
  4. Lei, Y.; Guo, M.; Cai, H.; Hu, D.; Zhao, D. Prediction of length-of-day using Gaussian process regression. J. Navig. 2015, 68, 563–575. [Google Scholar] [CrossRef] [Green Version]
  5. Modiri, S.; Belda, S.; Hoseini, M.; Heinkelmann, R.; Ferrándiz, J.M.; Schuh, H. A new hybrid method to improve the ultra-short-term prediction of LOD. J. Geod. 2020, 94, 1–14. [Google Scholar] [CrossRef] [Green Version]
  6. Freedman, A.; Steppe, J.; Dickey, J.; Eubanks, T.; Sung, L.Y. The short-term prediction of universal time and length of day using atmospheric angular momentum. J. Geophys. Res. Solid Earth 1994, 99, 6981–6996. [Google Scholar] [CrossRef]
  7. Holton, J.R.; Dmowska, R. El Niño, La Niña, and the Southern Oscillation; Academic Press: Cambridge, MA, USA, 1989. [Google Scholar]
  8. Gross, R.S.; Marcus, S.L.; Eubanks, T.M.; Dickey, J.O.; Keppenne, C.L. Detection of an ENSO signal in seasonal length-of-day variations. Geophys. Res. Lett. 1996, 23, 3373–3376. [Google Scholar] [CrossRef]
  9. Raut, S.; Modiri, S.; Heinkelmann, R.; Balidakis, K.; Belda, S.; Kitpracha, C.; Schuh, H. Investigating the Relationship between Length of Day and El-Nino Using Wavelet Coherence Method; Springer: Berlin/Heidelberg, Germany, 2022; pp. 1–6. [Google Scholar] [CrossRef]
  10. Kosek, W.; McCarthy, D.; Luzum, B. Possible improvement of Earth orientation forecast using autocovariance prediction procedures. J. Geod. 1998, 72, 189–199. [Google Scholar] [CrossRef]
  11. Kosek, W. Autocovariance prediction of complex-valued polar motion time series. Adv. Space Res. 2002, 30, 375–380. [Google Scholar] [CrossRef]
  12. Akyilmaz, O.; Kutterer, H.; Shum, C.; Ayan, T. Fuzzy-wavelet based prediction of Earth rotation parameters. Appl. Soft Comput. 2011, 11, 837–841. [Google Scholar] [CrossRef]
  13. Schuh, H.; Ulrich, M.; Egger, D.; Müller, J.; Schwegmann, W. Prediction of Earth orientation parameters by artificial neural networks. J. Geod. 2002, 76, 247–258. [Google Scholar] [CrossRef]
  14. Liao, D.; Wang, Q.; Zhou, Y.; Liao, X.; Huang, C. Long-term prediction of the earth orientation parameters by the artificial neural network technique. J. Geodyn. 2012, 62, 87–92. [Google Scholar] [CrossRef]
  15. Lei, Y.; Guo, M.; Hu, D.-d.; Cai, H.-b.; Zhao, D.-n.; Hu, Z.-p.; Gao, Y.-p. Short-term prediction of UT1-UTC by combination of the grey model and neural networks. Adv. Space Res. 2017, 59, 524–531. [Google Scholar] [CrossRef]
  16. Xu, X.; Zhou, Y. EOP prediction using least square fitting and autoregressive filter over optimized data intervals. Adv. Space Res. 2015, 56, 2248–2253. [Google Scholar] [CrossRef]
  17. Wu, F.; Chang, G.; Deng, K. One-step method for predicting LOD parameters based on LS+ AR model. J. Spat. Sci. 2021, 66, 317–328. [Google Scholar] [CrossRef]
  18. Gross, R.S.; Eubanks, T.; Steppe, J.; Freedman, A.; Dickey, J.; Runge, T. A Kalman-filter-based approach to combining independent Earth-orientation series. J. Geod. 1998, 72, 215–235. [Google Scholar] [CrossRef]
  19. Johnson, T.J.; Luzum, B.J.; Ray, J.R. Improved near-term Earth rotation predictions using atmospheric angular momentum analysis and forecasts. J. Geodyn. 2005, 39, 209–221. [Google Scholar] [CrossRef] [Green Version]
  20. Niedzielski, T.; Kosek, W. Prediction of UT1–UTC, LOD and AAM χ3 by combination of least-squares and multivariate stochastic methods. J. Geod. 2008, 82, 83–92. [Google Scholar] [CrossRef]
  21. Kosek, W. Future improvements in EOP prediction. In Geodesy for Planet Earth; Springer: Berlin/Heidelberg, Germany, 2012; pp. 513–520. [Google Scholar]
  22. Nastula, J.; Gross, R.; Salstein, D.A. Oceanic excitation of polar motion: Identification of specific oceanic areas important for polar motion excitation. J. Geodyn. 2012, 62, 16–23. [Google Scholar] [CrossRef]
  23. Dill, R.; Dobslaw, H.; Thomas, M. Improved 90-day Earth orientation predictions from angular momentum forecasts of atmosphere, ocean, and terrestrial hydrosphere. J. Geod. 2019, 93, 287–295. [Google Scholar] [CrossRef] [Green Version]
  24. McCarthy, D.D.; Luzum, B.J. Prediction of Earth orientation. Bull. Geod. 1991, 65, 18–21. [Google Scholar] [CrossRef]
  25. Brownlee, J. Deep Learning for Time Series Forecasting: Predict the Future with MLPs, CNNs and LSTMs in Python; Machine Learning Mastery: Vermont, Australia, 2018. [Google Scholar]
  26. Fukushima, K.; Miyake, S. Neocognitron: A self-organizing neural network model for a mechanism of visual pattern recognition. In Competition and Cooperation in Neural Nets; Springer: Berlin/Heidelberg, Germany, 1982; pp. 267–285. [Google Scholar]
  27. Kamilaris, A.; Prenafeta-Boldú, F.X. A review of the use of convolutional neural networks in agriculture. J. Agric. Sci. 2018, 156, 312–322. [Google Scholar] [CrossRef] [Green Version]
  28. LeCun, Y.; Boser, B.; Denker, J.; Henderson, D.; Howard, R.; Hubbard, W.; Jackel, L. Handwritten digit recognition with a back-propagation network. In Advances in Neural Information Processing Systems 2; Morgan-Kaufmann: Burlington, MA, USA, 1989. [Google Scholar]
  29. Vo, A.T.; Tran, H.S.; Le, T.H. Advertisement image classification using convolutional neural network. In Proceedings of the 2017 9th International Conference on Knowledge and Systems Engineering (KSE), Hue, Vietnam, 19–21 October 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 197–202. [Google Scholar]
  30. Nannavecchia, A.; Girardi, F.; Fina, P.R.; Scalera, M.; Dimauro, G. Personal heart health monitoring based on 1D convolutional neural network. J. Imaging 2021, 7, 26. [Google Scholar] [CrossRef] [PubMed]
  31. Hsieh, C.H.; Li, Y.S.; Hwang, B.J.; Hsiao, C.H. Detection of atrial fibrillation using 1D convolutional neural network. Sensors 2020, 20, 2136. [Google Scholar] [CrossRef] [Green Version]
  32. Abo-Tabik, M.; Costen, N.; Darby, J.; Benn, Y. Towards a smart smoking cessation app: A 1D-CNN model predicting smoking events. Sensors 2020, 20, 1099. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  33. Hussain, D.; Hussain, T.; Khan, A.A.; Naqvi, S.A.A.; Jamil, A. A deep learning approach for hydrological time-series prediction: A case study of Gilgit river basin. Earth Sci. Inform. 2020, 13, 915–927. [Google Scholar] [CrossRef]
  34. Chaerun Nisa, E.; Kuan, Y.D. Comparative Assessment to Predict and Forecast Water-Cooled Chiller Power Consumption Using Machine Learning and Deep Learning Algorithms. Sustainability 2021, 13, 744. [Google Scholar] [CrossRef]
  35. Saini, M.; Satija, U.; Upadhayay, M.D. Light-Weight 1-D Convolutional Neural Network Architecture for Mental Task Identification and Classification Based on Single-Channel EEG. arXiv 2020, arXiv:2012.06782. [Google Scholar]
  36. Rala Cordeiro, J.; Raimundo, A.; Postolache, O.; Sebastião, P. Neural Architecture Search for 1D CNNs—Different Approaches Tests and Measurements. Sensors 2021, 21, 7990. [Google Scholar] [CrossRef]
  37. Kalarus, M.; Schuh, H.; Kosek, W.; Akyilmaz, O.; Bizouard, C.; Gambis, D.; Gross, R.; Jovanović, B.; Kumakshev, S.; Kutterer, H.; et al. Achievements of the Earth orientation parameters prediction comparison campaign. J. Geod. 2010, 84, 587–596. [Google Scholar] [CrossRef]
  38. Willmott, C.J.; Matsuura, K. Advantages of the mean absolute error (MAE) over the root mean square error (RMSE) in assessing average model performance. Clim. Res. 2005, 30, 79–82. [Google Scholar] [CrossRef]
  39. Michalczak, M.; Ligas, M. Kriging-based prediction of the Earth’s pole coordinates. J. Appl. Geod. 2021, 15, 233–241. [Google Scholar] [CrossRef]
  40. Salstein, D.A. Monitoring atmospheric winds and pressures for Earth orientation studies. Adv. Space Res. 1993, 13, 175–184. [Google Scholar] [CrossRef]
  41. Dobslaw, H.; Thomas, M. Atmospheric induced oceanic tides from ECMWF forecasts. Geophys. Res. Lett. 2005, 32, 10. [Google Scholar] [CrossRef]
  42. Dobslaw, H. Homogenizing surface pressure time-series from operational numerical weather prediction models for geodetic applications. J. Geod. Sci. 2016, 6, 1. [Google Scholar] [CrossRef]
  43. Petit, G.; Luzum, B. IERS conventions. IERS Tech. Note 2010, 36, 2010. [Google Scholar]
Figure 1. One-dimensional convolutional neural network (1D CNN) architecture.
Figure 2. Flowchart of introducing the AAM to LODR predictions using 1D CNN.
Figure 3. Time series of the LODR between 1999 and 2020. From top to bottom, the data series shown are the LODR, trend terms, and LODR residual series. The time series is divided into two sets: the training set (1999–2017) and testing set (2017–2020).
Figure 4. Spectral analysis of the LODR using fast Fourier transform (FFT).
Figure 5. Time series of AAM between 1999 and 2020. From top to bottom, the data series shown are the AAM, trend terms, and AAM Z residual series. The time series is divided into two sets: the training set (1999–2017) and testing set (2017–2020).
Figure 6. Example of overlapping weekly data.
Figure 7. Errors of the predicted LOD with 7, 14, and 28 prior days before and after introducing the AAM using the 1D CNN. (a) LOD prediction error using 7 prior days after introducing the AAM; (b) LOD prediction error using 7 prior days; (c) LOD prediction error using 14 prior days after introducing the AAM; (d) LOD prediction error using 14 prior days; (e) LOD prediction error using 28 prior days after introducing the AAM; (f) LOD prediction error using 28 prior days.
Figure 8. Mean absolute errors of the predicted LOD using the 1D CNN before and after introducing the AAM with different input sizes (7, 14, 28 days).
Figure 9. The seven time series using 7, 14, and 28 prior days with the LODR + AAM using the 1D CNN. Each plot represents, respectively, the first and the second to the seventh prediction days using different input sizes before and after introducing the AAM. The number of predicted weeks covers 3 years (2017, 2018, 2019). (a) The first time series using 7, 14, 28 prior days (LODR + AAM); (b) The second time series using 7, 14, 28 prior days (LODR + AAM); (c) The third time series using 7, 14, 28 prior days (LODR + AAM); (d) The fourth time series using 7, 14, 28 prior days (LODR + AAM); (e) The fifth time series using 7, 14, 28 prior days (LODR + AAM); (f) The sixth time series using 7, 14, 28 prior days (LODR + AAM); (g) The seventh time series using 7, 14, 28 prior days (LODR + AAM).
Figure 10. The seven time series using 7, 14, and 28 prior days with the LODR using the 1D CNN. Each plot represents, respectively, the first and the second to the seventh prediction days using different input sizes. The number of predicted weeks covers 3 years (2017, 2018, 2019). (a) The first time series using 7, 14, 28 prior days (LODR); (b) The second time series using 7, 14, 28 prior days (LODR); (c) The third time series using 7, 14, 28 prior days (LODR); (d) The fourth time series using 7, 14, 28 prior days (LODR); (e) The fifth time series using 7, 14, 28 prior days (LODR); (f) The sixth time series using 7, 14, 28 prior days (LODR); (g) The seventh time series using 7, 14, 28 prior days (LODR).
Figure 11. Mean absolute errors of the predicted LOD using the 1D CNN and the 2nd EOP Prediction Comparison Campaign (EOP PCC) results.
Table 1. Comparison of the mean absolute errors of the predicted LOD using the 1D CNN before and after introducing the AAM using different input sizes (7, 14, 28). Units: (ms/day).
Day      Using 7 Days          Using 14 Days         Using 28 Days
         LODR     LODR + AAM   LODR     LODR + AAM   LODR     LODR + AAM
Day 1    0.031    0.027        0.035    0.030        0.055    0.032
Day 2    0.055    0.051        0.057    0.052        0.073    0.054
Day 3    0.071    0.070        0.073    0.069        0.089    0.076
Day 4    0.087    0.085        0.085    0.084        0.101    0.090
Day 5    0.100    0.098        0.0992   0.099        0.111    0.105
Day 6    0.116    0.115        0.111    0.110        0.120    0.117
Day 7    0.1203   0.1204       0.120    0.118        0.127    0.120
Table 2. Comparison of 1D CNN prediction and 2nd EOP PCC prediction errors. Units: (ms/day).
Prediction Day               1       2       3       4       5       6       7
1D CNN + AAM using 7 days    0.027   0.051   0.070   0.085   0.098   0.115   0.1204
1D CNN + AAM using 14 days   0.030   0.052   0.069   0.084   0.099   0.110   0.118
1D CNN + AAM using 28 days   0.032   0.054   0.076   0.096   0.105   0.117   0.120
Archi 12 + SSA               0.47    0.060   0.063   0.063   0.063   0.064   0.066
Kalman filter                0.042   0.051   0.057   0.062   0.071   0.084   0.094
Wavelet                      0.096   0.131   0.164   0.197   0.233   0.258   0.271
LSE                          0.061   0.088   0.107   0.117   0.128   0.138   0.151
LSE + AR EOP PC              0.070   0.097   0.118   0.133   0.142   0.143   0.154
Adaptive transform           0.165   0.158   0.162   0.159   0.160   0.160   0.160
AR                           0.154   0.182   0.183   0.193   0.207   0.216   0.224
LSC                          0.176   0.222   0.245   0.266   0.276   0.275   0.264
NN                           0.61    0.196   0.218   0.237   0.250   0.257   0.256
HE                           0.093   0.157   0.200   0.235   0.257   0.281   0.289