Article

Different Nonlinear Regression Techniques and Sensitivity Analysis as Tools to Optimize Oil Viscosity Modeling

1
LUKOIL Neftohim Burgas, 8104 Burgas, Bulgaria
2
Department of Mathematics, University of Chemical Technology and Metallurgy, Kliment Ohridski 8, 1756 Sofia, Bulgaria
3
Faculty of Mathematics and Informatics, St. Kliment Ohridski University, 15 Tsar Osvoboditel Blvd, 1504 Sofia, Bulgaria
4
Institute of Biophysics and Biomedical Engineering, Bulgarian Academy of Sciences, Academic Georgi Bonchev 105, 1113 Sofia, Bulgaria
5
Intelligent Systems Laboratory, Department Industrial Technologies and Management, University Prof. Dr. Assen Zlatarov, Professor Yakimov 1, 8010 Burgas, Bulgaria
*
Author to whom correspondence should be addressed.
Resources 2021, 10(10), 99; https://doi.org/10.3390/resources10100099
Submission received: 24 August 2021 / Revised: 16 September 2021 / Accepted: 24 September 2021 / Published: 29 September 2021

Abstract

Four nonlinear regression techniques were explored to model gas oil viscosity on the basis of Walther's empirical equation. With the initial database of 41 primary and secondary vacuum gas oils, four models were developed with a comparable accuracy of viscosity calculation. The Akaike information criterion and the Bayesian information criterion selected the least squares of relative errors (LSRE) model as the best one. The sensitivity analysis with respect to the given data also revealed that the LSRE model is the most stable one, with the lowest values of the standard deviations of the derivatives. Verification of the gas oil viscosity prediction ability was carried out with another set of 43 gas oils, showing remarkably better accuracy with the LSRE model. The LSRE model was also found to predict the viscosity of the 43 test gas oils better than the models of Aboul-Seoud and Moharam and of Kotzakoulakis and George.

1. Introduction

The modeling of the characteristics of petroleum and its derivatives has been the subject of numerous studies [1,2]. Different regression techniques [3,4,5,6,7,8,9,10,11,12,13,14] and artificial intelligence [15,16] (machine learning, neural network) approaches have been applied to model petroleum characteristics. Nonlinear regression has been the most used approach for model parameter estimation [17]. Typically, it minimizes an objective function based on the sum of squares of errors between experimental and calculated values [17]. Usually, the models have various parameters to be determined, and sometimes multiple solutions of the objective function can be obtained. The optimal solution depends mostly on the initial guess of the parameters [17]. Appropriate parameter estimation has been reported to be ensured by applying sensitivity analysis to the calculated parameter values [17]. Sensitivity analysis (SA) is the study of how the variation in the output of a model (numerical or otherwise) can be apportioned, qualitatively or quantitatively, to different sources of variation, and of how the given model depends on the information fed into it [18]. Good modeling practice requires that the modelers provide an evaluation of the confidence in the model, possibly assessing the uncertainties associated with the modeling process and with the outcome of the model itself [18]. Originally, SA was created to deal simply with uncertainties in the input variables and model parameters. Over the course of time, the ideas have been extended to incorporate model conceptual uncertainty, that is, uncertainty in model structures, assumptions, and specifications [18]. In our recent research [14], we developed an empirical model to predict the viscosity of secondary vacuum gas oils (VGOs) that outperformed the existing empirical models published in the literature. This model was developed based on data for 24 VGOs, extending the model of Aboul-Seoud and Moharam [1] by separating the influence of the specific gravity and the average boiling point on the VGO viscosity, adopting the idea of Kotzakoulakis and George [7]. The model was validated with data for 10 additional VGOs not included in the initial database of 24 VGOs, showing a better prediction ability than the model of Aboul-Seoud and Moharam [14]. In that study [14], we applied nonlinear regression using the classical approach for estimation of model parameters by minimization of the sum of squares of errors between experimental and calculated values. The viscosity measurement, however, is associated with a relatively high error (about 5% repeatability and about 15% reproducibility) [19]. The error in viscosity measurement in our recent study [14] was found to increase linearly as the measurement temperature decreased (between 5.5 and 57.8% for the temperature range 60–100 °C, with the lowest error at the highest temperature).
The model parameters can be estimated not only by minimization of the sum of squares of errors between experimental and calculated values but also by minimization of the sum of absolute errors and by minimization of the sum of relative errors [20]. Which of these nonlinear regression methods gives the best prediction is a question that needs to be investigated. For that reason, we employed data for 41 VGOs of primary and secondary origin to examine the application of four nonlinear regression methods, the classical least squares method, minimization of the sum of absolute errors, minimization of the sum of the squares of relative errors, and minimization of the sum of the absolute relative errors, to the modeling of VGO viscosity, with the aim of answering the question of which nonlinear regression method provides the most appropriate prediction of the viscosity of VGOs and other oils.
Hernández et al. [3], Hosseinifar and Jamshidi [4], Samano et al. [17], and Alcazar and Ancheyta [21], after the application of nonlinear regression, employed sensitivity analysis to find the most appropriate values of the model parameters. This approach was also adopted in this work and was extended not only to the model parameters but also to the given data. In the works mentioned above [3,4,17,21], no sensitivity analysis with respect to the given data has been carried out.
The aim of this research is to evaluate which nonlinear regression technique is best suited to model oil viscosity and how the application of sensitivity analysis with respect to the obtained model parameters and with respect to the given data can assist in the selection of the most appropriate model.

2. Materials and Methods

2.1. Experimental Materials and Methods

The kinematic viscosity at 80 °C, specific gravity, average boiling point, refractive index, molecular weight, and aromatic ring index of the 41 VGOs of primary and secondary origin used to develop the empirical model for the prediction of viscosity by applying the four nonlinear regression methods are presented in Table 1. The kinematic viscosity of the VGOs was estimated on the basis of the Engler specific viscosity measured at 80 °C in accordance with ASTM D1665, using Equation (1) [22]:
$$\text{Kin. vis.} = 7.41 \times \text{Engler specific viscosity},$$
where
Kin. vis. = kinematic viscosity, mm²/s;
Engler specific viscosity is expressed in °E.
The specific gravity of VGOs was measured in accordance with ASTM D 4052 method. The distillation characteristics were measured by high-temperature simulation distillation (HTSD) according to the ASTM D7169 method. The average boiling point was estimated by Equation (2):
$$ABP = \frac{T_{10\%} + T_{30\%} + T_{50\%} + T_{70\%} + T_{90\%}}{5}.$$
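Both conversions are simple enough to script; the following minimal Python sketch (our illustration, with hypothetical function names and example values, not part of the original workflow) applies Equation (1) to an Engler reading and Equation (2) to HTSD cut temperatures.

```python
def kinematic_viscosity_from_engler(engler_deg_e: float) -> float:
    """Equation (1): kinematic viscosity, mm2/s, from the Engler specific viscosity, degE."""
    return 7.41 * engler_deg_e

def average_boiling_point(t10, t30, t50, t70, t90):
    """Equation (2): ABP, degC, as the mean of the T10%..T90% HTSD cut temperatures."""
    return (t10 + t30 + t50 + t70 + t90) / 5.0

# Illustrative (not measured) values:
print(kinematic_viscosity_from_engler(1.8))             # ~13.3 mm2/s
print(average_boiling_point(343, 370, 397, 426, 455))   # ~398 degC
```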

2.2. Theory/Calculation

2.2.1. Models

Walther’s equation [24] was used as a basis for the empirical modeling of the viscosity of oils [7,10]. Mehrotra [10] proposed a correlation that has the form:
$$\ln\ln(\nu + 0.8) = \alpha_1 + \alpha_2 \ln T,$$
with
$$\alpha_1 = 0.148\,T_b^{0.5} + 5.489$$
and
$$\alpha_2 = -3.7.$$
Aboul-Seoud and Moharam [1] modified Equations (3)–(5) by including the oil specific gravity, and the empirical model then took the form:
$$\ln\ln(\nu + 0.8) = \alpha_1 + \alpha_2 \ln T,$$
where
$$\alpha_1 = 4.3414\,(T_b\,\gamma)^{0.2} + 6.6913 \quad \text{and} \quad \alpha_2 = -3.7.$$
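For reference, the Aboul-Seoud and Moharam correlation of Equations (6) and (7) can be evaluated as in the short sketch below; this is our illustration, assuming temperatures in kelvin and the usual negative Walther slope α2 = −3.7.

```python
import math

def aboul_seoud_moharam_viscosity(tb_kelvin: float, sg: float, t_kelvin: float) -> float:
    """Kinematic viscosity (mm2/s) from the Aboul-Seoud and Moharam form of Walther's
    equation, Eqs. (6)-(7).  Tb and T are assumed to be in kelvin; alpha2 = -3.7 follows
    the usual sign convention of Walther-type fits."""
    alpha1 = 4.3414 * (tb_kelvin * sg) ** 0.2 + 6.6913
    alpha2 = -3.7
    lnln = alpha1 + alpha2 * math.log(t_kelvin)
    return math.exp(math.exp(lnln)) - 0.8

# Illustrative call: a VGO with ABP ~ 398 degC (671 K) and SG 0.95, evaluated at 80 degC.
print(aboul_seoud_moharam_viscosity(671.0, 0.95, 353.15))   # ~8 mm2/s
```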
We started our model development from a form analogous to the modified Walther’s equation as shown in Equations (6) and (7), having the following appearance:
$$z_i = f(x_i, y_i, a) + \varepsilon_i, \quad i = 1, \ldots, n,$$
where $z_i$ is the response (VGO kinematic viscosity); $x_i$ (average boiling point) and $y_i$ (specific gravity) are the input data; the unknown parameter $a = (a_1, a_2, a_3, a_4, a_5)^T$ is a 5-dimensional vector; $\varepsilon_i$ are random errors; $n = 41$; and
$$f(x, y, a) = \exp\!\left(\exp\!\left(a_1 x^{a_2} y^{a_3} + a_4\right)\right) - a_5.$$
To estimate the components of the parameter a, we used four optimization methods:
Method 1: Classical least squares method:
$$\min\left\{F_1(a) = \sum_{i=1}^{n}\left(z_i - f(x_i, y_i, a)\right)^2 : a \in \mathbb{R}^5\right\}.$$
Method 2: Minimization of the sum of absolute errors:
$$\min\left\{F_2(a) = \sum_{i=1}^{n}\left|z_i - f(x_i, y_i, a)\right| : a \in \mathbb{R}^5\right\}.$$
Method 3: Minimization of the sum of squared relative errors:
$$\min\left\{F_3(a) = \sum_{i=1}^{n}\left(\frac{z_i - f(x_i, y_i, a)}{z_i}\right)^2 : a \in \mathbb{R}^5\right\}.$$
Method 4: Minimization of the sum of absolute relative errors:
$$\min\left\{F_4(a) = \sum_{i=1}^{n}\left|\frac{z_i - f(x_i, y_i, a)}{z_i}\right| : a \in \mathbb{R}^5\right\}.$$
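The four criteria are easy to prototype. The sketch below is our illustration in Python/SciPy, not the authors' Maple code (their computations, described in Section 2.2.2, used NLPSolve with a modified Newton method); it implements the model function of Equation (9) and the objective functions F1–F4 and minimizes each with a derivative-free Nelder-Mead search from the modified initial guess quoted in Section 2.2.2. The three data points are made up and stand in for the 41-VGO database of Table 1.

```python
import numpy as np
from scipy.optimize import minimize

def f(x, y, a):
    """Model of Eq. (9): f(x, y, a) = exp(exp(a1*x**a2 * y**a3 + a4)) - a5.
    Far from the optimum np.exp may overflow to inf (the paper notes the same issue);
    such trial points simply evaluate to a very bad criterion value."""
    a1, a2, a3, a4, a5 = a
    return np.exp(np.exp(a1 * x**a2 * y**a3 + a4)) - a5

# The four objective functions F1..F4 (Methods 1-4).
def F1(a, x, y, z): return np.sum((z - f(x, y, a))**2)           # least squares
def F2(a, x, y, z): return np.sum(np.abs(z - f(x, y, a)))        # least absolute errors
def F3(a, x, y, z): return np.sum(((z - f(x, y, a)) / z)**2)     # squared relative errors
def F4(a, x, y, z): return np.sum(np.abs((z - f(x, y, a)) / z))  # absolute relative errors

# Illustrative data (ABP in degC, SG, viscosity in mm2/s); the full data set of Table 1
# and the Halton-based initial conditions of Section 2.2.2 should be used in practice.
x = np.array([398.0, 417.0, 488.0])
y = np.array([0.9512, 0.9715, 0.9858])
z = np.array([7.3, 12.1, 49.9])
a_start = np.array([0.0, 0.2, 0.2, 1.0, 0.8])   # modified initial guess from the text

for F in (F1, F2, F3, F4):
    res = minimize(F, a_start, args=(x, y, z), method="Nelder-Mead",
                   options={"maxiter": 20000, "xatol": 1e-8, "fatol": 1e-8})
    print(F.__name__, res.x, res.fun)
```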

2.2.2. Computational Minimization

In many cases, there are well-known specialized algorithms for global optimization; such is the case, for example, when f is a monotone function, see [25,26]. On the other hand, there are many examples in which the sum of squares has several local minima, see, for example, [26] and the references therein. In our case, we did not have any conditions guaranteeing the convergence of an iterative process to the global extremum. One of the goals of this study was to examine whether the four methods stated above are adequate mathematical models capable of satisfactorily describing the data. To do this, a minimum of the difference between the measured and predicted oil viscosity (different for each method) was sought, and a sensitivity analysis of the model parameters with respect to the data was performed.
As an initial guess, the following modification of the Aboul-Seoud and Moharam correction to Walther's model was used:
$$a_1 = 0, \quad a_2 = 0.2, \quad a_3 = 0.2, \quad a_4 = 1, \quad a_5 = 0.8.$$
If one starts the computations with the original Aboul-Seoud and Moharam values, that is, with $a_1 = 4.3414$ and $a_4 = -15.01620372$, many overflow warnings/errors are obtained.
Using a set of quasi-random points in the five-dimensional parametric space in the neighborhood of the initial guess and calculating the values of the corresponding criterion function $F_j$, the computations were started (for Method 1) with the initial condition:
$$a_1 = 0, \quad a_2 = 1.0889, \quad a_3 = 0.825, \quad a_4 = 1.6, \quad a_5 = 1.6333.$$
More precisely, Halton sequences of quasi-random numbers with bases 2–6 were used to cover the hypercube neighborhood of the initial guess with edge length 2. As examples, Halton squares of 20 × 20 points with bases (2, 3) and (4, 5) are plotted in Figure 1. One may compute the above initial condition from the initial guess and the Halton points with indices and bases (0, 2), (8, 3), (10, 4), (3, 5), and (5, 6), respectively.
The discovery strategy for the initial conditions of the other three methods was the same.
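The Halton-based search for a starting point can be sketched in a few lines; the authors performed it in Maple, so the Python below is only an illustration (the helper names are ours, and F is assumed to take the parameter vector followed by the data arrays, as in the earlier sketch). Base 4 is kept because the text specifies bases 2–6, although Halton sequences are usually built on prime bases.

```python
import numpy as np

def halton_value(index: int, base: int) -> float:
    """Radical-inverse (van der Corput) value of `index` in the given base: one Halton coordinate."""
    result, f = 0.0, 1.0 / base
    while index > 0:
        result += f * (index % base)
        index //= base
        f /= base
    return result

def halton_points(n: int, bases=(2, 3, 4, 5, 6)) -> np.ndarray:
    """n quasi-random points in [0, 1)^d built from the given bases."""
    return np.array([[halton_value(i, b) for b in bases] for i in range(1, n + 1)])

def best_initial_condition(F, initial_guess, data, n_points=1000, edge=2.0):
    """Scan a hypercube of edge length `edge` centred at `initial_guess` with Halton points
    and return the point giving the smallest criterion value F(a, *data).  Points where the
    model overflows simply evaluate to inf and are never selected."""
    pts = np.asarray(initial_guess, float) + (halton_points(n_points) - 0.5) * edge
    values = [F(a, *data) for a in pts]
    return pts[int(np.argmin(values))]
```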
All computations were performed with CAS Maple, using NLPSolve with the modified Newton iterative method, starting from the corresponding initial condition. The stopping criterion was an absolute difference between two consecutive iterations less than or equal to 0.01.

2.2.3. Sensitivity Analysis with Respect to Obtained Model Parameters

After successive iterations of the Newton procedure for Method 1, one obtains the following parameters: $\tilde a_1 = 0.0000972$, $\tilde a_2 = 1.5542645$, $\tilde a_3 = 1.0946136$, $\tilde a_4 = -1.5265719$, $\tilde a_5 = -1.4404829$.
Here, it is worth noting that the derivatives of $F_1(a) = \sum_{i=1}^{n}\left(z_i - f(x_i, y_i, a)\right)^2$ were huge numbers outside a "really small" neighborhood of the minimum. Indeed, one may check that $F_1(0.0000972, \tilde a_2, \tilde a_3, \tilde a_4, \tilde a_5) = 394.358$ and $F_1(0.0000970, \tilde a_2, \tilde a_3, \tilde a_4, \tilde a_5) = 659.175$. Therefore, it is necessary to use numbers with at least seven digits after the decimal sign. In Figure 2, the graph of the function $F_1(a, \tilde a_2, \tilde a_3, \tilde a_4, \tilde a_5)$ is plotted in blue.
Moreover, bearing in mind the above fact, the appropriateness of the estimated parameters was verified by a sensitivity analysis using perturbations of the model parameters in the range of ±20%, similarly to the approach described by the authors of [3,4,17,21].
Generating random numbers in the ±20% interval around the obtained values, the estimates were refined to $a_1^0 = 0.0000973$, $a_2^0 = 1.5542641$, $a_3^0 = 1.0946132$, $a_4^0 = -1.5265719$, $a_5^0 = -1.4404824$. Here, $F_1(a_1^0, a_2^0, a_3^0, a_4^0, a_5^0) = 367.502$. In Figure 2, the graphs of the functions $F_1(a, \tilde a_2, \tilde a_3, \tilde a_4, \tilde a_5)$ and $F_1(a, a_2^0, a_3^0, a_4^0, a_5^0)$ are compared.
The same procedures were applied to Methods 2, 3, and 4. All results are summarized in Table 2.
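The ±20% perturbation check described above can be prototyped as a simple random search around the fitted parameters; the following sketch is our illustration (function and argument names are assumptions), not the authors' Maple code.

```python
import numpy as np

def refine_by_perturbation(F, a_opt, data, n_trials=5000, spread=0.20, seed=1):
    """Randomly perturb each fitted parameter within +/-spread (here +/-20%) around a_opt,
    keep any perturbed vector that lowers the criterion F(a, *data), and return the best
    one found together with its criterion value."""
    rng = np.random.default_rng(seed)
    a_best = np.asarray(a_opt, dtype=float)
    f_best = F(a_best, *data)
    for _ in range(n_trials):
        trial = a_best * (1.0 + rng.uniform(-spread, spread, size=a_best.size))
        f_trial = F(trial, *data)
        if f_trial < f_best:
            a_best, f_best = trial, f_trial
    return a_best, f_best
```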

2.2.4. Sensitivity Analysis with Respect to Given Data

Following Ref. [20], the optimization criteria in the four methods were rewritten as constrained optimization problems:
$$\min\left\{F_j(a) : a \in \mathbb{R}^p\right\},$$
subject to
$$g_i(a) = 0, \quad i = 1, \ldots, n,$$
$$h_i(a) \le 0, \quad i = 1, \ldots, m.$$
The Lagrangian function for the primal problem (14)–(16) is
$$L(a, \lambda, \mu) = F_j(a) + \sum_{i=1}^{n}\lambda_i g_i(a) + \sum_{i=1}^{m}\mu_i h_i(a),$$
where $\lambda_i$ are the Lagrange multipliers associated with $g_i$, $\lambda = (\lambda_1, \ldots, \lambda_n)$, and $\mu_i$ are the Lagrange multipliers associated with $h_i$, $\mu = (\mu_1, \ldots, \mu_m)$. The Lagrange dual function is defined by $\tilde{L}(\lambda, \mu) = \inf\left\{L(a, \lambda, \mu) : a \in \mathbb{R}^p\right\}$. As an infimum of affine functions, the Lagrange dual function is concave. Let us recall that at the local minimum $a^0$ the necessary conditions described in the Karush-Kuhn-Tucker theorem are satisfied.
The gradients of the Lagrangians of the stated methods were calculated. Calculating the arithmetic mean $\mu_x$ and the standard deviation $\sigma_x$ of the derivatives with respect to $x_i$ (for example), the standardized deviations of the derivatives
$$S_{x_i} = \frac{\partial L_j(a^0)/\partial x_i - \mu_x}{\sigma_x}, \quad i = 1, \ldots, n, \ j = 1, \ldots, 4,$$
are interpreted as sensitivity coefficients with respect to $x_i$.
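A generic, purely numerical way to obtain the standardized sensitivity coefficients of Equation (18) is to differentiate the chosen criterion with respect to each data value by finite differences; the sketch below does this for any of the F_j defined earlier (the function name, the step size, and the F(a, x, y, z) signature are our assumptions; the paper itself uses the analytic derivatives derived in the following subsections).

```python
import numpy as np

def standardized_sensitivities(F, a_opt, x, y, z, h=1e-4):
    """Central-difference estimates of the derivatives of F with respect to each data value
    (x_i, y_i, z_i) at the fitted parameters a_opt, standardized as in Eq. (18):
    S_i = (d_i - mean) / std, with Bessel's correction."""
    data = [np.asarray(x, float), np.asarray(y, float), np.asarray(z, float)]
    result = {}
    for k, name in enumerate(("x", "y", "z")):
        d = np.empty(data[k].size)
        for i in range(data[k].size):
            plus = [v.copy() for v in data]
            minus = [v.copy() for v in data]
            plus[k][i] += h
            minus[k][i] -= h
            d[i] = (F(a_opt, *plus) - F(a_opt, *minus)) / (2.0 * h)
        result[name] = (d - d.mean()) / d.std(ddof=1)
    return result
```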

2.2.5. Sensitivity Analysis of Least Squares Method

The classical least squares problem is equivalent to the following Lagrange problem:
$$\min \sum_{i=1}^{n}\varepsilon_i^2,$$
subject to
$$z_i - f(x_i, y_i, a) = \varepsilon_i, \quad i = 1, \ldots, n.$$
The Lagrangian function for the least squares method (19), (20) is
$$L_1(a) = \sum_{i=1}^{n}\left(z_i - f(x_i, y_i, a)\right)^2 = \sum_{i=1}^{n}\left(z_i - \exp\!\left(\exp\!\left(a_1 x_i^{a_2} y_i^{a_3} + a_4\right)\right) + a_5\right)^2.$$
Therefore, the sensitivities with respect to $z_i$ are
$$\frac{\partial L_1(a^0)}{\partial z_i} = 2\left(z_i - \exp\!\left(\exp\!\left(a_1^0 x_i^{a_2^0} y_i^{a_3^0} + a_4^0\right)\right) + a_5^0\right) = 2\left(z_i - f(x_i, y_i, a^0)\right), \quad i = 1, \ldots, n.$$
Let
$$\mu_z = \frac{1}{n}\sum_{i=1}^{n}\frac{\partial L_1(a^0)}{\partial z_i} = \frac{2}{n}\sum_{i=1}^{n} z_i - \frac{2}{n}\sum_{i=1}^{n} f(x_i, y_i, a^0)$$
be the arithmetic mean of the derivatives, and let
$$\sigma_z^2 = \frac{1}{n-1}\sum_{i=1}^{n}\left(\frac{\partial L_1(a^0)}{\partial z_i} - \mu_z\right)^2$$
be the variance of the derivatives (here Bessel's correction is used).
Standardizing, the sensitivity coefficients with respect to $z_i$ are obtained:
$$S_{z_i} = \frac{\partial L_1(a^0)/\partial z_i - \mu_z}{\sigma_z}, \quad i = 1, \ldots, n.$$
Similarly, the sensitivities with respect to $x_i$ and $y_i$ are
$$\frac{\partial L_1(a)}{\partial x_i} = -2 a_1 a_2 x_i^{a_2-1} y_i^{a_3}\exp\!\left(a_1 x_i^{a_2} y_i^{a_3} + a_4 + \exp\!\left(a_1 x_i^{a_2} y_i^{a_3} + a_4\right)\right)\left(z_i - \exp\!\left(\exp\!\left(a_1 x_i^{a_2} y_i^{a_3} + a_4\right)\right) + a_5\right) = -2 a_1 a_2 x_i^{a_2-1} y_i^{a_3}\ln\!\left(f(x_i, y_i) + a_5\right)\left(f(x_i, y_i) + a_5\right)\left(z_i - f(x_i, y_i)\right),$$
$$\frac{\partial L_1(a)}{\partial y_i} = -2 a_1 a_3 x_i^{a_2} y_i^{a_3-1}\exp\!\left(a_1 x_i^{a_2} y_i^{a_3} + a_4 + \exp\!\left(a_1 x_i^{a_2} y_i^{a_3} + a_4\right)\right)\left(z_i - \exp\!\left(\exp\!\left(a_1 x_i^{a_2} y_i^{a_3} + a_4\right)\right) + a_5\right) = -2 a_1 a_3 x_i^{a_2} y_i^{a_3-1}\ln\!\left(f(x_i, y_i) + a_5\right)\left(f(x_i, y_i) + a_5\right)\left(z_i - f(x_i, y_i)\right).$$
Using both equalities in (26) and (27), one derives
$$\frac{\partial L_1(a)}{\partial y_i} = \frac{a_3}{a_2}\,\frac{x_i}{y_i}\,\frac{\partial L_1(a)}{\partial x_i}, \quad i = 1, \ldots, n.$$
From (26), using the arithmetic mean and variance
$$\mu_x = \frac{1}{n}\sum_{i=1}^{n}\frac{\partial L_1(a^0)}{\partial x_i}, \quad \sigma_x^2 = \frac{1}{n-1}\sum_{i=1}^{n}\left(\frac{\partial L_1(a^0)}{\partial x_i} - \mu_x\right)^2,$$
the sensitivity coefficients with respect to $x_i$ are calculated:
$$S_{x_i} = \frac{\partial L_1(a^0)/\partial x_i - \mu_x}{\sigma_x}, \quad i = 1, \ldots, n.$$
Analogously, using (28), the calculated values of $\partial L_1(a)/\partial y_i$, the corresponding arithmetic mean $\mu_y$, and the variance $\sigma_y^2$, the sensitivity coefficients with respect to $y_i$ are obtained:
$$S_{y_i} = \frac{\partial L_1(a^0)/\partial y_i - \mu_y}{\sigma_y}, \quad i = 1, \ldots, n.$$
Let us note that sometimes it is convenient to have the expressions for the derivatives of $L_1$ in terms of the Lagrange multipliers $\lambda_i$:
$$L_1(a) = \sum_{i=1}^{n}\varepsilon_i^2 + \sum_{i=1}^{n}\lambda_i\left(z_i - f(x_i, y_i, a) - \varepsilon_i\right).$$
It is straightforward that
$$\frac{\partial L_1(a)}{\partial z_i} = \lambda_i = 2\varepsilon_i, \quad \frac{\partial L_1(a)}{\partial x_i} = -\lambda_i\frac{\partial f(x_i, y_i, a)}{\partial x_i}, \quad \frac{\partial L_1(a)}{\partial y_i} = -\lambda_i\frac{\partial f(x_i, y_i, a)}{\partial y_i}, \quad i = 1, \ldots, n.$$
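Because $\lambda_i = 2\varepsilon_i$, the Method 1 sensitivity coefficients can be computed directly from the residuals. The sketch below does so with NumPy; it is our illustration (the helper name is ours), with the signs following the derivation above.

```python
import numpy as np

def method1_sensitivities(a, x, y, z):
    """Sensitivity coefficients S_z, S_x, S_y for the least squares criterion L1 from the
    closed forms of Eqs. (22) and (26)-(28): dL1/dz_i = 2*eps_i,
    dL1/dx_i = -2*eps_i*df/dx_i, dL1/dy_i = (a3/a2)*(x_i/y_i)*dL1/dx_i."""
    a1, a2, a3, a4, a5 = a
    x, y, z = map(np.asarray, (x, y, z))
    u = a1 * x**a2 * y**a3 + a4              # inner Walther-type term
    f = np.exp(np.exp(u)) - a5               # model prediction, Eq. (9)
    eps = z - f                              # residuals
    dz = 2.0 * eps
    df_dx = a1 * a2 * x**(a2 - 1) * y**a3 * (f + a5) * np.log(f + a5)
    dx = -2.0 * eps * df_dx
    dy = (a3 / a2) * (x / y) * dx
    standardize = lambda d: (d - d.mean()) / d.std(ddof=1)
    return {"z": standardize(dz), "x": standardize(dx), "y": standardize(dy)}
```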

2.2.6. Sensitivity Analysis of Absolute Value Minimization Problem

Analogously, it is convenient to consider the following constrained analog of the absolute value minimization problem in Method 2:
$$\min \sum_{i=1}^{n}\varepsilon_i,$$
subject to
$$z_i - f(x_i, y_i, a) \le \varepsilon_i, \quad i = 1, \ldots, n,$$
$$f(x_i, y_i, a) - z_i \le \varepsilon_i, \quad i = 1, \ldots, n,$$
$$0 \le \varepsilon_i, \quad i = 1, \ldots, n.$$
The Lagrangian for the problem (33)–(36) is
$$L_2(a) = \sum_{i=1}^{n}\varepsilon_i + \sum_{i=1}^{n}\mu_{1i}\left(z_i - f(x_i, y_i, a) - \varepsilon_i\right) + \sum_{i=1}^{n}\mu_{2i}\left(f(x_i, y_i, a) - z_i - \varepsilon_i\right) - \sum_{i=1}^{n}\mu_{3i}\varepsilon_i,$$
where $\mu_{ji}$ are the Lagrange multipliers, $i = 1, \ldots, n$, $j = 1, 2, 3$.
Thus,
$$\frac{\partial L_2(a)}{\partial z_i} = \mu_{1i} - \mu_{2i}, \quad i = 1, \ldots, n,$$
$$\frac{\partial L_2(a)}{\partial x_i} = a_1 a_2 x_i^{a_2-1} y_i^{a_3}\left(\mu_{2i} - \mu_{1i}\right)\left(f(x_i, y_i, a) + a_5\right)\ln\!\left(f(x_i, y_i, a) + a_5\right),$$
$$\frac{\partial L_2(a)}{\partial y_i} = a_1 a_3 x_i^{a_2} y_i^{a_3-1}\left(\mu_{2i} - \mu_{1i}\right)\left(f(x_i, y_i, a) + a_5\right)\ln\!\left(f(x_i, y_i, a) + a_5\right).$$
It follows from a well-known lemma in the proof of the Karush-Kuhn-Tucker conditions (in fact, the Fritz John conditions), see [27], that if $a^0$ is an optimal solution of the problem (33)–(36), then there exist multipliers $\mu_0^0, \mu_{ji}^0$ such that $\mu_0^0 \ge 0$, $\mu_{ji}^0 \ge 0$, $j = 1, 2, 3$, $i = 1, \ldots, n$, not all zero, and
$$\mu_0^0\,\nabla_\varepsilon\!\left(\sum_{i=1}^{n}\varepsilon_i\right) + \sum_{i \in J_1(a^0)}\mu_{1i}^0\,\nabla_\varepsilon\!\left(z_i - f(x_i, y_i, a) - \varepsilon_i\right) + \sum_{i \in J_2(a^0)}\mu_{2i}^0\,\nabla_\varepsilon\!\left(f(x_i, y_i, a) - z_i - \varepsilon_i\right) + \sum_{i \in J_3(a^0)}\mu_{3i}^0\,\nabla_\varepsilon\!\left(-\varepsilon_i\right) = 0,$$
where
$$J_1(a^0) = \left\{i \in \{1, \ldots, n\} : z_i - f(x_i, y_i, a) - \varepsilon_i = 0\right\}, \quad J_2(a^0) = \left\{i \in \{1, \ldots, n\} : f(x_i, y_i, a) - z_i - \varepsilon_i = 0\right\},$$
$$J_3(a^0) = \left\{i \in \{1, \ldots, n\} : \varepsilon_i = 0\right\}$$
are the corresponding sets of active conditions, and $\varepsilon = (\varepsilon_1, \varepsilon_2, \ldots, \varepsilon_n)^T$. Simplifying,
$$\mu_0^0 e - \sum_{i \in J_1(a^0)}\mu_{1i}^0 e_i - \sum_{i \in J_2(a^0)}\mu_{2i}^0 e_i - \sum_{i \in J_3(a^0)}\mu_{3i}^0 e_i = 0,$$
where $e_i$ is the $i$-th unit vector and $e = (1, \ldots, 1)^T$. Let us note that
$$J_1(a^0) \cup J_2(a^0) = \{1, 2, \ldots, n\} \setminus J_3(a^0) \quad \text{and} \quad J_1(a^0) \cap J_2(a^0) = \emptyset.$$
Hence, one may construct a non-negative solution of the linear system by setting
$$\mu_{1i}^0 = 0, \ \mu_{2i}^0 = 1 \quad \text{if } \varepsilon_i < 0, \text{ i.e., } i \in J_2,$$
$$\mu_{1i}^0 = 1, \ \mu_{2i}^0 = 0 \quad \text{if } \varepsilon_i > 0, \text{ i.e., } i \in J_1,$$
$$\mu_0^0 = 1, \quad \mu_{3i} = 0, \quad i = 1, \ldots, n.$$

2.2.7. Sensitivity Analysis of Squared Relative Errors

Analogously to the previous subsections, the minimization problem in Method 3 is equivalent to the following Lagrange problem:
$$\min \sum_{i=1}^{n}\varepsilon_i^2,$$
subject to
$$z_i - f(x_i, y_i, a) = z_i\,\varepsilon_i, \quad i = 1, \ldots, n.$$
The Lagrangian is
$$L_3(a) = \sum_{i=1}^{n}\left(\frac{z_i - f(x_i, y_i, a)}{z_i}\right)^2 = \sum_{i=1}^{n}\left(1 - \frac{\exp\!\left(\exp\!\left(a_1 x_i^{a_2} y_i^{a_3} + a_4\right)\right) - a_5}{z_i}\right)^2.$$
The first derivatives are (here the already calculated derivatives of $L_1$ are used):
$$\frac{\partial L_3(a^0)}{\partial z_i} = \frac{1}{z_i^2}\frac{\partial L_1(a^0)}{\partial z_i} - \frac{1}{2 z_i^3}\left(\frac{\partial L_1(a^0)}{\partial z_i}\right)^2,$$
$$\frac{\partial L_3(a^0)}{\partial x_i} = \frac{1}{z_i^2}\frac{\partial L_1(a^0)}{\partial x_i},$$
$$\frac{\partial L_3(a^0)}{\partial y_i} = \frac{1}{z_i^2}\frac{\partial L_1(a^0)}{\partial y_i},$$
where $i = 1, \ldots, n$.
The formulas for the sensitivity analysis of the sum of absolute relative errors (Method 4) are omitted because they are analogous to those presented above.

3. Results

The data in Table 1 indicate that the selected vacuum gas oils (VGOs) differ significantly in their properties. The oil properties most important for modeling viscosity, specific gravity and average boiling point [14], varied in the ranges 0.838–1.177 and 309–488 °C, respectively. The VGO viscosity at 80 °C varied between 3.6 and 312.8 mm²/s.
The Bayesian approach was used over several classical distributions to find the distribution functions of the specific gravity (SG) and the average boiling point (ABP). Using the Bayesian information criterion, one may conclude that the best distribution for SG is the normal distribution with mean 0.98712 and standard deviation 0.0771899. The second and third candidates for a continuous probability distribution are the Gamma distribution and the LogNormal distribution. The histogram and PDF (probability density function) of the SG data are plotted in Figure 3.
For the second data set, ABP, using similar arguments, we again obtained the normal distribution, with mean 416.284 and standard deviation 39.0181. The histogram and PDF are plotted in Figure 4.
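A simplified way to reproduce this kind of distribution screening is to fit a few candidate distributions by maximum likelihood and rank them by BIC; the SciPy sketch below is only a stand-in for the Bayesian comparison described in the text (the candidate set and function name are our choices, and `sg_values` is a hypothetical array holding the SG column of Table 1).

```python
import numpy as np
from scipy import stats

def best_distribution_by_bic(data):
    """Fit normal, gamma, and lognormal distributions by maximum likelihood and rank them
    with BIC = k*ln(n) - 2*ln(L); the lowest score wins."""
    data = np.asarray(data, float)
    n = data.size
    candidates = {"normal": stats.norm, "gamma": stats.gamma, "lognormal": stats.lognorm}
    scores = {}
    for name, dist in candidates.items():
        params = dist.fit(data)                     # maximum likelihood fit
        loglik = np.sum(dist.logpdf(data, *params))
        scores[name] = len(params) * np.log(n) - 2.0 * loglik
    return sorted(scores.items(), key=lambda kv: kv[1])

# Example with the SG column of Table 1 loaded into `sg_values` (not reproduced here):
# print(best_distribution_by_bic(sg_values))
```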
Table 2 presents data about regression coefficients for the four methods obtained after application of the Newton iterative procedure, and after the sensitivity analysis with respect to obtained parameters. These data show that the performed sensitivity analysis with respect to the model parameters in most cases led to a modification of the values of the regression coefficients.
Table 3 presents the calculated viscosities of the 41 VGOs from Table 1, the error, the absolute relative error, and the average absolute relative error (AARE), also known as %AAD (average absolute deviation), obtained by the use of the optimized values of the regression coefficients from Table 2 (model parameters after sensitivity analysis). The errors and the %AAD were computed as shown in Equations (53) and (54), respectively:
$$\text{Error (E):} \quad E = \frac{\nu_{exp} - \nu_{calc}}{\nu_{exp}} \times 100,$$
$$\%AAD = \frac{1}{n}\sum_{i=1}^{n}\left|\frac{\nu_{exp} - \nu_{calc}}{\nu_{exp}}\right| \times 100.$$
Considering the %AAD as a criterion for ranking the four studied methods, the %AAD increases in the order: Method 3 < Method 4 < Method 1 < Method 2.
Table 4 shows the standardized sensitivities for the four studied estimation methods.
The data in Table 4 show that, for Method 1, the VGOs under numbers 6, 9, 15, 20, 24, and 27 exhibited a high sensitivity to the ABP data (x). The VGO under number 24 demonstrated a high sensitivity to the viscosity data (z). The VGOs under numbers 6, 9, 24, and 27 showed a high sensitivity to the SG data (y). For Method 2, only the VGO under number 19 exhibited a high sensitivity to the ABP (x) and SG (y) data. For Method 3, the VGOs under numbers 35 and 37 demonstrated a high sensitivity to the viscosity data (z), the VGOs under numbers 11, 20, and 37 to the ABP data (x), and the VGOs under numbers 20 and 37 to the SG data (y). Method 4 indicated a high sensitivity for the VGOs under numbers 10 and 35 to the viscosity data (z), and for the VGO under number 19 to the ABP (x) and SG (y) data.
Table 5 presents the means and standard deviations of the derivatives. It is evident from these data that Method 3 is characterized by the lowest standard deviation of the derivatives, followed by Method 4. Methods 1 and 2 have standard deviations of the derivatives that are two to three orders of magnitude higher than those of Methods 3 and 4.
Table 6 presents independent data (kinematic viscosity at 80 °C, ABP, and SG) for 43 gas oils used to verify the capability of the four methods to predict viscosity. These data include gas oils ranging from light gas oil to VGO. The SG and ABP for this independent data set vary between 0.805 and 1.006 and between 205 and 463 °C, respectively. The kinematic viscosity varies between 0.8 and 28.1 mm²/s. The %AAD increases in the order Method 3 (18.2%) < Method 4 (28.3%) < Method 1 (61.8%) < Method 2 (67.8%).

3.1. Evaluation of the Accuracy of Viscosity Estimation by the Studied Four Methods

Besides the error (53) and the %AAD (54), the following additional statistical parameters were used to evaluate the accuracy of viscosity estimation by the four studied methods for the data set of Table 1 [3]:
$$\text{Standard error (SE):} \quad SE = \left[\frac{\sum\left(\nu_{exp} - \nu_{calc}\right)^2}{n-1}\right]^{1/2},$$
$$\text{Relative standard error (RSE):} \quad RSE = \frac{SE}{\text{mean of the sample}} \times 100,$$
$$\text{Sum of square errors (SSE):} \quad SSE = \sum\frac{1}{\nu_{exp}^2}\left(\nu_{exp} - \nu_{calc}\right)^2,$$
$$\text{Residual (R):} \quad R = \nu_{exp} - \nu_{calc},$$
$$\text{Relative error (RE):} \quad RE = \frac{\nu_{exp} - \nu_{calc}}{\nu_{exp}} \times 100.$$
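For convenience, the error measures of Equations (53)–(59) can be collected in one helper; the sketch below follows our reconstruction of the garbled SE and SSE definitions above, so those two lines should be treated as assumptions.

```python
import numpy as np

def accuracy_statistics(v_exp, v_calc):
    """Accuracy measures of Eqs. (53)-(59): relative error E, %AAD, standard error SE,
    relative standard error RSE, sum of squared relative errors SSE, and residuals R."""
    v_exp, v_calc = np.asarray(v_exp, float), np.asarray(v_calc, float)
    n = v_exp.size
    E = (v_exp - v_calc) / v_exp * 100.0                     # Eq. (53)
    aad = np.mean(np.abs((v_exp - v_calc) / v_exp)) * 100.0  # Eq. (54), %AAD
    se = np.sqrt(np.sum((v_exp - v_calc) ** 2) / (n - 1))    # Eq. (55), as reconstructed
    rse = se / v_exp.mean() * 100.0                          # Eq. (56)
    sse = np.sum(((v_exp - v_calc) / v_exp) ** 2)            # Eq. (57), as reconstructed
    R = v_exp - v_calc                                       # Eq. (58)
    return {"E": E, "%AAD": aad, "SE": se, "RSE": rse, "SSE": sse, "R": R}
```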
Table 7 summarizes the statistical analyses for the four studied methods employing the data in Table 1. According to the statistical parameters standard error and relative standard error, Methods 1 and 2 surpass Methods 3 and 4 in the accuracy of viscosity prediction. However, regarding the statistical parameters relative error, sum of square errors, and %AAD, Method 3 seems to be the best. It is difficult to distinguish the best method on the basis of the statistical parameters estimated by Equations (53)–(59). The Akaike information criterion (AIC) and the Bayesian information criterion (BIC) were found capable of estimating the relative quality of a statistical method, and thus of providing means for model selection [13,28,29] when several models are available. The estimation of AIC and BIC for the four studied methods is summarized below.
Akaike Information Criterion.
Consider the obtained errors $\epsilon_1, \ldots, \epsilon_n$ as independent random samples from a density function $f(\epsilon_i \mid \theta)$, $n = 41$. Supposing a normal distribution of the errors:
$$f(x \mid \theta) = f(x \mid \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}}\exp\!\left(-\frac{1}{2}\left(\frac{x - \mu}{\sigma}\right)^2\right).$$
Then, by the definition of the likelihood function:
$$L(\theta) = \prod_{i=1}^{n} f(\varepsilon_i \mid \theta) = \prod_{i=1}^{n}\frac{1}{\sigma\sqrt{2\pi}}\exp\!\left(-\frac{1}{2}\left(\frac{\varepsilon_i - \mu}{\sigma}\right)^2\right).$$
The function L attains its maximum if
$$\mu = \hat{\mu} = \frac{1}{n}\sum_{i=1}^{n}\varepsilon_i \quad \text{and} \quad \sigma^2 = \hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n}\left(\varepsilon_i - \hat{\mu}\right)^2.$$
Method 1: obtained errors $\varepsilon_1, \ldots, \varepsilon_n$. Estimating the maximizers of the likelihood function gives $\hat{\mu} = 9.0452$ and $\sigma = 167.7992$. Hence, the Akaike information criterion value is
$$AIC_1 = 2 \times (\text{number of parameters}) - 2\ln L(\hat{\theta}) \approx 211.$$
Analogously, $AIC_2 \approx 175$, $AIC_3 \approx -14$, and $AIC_4 \approx 190$.
For model comparison, the model with the lowest AIC score is preferred [29].
Bayesian information criterion
The Bayesian information criterion is defined by
$$BIC = (\text{number of parameters}) \times \ln(\text{number of data points}) - 2\ln L(\hat{\theta}).$$
In our case:
BIC1 ≈ 220, BIC2 ≈ 184, BIC3 ≈ −5, BIC4 ≈ 198.
Again, the model with the lowest BIC score is preferred.
On the basis of the AIC and BIC, one may conclude that Method 3 is the model with the highest quality.
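Under the normal-error assumption above, AIC and BIC reduce to a few lines of code; the sketch below is our illustration (the parameter count of 5 is an assumption based on the five regression coefficients a1–a5).

```python
import numpy as np

def aic_bic_from_errors(errors, n_params=5):
    """AIC and BIC for a model whose errors are treated as i.i.d. normal samples, using the
    maximum likelihood estimates of Eq. (62)."""
    eps = np.asarray(errors, float)
    n = eps.size
    mu = eps.mean()
    sigma2 = np.mean((eps - mu) ** 2)                     # MLE variance (no Bessel correction)
    loglik = -0.5 * n * (np.log(2.0 * np.pi * sigma2) + 1.0)
    aic = 2.0 * n_params - 2.0 * loglik
    bic = n_params * np.log(n) - 2.0 * loglik
    return aic, bic
```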

3.2. Sensitivity Analysis with Respect to Given Data

Table 4 and Table 5 summarize the standardized sensitivities and the means and standard deviations of the derivatives for the four investigated methods.
The variances $\sigma^2$ in the data sets of derivatives (especially with respect to $y_i$, $i = 1, \ldots, n$, for which $\sigma_y^2 \approx 1408$ and $850$, respectively) are huge for the first two methods.
In Table 4, the extreme values of the sensitivity coefficients are marked in bold. In fact, an extreme value of the deviation of a derivative is a sign of a possible problem: the model is not suitable for certain data, or the given point does not correspond to the model, etc. On the other hand, different objective functions and the corresponding analyses produce different extremal values in the set of all sensitivities. Therefore, it is a good idea to perform the sensitivity analysis through different objective/target functions for one and the same mathematical model and to analyze the obtained values in order to improve the model or to exclude some of the initially given data. In Figure 5, the distributions of the sensitivity coefficients for the four methods are presented.
The two sets of derivatives mentioned above are spread out from their average value, the mean. A situation like this is possible if the extremum was not found, if the model function is not adequate, or if the derivatives are huge in any small neighborhood of the extremum. In any case, the calculated values of the variance are reasons to doubt the first two methods. In contrast, for the third and fourth methods the variances are not such huge numbers. As an example, we present in Figure 6 the histograms of the derivative values computed for the first and fourth methods.
Based on the arguments above, one may consider Method 3 or Method 4, preferably Method 3, taking into account the mean absolute percentage error.

3.3. Verification of the Viscosity Prediction Ability of the Four Studied Methods

The 43 gas oils in Table 6 were selected in such a way as to cover the whole possible diversity of properties of gas oils of primary and secondary origin that can be encountered in any refinery all over the world. As was already mentioned in the Results section, Method 3 surpassed all the other methods concerning the accuracy of viscosity prediction. Table 8 summarizes the statistical analyses for the four studied methods employing the data in Table 6. These data indubitably reveal the superiority of Method 3 as the best method to model gas oil viscosity. As a supplement, the oil viscosity models of Aboul-Seoud and Moharam [1] (Equation (6)) and of Kotzakoulakis and George [7] (Equation (65)), both based on Walther's equation, were also used to predict the viscosity of the 43 gas oils from Table 6. They predict the 43 gas oil viscosities with %AAD of 21.8% and 89%, respectively, proving the superiority of the Method 3 model.
$$\ln\ln(\mathrm{VIS} + 0.8) = 14.69\,\mathrm{ABP}^{0.0684}\,\mathrm{SG}^{0.267} - 3.682\ln T.$$
The model obtained by Method 3 is currently used not only to predict the viscosity of gas oils but also as a tool for verifying the correctness of the viscosity measurement of gas oils in the LUKOIL Neftohim Burgas research laboratory. Several times it has proved its usefulness as an indicator of incorrect viscosity measurement, especially for H-Oil gas oils, which contain both a high amount of aromatic compounds and a relatively high content of waxes, making their viscosity measurement problematic. Once, an HVGO viscosity at 80 °C was measured as 72 mm²/s, while the model based on Method 3 reported a value of 54 mm²/s. The repetition of the viscosity measurement gave a value of 54 mm²/s.

4. Conclusions

The gas oil properties average boiling point and specific gravity, along with a modified Walther's equation and nonlinear regression techniques, can be used to model the oil physical property viscosity. The four nonlinear regression techniques, least squares of absolute errors, least absolute errors, least squares of relative errors, and least absolute relative errors, can all model gas oil viscosity. The gas oil viscosity models developed by use of the four nonlinear regression methods showed comparable accuracy of viscosity calculation for the initial base of 41 vacuum gas oils. The statistical parameters relative error, standard error, relative standard error, sum of square errors, % average absolute deviation, and coefficient of determination were not in a position to unequivocally select the best model. Both the AIC and BIC and the standard deviations of the derivatives unambiguously indicated that the model developed by nonlinear regression with least squares of relative errors was the best one. The sensitivity analysis with respect to the given data also revealed that the LSRE model is the most stable one, with the lowest values of the standard deviations of the derivatives.
The LSRE model demonstrated the highest accuracy of viscosity prediction for the 43 gas oils not included in the initial database. It was also superior in oil viscosity prediction relative to other published models based on the modified Walther's equation. The LSRE model can be used not only to predict gas oil viscosity but also to examine the correctness of oil viscosity measurements.

Author Contributions

Conceptualization, E.S.; Data curation, S.N. and R.D.; Formal analysis, D.Y.; Investigation, D.D.S. (Denis D. Stratiev) and L.T.-Y.; Methodology, S.S., N.A.A.; Software, V.A., D.N., S.R. and D.D.S. (Danail D. Stratiev); Supervision, K.A.; Writing—original draft, D.S., S.N. and I.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Asen Zlatarov University–Burgas, Project: Information and Communication Technologies for a Digital Single Market in Science, Education and Security DCM # 577/17.08.2018 (2018–2021).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors are grateful for the support provided by the Bulgarian Ministry of Education and Science under the National Research Programme "Information and Communication Technologies for a Digital Single Market in Science, Education and Security" approved by DCM # 577/17 August 2018.

Conflicts of Interest

The authors declare no conflict of interest.

Nomenclature

ABP: Average boiling point
AIC: Akaike information criterion
ARI: Aromatic ring index
%AAD: % average absolute deviation
BIC: Bayesian information criterion
E: Error
FCC: Fluid catalytic cracking
HAGO: Heavy atmospheric gas oil
HCO: Heavy cycle oil
HTVGO: Hydrotreated vacuum gas oil
HVGO: Heavy vacuum gas oil
LAE: Least absolute errors
LARE: Least absolute relative errors
LCO: Light cycle oil
LSAE: Least squares of absolute errors
LSRE: Least squares of relative errors
LVGO: Light vacuum gas oil
MW: Molecular weight
NLLSR: Nonlinear least square regression
RE: Relative error
RI: Refractive index
RSE: Relative standard error
SA: Sensitivity analysis
SE: Standard error
SG: Specific gravity
SLO: Slurry oil
SRHVGO: Straight run heavy vacuum gas oil
SRLVGO: Straight run light vacuum gas oil
SRVGO: Straight run vacuum gas oil
SSE: Sum of square errors
VBGO: Visbreaker gas oil
VGO: Vacuum gas oil
ν: Kinematic viscosity, mm²/s

References

  1. Aboul-Seoud, A.L.; Moharam, H.M. A generalized viscosity correlation for undefined petroleum fractions. Chem. Eng. J. 1999, 72, 253–256. [Google Scholar] [CrossRef]
  2. Abutaqiya, M.I.L.; Alhammadi, A.A.; Sisco, C.J.; Vargas, F.M. Aromatic Ring Index (ARI): A characterization factor for nonpolar hydrocarbons from molecular weight and refractive index. Energy Fuels 2021, 35, 1113–1119. [Google Scholar] [CrossRef]
  3. Hernández, E.A.; Sánchez-Reyna, G.; Ancheyta, J. Comparison of mixing rules based on binary interaction parameters for calculating viscosity of crude oil blends. Fuel 2019, 249, 198–205. [Google Scholar] [CrossRef]
  4. Hosseinifar, P.; Jamshidi, S. A new correlative model for viscosity estimation of pure components, bitumens, size-asymmetric mixtures and reservoir fluids. J. Petrol. Sci. Eng. 2016, 147, 624–635. [Google Scholar] [CrossRef]
  5. Kamel, A.; Alomair, O.; Elsharkawy, A. Measurements and predictions of Middle Eastern heavy crude oil viscosity using compositional data. J. Petrol. Sci. Eng. 2019, 173, 990–1004. [Google Scholar] [CrossRef]
  6. Kariznovi, M.; Nourozieh, H.; Abedi, J. Measurement and modeling of density and viscosity for mixtures of Athabasca bitumen and heavy n-alkane. Fuel 2013, 112, 83–95. [Google Scholar] [CrossRef]
  7. Kotzakoulakis, K.; George, S.C. A simple and flexible correlation for predicting the viscosity of crude oils. J. Pet. Sci. Eng. 2017, 158, 416–423. [Google Scholar] [CrossRef]
  8. Kumar, R.; Maheshwari, S.; Voolapalli, R.K.; Upadhyayul, S. Investigation of physical parameters of crude oils and their impact on kinematic viscosity of vacuum residue and heavy product blends for crude oil selection. J. Taiwan Inst. Chem. Eng. 2021, 120, 33–42. [Google Scholar] [CrossRef]
  9. Malta, J.A.M.S.C.; Calabrese, C.; Nguyen, T.B.; Trusler, J.P.M.; Vesovic, V. Measurements and modelling of the viscosity of six synthetic crude oil mixtures. Fluid Phase Equilibria 2020, 505, 112343. [Google Scholar] [CrossRef]
  10. Mehrotra, A.K. A Simple Equation for Predicting the Viscosity of Crude-Oil Fractions. Chem. Eng. Res. Des. 1995, 73, 87–90. [Google Scholar]
  11. Pabón, R.E.C.; Filho, C.R.S. Crude oil spectral signatures and empirical models to derive API gravity. Fuel 2019, 237, 1119–1131. [Google Scholar] [CrossRef]
  12. Raut, B.; Patil, S.L.; Dandekar, A.Y.; Fisk, R.; Maclean, B.; Hice, V. Comparative study of compositional viscosity prediction models for medium-heavy oils. Int. J. Oil Gas Coal Technol. 2008, 1, 229. [Google Scholar] [CrossRef]
  13. Sánchez-Minero, F.; Sánchez-Reyna, G.; Ancheyta, J.; Marroquin, G. Comparison of correlations based on API gravity for predicting viscosity of crude oils. Fuel 2014, 138, 193–199. [Google Scholar] [CrossRef]
  14. Stratiev, D.S.; Nenov, S.; Shishkova, I.K.; Dinkov, R.K.; Zlatanov, K.; Yordanov, D.; Sotirov, S.; Sotirova, E.; Atanassova, V.; Atanassov, K.; et al. Comparison of Empirical Models to Predict Viscosity of Secondary Vacuum Gas Oils. Resources 2021, 10, 82. [Google Scholar] [CrossRef]
  15. Samano, V.; Tirado, A.; Félix, G.; Ancheyta, J. Revisiting the importance of appropriate parameter estimation based on sensitivity analysis for developing kinetic model. Fuel 2020, 267, 117113. [Google Scholar] [CrossRef]
  16. Ghorbani, B.; Hamedi, M.; Shirmohammadi, R.; Mehrpooy, M.; Hamedi, M.H. A novel multi-hybrid model for estimating optimal viscosity correlations of Iranian crude oil. J. Petrol. Sci. Eng. 2016, 142, 68–76. [Google Scholar] [CrossRef]
  17. Khamehchi, E.; Mahdiani, M.R.; Amooie, M.A.A.; Hemmati-Sarapardeh, A. Modeling viscosity of light and intermediate dead oil systems using advanced computational frameworks and artificial neural networks. J. Petrol. Sci. Eng. 2020, 193, 107388. [Google Scholar] [CrossRef]
  18. Saltelli, A.; Tarantola, S.; Campolongo, F.; Ratto, M. Sensitivity Analysis in Practice; John Wiley & Sons Ltd.: Chichester, UK, 2004. [Google Scholar]
  19. Parhamifar, E.; Tyllgren, P. Assessment of asphalt binder viscosities with a new approach. In Proceedings of the E&E Congress 2016|6th Eurasphalt & Eurobitume Congress, Prague, Czech Republic, 1–3 June 2016. [Google Scholar]
  20. Castillo, E.; Hadi, A.S.; Conejo, A.; Fernández-Canteli, A. A general method for local sensitivity analysis with application to regression models and other optimization problems. Technometrics 2004, 46, 430–444. [Google Scholar] [CrossRef]
  21. Alcazar, L.A.; Ancheyta, J. Sensitivity analysis based methodology to estimate the best set of parameters for heterogeneous kinetic models. Chem. Eng. J. 2007, 128, 85–93. [Google Scholar] [CrossRef]
  22. Diarov, I.N.; Batueva, I.U.; Sadikov, A.N.; Colodova, N.L. Chemistry of Crude Oil; Chimia Publishers: St. Petersburg, Russia, 1990; Volume 51. (In Russian) [Google Scholar]
  23. Fisher, I.P. Effect of feedstock variability on catalytic cracking yields. Appl. Catal. 1990, 65, 189–210. [Google Scholar] [CrossRef]
  24. Walther, C. Ueber die Auswertung von Viskositätsangaben. Erdoel Teer 1931, 7, 382–384. [Google Scholar]
  25. Solodov, M.V.; Svaiter, B.F. A globally convergent inexact Newton method for systems of monotone equations. In Reformulation: Nonsmooth, Piecewise Smooth, Semismooth and Smoothing Methods; Springer: Boston, MA, USA, 1998; pp. 355–369. [Google Scholar]
  26. Zhou, W.J.; Li, D.H. A globally convergent BFGS method for nonlinear monotone equations without any merit functions. Math. Comput. 2008, 77, 2231–2240. [Google Scholar] [CrossRef]
  27. Demidenko, E. Is This the Least Squares Estimate? Biometrika 2000, 87, 437–452. [Google Scholar] [CrossRef]
  28. Takayama, A. Mathematical Economics; Cambridge University Press: New York, NY, USA, 1985; ISBN 0-521-31498-4. [Google Scholar]
  29. Burnham, K.P.; Anderson, D.R. Model Selection and Multimodel Inference, 2nd ed.; Springer: New York, NY, USA, 2002. [Google Scholar]
Figure 1. (a) Halton square with bases 2, 3. (b) Halton square with bases 4, 5.
Figure 2. Graphs of the functions $F_1(a, \tilde a_2, \tilde a_3, \tilde a_4, \tilde a_5)$ (blue) and $F_1(a, a_2^0, a_3^0, a_4^0, a_5^0)$ (red) in the interval [0.00009727, 0.0000973]. Abscissa: variable a; ordinate: F1.
Figure 3. Normal distribution plot for the SG data of the VGOs from Table 1.
Figure 4. Normal distribution plot for the ABP data of the VGOs from Table 1.
Figure 5. Box plots of the sensitivity coefficients with respect to z, x, and y for all four methods. Each box plot is based on the five-number statistical summary: minimum, first quartile, median, third quartile, and maximum (the central rectangle spans from the first quartile to the third quartile; the segment inside the rectangle is the median; the dot is the mean). (a) Method 1; (b) Method 2; (c) Method 3; (d) Method 4.
Figure 6. Histogram and probability density function of the normal distribution, generated for the set of all derivatives (computed at the optimal values of the parameters) with respect to $y_i$. (a) Method 1; (b) Method 4.
Table 1. Properties of primary and secondary VGOs used to develop the empirical model for the prediction of viscosity applying the four nonlinear regression methods.
NrSampleSGT10%T50%T90%T95%ABP, °CKin. vis. at
80 °C, mm2/s
RI at 20 °CKwMW, g/molARI
1HAGO-10.95123433974554763987.31.538511.203422.2
2LVGO-10.971534341449351741712.11.550911.073642.5
3HVGO-10.985842649154856248849.91.552411.274613
4HAGO-20.95933539545848039613.61.544211.093392.3
5LVGO-20.985633041048850840915.21.561210.863542.7
6HVGO-21.008443048954055448662.11.568511.004583.4
7HAGO-30.951432337743946138012.91.540911.093222.1
8LVGO-30.976832439548250840016.71.556710.913442.5
9HVGO-30.99740547053455147034.81.562611.054343.1
10FCC SLO-10.98712322824124553093.61.576310.292542.4
11FCC SLO-21.05492923724755183809.91.61410.013193.3
12FCC SLO-31.057332939247149339716.21.613510.073373.5
13FCC SLO-41.067133740147649840521.31.619410.023463.6
14FCC SLO-51.062432439147149439517.41.617210.013353.5
15FCC SLO-61.095333140049152540733.81.63929.773463.9
16FCC SLO-71.078832639749353140524.21.6289.913453.7
17FCC SLO-81.06331738948452039718.51.617810.013373.5
18FCC SLO-91.083532740148050140328.51.63099.853423.8
19FCC SLO-101.177371435562634456312.81.69279.303955.1
20FCC SLO-111.101133239448253040321.21.6449.703403.9
21VGO blend0.916537644652554444914.21.508811.914041.7
22HAGO-40.90535742548950542481.502911.923711.4
23LVGO-40.9123224175285504228.61.508811.823691.6
24HVGO-40.92241148655256848327.21.508212.024531.8
25HAGO-50.971033839545948039713.01.553210.963412.5
26LVGO-50.986032039147049539413.01.564210.783372.6
27HVGO-51.015041947753154547657.51.575110.884423.4
28FCC SLO-121.097033339548754540522.21.64179.743433.9
29VBGO-10.939937644549550543914.71.525911.563912.1
30VBGO-20.944937343348649743113.51.530711.453812.1
31FCC SLO-131.052927836645948336814.51.61399.963063.2
32FCC SLO-141.076532138646949339216.21.62839.863303.6
33HTVGO-10.893936443350652143410.411.494912.133831.3
34HTVGO-20.89013604295045204319.571.492712.163781.2
35BG LIGHT 0.86503063764645143823.71.478612.213190.8
36PEMBINA0.89403404285226294307.81.493612.103781.2
37EKOFISK0.90303424445355774407.81.501312.043911.4
38BRENT0.89403224065025554108.41.499011.983531.3
39BOW RIVER 0.93203424215045704229.51.517111.563701.8
40COKER1.00933342951456042520.71.576110.703743.1
41BU ATTIFEL0.83803854455125504478.31.454113.013930.0
Note: Properties of VGOs under numbers 35–40 were taken from Fisher [23].
Table 2. Numerically calculated values of the parameter $a^0 = (a_1^0, a_2^0, a_3^0, a_4^0, a_5^0)^T$.
Coefficient | Least Squares (Before SA / After SA) | Least abs. Errors (Before SA / After SA) | Squared rel. Errors (Before SA / After SA) | Abs. rel. Errors (Before SA / After SA)
$a_1^0$ | 0.0000972 / 0.0000973 | 0.0888705 / 0.0888705 | 9 × 10−7 / 9 × 10−7 | 0.0841792 / 0.0841793
$a_2^0$ | 1.5542645 / 1.5542641 | 0.6573309 / 0.657331 | 2.1851235 / 2.1851235 | 0.6533058 / 0.6533059
$a_3^0$ | 1.0946136 / 1.0946132 | 0.4784847 / 0.4784848 | 1.5193787 / 1.5193787 | 0.5075231 / 0.5075231
$a_4^0$ | −1.5265719 / −1.5265719 | −5.5717615 / −5.571762 | −0.4953817 / −0.4953818 | −5.0323918 / −5.0323919
$a_5^0$ | −1.4404829 / −1.4404824 | −2.4403382 / −2.440338 | 1.9089183 / 1.9089184 | 0.0382231 / 0.0382233
Table 3. Calculated results for four estimation methods, calc.—calculated value; rel.error.—relative error (in %).
Least Squares
(Method 1)
Least abs. Errors
(Method 2)
Squared rel. Errors (Method 3)Abs. rel. Errors
(Method 4)
Nr calc.Errorrel. Errorcalc.Errorrel. Errorcalc.Errorrel. Errorcalc.Errorrel. Error
1HAGO-19.79−2.5234.79.74−2.4733.98.48−1.2116.77.99−0.7210
2LVGO-113.25−1.179.713.09−1.018.412.34−0.262.111.60.484
3HVGO-150.96−1.052.149.760.150.351.55−1.643.347.022.895.8
4HAGO-29.983.6226.69.933.67278.694.9136.18.225.3839.5
5LVGO-213.331.8712.313.211.9913.112.412.7918.311.83.422.4
6HVGO-264.42−2.323.763.15−1.051.764.75−2.654.360.141.963.2
7HAGO-38.254.6536.18.274.6335.96.716.19486.436.4750.1
8LVGO-311.225.4832.811.145.5633.310.086.6239.79.587.1242.6
9HVGO-338.5−3.710.637.9−3.18.938.86−4.0611.736.37−1.574.5
10FCC SLO-15.72−2.1660.85.95−2.3967.23.72−0.164.53.93−0.3710.3
11FCC SLO-213.72−3.8238.513.77−3.8739.112.76−2.8628.912.8−2.929.3
12FCC SLO-317.82−1.621017.89−1.6910.417.19−0.996.117.27−1.076.6
13FCC SLO-421.53−0.231.121.68−0.381.821.10.20.921.43−0.130.6
14FCC SLO-517.88−0.482.817.97−0.573.317.250.150.917.41−0.010
15FCC SLO-628.275.4916.328.745.0214.928.015.751729.344.4213.1
16FCC SLO-723.890.321.324.150.060.223.540.672.824.2100
17FCC SLO-818.380.090.518.48−0.01017.780.693.817.960.512.8
18FCC SLO-923.74.8116.923.994.5215.923.335.1818.224.114.415.4
19FCC SLO-10312.70.10312.800288.0724.737.9316.69−3.891.2
20FCC SLO-1127.18−5.9427.927.66−6.4230.226.87−5.6326.528.3−7.0633.2
21VGO blend13.820.372.613.490.7513.041.158.111.682.5117.7
22HAGO-49.81−2.4132.69.68−2.2830.88.54−1.1415.47.77−0.375.1
23LVGO-410.25−2.6534.810.1−2.532.99.03−1.4318.88.25−0.658.5
24HVGO-423.448.1625.822.678.9328.323.488.1225.720.5911.0134.8
25HAGO-510.692.3117.710.632.3718.29.493.51279.013.9930.7
26LVGO-511.021.9815.310.972.0315.69.843.1624.39.433.5727.5
27HVGO-554.13.45.953.384.127.254.463.045.351.496.0110.4
28FCC SLO-1226.07−3.8717.426.49−4.2919.325.74−3.5415.926.98−4.7821.5
29VBGO-114.480.221.514.190.513.513.750.956.512.532.1714.8
30VBGO-213.440.060.513.20.32.212.580.926.811.561.9414.4
31FCC SLO-1311.5320.711.562.9420.310.324.1828.810.354.1528.6
32FCC SLO-1418.34−2.1413.218.5−2.314.217.71−1.519.318.12−1.9211.9
33HTVGO-110.370.030.310.190.2129.191.2111.78.272.1320.5
34HTVGO-29.86−0.262.79.7−0.11.18.6110.47.761.8419.2
35BG LIGHT 6.19−2.4967.46.32−2.6270.84.32−0.6216.74.2−0.513.5
36PEMBINA9.88−2.0826.79.73−1.9324.78.62−0.8210.57.800
37EKOFISK11.8−451.311.55−3.7548.110.8−338.59.67−1.8724
38BRENT8.280.121.48.250.151.86.781.6219.36.282.1225.3
39BOW RIVER 11.23−1.7318.211.07−1.5716.510.13−0.636.69.310.192
40COKER19.681.024.919.531.175.719.261.44718.522.1810.5
41BU ATTIFEL8.75−0.455.48.61−0.313.77.350.9511.46.51.821.7
AARE (%AAD) 17.3 17.5 15.2 16.0
Table 4. Standardized sensitivities for four estimation methods.
Nr.Least SquaresLeast abs. ErrorsSquared rel. ErrorsAbs. rel. Errors
S z i S x i S y i S z i S x i S y i S z i S x i S y i S z i S x i S y i
1−0.830.170.17−0.960.20.21−1.280.930.95−1.340.80.87
2−0.390.130.13−0.960.240.250.060.120.120.94−0.86−0.86
3−0.350.850.931.01−0.43−0.570.130.290.320.34−1.27−1.41
41.2−0.25−0.251.010.10.11.07−1.12−1.120.6−0.47−0.45
50.62−0.22−0.211.010.070.060.69−0.82−0.820.66−0.71−0.68
6−0.772.592.75−0.960.961.130.120.410.440.31−1.41−1.52
71.54−0.23−0.221.010.120.121.2−1.11−1.10.54−0.35−0.32
81.81−0.47−0.461.010.090.080.93−1.23−1.220.5−0.49−0.45
9−1.222.072.17−0.960.560.64−0.041.021.1−0.141.31.46
10−0.710.050.04−0.960.170.17−0.550.180.16−2.920.570.52
11−1.260.490.42−0.960.250.25−1.862.171.94−1.141.231.16
12−0.540.310.28−0.960.30.3−0.050.410.38−0.51.121.08
13−0.080.060.05−0.960.350.350.19−0.06−0.06−0.311.141.1
14−0.160.090.08−0.960.30.30.19−0.06−0.05−0.411.061.02
151.81−2.14−1.871.01−0.15−0.150.39−1.08−0.960.41−1.13−1
160.1−0.1−0.081.01−0.08−0.080.22−0.19−0.17−0.251.191.13
170.03−0.02−0.02−0.960.310.310.27−0.24−0.210.68−1.08−0.96
181.59−1.45−1.271.01−0.08−0.080.44−1.06−0.960.45−1.04−0.92
190.03−0.85−0.741.01−6.15−6.090.17−1.06−0.940.122.222.06
20−1.962.191.89−0.960.440.43−0.682.532.24−0.471.681.55
210.12−0.04−0.051.010.070.050.44−0.4−0.450.73−0.71−0.78
22−0.80.150.17−0.960.20.21−1.120.820.91−1.250.720.86
23−0.870.190.2−0.960.20.21−1.411.051.16−1.260.770.91
242.7−2.16−2.481.01−0.04−0.090.49−1.22−1.440.36−0.68−0.78
250.76−0.18−0.181.010.10.090.97−0.99−0.980.69−0.57−0.54
260.65−0.17−0.161.010.090.090.92−0.94−0.920.71−0.61−0.57
271.12−3.04−3.151.01−0.5−0.620.21−0.44−0.470.31−1.26−1.33
28−1.281.341.16−0.960.420.42−0.281.371.22−0.391.511.4
290.07−0.03−0.031.010.060.040.38−0.33−0.360.73−0.76−0.81
300.02−0.01−0.011.010.070.050.41−0.34−0.370.79−0.75−0.78
310.99−0.29−0.241.010.080.080.92−1.12−0.980.64−0.65−0.55
32−0.710.440.38−0.960.310.31−0.180.670.59−0.531.221.14
330.01001.010.10.090.69−0.48−0.550.91−0.59−0.65
34−0.090.020.02−0.960.20.210.68−0.43−0.490.99−0.58−0.64
35−0.820.060.07−0.960.170.17−2.660.720.78−2.890.550.65
36−0.690.130.15−0.960.20.21−0.640.530.61−1.120.680.83
37−1.320.360.4−0.960.220.23−3.52.643.02−1.420.941.13
380.04−0.01−0.011.010.120.111.16−0.67−0.731.04−0.49−0.51
39−0.570.140.15−0.960.210.22−0.240.350.381.18−0.78−0.82
400.34−0.22−0.221.01−0.01−0.030.33−0.42−0.420.59−0.96−0.94
41−0.150.020.03−0.960.190.20.81−0.42−0.521.09−0.5−0.59
Note: The bold figures mean high values of standardized sensitivities.
Table 5. Means and standard deviations of derivatives.
Least SquaresLeast abs. ErrorsSquared rel. ErrorsAbsl. rel. Errors
μ σ μ σ μ σ μ σ
with   respect   to   z i 06.06−0.02−0.0200.04−0.010.1
with   respect   to   x i 02.81−0.28−0.280000.02
with   respect   to   y i 0.161409.28−129.95848.7802.54−0.168.36
Table 6. Independent data for gas oils (from light gas oil to VGO) to verify the capability of the four methods to predict viscosity at 80 °C.
Calculated Viscosity, mm2/sAbs. Relative Error, %
NrVGO and Light Gas OilsKin. vis. at 80 °C, mm2/sABPSGMethod 1Method 2Method 3Method 4Method 1Method 2Method 3Method 4
1HYDRA9.94390.886110.410.29.28.35.03.06.916.0
2EL BUNDUQ11.64340.924012.312.011.310.35.63.52.811.1
3SUNNILAND13.34440.942015.715.315.013.717.514.912.73.1
4Urals14.44450.923514.013.613.211.93.15.48.517.1
5INNES10.54350.87939.79.68.57.77.38.819.427.0
6LOKELE15.44410.958116.716.416.114.98.46.34.73.1
7Cold Lake8.04070.92919.69.68.37.820.619.54.02.3
8CANMET5.43760.94467.97.96.36.145.546.315.812.8
9VISBROKEN5.03820.96969.29.27.87.684.384.156.151.3
10CHAMPION EXPORT14.04260.972115.114.914.413.58.16.52.93.2
11UDANG9.34550.84609.79.58.57.54.82.48.619.2
12KAKAP4.84240.85708.07.96.56.066.665.434.524.0
13DAQUING 8.24460.865110.09.78.77.821.418.96.44.9
14SERGIPANO PLATFORMA9.24370.87159.59.38.27.43.21.511.019.5
15LAKE ARTHUR 8.64200.87668.48.47.06.41.92.719.025.2
16MARGHAM LIGHT6.34150.86917.97.96.35.924.824.30.16.8
17SYNTHETIC OSA STREAM 9.34110.943410.710.69.69.015.414.12.73.6
18COLD LAKE BLEND28.14630.965525.024.425.022.911.113.111.218.4
19DULANG4.84090.85047.07.05.35.044.645.39.33.4
20HARRIET5.64220.89029.19.07.77.163.361.438.627.7
21TIA JUANA P 26.14610.967324.423.924.422.56.48.46.614.0
22TIA JUANA M19.74500.937316.215.815.714.217.619.620.527.7
23SOUEDIE20.34540.952919.419.019.117.44.36.46.013.9
24ARAB HEAVY 11.74500.928515.315.014.713.330.827.525.213.3
25ARAB MEDIUM 8.24450.918313.513.212.711.565.461.555.340.4
26ARAB LIGHT10.24490.919614.314.013.612.240.236.632.919.9
27MAGNUS13.14510.899512.812.511.910.62.24.79.018.6
28GULLFAKS16.44530.920415.114.714.513.07.710.111.720.6
29FLOTTA BLEND16.44580.916815.615.215.013.44.67.28.317.9
30EKOFISK10.64440.896311.711.410.69.610.07.50.49.8
31HT Kerosene0.82050.80533.43.90.81.6323.5389.21.9101.8
32HTDiesel-21.22510.83103.74.21.31.9211.2249.15.561.0
33HTDiesel-32.13100.85764.54.82.22.7114.0129.76.226.2
34FCC LCO1.12500.94614.24.61.82.4281.6317.068.1119.8
35FCC HCO-12.23090.99605.96.13.94.2166.7176.776.989.3
36FCC HCO-23.43250.99506.46.64.64.789.194.234.139.7
37FCC HCO-34.43401.00647.47.65.75.869.071.730.432.7
38SRLVGO2.43140.88004.75.02.52.997.3109.95.420.7
39SRVGO-11.12460.83453.74.21.21.9236.9278.711.873.4
40SRVGO-21.372690.84563.94.41.52.1187.3217.811.354.9
41VBGO-31.72950.86184.34.72.02.5153.4174.517.445.7
42SRHVGO-17.754420.923013.313.012.511.372.168.361.146.4
43SRHVGO-112.394400.922713.112.812.211.15.43.11.710.6
%AAD 61.867.818.228.3
Table 7. Statistical analysis of the four methods for the data from Table 1.
 | Method 1 | Method 2 | Method 3 | Method 4
Min E | −67.3 | −70.8 | −38.5 | −33.3
Max E | 36.0 | 35.9 | 48.0 | 50.2
RE | −232.4 | −217.0 | 149.8 | 296.5
SE | 3.1 | 3.1 | 5.1 | 3.7
RSE | 12.0 | 12.2 | 20.0 | 14.5
SSE | 2.4 | 2.5 | 1.5 | 1.7
%AAD | 17.3 | 17.5 | 15.2 | 16.0
R² | 0.996 | 0.9959 | 0.9948 | 0.9953
Slope | 0.996 | 0.9954 | 0.9244 | 1.0118
Intercept | 0.1023 | 0.0095 | 0.5351 | −1.6381
AIC | 211 | 175 | −14 | 190
BIC | 220 | 184 | −5 | 198
Table 8. Statistical analysis of the four studied methods and the models of Aboul-Seoud and Moharam [1] and Kotzakoulakis and George [7] for the data from Table 6.
Method 1Method 2Method 3Method 4Aboul Seoud and Moharam Kotzakoulakis and George
Min E−323.5−389.2−76.8−112.8−94.2−729.9
Max E17.619.620.528.135.257.2
RE−2526.6−2743.7−480.1−517.230.5−291151
SE2.62.71.82.22.77.1
RSE28.32919.923.628.977.3
SSE44.157.635.73.7141.1
%AAD61.867.818.227.121.889
R20.93240.93230.92940.92810.90380.4352
Slope0.7710.73110.86690.76030.72090.8797
Intercept3.663.931.481.751.53.45
AIC1921599153204316
BIC20116818162215326
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
