Article

Solving Contextual Stochastic Optimization Problems through Contextual Distribution Estimation

1 Faculty of Business, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong, China
2 Institute of Data and Information, Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(11), 1612; https://doi.org/10.3390/math12111612
Submission received: 23 March 2024 / Revised: 15 May 2024 / Accepted: 20 May 2024 / Published: 21 May 2024
(This article belongs to the Topic Big Data Intelligence: Methodologies and Applications)

Abstract
Stochastic optimization models typically assume that the probability distributions of uncertain parameters are known. In practice, however, the true distributions are rarely available. In the era of big data, with access to informative features related to the uncertain parameters, this study estimates the conditional distributions of the uncertain parameters directly and solves the resulting contextual stochastic optimization problem using a set of realizations drawn from the estimated distributions; we call this the contextual distribution estimation method. We use an energy scheduling problem as the case study and conduct numerical experiments with real-world data. The results demonstrate that the proposed contextual distribution estimation method offers specific benefits in particular scenarios, resulting in improved decisions. This study contributes to the literature on contextual stochastic optimization problems by introducing the contextual distribution estimation method, which holds practical significance for addressing data-driven uncertain decision problems.

1. Introduction

Uncertainty is prevalent in various decision-making systems. For optimization problems with uncertain parameters in the objective function, traditional models typically assume that these uncertain parameters follow known probability distributions, thereby establishing stochastic optimization models to solve the problems [1]. However, obtaining the true distributions of uncertain parameters is often challenging. Abundant historical data on uncertain parameters and their related features present new perspectives for addressing uncertainty [2] in fields like transportation [3,4] and logistics [5,6,7].
Stochastic problems without feature information can be solved approximately via sample average approximation (SAA) [8]. In the era of big data, however, we have access to informative features related to the uncertain parameters, leading to contextual stochastic problems. When the objective function is linear in the uncertain parameters, we can use predict-then-optimize and smart predict-then-optimize methods to predict point values of the uncertain parameters [9] and plug these point predictions into the downstream optimization problem to derive solutions. However, when the objective function is nonlinear in the uncertain parameters, good predictions do not necessarily mean good decisions for uncertain optimization problems [10]. Various machine learning (ML) methods approximate uncertain parameters as functions of features [2], but, to some extent, such predictive methods lack the capability to account for the influence of prediction errors on downstream decisions. Replacing a random parameter with its mean is widely recognized as potentially resulting in suboptimal solutions for a stochastic optimization model [11]. Therefore, we should estimate the conditional distributions of uncertain parameters [12]. Weighted SAA (w-SAA) is an advanced method that uses the empirical distributions of uncertain parameters to derive solutions to contextual stochastic problems [13]. Wang and Yan [14] propose a global predictive prescription method based on quantile regression to predict the distributions of the unknown parameters in the optimization model; however, they do not estimate the mean and variance of the parameter distribution directly.
From the above literature, we emphasize that the existing studies mainly use empirical distributions to solve contextual stochastic problems, which lack the estimation of the exact underlying distributions of uncertain parameters. Consequently, this study builds on w-SAA but goes a step further; that is, we estimate the conditional distributions of uncertain parameters accurately using ML models and then utilize the estimated distribution to generate a set of estimates, which are then used to obtain approximate solutions to contextual stochastic problems. We use an energy scheduling problem as the case study, build the contextual stochastic optimization model, and conduct numerical experiments with real-world data, as well as four specific ML models, including k-nearest-neighbors (kNN), classification and regression tree (CART), random forest (RF), and kernel regression (KR). The results demonstrate that the proposed contextual distribution estimation method offers specific benefits in particular scenarios, resulting in improved solutions compared to w-SAA and providing new tools for addressing contextual stochastic optimization problems in practice.
The remainder of this study is organized as follows. Section 2 introduces w-SAA and contextual distribution estimation methods. Section 3 presents the mathematical model for our studied case and the results of numerical experiments. Section 4 concludes this study.

2. Methodology

This section provides an overview of the general form of the contextual stochastic optimization problem, introduces w-SAA and contextual distribution estimation methods, and defines the decision loss as the evaluation metric. In addition, we present the specific steps for implementing the two methods by using four ML models.

2.1. Contextual Stochastic Optimization Problem

Consider the following: (i) $Y$ is a random variable with $Y \in \mathcal{Y} \subseteq \mathbb{R}^{d_y}$, where $d_y$ is the dimension of $Y$; it has the underlying true distribution $\mu_Y$ and historical observations $\{y^1, \ldots, y^N\}$; (ii) $z \in Z \subseteq \mathbb{R}^{d_z}$, where $d_z$ is the dimension of $z$, is the decision variable, and $Z$ is the feasible set defined by deterministic constraints; and (iii) $c(z; Y)$ denotes the uncertain cost function. The traditional stochastic optimization problem is defined as follows:
$$v^{\mathrm{stoch}} = \min_{z \in Z} \mathbb{E}[c(z; Y)], \qquad z^{\mathrm{stoch}} \in \arg\min_{z \in Z} \mathbb{E}[c(z; Y)]. \tag{1}$$
If we also have historical observations $\{x^1, \ldots, x^N\}$ of feature variables $X \in \mathcal{X} \subseteq \mathbb{R}^{d_x}$ (where $d_x$ is the dimension of $X$) related to $Y$, with $x^i$ and $y^i$ observed simultaneously for $i \in \{1, \ldots, N\}$, we can establish the dataset
$$S_N = \{(x^1, y^1), \ldots, (x^N, y^N) : x^i \in \mathcal{X},\ y^i \in \mathcal{Y},\ i \in \{1, \ldots, N\}\}.$$
Given a new observation $X = x^0$, the contextual stochastic optimization problem is established as follows:
$$v^*(x^0) = \min_{z \in Z} \mathbb{E}[c(z; Y) \mid X = x^0], \qquad z^*(x^0) \in Z^*(x^0) = \arg\min_{z \in Z} \mathbb{E}[c(z; Y) \mid X = x^0]. \tag{2}$$

2.2. The w-SAA Method

When $c(z; Y)$ is linear in the random variable $Y$ and we have sufficient data, we can utilize an ML model to predict the expected value of $Y$, denoted as $\hat{\mathbb{E}}[Y \mid X = x^0] = \hat{y}(x^0)$. By plugging this point estimate into the objective function, Problem (2) can be approximated as
$$\hat{z}_N^{\mathrm{point}}(x^0) \in \arg\min_{z \in Z} c(z; \hat{y}(x^0)).$$
However, when $c(z; Y)$ is nonlinear in the random variable $Y$, a good prediction may not lead to a good solution. In this case, the w-SAA method is an advanced alternative. This study focuses on stochastic problems whose objective functions are nonlinear in their uncertain parameters.
The w-SAA method assigns a weight to each available data sample and then applies SAA [13]. It approximates Problem (2) as
$$\hat{z}_N^{w\text{-}\mathrm{SAA}}(x^0) \in \arg\min_{z \in Z} \sum_{i=1}^{N} w_{N,i}(x^0)\, c(z; y^i),$$
where $w_{N,i}(x^0)$ is the weight assigned to the data sample $(x^i, y^i)$ based on the dataset $S_N$, the observation $X = x^0$, and a specific ML method (such as kNN). Evidently, the w-SAA method uses the historical data directly as an empirical distribution and does not estimate the mean and variance of the parameter distribution.
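As a concrete illustration, the w-SAA decision rule above can be sketched in a few lines of Python. The cost function, weights, and candidate grid below are toy placeholders (a newsvendor-style cost, nonlinear in $y$), not the paper's energy model:

```python
def w_saa_decide(weights, ys, cost, candidates):
    """Pick the candidate decision z minimizing the weighted sample cost
    sum_i w_i * c(z; y_i), as in the w-SAA approximation."""
    return min(candidates, key=lambda z: sum(w * cost(z, y) for w, y in zip(weights, ys)))

# Toy newsvendor-style cost, nonlinear in y: order z units, demand turns out to be y.
cost = lambda z, y: 2.0 * z - 5.0 * min(z, y)
ys = [1.0, 2.0, 3.0]      # historical samples y^i
w = [0.2, 0.5, 0.3]       # ML-derived weights w_{N,i}(x^0), summing to 1
z_hat = w_saa_decide(w, ys, cost, candidates=[i * 0.1 for i in range(41)])
```

Because the weighted cost here is piecewise linear in $z$ with a kink at each sample, the minimizer lands on a sample value, which a plain point prediction of $Y$ would generally miss.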

2.3. The Contextual Distribution Estimation Method

We now go a step beyond w-SAA by estimating the mean and variance of the conditional distribution of the random variable $Y$ given an observation. Specifically, given $X = x^0$, we seek to estimate the conditional mean $\mathbb{E}[Y \mid X = x^0]$ and the conditional variance $D[Y \mid X = x^0]$. The estimated conditional mean of $Y$ is the point prediction of the ML model, denoted as $\hat{\mathbb{E}}[Y \mid X = x^0] = \hat{y}(x^0)$. The prediction error of the model on the training dataset is defined as $\epsilon^i = \hat{\mathbb{E}}[Y \mid X = x^i] - y^i$, $i \in \{1, \ldots, N\}$ [15]. Consequently, the estimated conditional variance of $Y$ can be calculated as
$$\hat{D}[Y \mid X = x^0] = \frac{1}{N} \sum_{i=1}^{N} \left( \epsilon^i - \frac{\sum_{i=1}^{N} \epsilon^i}{N} \right)^2.$$
Then, we plot the frequency distribution of $\{y^1, \ldots, y^N\}$ to fit the random variable $Y$ with a suitable distribution (e.g., a Gaussian distribution). Consequently, we obtain an estimate of the conditional distribution given $X = x^0$, denoted $\hat{\mu}_{Y \mid X = x^0}$. From this estimated distribution, we generate a total of $U$ estimates of $Y$, represented as $\hat{y}^u(x^0)$, $u \in \{1, 2, \ldots, U\}$. Subsequently, Problem (2) can be approximated as
$$\hat{z}_N^{\mathrm{distr\_esti}}(x^0) \in \arg\min_{z \in Z} \frac{1}{U} \sum_{u=1}^{U} c(z; \hat{y}^u(x^0)).$$
Finally, Algorithm 1 shows the procedures of the contextual distribution estimation method.
Algorithm 1. Pseudo-code of the contextual distribution estimation method.
Input: a known dataset $S_N = \{(x^i, y^i) : x^i \in \mathcal{X},\ y^i \in \mathcal{Y},\ i = 1, \ldots, N\}$ and a new observation $X = x^0$.
Output: the approximate solution $\hat{z}_N^{\mathrm{distr\_esti}}(x^0)$ for Problem (2).
Step 1. Plot the frequency distribution of $\{y^1, \ldots, y^N\}$ and determine the approximate distribution type of the random variable $Y$.
Step 2. Employ an ML model to obtain the estimated conditional mean of $Y$ given $X = x^0$, $\hat{\mathbb{E}}[Y \mid X = x^0] = \hat{y}(x^0)$; calculate the prediction errors of the ML model on the training dataset, $\epsilon^i = \hat{\mathbb{E}}[Y \mid X = x^i] - y^i$, $i \in \{1, \ldots, N\}$; and obtain the estimated conditional variance of $Y$ given $X = x^0$, $\hat{D}[Y \mid X = x^0] = \frac{1}{N} \sum_{i=1}^{N} \left( \epsilon^i - \frac{\sum_{i=1}^{N} \epsilon^i}{N} \right)^2$.
Step 3. Fit the random variable $Y$ with the distribution type determined in Step 1 and the estimated conditional mean and variance from Step 2, obtaining the estimated conditional distribution $\hat{\mu}_{Y \mid X = x^0}$.
Step 4. Generate a total of $U$ samples from $\hat{\mu}_{Y \mid X = x^0}$, represented as $\hat{y}^u(x^0)$, $u \in \{1, 2, \ldots, U\}$.
Step 5. Solve $\hat{z}_N^{\mathrm{distr\_esti}}(x^0) \in \arg\min_{z \in Z} \frac{1}{U} \sum_{u=1}^{U} c(z; \hat{y}^u(x^0))$ and return the approximate solution.
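The five steps can be sketched as follows. The Gaussian choice in Step 1, the stand-in `predict` callable, and the brute-force grid search in Step 5 are illustrative assumptions: any fitted ML model and any solver could fill those roles.

```python
import random
import statistics

def distribution_estimation_decide(predict, x0, residuals, U, cost, candidates, seed=0):
    """Sketch of Algorithm 1, assuming a Gaussian fit in Step 1.
    `predict(x0)` stands in for the ML estimate E^[Y|X=x0] (Step 2), and
    `residuals` are the training errors eps_i = E^[Y|X=x_i] - y_i."""
    rng = random.Random(seed)
    mean = predict(x0)                               # Step 2: estimated conditional mean
    std = statistics.pvariance(residuals) ** 0.5     # Step 2: estimated conditional variance
    samples = [rng.gauss(mean, std) for _ in range(U)]   # Steps 3-4: fit and sample
    # Step 5: minimize the sample-average cost over candidate decisions
    return min(candidates, key=lambda z: sum(cost(z, y) for y in samples) / U)
```

With a quadratic cost $c(z; y) = (z - y)^2$, for example, the returned decision approaches the candidate nearest the estimated conditional mean as $U$ grows.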

2.4. The Evaluation Metric

To evaluate the effectiveness of the above methods in solving Problem (2), we introduce the decision loss. We define the test dataset as $T_M = \{(x^{N+1}, y^{N+1}), \ldots, (x^{N+M}, y^{N+M}) : x^j \in \mathcal{X},\ y^j \in \mathcal{Y},\ j \in \{N+1, \ldots, N+M\}\}$, where $M$ denotes the number of test data points. Based on the dataset $S_N$, we obtain the decision $\hat{z}_N(x^j)$ for the observation $X = x^j$ using the various methods. The optimal objective value under complete information is $v^*(x^j) = \min_{z \in Z} c(z; y^j)$, $j \in \{N+1, \ldots, N+M\}$. Therefore, the decision loss is defined as
$$L_N = \frac{1}{M} \sum_{j=N+1}^{N+M} \left[ c(\hat{z}_N(x^j); y^j) - v^*(x^j) \right],$$
where $c(\hat{z}_N(x^j); y^j)$ denotes the actual cost resulting from decision $\hat{z}_N(x^j)$ for the observation $X = x^j$, and the difference $c(\hat{z}_N(x^j); y^j) - v^*(x^j)$ is the decision loss for that observation. The decision loss $L_N$ of a method based on $S_N$ is the average of these losses over the test dataset $T_M$.
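The metric is straightforward to compute once the realized $y^j$ are known; in the sketch below, the quadratic cost and the small feasible grid are toy assumptions used only to exercise the formula:

```python
def decision_loss(z_hats, ys, cost, feasible):
    """Average decision loss over a test set: the mean gap between the realized
    cost c(z^hat(x_j); y_j) and the full-information optimum v*(x_j)."""
    total = 0.0
    for z_hat, y in zip(z_hats, ys):
        v_star = min(cost(z, y) for z in feasible)  # v*(x_j), computed once y_j is revealed
        total += cost(z_hat, y) - v_star
    return total / len(ys)
```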

2.5. ML Methods

Subsequently, we present the specific steps for implementing w-SAA and contextual distribution estimation methods by utilizing four ML models: kNN, CART, RF, and KR.

2.5.1. kNN

In the kNN regression model, this study adopts the Euclidean distance as the distance metric. The Euclidean distance between two data samples $x^i = (x_1^i, x_2^i, \ldots, x_{d_x}^i)$ and $x^j = (x_1^j, x_2^j, \ldots, x_{d_x}^j)$ is
$$d^{\mathrm{Euclidean}}_{x^i x^j} = \|x^i - x^j\| = \sqrt{\sum_{r=1}^{d_x} (x_r^i - x_r^j)^2}.$$
In a trained kNN model based on dataset $S_N$, for $X = x^j$, $j \in \{N+1, \ldots, N+M\}$, the predicted value of $Y$ is
$$\hat{y}^{\mathrm{kNN}}(x^j) = \frac{1}{k} \sum_{i \in \mathcal{N}_k(x^j)} y^i,$$
where $\mathcal{N}_k(x^j) = \{ i = 1, \ldots, N : \sum_{l=1}^{N} \mathbb{1}[\|x^j - x^i\| \ge \|x^j - x^l\|] \le k \}$ is the index set of the $k$ nearest neighbors of $x^j$, and $k$ is the hyperparameter of the kNN regression model.
In the w-SAA method, the weight assigned to each training data sample $(x^i, y^i)$ for $x^j$ is
$$w^{\mathrm{kNN}}_{N,i}(x^j) = \frac{1}{k}\, \mathbb{1}[i \in \mathcal{N}_k(x^j)], \qquad i \in \{1, \ldots, N\}.$$
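A minimal sketch of the kNN prediction and its w-SAA weights follows; distance ties are broken by index, an implementation choice the formulas above leave open:

```python
import math

def knn_predict_and_weights(X_train, y_train, x0, k):
    """kNN point prediction y^hat(x0) and w-SAA weights
    w_i = 1{i in N_k(x0)} / k, using the Euclidean distance."""
    order = sorted(range(len(X_train)), key=lambda i: math.dist(x0, X_train[i]))
    neighbors = set(order[:k])
    weights = [1.0 / k if i in neighbors else 0.0 for i in range(len(X_train))]
    y_hat = sum(y_train[i] for i in neighbors) / k
    return y_hat, weights
```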

2.5.2. CART

During the training of a CART, we input the training dataset $S_N$ and set the hyperparameters of the tree model: the maximum depth $depth_{\max}$, the minimum number of samples required for splitting $split_{\min}$, and the minimum number of samples per leaf node $leaf_{\min}$. By tuning these hyperparameters, we obtain the final CART.
The construction process of a CART is as follows:
  1. Input the training dataset $S_N$ and the hyperparameters $depth_{\max}$, $split_{\min}$, and $leaf_{\min}$.
  2. For the training dataset $D = \{(x^1, y^1), \ldots, (x^m, y^m)\}$ of the current node: if the number of samples satisfies $m < split_{\min}$, or the tree depth satisfies $tree_{\mathrm{depth}} \ge depth_{\max}$, return a decision subtree and stop the recursion at the current node; otherwise, proceed to step 3.
  3. Traverse all feature dimensions and feature values of dataset $D$, and select the feature dimension $x_r$, $r \in \{1, 2, \ldots, d_x\}$, and value $s$ that split the dataset into the two parts
$$D_1(r, s) = \{(x, y) \mid x_r \le s\}, \qquad D_2(r, s) = \{(x, y) \mid x_r > s\},$$
minimizing the sum of variances of the left and right subtrees:
$$\min_{r, s}\ \sum_{y^i \in D_1} (y^i - \bar{y}_{D_1})^2 + \sum_{y^j \in D_2} (y^j - \bar{y}_{D_2})^2,$$
where $\bar{y}_{D_1} = \frac{\sum_{y^i \in D_1} y^i}{|D_1|}$ and $\bar{y}_{D_2} = \frac{\sum_{y^j \in D_2} y^j}{|D_2|}$.
  4. Recursively apply steps 2–3 to the two subsets generated in step 3 until the termination condition is met.
In a trained CART, the predicted value of $Y$ given $x^j$, $j \in \{N+1, \ldots, N+M\}$, is the mean of all $y^i$, $i \in \{1, \ldots, N\}$, contained in the leaf node assigned to $x^j$:
$$\hat{y}^{\mathrm{CART}}(x^j) = \frac{\sum_{i \in \{1, \ldots, N\} : R(x^i) = R(x^j)} y^i}{|\{l \in \{1, \ldots, N\} : R(x^l) = R(x^j)\}|},$$
where $R(x) \in \{1, \ldots, N_t\}$ denotes the leaf node corresponding to input $x$ and $N_t$ is the number of leaf nodes in the CART.
In the w-SAA method, the weight assigned to each training data sample $(x^i, y^i)$ for $x^j$ is
$$w^{\mathrm{CART}}_{N,i}(x^j) = \frac{\mathbb{1}[R(x^i) = R(x^j)]}{|\{l \in \{1, \ldots, N\} : R(x^l) = R(x^j)\}|}, \qquad i \in \{1, \ldots, N\}.$$
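The variance-minimizing split in step 3 can be sketched as a brute-force scan over feature dimensions and observed thresholds; the tiny dataset in the usage below is purely illustrative:

```python
def best_split(D):
    """One step of the CART recursion: scan every feature dimension r and
    candidate threshold s, and return the (r, s) minimizing the summed
    squared deviation of the two child nodes."""
    def sse(ys):
        if not ys:
            return 0.0
        mean = sum(ys) / len(ys)
        return sum((y - mean) ** 2 for y in ys)

    best_r, best_s, best_score = None, None, float("inf")
    for r in range(len(D[0][0])):
        for s in {x[r] for x, _ in D}:
            left = [y for x, y in D if x[r] <= s]
            right = [y for x, y in D if x[r] > s]
            score = sse(left) + sse(right)
            if score < best_score:
                best_r, best_s, best_score = r, s, score
    return best_r, best_s
```

For example, on `[((0.0,), 1.0), ((1.0,), 1.0), ((10.0,), 9.0), ((11.0,), 9.0)]` the chosen split separates the two response clusters exactly.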

2.5.3. RF

RF is an ensemble algorithm that integrates multiple CARTs, with each CART as its basic unit. The hyperparameters of an RF include the forest size $T$ (i.e., the number of trees), the maximum depth of each tree $depth_{\max}$, the minimum number of samples required for splitting $split_{\min}$, and the minimum number of samples per leaf node $leaf_{\min}$. We first input the training dataset $S_N$, then set the hyperparameters, and finally obtain the final model through hyperparameter tuning.
The process of building an RF is as follows:
  1. Input the training dataset $S_N$ with feature dimension $d_x$ and set the forest size $T$.
  2. For each tree $t$, randomly draw $N$ training samples from $S_N$ with replacement to form the training dataset for that tree.
  3. Build each decision tree $t$ using the CART algorithm.
  4. For each new observation, obtain the final prediction by averaging the predictions of all decision trees.
The prediction function of decision tree $t$ for input $x$ is
$$f_t(x) = \sum_{n=1}^{N_t} \hat{y}_t^n\, \mathbb{1}[R(x) = n], \qquad t \in \{1, \ldots, T\},$$
where $N_t$ is the number of leaf nodes in decision tree $t$, $R(x) \in \{1, \ldots, N_t\}$ denotes the leaf node assigned to input $x$, and $\hat{y}_t^n$ is the predicted value of leaf node $n$ of decision tree $t$, $n \in \{1, \ldots, N_t\}$. Therefore, the prediction function of the RF is
$$f_{\mathrm{RF}}(x) = \frac{1}{T} \sum_{t=1}^{T} f_t(x).$$
In a trained RF, for input $x^j$, $j \in \{N+1, \ldots, N+M\}$, the predicted value of $Y$ is $\hat{y}^{\mathrm{RF}}(x^j) = f_{\mathrm{RF}}(x^j)$.
In the w-SAA method, the weight assigned to each training data sample $(x^i, y^i)$ for $x^j$ is
$$w^{\mathrm{RF}}_{N,i}(x^j) = \frac{1}{T} \sum_{t=1}^{T} \frac{\mathbb{1}[R_t(x^i) = R_t(x^j)]}{|\{l \in \{1, \ldots, N\} : R_t(x^l) = R_t(x^j)\}|}, \qquad i \in \{1, \ldots, N\}.$$
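Given trained trees that expose a leaf-assignment function (e.g., something like scikit-learn's `apply`), the RF weights are the average of the per-tree CART weights. In the usage below, the lambda "trees" are stand-ins for real fitted trees:

```python
def rf_weights(trees_leaf_of, X_train, x0):
    """w-SAA weights for a random forest: average over trees t of
    1{R_t(x_i) = R_t(x0)} / |{l : R_t(x_l) = R_t(x0)}|."""
    N, T = len(X_train), len(trees_leaf_of)
    w = [0.0] * N
    for leaf_of in trees_leaf_of:
        leaves = [leaf_of(x) for x in X_train]
        target = leaf_of(x0)
        n_same = sum(1 for leaf in leaves if leaf == target)
        for i, leaf in enumerate(leaves):
            if leaf == target:
                w[i] += 1.0 / (T * n_same)
    return w
```

As with the CART weights, the result is a proper probability vector: each tree contributes a total mass of $1/T$ spread over the training points sharing the new observation's leaf.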

2.5.4. KR

KR fits nonlinear models by using kernel functions as weight functions. In this study, a Gaussian kernel function is employed to fit the data:
$$K_i(x) = \frac{1}{h\sqrt{2\pi}}\, e^{-\frac{\|x - x^i\|^2}{2h^2}}, \qquad i \in \{1, \ldots, N\},$$
where $x$ is a new observation, $x^i$ is a historical data sample, $\|x - x^i\|$ is the Euclidean (L2) distance between $x$ and $x^i$, and the bandwidth $h$ is the hyperparameter of the KR model.
For the observation $x^j$, $j \in \{N+1, \ldots, N+M\}$, the weight of each training data sample $(x^i, y^i)$, $i \in \{1, \ldots, N\}$, is calculated from the kernel function as
$$w^{\mathrm{KR}}_{N,i}(x^j) = \frac{K_i(x^j)}{\sum_{l=1}^{N} K_l(x^j)}, \qquad i \in \{1, \ldots, N\}.$$
Thus, the predicted value of $Y$ at $x^j$ is the weighted sum of all $y^i$ in the training dataset:
$$\hat{y}^{\mathrm{KR}}(x^j) = \sum_{i=1}^{N} w^{\mathrm{KR}}_{N,i}(x^j)\, y^i.$$
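The kernel weights and prediction follow directly from these formulas; note that the constant factor $\frac{1}{h\sqrt{2\pi}}$ cancels in the normalized weights, so the sketch omits it:

```python
import math

def kr_predict_and_weights(X_train, y_train, x0, h):
    """Gaussian-kernel regression: K_i(x0) is proportional to
    exp(-||x0 - x_i||^2 / (2 h^2)); weights w_i = K_i / sum_l K_l;
    prediction y^hat(x0) = sum_i w_i y_i."""
    ks = [math.exp(-math.dist(x0, xi) ** 2 / (2.0 * h * h)) for xi in X_train]
    total = sum(ks)
    weights = [k / total for k in ks]
    y_hat = sum(wi * yi for wi, yi in zip(weights, y_train))
    return y_hat, weights
```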

3. Case Study

This section begins with an energy scheduling problem and its stochastic optimization model. Subsequently, we present the real-world data used in the numerical experiments and preprocess the data. Finally, we train four ML models and calculate the decision losses for w-SAA and contextual distribution estimation methods.

3.1. Energy Scheduling Problem and Its Mathematical Model

This section presents an energy production scheduling problem in which the next day's 24 hourly energy prices are uncertain, and the company must plan the corresponding production schedule to maximize the profit from energy sales on that day. The definitions of parameters and variables are shown in Table 1.
The settings of the parameters in Table 1 are designed based on the real situation of an energy company. For example, for energy-consuming enterprises, the larger the production quantity, the higher the unit production cost; we therefore design a piecewise cost function with a gradually increasing unit production cost. This setup reflects the essence of a class of decision-making problems that enterprises face in reality. As the hourly energy price is an uncertain parameter, this study establishes the following stochastic optimization model:
Model A:
$$\min\ \mathbb{E}[f(z; Y)] = \mathbb{E}\left[ C - \sum_{t \in T} \tilde{y}_t z_t \right] \tag{6}$$
subject to
$$z_t \le a, \qquad t \in T, \tag{7}$$
$$c_t \ge \xi z_t, \qquad t \in T, \tag{8}$$
$$c_t \ge 5\xi + 1.25\xi (z_t - 5), \qquad t \in T, \tag{9}$$
$$c_t \ge 11.25\xi + 1.5\xi (z_t - 10), \qquad t \in T, \tag{10}$$
$$c_t \ge 18.75\xi + 1.75\xi (z_t - 15), \qquad t \in T, \tag{11}$$
$$C = \sum_{t \in T} c_t, \tag{12}$$
$$z_t \ge 0, \quad c_t \ge 0, \quad C \ge 0, \qquad t \in T. \tag{13}$$
Objective function (6) minimizes the expected total negative profits from energy sales on a future day. Constraint (7) represents the production capacity constraint per unit time. Constraints (8)–(11) denote the cost calculation formulas per unit time. Constraint (12) is the formula for total cost calculation. Constraint (13) specifies the variable domain constraints.
Assume that we have $V$ estimated scenarios for the energy prices, represented by $\hat{y}_{v,t}$, $v \in \{1, \ldots, V\}$, $t \in T$, each with likelihood $\frac{1}{V}$. The approximation of Model A is then as follows:
Model B:
$$\min\ \frac{1}{V} \sum_{v=1}^{V} \left( C - \sum_{t \in T} \hat{y}_{v,t} z_t \right)$$
subject to Constraints (7)–(13).
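Because Model B's objective averages equally weighted scenarios and the hours are coupled only through $C = \sum_t c_t$, the problem separates by hour: each hour effectively faces its mean scenario price, and with the convex piecewise cost of Constraints (8)–(11), a cost segment is worth producing exactly when its marginal cost lies below that price. The paper solves the model with Gurobi; the greedy sketch below is an equivalent closed-form solution for this specific piecewise structure, with illustrative values of $\xi$ and the capacity $a$ (the paper's actual parameter values are not reproduced here):

```python
XI, CAPACITY = 1.0, 20.0  # illustrative unit cost xi and hourly capacity a (assumed values)
# (segment length, marginal cost as a multiple of xi), from Constraints (8)-(11)
SEGMENTS = [(5.0, 1.0), (5.0, 1.25), (5.0, 1.5), (CAPACITY - 15.0, 1.75)]

def schedule(mean_prices, xi=XI):
    """Greedy hour-by-hour solution of Model B: produce every cost segment
    whose marginal cost xi*m is strictly below the hour's mean scenario price."""
    plan = []
    for price in mean_prices:
        z_t = sum(length for length, m in SEGMENTS if price > xi * m)
        plan.append(z_t)
    return plan
```

At a price exactly equal to a segment's marginal cost the model is indifferent; the sketch then leaves that segment idle.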

3.2. Introduction of Datasets

The energy price dataset used in this study consists of 14,592 records [16]. We divide it into a training dataset $S_N$ ($N = 11{,}640$) and a test dataset $T_M$ ($M = 2952$). Each record contains historical values of the random variable $Y$ and the feature variables $X$. Specifically, the feature vector is $X = (x_1, \ldots, x_9) \in \mathcal{X} \subseteq \mathbb{R}^9$ and the random variable is $Y \in \mathcal{Y} \subseteq \mathbb{R}$; their practical meanings and ranges are detailed in Table 2.
Specifically, $x_1 = 1$ indicates January of the current year, $x_2 = 1$ the first week of the current year, $x_3 = 1$ Monday, $x_4 = 1$ the first hour of the day, and so on; $x_5 = 1$ indicates that the day is a holiday.
Histograms and density curves of the frequency distribution of the random variable $Y$ based on $\{y^1, \ldots, y^N\}$ are plotted in Figure 1. The graph shows that, setting aside some extreme values, the distribution of $Y$ approximates a normal distribution. Therefore, in the contextual distribution estimation method, this study chooses a normal distribution to fit the random variable $Y$.
Since the mathematical model in this study schedules energy production over a future 24-hour period, during testing on the test dataset $T_M$, every 24 consecutive data samples form the input of one decision problem. We therefore construct a set of decision problems $p_w$, $w \in \{1, \ldots, W\}$, where $W = M / 24 = 2952 / 24 = 123$.
For problem $p_w$, $w \in \{1, \ldots, W\}$, the input data are defined as
$$T_{p_w} = \{(x^{w_1}, y^{w_1}), \ldots, (x^{w_{24}}, y^{w_{24}})\},$$
where $x_{p_w} = (x^{w_1}, \ldots, x^{w_{24}})$ and $y_{p_w} = (y^{w_1}, \ldots, y^{w_{24}})$.
In the w-SAA method, for $x^{w_l}$, $l \in \{1, \ldots, 24\}$, the weight $w_{N,i}(x^{w_l})$, $i \in \{1, \ldots, N\}$, can be obtained from an ML model. There are therefore $N^{24}$ potential combinations of energy prices in one day, and the weight of each combination is $\prod_{l=1}^{24} w_{N,i_l}(x^{w_l})$, where $i_l \in \{1, \ldots, N\}$. This study randomly selects $V$ scenarios along with their weights, inputs them into the objective function of the mathematical model, and obtains the decision $\hat{z}_N^{w\text{-}\mathrm{SAA}}(x_{p_w})$. In the contextual distribution estimation method, for $x^{w_l}$, $l \in \{1, \ldots, 24\}$, the estimated distribution $\hat{\mu}_{Y \mid X = x^{w_l}}$ can be obtained. Based on it, we generate $U$ estimates of $Y$, denoted $\hat{y}^u(x^{w_l})$, $u \in \{1, 2, \ldots, U\}$. There are thus $U^{24}$ potential combinations of energy prices in one day, each with equal weight. This study again randomly selects $V$ scenarios, inputs them into the objective function, and obtains the decision $\hat{z}_N^{\mathrm{distr\_esti}}(x_{p_w})$.
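The scenario-sampling step for both methods can be sketched as follows: for each hour, one price is drawn independently according to that hour's weights, so w-SAA uses the ML-derived weights while uniform weights over the $U$ estimates reproduce the distribution-estimation case (the function name and the sizes in the usage are illustrative):

```python
import random

def sample_daily_scenarios(hourly_prices, hourly_weights, V, seed=0):
    """Draw V daily price scenarios out of the N^24 (or U^24) combinations:
    for each hour, pick one price independently using that hour's weights."""
    rng = random.Random(seed)
    return [
        [rng.choices(prices, weights=w, k=1)[0]
         for prices, w in zip(hourly_prices, hourly_weights)]
        for _ in range(V)
    ]
```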
The optimal objective value of problem $p_w$ under complete information is $v^*(x_{p_w}) = \min_{z \in Z} f(z; y_{p_w})$; hence, the decision loss is
$$L_N = \frac{1}{W} \sum_{w=1}^{W} \left[ f(\hat{z}_N(x_{p_w}); y_{p_w}) - v^*(x_{p_w}) \right].$$
Based on the training dataset S N , this study adopts four ML models, i.e., kNN, CART, RF, and KR, to compare the effectiveness of the w-SAA and contextual distribution estimation methods.
The kNN and KR models involve calculating distances between data samples, so the data must be standardized before training these models, while the CART and RF models can be trained directly on the original dataset. Standardization proceeds as follows. Features $x_1$ to $x_4$ are periodic features related to time, so we employ Sine-Cosine encoding, $x' = \sin\left(\frac{2\pi x}{T_x}\right)$, where $x$ is the original value, $x'$ is the standardized value, and $T_x$ is the period of $x$. For example, the standardized data of the month feature $x_1 = (x_1^1, x_1^2, \ldots, x_1^N)$ are $x_1' = (x_1'^1, x_1'^2, \ldots, x_1'^N)$, where
$$x_1'^i = \sin\left(\frac{2\pi x_1^i}{12}\right), \qquad i \in \{1, \ldots, N\}.$$
For features $x_5$ to $x_9$, we apply Z-score standardization. For instance, for the feature $x_6 = (x_6^1, x_6^2, \ldots, x_6^N)$, the mean is $\mu_6 = \frac{1}{N} \sum_{i=1}^{N} x_6^i$ and the standard deviation is $\sigma_6 = \sqrt{\frac{1}{N} \sum_{i=1}^{N} (x_6^i - \mu_6)^2}$, yielding the standardized data $x_6' = (x_6'^1, x_6'^2, \ldots, x_6'^N)$, where
$$x_6'^i = \frac{x_6^i - \mu_6}{\sigma_6}, \qquad i \in \{1, \ldots, N\}.$$
The data after Z-score standardization have a mean of 0 and a standard deviation of 1.
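Both preprocessing transforms are one-liners; a sketch (the period 12 for the month feature follows the example in the text):

```python
import math

def sine_encode(values, period):
    """Sine-Cosine style encoding for the periodic features x1..x4:
    x' = sin(2*pi*x / T_x)."""
    return [math.sin(2.0 * math.pi * v / period) for v in values]

def z_score(values):
    """Z-score standardization for features x5..x9
    (population standard deviation, as in the text)."""
    mu = sum(values) / len(values)
    sigma = math.sqrt(sum((v - mu) ** 2 for v in values) / len(values))
    return [(v - mu) / sigma for v in values]
```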

3.3. Model Training

During model training, this study randomly splits the training dataset $S_N$ into $S_N^{\mathrm{train}}$ and $S_N^{\mathrm{valid}}$ in a 4:1 ratio, with $S_N^{\mathrm{train}}$ containing 9312 data samples and $S_N^{\mathrm{valid}}$ containing 2328 data samples.
In the w-SAA method, this study trains the ML models on $S_N^{\mathrm{train}}$ and tunes hyperparameters on $S_N^{\mathrm{valid}}$, with the decision loss as the training metric. During validation and testing, this study randomly selects $V = 200$ scenarios of energy prices as model input, yielding approximate solutions $\hat{z}_N^{w\text{-}\mathrm{SAA}}$ and the corresponding decision losses $L_N^{w\text{-}\mathrm{SAA}}$. The hyperparameter settings and tuning results for the four models are shown in Table 3.
The line charts of decision loss during the hyperparameter tuning process of the four models for w-SAA are depicted in Figure 2.
In the contextual distribution estimation method, this study likewise trains the ML models on $S_N^{\mathrm{train}}$ and tunes hyperparameters on $S_N^{\mathrm{valid}}$, with the decision loss as the training metric. During validation and testing, this study generates $U = 200$ estimates of the energy price from the estimated distribution $\hat{\mu}_{Y \mid X = x^{w_l}}$, $l \in \{1, \ldots, 24\}$, and randomly selects $V = 200$ scenarios of energy prices as model input, yielding approximate solutions $\hat{z}_N^{\mathrm{distr\_esti}}$ and the corresponding decision losses $L_N^{\mathrm{distr\_esti}}$. The hyperparameter settings and tuning results for the four models are shown in Table 4.
The line charts of decision loss during the hyperparameter tuning process of the four models for contextual distribution estimation are depicted in Figure 3.

3.4. Discussion

The experiments are conducted on a computer with AMD Ryzen 5 4600U and 16 GB (3200 MHz) RAM under the Windows 10 operating system. The mathematical model in Section 3.1 is implemented in Python programming language using Gurobi 9.5.2 as the solver. This study trains four ML models, i.e., kNN, CART, RF, and KR, based on the dataset S N to implement two methods, w-SAA and contextual distribution estimation. The models are tested on the test dataset T M , and the corresponding decision losses L N are calculated and summarized in Table 5. Figure 4 illustrates the results.
From Table 5 and Figure 4, we can see that, compared to the traditional w-SAA method, the proposed contextual distribution estimation method yields similar decision losses under the kNN, CART, and RF models and exhibits a clear advantage under the KR model. Specifically, the decision loss obtained by w-SAA is 1.78% lower than that of our method under the kNN model, 0.63% lower under the CART model, and 3.33% lower under the RF model; under the KR model, however, the decision loss obtained by our method is 30.36% lower than that of w-SAA.
The results of the numerical experiments demonstrate that our proposed method, i.e., contextual distribution estimation, exhibits certain advantages under some particular scenarios, leading to a significant reduction in decision loss.

4. Conclusions

For contextual stochastic optimization problems whose objective functions are nonlinear in their uncertain parameters, this study builds on w-SAA and further estimates the conditional distributions of the uncertain parameters, thereby obtaining approximate solutions to the problems. Specifically, we use the point prediction of an ML model as the estimate of the conditional mean, use the variance of the differences between predicted and actual values on the training dataset as the estimate of the conditional variance, and fit the uncertain parameters with an appropriate distribution type based on historical data. By generating a set of estimates from the estimated distribution and inputting them into the model, an approximate solution to the stochastic optimization problem is obtained.
This study uses the energy scheduling problem as the case study. Four ML models, i.e., kNN, CART, RF, and KR, are trained based on a real-world energy price dataset to implement w-SAA and contextual distribution estimation methods, and the performance of the two methods is tested. From the experimental results, it is shown that the proposed contextual distribution estimation method in this study exhibits advantages in certain scenarios compared to w-SAA, significantly reducing decision losses.
This study introduces the contextual distribution estimation method for contextual stochastic optimization problems, which can be applied to address data-driven uncertain decision problems in the field of operations research and management science, such as transportation and logistics. In future research, extensive computational experiments with data from different fields, such as transportation, manufacturing, and logistics, should be conducted to validate the effectiveness of the contextual distribution estimation method.

Author Contributions

Conceptualization, X.T. and S.W.; methodology, X.T., B.J., K.-W.P., Y.G., Y.J. and S.W.; software, B.J.; validation, B.J., Y.G., Y.J. and K.-W.P.; formal analysis, B.J.; investigation, B.J.; resources, Y.J.; data curation, B.J.; writing—original draft preparation, B.J.; writing—review and editing, X.T. and S.W.; visualization, Y.G.; supervision, S.W.; project administration, K.-W.P.; funding acquisition, Y.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research is supported by AF Competitive Grants of The Hong Kong Polytechnic University (Project ID: P0046074).

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Acknowledgments

The authors would like to thank the anonymous reviewers for their valuable comments and constructive suggestions, which have greatly improved the quality of this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. The distribution of the variable Y based on {y_1, …, y_N}.
Figure 2. Decision loss on the validation dataset S_N^valid of the four models for w-SAA: (a) kNN; (b) CART; (c) RF; and (d) KR.
Figure 3. Decision loss on the validation dataset S_N^valid of the four models for contextual distribution estimation: (a) kNN; (b) CART; (c) RF; and (d) KR.
Figure 4. Decision loss of w-SAA and contextual distribution estimation on the test dataset T_M.
Table 1. The definitions of the parameters and variables.
Set:
T — planning horizon, T = {1, 2, …, 24}

Parameters:

ỹ_t — uncertain energy price per unit in period t, ỹ_t ∈ Y, ∀t ∈ T
ξ — base cost of producing one unit of energy, ξ = 200
a — production capacity (maximum output) per period, a = 20

Decision variables:

z_t — production quantity in period t, 0 ≤ z_t ≤ a, ∀t ∈ T
c_t — production cost in period t, c_t ≥ 0, ∀t ∈ T, defined as a piecewise-linear function of the production quantity z_t. When z_t lies in the intervals [0, 5], (5, 10], (10, 15], (15, 20], the per-unit production cost in the corresponding interval is ξ, 1.25ξ, 1.5ξ, 1.75ξ, respectively; that is,

    c_t(z_t) = ξ z_t,                      0 ≤ z_t ≤ 5,
             = 5ξ + 1.25ξ (z_t − 5),       5 < z_t ≤ 10,
             = 11.25ξ + 1.5ξ (z_t − 10),   10 < z_t ≤ 15,
             = 18.75ξ + 1.75ξ (z_t − 15),  15 < z_t ≤ 20.

C — total cost within the planning horizon, C ≥ 0
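The piecewise cost in Table 1 can be evaluated directly from its interval definition. The following is a minimal sketch using the paper's stated values ξ = 200 and a = 20; the function name is our own choice.

```python
XI = 200.0  # base cost of producing one unit of energy (Table 1)
A = 20.0    # production capacity per period (Table 1)

def production_cost(z: float) -> float:
    """Piecewise-linear cost c_t(z_t) of producing z units in one period."""
    if not 0 <= z <= A:
        raise ValueError("production quantity must lie in [0, 20]")
    if z <= 5:
        return XI * z
    if z <= 10:
        return 5 * XI + 1.25 * XI * (z - 5)
    if z <= 15:
        return 11.25 * XI + 1.5 * XI * (z - 10)
    return 18.75 * XI + 1.75 * XI * (z - 15)
```

Note that the constant terms (5ξ, 11.25ξ, 18.75ξ) make the function continuous at the interval breakpoints, so the cost is convex and piecewise linear in z_t.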
Table 2. Description of the data features.
Notation | Practical Meaning | Range
x_1 | month_of_year | x_1 ∈ {1, 2, …, 12}
x_2 | week_of_year | x_2 ∈ {1, 2, …, 52}
x_3 | day_of_week | x_3 ∈ {1, 2, …, 7}
x_4 | hour_of_day | x_4 ∈ {1, 2, …, 24}
x_5 | holiday_flag | x_5 ∈ {0, 1}
x_6 | forecast_wind_production | x_6 ≥ 0
x_7 | forecast_system_load | x_7 ≥ 0
x_8 | forecast_system_marginal_price | x_8 ≥ 0
x_9 | CO2_intensity | x_9 ≥ 0
Y | fuel_price | Y ≥ 0
Table 3. Hyperparameter settings and tuning results based on decision loss for w-SAA.
Model | Hyperparameter | Search Range | Optimal Value
kNN | k | {1, 2, …, 50} | 36
CART | max_depth | {8, 10, 12, 14} | 10
CART | min_samples_split | {5, 10, 20, 30} | 20
CART | min_samples_leaf | {2, 5, 10, 15} | 5
RF | n_estimators | {100, 200} | 100
RF | max_depth | {8, 10, 12} | 12
RF | min_samples_split | {5, 10, 20} | 10
RF | min_samples_leaf | {2, 5, 10} | 2
KR | bandwidth | {1, 2, …, 20} | 4
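As one concrete illustration of how a tuned model feeds into w-SAA, the sketch below derives sample weights from a kNN model in the style of Bertsimas and Kallus: each of the k nearest training samples to the query context receives weight 1/k. This is a hypothetical sketch with scikit-learn, not the authors' exact implementation; k = 36 is the tuned value from Table 3.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_saa_weights(X_train, x_query, k=36):
    """w-SAA weights from kNN: 1/k on the k nearest training samples
    to x_query, 0 on all other samples."""
    nn = NearestNeighbors(n_neighbors=k).fit(X_train)
    _, idx = nn.kneighbors(np.atleast_2d(x_query))
    w = np.zeros(len(X_train))
    w[idx[0]] = 1.0 / k
    return w
```

The resulting weights sum to one and replace the uniform 1/N weights of plain SAA, so the downstream scheduling problem minimizes a context-weighted average cost over the historical price realizations.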
Table 4. Hyperparameter settings and tuning results based on decision loss for contextual distribution estimation.
Model | Hyperparameter | Search Range | Optimal Value
kNN | k | {1, 2, …, 50} | 38
CART | max_depth | {8, 10, 12, 14} | 10
CART | min_samples_split | {5, 10, 20, 30} | 30
CART | min_samples_leaf | {2, 5, 10, 15} | 5
RF | n_estimators | {100, 200} | 200
RF | max_depth | {8, 10, 12} | 12
RF | min_samples_split | {5, 10, 20} | 5
RF | min_samples_leaf | {2, 5, 10} | 2
KR | bandwidth | {1, 2, …, 20} | 2
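For contextual distribution estimation, once a model assigns conditional weights to the historical realizations, a set of realizations can be drawn from the weighted empirical distribution. The sketch below is one plausible reading of that step, not the authors' exact estimator; the function name and sample size are our own choices.

```python
import numpy as np

def sample_conditional(y_train, weights, n_samples=100, rng=None):
    """Draw realizations from an estimated conditional distribution of Y
    given a context x, represented as per-sample weights on the training
    realizations (e.g., kernel weights with the tuned bandwidth in Table 4)."""
    rng = np.random.default_rng(rng)
    p = np.asarray(weights, dtype=float)
    p = p / p.sum()  # normalize weights into a probability vector
    return rng.choice(np.asarray(y_train), size=n_samples, p=p)
```

The drawn realizations then play the role of scenarios in the downstream stochastic program, in place of samples from an assumed known distribution.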
Table 5. Decision loss of w-SAA and contextual distribution estimation on the test dataset T M .
Method | kNN | CART | RF | KR
w-SAA | 12,887.14 | 4753.40 | 4261.88 | 18,191.83
distr_esti | 13,120.69 | 4783.65 | 4408.60 | 12,668.46