Article

Construction of Uniform Designs over a Domain with Linear Constraints

1 School of Statistics and Data Science, LPMC & KLMDASR, Nankai University, Tianjin 300071, China
2 National Elite Institute of Engineering, Northwestern Polytechnical University, Xi’an 710072, China
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(3), 438; https://doi.org/10.3390/math13030438
Submission received: 22 December 2024 / Revised: 22 January 2025 / Accepted: 25 January 2025 / Published: 28 January 2025

Abstract:
Uniform design is a powerful and robust experimental methodology that is particularly advantageous for multidimensional numerical integration and high-level experiments. As its applications expand across diverse disciplines, the theoretical foundation of uniform design continues to evolve. In real-world scenarios, experimental factors are often subject to one or more linear constraints, which pose challenges in constructing efficient designs within constrained high-dimensional experimental spaces. These challenges typically require sophisticated algorithms, which may compromise uniformity and robustness. Addressing these constraints is critical for reducing costs, improving model accuracy, and identifying global optima in optimization problems. However, existing research primarily focuses on unconstrained or minimally constrained hypercubes, leaving a gap in constructing designs tailored to arbitrary linear constraints. This study bridges this gap by extending the inverse Rosenblatt transformation framework to develop innovative methods for constructing uniform designs over arbitrary hyperplanes and hyperspheres within unit hypercubes. Explicit construction formulas for these constrained domains are derived, offering simplified calculations for practitioners and providing a practical solution applicable to a wide range of experimental scenarios. Numerical simulations demonstrate the feasibility and effectiveness of these methods, setting a new benchmark for uniform design in constrained experimental regions.

1. Introduction

Experimental design investigates the relationship between factors and responses by systematically arranging the combinations of factors and the number of tests. When the model relating the response to the factors is unknown, it can be explored using a space-filling design. As an implementation of space-filling design, quasi-Monte Carlo methods have been widely utilized in multidimensional numerical integration, statistical simulation, and other statistical domains due to their robust modeling capabilities. For instance, in optimization problems involving high-dimensional, nonlinear, and irregular objective functions, the uniformly distributed samples generated by quasi-Monte Carlo methods can better explore the search space, thereby improving optimization efficiency. In model training problems with high-dimensional feature spaces, quasi-Monte Carlo methods, through uniform sampling, can avoid the sample clustering seen in traditional methods, improving the learning efficiency and generalization ability of the model, and they converge faster than the Monte Carlo method [1]. Classical theory suggests that certain quasi-Monte Carlo methods can achieve convergence at a rate of $O(n^{-1}(\log n)^s)$, where $n$ is the number of points in the point set and $s$ is the dimension of the integrand. The Monte Carlo method usually has a convergence rate of $O(n^{-1/2})$, while the convergence rate of a quasi-Monte Carlo method can approach $O(n^{-1})$ when an appropriate low-discrepancy sequence is used in moderate-dimensional problems. For high-dimensional numerical integration, however, quasi-Monte Carlo methods may suffer from the curse of dimensionality. Huang and Zhou [2] developed a randomized quasi-Monte Carlo method for such cases using Baker’s transformation, achieving significantly reduced errors and faster convergence than the Monte Carlo method.
As an important type of quasi-Monte Carlo method, uniform design scatters design points evenly over the integration domain. It is a deterministic method that obtains a point set by optimizing a given uniformity criterion, such as the star discrepancy or other discrepancies [3]. Finding a uniform design by direct optimization is NP-hard, and some approximate construction methods are summarized in [3].
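The gap between the Monte Carlo rate $O(n^{-1/2})$ and the quasi-Monte Carlo rate can be seen in a few lines of code. The sketch below is our illustration, not code from the paper: it integrates a smooth test function over $[0,1]^3$ with plain random sampling and with a Halton low-discrepancy sequence; the `halton` helper is the textbook radical-inverse construction.

```python
import numpy as np

def halton(n, s):
    """First n points of the s-dimensional Halton sequence (a standard
    low-discrepancy sequence built from radical inverses in prime bases)."""
    primes = [2, 3, 5, 7, 11, 13][:s]
    pts = np.empty((n, s))
    for j, b in enumerate(primes):
        for i in range(n):
            f, x, k = 1.0, 0.0, i + 1
            while k > 0:              # radical-inverse expansion of k in base b
                f /= b
                x += f * (k % b)
                k //= b
            pts[i, j] = x
    return pts

# Integrate f(x) = x1*x2*x3 over [0,1]^3; the exact value is (1/2)^3 = 0.125.
f = lambda x: np.prod(x, axis=1)
n, s = 4096, 3
rng = np.random.default_rng(0)
mc_err = abs(f(rng.random((n, s))).mean() - 0.125)   # plain Monte Carlo
qmc_err = abs(f(halton(n, s)).mean() - 0.125)        # quasi-Monte Carlo
```

With the same budget of 4096 points, the Halton estimate is typically one to two orders of magnitude closer to the exact value than the Monte Carlo estimate, in line with the rates quoted above.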
Usually, uniform designs are obtained in a hypercube, which means that there are no constraints among the variables. In many cases, uniform designs within linearly constrained experimental regions are particularly useful for enhancing the predictive capability of the agent model within the constrained domain. Feng et al. [4] proposed an encrypted neural network inference framework for secure neural network inference between two computing parties. This encrypted inference framework involves numerous security constraints. Using linear constrained uniform design can ensure the uniform distribution of input data, thereby ensuring data privacy and enhancing inference performance during the encrypted inference process. In the study of resilience methods for urban rail transit systems, it is necessary to consider linear constraints among various factors, such as traffic flow, train speed, and station capacity. To ensure the comprehensiveness and reliability of the analysis, using uniform designs for experimental design can guarantee the uniformity of experimental points within the feasible region, thereby enhancing the representativeness of the experiments [5]. Moreover, in pharmacology, patients with multiple conditions often require several medications simultaneously, which can result in drug–drug interactions that significantly affect therapeutic outcomes. Using uniform design with linear constraints can be applied to optimize the drug dose to maximize the therapeutic effect and reduce side effects [6]. Similarly, in other fields, uniform designs within linear constrained regions can improve the experimental efficiency, the experimental precision, and the accuracy of model estimations in high-dimensional scenarios, such as the development of durable composites, the optimization of portfolio investment allocation, and experimental problems in automotive lightweighting. 
Thus, in these contexts, it is crucial to determine how to construct a uniform design under these linear constraints.
Currently, some relevant studies have focused on the construction of uniform designs in irregular regions. Experimental regions with linear constraints are typically considered as convex polyhedra, or subsets of convex polyhedra within a regular region, and number-theoretic methods and heuristic algorithms are used to obtain uniform designs in such regions [7,8,9]. For example, Fang and Wang [7] used the transformation method to generate design points in the experimental region $T_s = \{x = (x_1, \ldots, x_s) : x_j \ge 0,\ j = 1, \ldots, s,\ \sum_{i=1}^{s} x_i = 1\}$. Wang and Fang [10] extended it to the region $T_s(a, b) = \{x : 0 \le a \le x \le b \le 1,\ \sum_{i=1}^{s} x_i = 1\}$.
The construction of uniform designs in regions with more complex linear constraints remains a challenging problem. Two number-theoretic approaches have been developed for constrained mixture experiments in Borkowski and Piepel [11]. These methods effectively handle single and multiple constraints. In addition, the traditional uniformity criterion is no longer applicable to all experimental regions. Lin et al. [12] proposed using the central composite discrepancy (CCD) criterion to assess the uniformity of the design across arbitrary experimental regions and optimized the design using a threshold acceptance algorithm. Building on these, Liu and Liu [13] proposed a method for generating nearly uniform designs for mixture experiments with complex constraints using the switching algorithm. Furthermore, Zhang et al. [14] introduced the inverse Rosenblatt transformation (IRT) under the CCD criterion, facilitating the development of uniform designs in high-dimensional spaces. In terms of other deterministic sampling methods, minimum energy design can be used to obtain uniformly distributed points [15,16]. To address probability constraints, Huang et al. [17] incorporated sequentially constrained Monte Carlo principles into the minimum energy design method, introducing the constrained minimum energy design as a versatile sampling strategy for constrained spaces. Using Latin hypercube design principles, Schneider et al. [18] proposed an incremental approach for tightly constrained input spaces. This method projects candidate points onto the constrained region, sequentially selecting the optimal point to add to the design. Then, Schneider et al. [19] extended this approach by projecting onto constrained experimental regions and proposed maximally uniform Latin hypercube designs. Furthermore, Jourdan [20] introduced a new uniformity deviation measure by quantifying the discrepancy between the distribution functions of the design points and the Dirichlet distribution. 
This measure is minimized to further enhance uniformity. However, most existing deterministic construction methods focus on the unit hypercube or the standard simplex, and construction results for uniform designs on regions with arbitrary linear constraints remain relatively scarce.
In this paper, we propose a method for constructing uniform designs in experimental regions with arbitrary sets of linear equality constraints. Our approach is based on the IRT framework developed by Zhang et al. [14] and incorporates slack variables to provide greater flexibility in handling constraints [21]. The method is further extended to designs constrained to a hypersphere. We use the CCD criterion to evaluate the uniformity of the resulting designs within the experimental region. Through numerical examples, we compare our approach against two existing methods: the stochastic representation (SR) method of Fang and Wang [7] and the acceptance–rejection (AR) method of Borkowski and Piepel [11]. Unlike most existing methods, ours can handle uniform design over regions with general linear constraints, and the new construction results are competitive with those of the SR and AR methods. Explicit construction formulas for the constrained domains are derived, offering simplified calculations for practitioners and providing a practical solution applicable to a wide range of experimental scenarios.
This paper is organized as follows. Section 2 reviews the conceptual framework underlying the IRT method. Section 3 presents construction algorithms for uniform designs within linearly constrained regions, addressing single and multiple constraints. It also provides a numerical case in which the proposed method is applied to construct uniform designs for these regions, and discusses the properties and implementation of the algorithms. Section 4 presents construction algorithms for uniform designs within quadratic constrained regions, along with a numerical case where the method is applied to construct uniform designs for a given hypersphere. The results demonstrate the effectiveness of our approach in achieving both uniformity and robustness. Finally, Section 5 concludes with a brief discussion of the findings and potential directions for future research.

2. Preliminary Results

To assess whether a design is uniformly distributed in $\mathcal{X}$, a uniformity criterion can be used, which quantifies the discrepancy between the empirical distribution function $F_P$ and the uniform distribution function $F^*(x)$. Extensive research has focused on uniformity criteria for the unit hypercube $C^s = [0,1]^s$, including the $L_p$ star discrepancy and the mixture discrepancy [3]. To measure the uniformity of designs in arbitrary experimental regions, Lin et al. [12] proposed the CCD criterion. Let $x^{(i)} = \{r \in \mathbb{R} : x + a_i < r < x + a_{i+1}\}$, $i = 0, 1$, where $a_0 = -\infty < a_1 < a_2 = +\infty$; the CCD can be written as
$$\mathrm{CCD}_2(P) = \left\{ \frac{1}{V(D)} \int_D \frac{1}{2^s} \sum_{k=1}^{2^s} \left| \frac{N(D_k(x), P)}{n} - \frac{V(D_k(x))}{V(D)} \right|^2 dx \right\}^{1/2},$$
where $D_k(x) = \{x^{(i_1)} \times \cdots \times x^{(i_s)}\} \cap D$ denotes the subdomains of $D$. The measures of the experimental domain $D$ and its subdomains $D_k(x)$ are denoted by $V(D)$ and $V(D_k(x))$, respectively, and the set $P = \{x_1, \ldots, x_n\}$ represents the given $n$-point design. For improved computational efficiency, Chen et al. [8] proposed a set of particle-swarm-optimization-based algorithms that can effectively find optimal uniform designs under the CCD criterion. As the dimensionality of the experimental region increases and its shape becomes more complex, numerical methods can be employed to approximate the CCD values [13].
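To make the criterion concrete, the following sketch approximates $\mathrm{CCD}_2^2$ by Monte Carlo integration over $x$ for a design in the unit square, taking the $2^s$ subdomains $D_k(x)$ to be the axis-aligned quadrants cut at $x$; this reading of $D_k(x)$, and all function names, are our own simplification rather than code from Lin et al. [12] or Chen et al. [8].

```python
import numpy as np

def ccd2_estimate(P, n_mc=2000, seed=0):
    """Monte Carlo approximation of the squared CCD of a design P in the
    unit square D = [0,1]^2 (V(D) = 1), with D_k(x) taken to be the 2^s
    axis-aligned quadrants cut at x (an assumption made for this sketch)."""
    n, s = P.shape
    rng = np.random.default_rng(seed)
    X = rng.random((n_mc, s))
    total = 0.0
    for x in X:
        for quad in range(2 ** s):
            bits = [(quad >> j) & 1 for j in range(s)]
            lo = np.where(bits, x, 0.0)          # quadrant lower corner
            hi = np.where(bits, 1.0, x)          # quadrant upper corner
            inside = np.all((P > lo) & (P <= hi), axis=1).mean()  # N(D_k, P)/n
            vol = np.prod(hi - lo)                # V(D_k(x))
            total += (inside - vol) ** 2
    return total / (n_mc * 2 ** s)

# A regular 4x4 grid should score better (lower CCD) than a clumped design.
g = (np.arange(1, 5) - 0.5) / 4
grid = np.array([(a, b) for a in g for b in g])
clump = np.full((16, 2), 0.1)
ccd_grid = ccd2_estimate(grid)
ccd_clump = ccd2_estimate(clump)
```

The comparison at the end illustrates the intended behavior of the criterion: evenly spread points give a small value, clustered points a large one.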
Zhang et al. [14] proposed a deterministic construction method, the IRT method, based on the CCD criterion, providing a general framework for transforming uniformly distributed points from $C^s$ to arbitrary domains. The core idea of the IRT method is to obtain uniformly distributed experimental points in a region $\mathcal{X} \subset \mathbb{R}^s$ by applying the inverse of the uniform distribution function. Let $X = (X_1, \ldots, X_s)$ represent a random variable in $\mathcal{X}$ with cumulative distribution function $F(X)$. We assume that $F_1(\cdot)$ is the cumulative distribution function of $X_1$ and that $F_{i|1,\ldots,i-1}(\cdot)$ is the conditional cumulative distribution function of $X_i$ given $X_1, \ldots, X_{i-1}$. As defined in Arnold et al. [22], a transformation from $(X_1, \ldots, X_s)$ to $(U_1, \ldots, U_s)$ is defined to produce uniformly distributed random variables. This transformation satisfies the following conditions:
  • For $1 \le i \le s$, $U_1 = F_1(X_1)$ and $U_i = F_{i|1,\ldots,i-1}(X_i \mid X_1, \ldots, X_{i-1})$;
  • $(U_1, \ldots, U_s)$ is uniformly distributed in the unit cube $C^s$;
  • the inverse transformation of this process exists;
  • the Jacobian matrix $J$ involved in the transformation depends solely on the density function of $(X_1, \ldots, X_s)$.
It is important to note that this transformation is not permutation-invariant. Hence, we must consider all $s!$ permutations of $(X_1, \ldots, X_s)$. Specifically, we define the transformation $(X_{i_1}, \ldots, X_{i_s}) \to (U_1, \ldots, U_s)$, denoted as $T^{(i_1, \ldots, i_s)}$, which satisfies the following:
  • $T_{i_1}(X_{i_1}) = U_1$;
  • $T_{i_j | i_1, \ldots, i_{j-1}}(X_{i_j}) = U_j$, $2 \le j \le s$;
  • $F(X_{i_1}, \ldots, X_{i_s}) = T_{i_1}(X_{i_1}) \cdots T_{i_s | i_1, \ldots, i_{s-1}}(X_{i_s} \mid X_{i_1}, \ldots, X_{i_{s-1}})$.
From this, we obtain $x_{i_1} = T_{i_1}^{-1}(U_1)$ and $x_{i_j} = T_{i_j | i_1, \ldots, i_{j-1}}^{-1}(U_j)$ for $2 \le j \le s$ by applying the inverse of the transformation $T^{(i_1, \ldots, i_s)}$. Finally, the best uniform design is obtained by selecting, among all permutations, the design with the smallest value of the CCD criterion.
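As a minimal closed-form instance of this chain of transformations (our example, not one from the paper), consider the triangle $\mathcal{X} = \{x_1, x_2 > 0,\ x_1 + x_2 < 1\}$: the marginal of $X_1$ is $F_1(x_1) = x_1(2 - x_1)$ and the conditional of $X_2$ given $X_1 = x_1$ is $F_{2|1}(x_2 \mid x_1) = x_2/(1 - x_1)$, both of which invert explicitly.

```python
import numpy as np

# IRT sketch for the triangle X = {x1, x2 > 0, x1 + x2 < 1}.
# Marginal: F1(x1) = x1*(2 - x1); conditional: F_{2|1}(x2|x1) = x2/(1 - x1).
# Inverting the chain maps uniform points in the unit square onto X.
def irt_triangle(u):
    u1, u2 = u[:, 0], u[:, 1]
    x1 = 1.0 - np.sqrt(1.0 - u1)   # solves x1*(2 - x1) = u1 on [0, 1]
    x2 = u2 * (1.0 - x1)           # inverts x2/(1 - x1) = u2
    return np.column_stack([x1, x2])

rng = np.random.default_rng(1)
pts = irt_triangle(rng.random((1000, 2)))
```

Every transformed point lands inside the triangle, and sub-regions receive points in proportion to their area, which is exactly the uniformity property the IRT construction guarantees.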

3. Uniform Design of Experiments for Linear Constraints

In practical applications, experimental regions are often subject to multiple constraints. Linear constraints involve linear relationships between variables, while quadratic constraints involve quadratic terms. Geometrically, linear constraints form a hyperplane in higher-dimensional spaces, while quadratic constraints form a curved surface, making the feasible region more complex. Linear constraints are generally simpler to compute, and they often describe limitations on costs, time, or resources; quadratic constraints are used for more complex problems, such as when variables have non-linear relationships. For example, in portfolio optimization the objective is to maximize returns, and investment portfolios that yield higher returns can be identified by simulation over a uniform design. Suppose that the total investment amount is $b$ and there are $s$ types of securities available for investment. Based on prior information, the effect of each security is represented by $x_1, \ldots, x_s$, where $0 < x_i < 1$ for $i = 1, \ldots, s$, and the corresponding weights are denoted by $a_1, \ldots, a_s$. A linear constraint can then be formulated as $a_1 x_1 + \cdots + a_s x_s = b$. Geometrically, this constrained region corresponds to the intersection of $C^s$ and the given hyperplane. When additional prior information is available, multiple linear constraints can be introduced, expressed as $a_{j1} x_1 + \cdots + a_{js} x_s = b_j$ for $j = 1, \ldots, t$, $t < s$. In this case, when $s = 3$ and $t = 2$, the constrained region reduces to a line segment within the unit cube $(0,1)^3$. In physical engineering systems, experimental factors may also exhibit quadratic relationships. In such cases, the constraint can be expressed as $a_1 x_1^2 + \cdots + a_s x_s^2 = b$, which represents the intersection of a hypersphere and $C^s$.
When dealing with high-dimensional variables and limited resources, a uniform design is an effective approach to extracting a small number of samples to identify patterns within a specified region. A practical strategy involves first generating points that are uniformly distributed in C s . These points are then mapped to the target region through an appropriate transformation, ensuring that the resulting points maintain a uniform distribution within the target region.
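Before developing the exact transformations, it is useful to have the naive baseline in mind. The sketch below mirrors the acceptance–rejection idea for a single constraint $a^T x = b$: sample the first $s-1$ coordinates uniformly, solve the constraint for the last coordinate, and keep the point only if it remains inside the cube. This is a generic sketch written for this discussion, not the exact algorithm of Borkowski and Piepel [11], and the function name is ours.

```python
import numpy as np

def slack_ar_design(n, a, b, seed=0):
    """Baseline sampler for {x in (0,1)^s : a.x = b}: draw the first s-1
    coordinates uniformly, solve the constraint for the last one, and keep
    the point only if it stays inside the cube."""
    a = np.asarray(a, dtype=float)
    rng = np.random.default_rng(seed)
    pts = []
    while len(pts) < n:
        x = rng.random(len(a) - 1)
        xs = (b - a[:-1] @ x) / a[-1]   # solve the linear constraint for x_s
        if 0.0 < xs < 1.0:              # accept only points inside the cube
            pts.append(np.append(x, xs))
    return np.array(pts)

design = slack_ar_design(50, a=[1.0, 1.0, -1.0], b=0.0)
```

Each accepted point satisfies the constraint exactly, and because the map from the free coordinates to the hyperplane is linear with constant Jacobian, the accepted points are uniform on the slice. The drawback is the random, potentially very low acceptance rate in high dimensions, which is what motivates deterministic constructions.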

3.1. One Arbitrary Linear Constraint

We aim to identify experimental points that follow a uniform distribution by using the distribution function of random vectors with a uniform distribution. A key challenge in this process is determining the measure of the experimental region, which involves computing high-dimensional integrals. Based on the work of Lasserre [23], which utilized the Laplace transform method to compute the measure of the intersection between a simplex in $\mathbb{R}^m$ and $C^m$, we derive the following results.
Lemma 1.
In the non-empty bounded space $\mathcal{X} = \{x = (x_1, \ldots, x_m) \mid 0 < x_i < 1,\ i = 1, \ldots, m,\ t_1 < \sum_{k=1}^{m} c_k x_k < t_2\}$, where $c_j \neq 0$ for $j = 1, \ldots, m$, the measure of $\mathcal{X}$ is
$$V(\mathcal{X}) = \frac{1}{\Gamma(m+1) \prod_{k=1}^{m} c_k} \sum_{k=0}^{m} (-1)^k \sum_{i_0,\, 1 \le i_1 < \cdots < i_k \le m} \Big[ (t_2 - c^T \mathbf{1}_k)_+^m - (t_1 - c^T \mathbf{1}_k)_+^m \Big],$$
where the sum over $i_0$ alone (the term with $k = 0$) consists of the single summand $(t_2 - c^T \mathbf{1}_0)_+^m - (t_1 - c^T \mathbf{1}_0)_+^m$, $c = (0, c_1, \ldots, c_m)^T$, and $(x)_+ = \max\{0, x\}$.
Here, $\mathbf{1}_k$ denotes a vector in $\mathbb{R}^{m+1}$ whose entries at the 1st, $(i_1+1)$-th, …, and $(i_k+1)$-th positions are equal to 1, with all other entries equal to 0.
Proof. 
We begin by proving that Lemma 1 holds when $c_j > 0$ for $j = 1, \ldots, m$ and $0 < m \le s$. Let $f(t) = \int_{V_1^t} dx$, where $V_1^t = M_1^t \cap C^m$ and
$$M_1^t = \Big\{x = (x_1, \ldots, x_m) \,\Big|\, x_i > 0,\ i = 1, \ldots, m,\ \sum_{k=1}^{m} c_k x_k \le t\Big\}.$$
By the existence theorem for Laplace transforms, the Laplace transform $L[f]$ of $f(t)$, written $L[f] : f(t) \mapsto F(\lambda)$, exists whenever the transform parameter $\lambda$, or its real part, is positive. According to the definition of the Laplace transform, we have
$$F(\lambda) = \int_0^{+\infty} e^{-\lambda t} f(t) \, dt = \int_0^{+\infty} e^{-\lambda t} \int_{V_1^t} dx \, dt.$$
By applying Fubini’s theorem, we obtain
$$F(\lambda) = \frac{1}{\lambda} \left( \int_0^1 e^{-\lambda c_1 x_1} \, dx_1 \right) \times \cdots \times \left( \int_0^1 e^{-\lambda c_m x_m} \, dx_m \right).$$
This simplifies to
$$F(\lambda) = \frac{1}{\lambda^{m+1} \prod_{k=1}^{m} c_k} \prod_{i=1}^{m} \big(1 - e^{-\lambda c_i}\big).$$
Therefore, we have
$$F(\lambda) = \frac{1}{\Gamma(m+1) \prod_{k=1}^{m} c_k} \sum_{k=0}^{m} (-1)^k \sum_{i_0,\, 1 \le i_1 < \cdots < i_k \le m} \frac{\Gamma(m+1)\, e^{-\lambda c^T \mathbf{1}_k}}{\lambda^{m+1}}.$$
According to the shifting property of the Laplace transform, $\Gamma(m+1) e^{-\lambda c^T \mathbf{1}_k} / \lambda^{m+1}$ is the Laplace transform of the function $I(0 < c^T \mathbf{1}_k \le t)\,(t - c^T \mathbf{1}_k)^m$. Therefore, we have
$$L[f] = \frac{1}{\Gamma(m+1) \prod_{k=1}^{m} c_k} \sum_{k=0}^{m} (-1)^k \sum_{i_0,\, 1 \le i_1 < \cdots < i_k \le m} L\big[ I(t \ge c^T \mathbf{1}_k)(t - c^T \mathbf{1}_k)^m \big].$$
Thus, we obtain
$$f(t) = L^{-1}[F(\lambda)] = \frac{1}{\Gamma(m+1) \prod_{k=1}^{m} c_k} \sum_{k=0}^{m} (-1)^k \sum_{i_0,\, 1 \le i_1 < \cdots < i_k \le m} (t - c^T \mathbf{1}_k)_+^m.$$
When
$$\sum_{k=0}^{m} (-1)^k \sum_{i_0,\, 1 \le i_1 < \cdots < i_k \le m} (t - c^T \mathbf{1}_k)_+^m > 0,$$
the set $\mathcal{X}$ is non-empty. Taking $t = t_1, t_2$, we have
$$V(\mathcal{X}) = \frac{1}{\Gamma(m+1) \prod_{k=1}^{m} c_k} \sum_{k=0}^{m} (-1)^k \sum_{i_0,\, 1 \le i_1 < \cdots < i_k \le m} \Big[ (t_2 - c^T \mathbf{1}_k)_+^m - (t_1 - c^T \mathbf{1}_k)_+^m \Big].$$
Next, we prove that Lemma 1 holds when $c_j > 0$ for $j = 1, \ldots, q$ and $c_j < 0$ for $j = q+1, \ldots, m$, with $0 < q < m \le s$. Let
$$f(t) = \int_{V_2^t} dx, \qquad g(t) = I(t + md > 0)\, f(t + md), \qquad d = \min\{c_i,\ i = 1, \ldots, m\},$$
where $V_2^t = M_2^t \cap C^m$ and $M_2^t = \{x = (x_1, \ldots, x_m) \mid x_i > 0,\ i = 1, \ldots, m,\ \sum_{k=1}^{m} c_k x_k \le t\}$. When $t + md < 0$, it is clear that $g(t) = 0$. When the transform parameter $\lambda$, or its real part, is positive, the Laplace transforms $L[f]$ of $f(t)$ and $L[g]$ of $g(t)$ both exist; we write
$$L[f] : f(t) \mapsto F(\lambda), \qquad L[g] : g(t) \mapsto G(\lambda).$$
Proceeding as before and taking $t + md = t_1, t_2$, we have
$$V(\mathcal{X}) = \frac{1}{\Gamma(m+1) \prod_{k=1}^{m} c_k} \sum_{k=0}^{m} (-1)^k \sum_{i_0,\, 1 \le i_1 < \cdots < i_k \le m} \Big[ (t_2 - c^T \mathbf{1}_k)_+^m - (t_1 - c^T \mathbf{1}_k)_+^m \Big].$$
When $\sum_{k=0}^{m} (-1)^{k+m-q} \sum_{i_0,\, 1 \le i_1 < \cdots < i_k \le m} \big[ (t_2 - c^T \mathbf{1}_k)_+^m - (t_1 - c^T \mathbf{1}_k)_+^m \big] > 0$, the set $\mathcal{X}$ is non-empty and the formula is well defined.
If some $c_i = 0$, the terms with $c_i = 0$ can be excluded first, and then Lemma 1 can be applied to obtain the measure of the corresponding experimental region.    □
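Lemma 1 is easy to sanity-check numerically. The snippet below is our verification code, covering only the all-positive-coefficient case: it evaluates the inclusion–exclusion formula and compares it with a Monte Carlo estimate for the band $\{0 < x_1, x_2 < 1,\ 0.5 < x_1 + x_2 < 1.5\}$, whose area is $0.75$.

```python
import numpy as np
from itertools import combinations
from math import factorial

def volume_lemma1(c, t1, t2):
    """Measure of {x in (0,1)^m : t1 < c.x < t2} via the inclusion-
    exclusion formula of Lemma 1 (coded here for c_j > 0 only)."""
    c = np.asarray(c, dtype=float)
    m = len(c)
    total = 0.0
    for k in range(m + 1):
        for idx in combinations(range(m), k):
            shift = c[list(idx)].sum()   # c^T 1_k (the leading 0 entry drops out)
            term = max(t2 - shift, 0.0) ** m - max(t1 - shift, 0.0) ** m
            total += (-1) ** k * term
    return total / (factorial(m) * np.prod(c))

# {0 < x1, x2 < 1, 0.5 < x1 + x2 < 1.5} has area 1 - 2*(0.5**2/2) = 0.75.
v = volume_lemma1([1.0, 1.0], 0.5, 1.5)

# Monte Carlo cross-check of the same band.
X = np.random.default_rng(0).random((100000, 2))
mc = ((X.sum(axis=1) > 0.5) & (X.sum(axis=1) < 1.5)).mean()
```

The closed-form value agrees with elementary geometry exactly and with the Monte Carlo estimate up to sampling noise.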
For convenience in subsequent calculations, we can derive the following result for the integral computation based on Lemma 1.
Lemma 2.
Let $h(p, k) = \int_0^1 \cdots \int_0^1 I\big(0 < t_i < 1,\ l_1 < \sum_{i=1}^{p} c_i t_i < l_2\big) \, dt_p \cdots dt_k$, $1 \le k \le p$, where $t_i = x_i$ is held fixed for $i = 1, \ldots, k-1$; then, we have
$$h(p, k) = \frac{I\big(\sum_{i=1}^{k-1} c_i x_i < l_2\big) \sum_{j=0}^{p-k+1} (-1)^j \sum_{i_1 < \cdots < i_j} \big(l_2 - \sum_{i=1}^{k-1} c_i x_i - c_{k,p}^T \mathbf{1}_j^{p-k+2}\big)_+^{p-k+1}}{\Gamma(p-k+2) \prod_{l=k}^{p} c_l} - \frac{I\big(\sum_{i=1}^{k-1} c_i x_i < l_1\big) \sum_{j=0}^{p-k+1} (-1)^j \sum_{i_1 < \cdots < i_j} \big(l_1 - \sum_{i=1}^{k-1} c_i x_i - c_{k,p}^T \mathbf{1}_j^{p-k+2}\big)_+^{p-k+1}}{\Gamma(p-k+2) \prod_{l=k}^{p} c_l},$$
where $c_{k,p} = (0, c_k, \ldots, c_p)^T$ and $\mathbf{1}_j^{p-k+2}$ is a vector in $\mathbb{R}^{p-k+2}$ with 1s at the $i_0$, $i_1+1$, …, $i_j+1$ positions and 0s at all other positions.
Proof. 
Representing the measure in Lemma 1 as a high-dimensional integral and treating the variables $t_1, \ldots, t_{k-1}$ in the constraint condition as constants, Lemma 2 follows from Lemma 1 through a coordinate transformation.    □
Lemmas 1 and 2 provide the foundation for determining the transformation that maps the uniformly distributed points from C m to a uniform distribution within the target region.
First, we consider the case where the experimental region is subject to an arbitrary single linear constraint. Define $\mathcal{X}_0 = \{x = (x_1, \ldots, x_s) \mid 0 < x_i < 1,\ a_1 x_1 + \cdots + a_s x_s = b\}$, where $a_1, \ldots, a_s, b \in \mathbb{R}$. Through a straightforward analysis, experimental regions with a single arbitrary equality constraint can be classified into the following cases:
  (i) The intercept term $b$ is zero, and the coefficients consist of positive, negative, and zero values: $\mathcal{X}_{01} = \{x = (x_1, \ldots, x_s) \mid 0 < x_i < 1,\ i = 1, \ldots, s,\ a_1 x_1 + \cdots + a_m x_m = 0,\ a_1, \ldots, a_t > 0,\ a_{t+1}, \ldots, a_m < 0,\ 0 < t < m \le s\}$.
  (ii) The intercept term is non-zero, with some coefficients being positive while the rest are zero. Let us now examine the case where $b \ne 0$. If all coefficients are positive, we arrange them in ascending order and denote the smallest coefficient by $a_1$. If $a_1 < 1$, $\mathcal{X}_{02} = \{x = (x_1, \ldots, x_s) \mid 0 < x_i < 1,\ \sum_{i=1}^{m} a_i x_i = 1,\ a_1 < 1\}$, the corresponding variable $x_1$ is treated as a slack variable, allowing the experimental region to be relaxed into
$$\mathcal{X}_{02} = \Big\{x = (x_2, \ldots, x_s) \,\Big|\, 0 < x_i < 1,\ 1 - a_1 < \sum_{i=2}^{m} a_i x_i < 1\Big\}.$$
However, if $a_1 \ge 1$, $\mathcal{X}_{03} = \{x = (x_1, \ldots, x_s) \mid 0 < x_i < 1,\ \sum_{i=1}^{m} a_i x_i = 1,\ a_k \ge 1,\ k = 1, \ldots, m\}$, the corresponding variable $x_1$ is treated as a slack variable, and the relaxed experimental region fully coincides with its intersection with $C^s$:
$$\mathcal{X}_{03} = \Big\{x = (x_2, \ldots, x_s) \,\Big|\, 0 < x_i < 1,\ 0 < \sum_{i=2}^{m} a_i x_i < 1\Big\}.$$
  (iii) The intercept term is non-zero, and not all coefficients are positive: $\mathcal{X}_{04} = \{x = (x_1, \ldots, x_s) \mid 0 < x_i < 1,\ \sum_{i=1}^{m} a_i x_i = 1\}$, where $a_1, \ldots, a_t > 0$ and $a_{t+1}, \ldots, a_m < 0$, with $0 < t < m \le s$.
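For implementation purposes, the case analysis above can be packaged as a small dispatch routine. The helper below is our own bookkeeping sketch (names and return labels are ours), not part of the paper's construction.

```python
import numpy as np

def classify_constraint(a, b, tol=1e-12):
    """Sort a single equality constraint a.x = b into the cases discussed
    in the text. Zero coefficients are dropped first; for b != 0 the
    constraint is normalised so that the right-hand side equals 1."""
    a = np.asarray(a, dtype=float)
    a = a[np.abs(a) > tol]            # variables absent from the constraint
    if abs(b) <= tol:
        return "i"                    # zero intercept, mixed-sign coefficients
    a = a / b                         # rescale to sum(a_i x_i) = 1
    if np.all(a > 0):
        # case (ii): slack-variable relaxation; the sub-case depends on
        # whether the smallest coefficient is below 1
        return "ii" if a.min() < 1 else "ii (region coincides with the cube slice)"
    return "iii"                      # non-zero intercept, mixed signs
```

For example, $x_1 + x_2 - x_3 = 0$ falls under case (i), $0.5 x_1 + x_2 + x_3 = 1$ under case (ii), and $x_1 - x_2 + 2 x_3 = 1$ under case (iii).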
For case (i), let
$$\mathcal{X}_{01} = \Big\{x = (x_1, \ldots, x_s) \,\Big|\, 0 < x_i < 1,\ 0 < \sum_{i=1}^{m-1} a_i x_i < -a_m\Big\}$$
and let $X$ be a random vector uniformly distributed on $\mathcal{X}_{01}$ with density function
$$f(x_1, \ldots, x_{m-1}, x_{m+1}, \ldots, x_s) = \frac{I\big(0 < x_i < 1,\ 0 < \sum_{i=1}^{m-1} a_i x_i < -a_m\big)}{V(\mathcal{X}_{01})},$$
where $V(\mathcal{X}_{01})$ represents the measure of $\mathcal{X}_{01}$. By Lemma 1, we can obtain
$$V(\mathcal{X}_{01}) = \frac{\sum_{k=0}^{m-1} (-1)^k \sum_{i_0,\, 1 \le i_1 < \cdots < i_k \le m-1} \Big[ \big(-a_m - a_{1,m-1}^T \mathbf{1}_k^m\big)_+^{m-1} - \big(-a_{1,m-1}^T \mathbf{1}_k^m\big)_+^{m-1} \Big]}{\Gamma(m) \prod_{k=1}^{m-1} a_k},$$
where $a_{1,m-1} = (0, a_1, \ldots, a_{m-1})^T$ and $\mathbf{1}_k^m$ is the analogue of $\mathbf{1}_k$ in $\mathbb{R}^m$.
Following the approach outlined in Section 2, the marginal distribution function of $X_1$ is
$$F_{X_1}^1(x_1) = \int_0^{x_1} f_{X_1}(t_1) \, dt_1 = \frac{1}{V(\mathcal{X}_{01})} \int_0^{x_1} A_1 \, dt_1,$$
where
$$A_1 = \int_0^1 \cdots \int_0^1 I\Big(0 < t_i < 1,\ 0 < \sum_{k=1}^{m-1} a_k t_k < -a_m\Big) \, dt_{m-1} \cdots dt_2.$$
Additionally, the conditional distribution can be expressed as
$$F_{i|1,\ldots,i-1}^1(x_i) = \int_0^{x_i} f_{i|1,\ldots,i-1}(t_i) \, dt_i = \int_0^{x_i} \frac{f(t_1, \ldots, t_i)}{f(t_1, \ldots, t_{i-1})} \, dt_i = \int_0^{x_i} \frac{\int_0^1 \cdots \int_0^1 I\big(0 < t_i < 1,\ 0 < \sum_{k=1}^{m-1} a_k t_k < -a_m\big) \, dt_{m-1} \cdots dt_{i+1}}{\int_0^1 \cdots \int_0^1 I\big(0 < t_i < 1,\ 0 < \sum_{k=1}^{m-1} a_k t_k < -a_m\big) \, dt_{m-1} \cdots dt_i} \, dt_i.$$
Thus, we have
$$F_{X_1}^1(x_1) = \frac{1}{V(\mathcal{X}_{01}) \Gamma(m) \prod_{k=1}^{m-1} a_k} \sum_{k=0}^{m-2} (-1)^k \sum_{i_1 < \cdots < i_k} \Big[ \big(-a_m - a_{2,m-1}^T \mathbf{1}_k^{m-1}\big)_+^{m-1} \, I\big(-a_m - a_1 x_1 - a_{2,m-1}^T \mathbf{1}_k^{m-1} \le 0\big) + \big(-a_1 x_1 - a_{2,m-1}^T \mathbf{1}_k^{m-1}\big)_+^{m-1} - \big(-a_m - a_1 x_1 - a_{2,m-1}^T \mathbf{1}_k^{m-1}\big)_+^{m-1} - \big(-a_{2,m-1}^T \mathbf{1}_k^{m-1}\big)_+^{m-1} \, I\big(-a_1 x_1 - a_{2,m-1}^T \mathbf{1}_k^{m-1} \le 0\big) \Big], \tag{1}$$
$$F_{i|1,\ldots,i-1}^1(x_i) = \frac{K_1}{\Gamma(m-i+1) \prod_{k=i}^{m-1} a_k} \sum_{k=0}^{m-i-1} (-1)^k \sum_{i_1 < \cdots < i_k} \Big[ \big(-a_m - \sum_{l=1}^{i-1} a_l x_l - a_{i+1,m-1}^T \mathbf{1}_k^{m-i}\big)_+^{m-i} \, I\big(-a_m - \sum_{l=1}^{i} a_l x_l - a_{i+1,m-1}^T \mathbf{1}_k^{m-i} \le 0\big) + \big(-\sum_{l=1}^{i} a_l x_l - a_{i+1,m-1}^T \mathbf{1}_k^{m-i}\big)_+^{m-i} - \big(-a_m - \sum_{l=1}^{i} a_l x_l - a_{i+1,m-1}^T \mathbf{1}_k^{m-i}\big)_+^{m-i} - \big(-\sum_{l=1}^{i-1} a_l x_l - a_{i+1,m-1}^T \mathbf{1}_k^{m-i}\big)_+^{m-i} \, I\big(-\sum_{l=1}^{i} a_l x_l - a_{i+1,m-1}^T \mathbf{1}_k^{m-i} \le 0\big) \Big], \tag{2}$$
where
$$K_1 = \frac{I\big(\sum_{k=1}^{i} a_k x_k < 0,\ 2 \le i \le m-t-1\big) + I\big(m-t < i \le m-1\big)}{h(m-1, i, 0, -a_m)},$$
$$h(p, i, l_1, l_2) = \frac{\sum_{j=0}^{p-i+1} (-1)^j \sum_{i_1 < \cdots < i_j} \big(l_1 - \sum_{k=1}^{i-1} a_k x_k - a_{i,p}^T \mathbf{1}_j^{p-i+2}\big)_+^{p-i+1}}{\Gamma(p-i+2) \prod_{l=i}^{p} a_l} - \frac{\sum_{j=0}^{p-i+1} (-1)^j \sum_{i_1 < \cdots < i_j} \big(l_2 - \sum_{k=1}^{i-1} a_k x_k - a_{i,p}^T \mathbf{1}_j^{p-i+2}\big)_+^{p-i+1}}{\Gamma(p-i+2) \prod_{l=i}^{p} a_l},$$
and $a_{i,p} = (0, a_i, \ldots, a_p)^T$.
Building on the concept from Section 2, let each component of a uniformly distributed random vector $(u_1, \ldots, u_{s-1})$ in the unit hypercube correspond to the distribution function and conditional distribution functions of $(X_1, \ldots, X_{m-1}, X_{m+1}, \ldots, X_s)$; we can derive the transformation formula by combining (1) and (2):
$$F_{X_1}^1(x_1) = u_{i_1}, \qquad F_{j|1,\ldots,j-1}^1(x_j) = u_{i_j},\ j = 2, \ldots, m-1, \qquad x_m = -\sum_{i=1}^{m-1} (a_i / a_m)\, x_i, \qquad x_k = u_{i_{k-1}},\ k = m+1, \ldots, s, \tag{3}$$
where $u = (u_{i_1}, \ldots, u_{i_{s-1}})$ are test points uniformly distributed on $C^{s-1}$. In Equation (3), we first assume that $(X_1, \ldots, X_{m-1}, X_{m+1}, \ldots, X_s)$ is uniformly distributed in the experimental region. Then, the vector composed of its distribution function $F_{X_1}^1(x_1)$ and conditional distribution functions $F_{i|1,\ldots,i-1}^1(x_i)$, $i = 2, \ldots, m-1$, follows a uniform distribution in the unit hypercube of the same dimension. Thus, each component of the uniformly distributed random vector $(u_1, \ldots, u_{s-1})$ corresponds to a distribution function or conditional distribution function of $(X_1, \ldots, X_{m-1}, X_{m+1}, \ldots, X_s)$. The components that are not subject to the linear constraint can be represented directly by the corresponding components of $(u_1, \ldots, u_{s-1})$, and the linear constraint itself determines the slack variable $X_m$, which yields the third relationship in (3).
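To see the transformation in action, consider a hand-worked instance of case (i) with $s = 4$, $m = 3$, and the constraint $2x_1 + x_2 - x_3 = 0$, so that $x_3$ is the slack variable and $(x_1, x_2)$ is uniform on $\{0 < x_1, x_2 < 1,\ 0 < 2x_1 + x_2 < 1\}$. For this instance, the marginal and conditional distribution functions reduce to $F_{X_1}(x_1) = 4x_1(1 - x_1)$ on $(0, 1/2)$ and $F_{2|1}(x_2 \mid x_1) = x_2/(1 - 2x_1)$; we derived these closed forms directly for this small example, independently of the general formulas, and they invert explicitly.

```python
import numpy as np

# Worked instance of the case (i) transformation: s = 4, constraint
# 2*x1 + x2 - x3 = 0, slack variable x3, free coordinate x4.
def transform(u):
    x1 = (1.0 - np.sqrt(1.0 - u[:, 0])) / 2.0   # inverts 4*x1*(1 - x1) = u1
    x2 = u[:, 1] * (1.0 - 2.0 * x1)             # inverts x2/(1 - 2*x1) = u2
    x3 = 2.0 * x1 + x2                          # slack variable from the constraint
    x4 = u[:, 2]                                # unconstrained coordinate
    return np.column_stack([x1, x2, x3, x4])

rng = np.random.default_rng(2)
design = transform(rng.random((500, 3)))
```

Every transformed point satisfies the constraint exactly and stays inside the unit cube, and the share of points with $x_1 < 1/4$ matches $F_{X_1}(1/4) = 3/4$ up to sampling noise.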
We now turn our attention to the uniformity of the distribution of experimental points obtained through the transformation (3). Based on the above results, we can derive the following result.
Theorem 1.
In the non-empty bounded space $\mathcal{X}_{01} = \{x = (x_1, \ldots, x_s) \mid 0 < x_i < 1,\ i = 1, \ldots, s,\ a_1 x_1 + \cdots + a_m x_m = 0,\ a_1, \ldots, a_t > 0,\ a_{t+1}, \ldots, a_m < 0,\ 0 < t < m \le s\}$, let $u = (u_1, \ldots, u_{s-1})$ be a random vector uniformly distributed over $(0,1)^{s-1}$. Then, the random vector obtained through the transformation in (3) is uniformly distributed over $\mathcal{X}_{01}$.
Proof. 
Let $X = (X_1, \ldots, X_s)$ and $U = (U_1, \ldots, U_{s-1})$ represent random vectors uniformly distributed over $\mathcal{X}_{01}$ and $(0,1)^{s-1}$, respectively, satisfying (3). The distribution function of $X$ is given as
$$F_X(x) = P\Big(X_1 \le x_1, \ldots, X_{m-1} \le x_{m-1}, X_{m+1} \le x_{m+1}, \ldots, X_s \le x_s,\ -\frac{1}{a_m} \sum_{i=1}^{m-1} a_i X_i \le x_m\Big) = P\Big(-\frac{1}{a_m} \sum_{i=1}^{m-1} a_i X_i \le x_m \,\Big|\, X_1 \le x_1, \ldots, X_{m-1} \le x_{m-1}\Big) \times P\big(X_1 \le x_1, \ldots, X_{m-1} \le x_{m-1}, X_{m+1} \le x_{m+1}, \ldots, X_s \le x_s\big).$$
Let $C = (C_1, \ldots, C_{m-1}, U_m, \ldots, U_{s-1})$ represent a random vector uniformly distributed over $\mathcal{X}_{01}$, and note that $X_{-m} = (X_1, \ldots, X_{m-1}, X_{m+1}, \ldots, X_s)$ is a random vector over the space $\mathcal{X}_{01}$. Define $g = (F_1^{-1}, F_{2|1}^{-1}, \ldots, F_{m-1|1,\ldots,m-2}^{-1}, \sigma, \ldots, \sigma)$, with $\sigma$ the identity map, as an invertible transformation from $U$ to $X_{-m}$ such that $g(U) = X_{-m}$, where $F_1^{-1}(\cdot)$ is the inverse of the marginal distribution function $F_1(\cdot)$ of $C$, and $F_{2|1}^{-1}(\cdot), \ldots, F_{m-1|1,\ldots,m-2}^{-1}(\cdot)$ are the inverses of the conditional distribution functions $F_{i|1,\ldots,i-1}(\cdot \mid C_1 = c_1, \ldots, C_{i-1} = c_{i-1})$ of $C$, respectively. The joint distribution functions of $U$ and $C$ are given by
$$F_U(u) = u_1 \cdots u_{s-1} \, I(0 < u_i < 1,\ i = 1, \ldots, s-1),$$
$$F_C(c) = \frac{c_1 \cdots c_{m-1} u_m \cdots u_{s-1}}{V(\mathcal{X}_{01})} \, I\big((c_1, \ldots, c_{m-1}, u_m, \ldots, u_{s-1}) \in \mathcal{X}_{01}\big).$$
Thus, we have
$$P\big(X_1 \le x_1, \ldots, X_{m-1} \le x_{m-1}, X_{m+1} \le x_{m+1}, \ldots, X_s \le x_s\big) = P\big(g_1(U_1) \le x_1, \ldots, g_{m-1}(U_{m-1}) \le x_{m-1}, U_m \le x_{m+1}, \ldots, U_{s-1} \le x_s\big) = P\big(U_1 \le F_1(x_1), \ldots, U_{m-1} \le F_{m-1|1,\ldots,m-2}(x_{m-1}), U_m \le x_{m+1}, \ldots, U_{s-1} \le x_s\big) = F_1(x_1) \cdots F_{m-1|1,\ldots,m-2}(x_{m-1}) \, F_{m,\ldots,s-1}(x_{m+1}, \ldots, x_s) = F_C\big(x_1, \ldots, x_{m-1}, x_{m+1}, \ldots, x_s\big) \, I\big((x_1, \ldots, x_{m-1}, x_{m+1}, \ldots, x_s) \in \mathcal{X}_{01}\big) = \frac{x_1 \cdots x_{m-1} x_{m+1} \cdots x_s}{V(\mathcal{X}_{01})} \, I\big((x_1, \ldots, x_{m-1}, x_{m+1}, \ldots, x_s) \in \mathcal{X}_{01}\big).$$
From the above discussion, we can conclude that
$$F_X(x) = P\Big(-\frac{1}{a_m} \sum_{i=1}^{m-1} a_i X_i \le x_m \,\Big|\, X_1 \le x_1, \ldots, X_{m-1} \le x_{m-1}\Big) \frac{x_1 \cdots x_{m-1} x_{m+1} \cdots x_s}{V(\mathcal{X}_{01})} \, I\big((x_1, \ldots, x_{m-1}, x_{m+1}, \ldots, x_s) \in \mathcal{X}_{01}\big) = \frac{x_1 \cdots x_s}{V(\mathcal{X}_{01})} \, I\big((x_1, \ldots, x_s) \in \mathcal{X}_{01}\big).$$
Therefore, $X = (X_1, \ldots, X_s)$, obtained through the transformation (3), is uniformly distributed over $\mathcal{X}_{01}$.    □
From Theorem 1, we can conclude that the transformed test points are uniformly distributed over X 01 .
For case (ii), consider the case where $0 < a_1 < 1$ and $1 - a_1 < \sum_{i=2}^{m} a_i x_i = 1 - a_1 x_1 < 1$. Treating $x_1$ as a slack variable, the new experimental region can be represented as $\mathcal{X}_{02} = \{x = (x_2, \ldots, x_s) \mid 0 < x_i < 1,\ 1 - a_1 < \sum_{i=2}^{m} a_i x_i < 1,\ a_i > 0,\ i = 2, \ldots, m,\ 0 < a_1 < 1\}$. Let $X = (X_2, \ldots, X_s)$ be a uniformly distributed random vector in $\mathcal{X}_{02}$. We can derive the density function $f(x_2, \ldots, x_s)$ and the measure of the region $\mathcal{X}_{02}$:
$$V(\mathcal{X}_{02}) = \frac{\sum_{k=0}^{m-1} (-1)^k \sum_{i_1 < \cdots < i_k} \Big[ \big(1 - a_{2,m}^T \mathbf{1}_k^m\big)_+^{m-1} - \big(1 - a_1 - a_{2,m}^T \mathbf{1}_k^m\big)_+^{m-1} \Big]}{\Gamma(m) \prod_{k=2}^{m} a_k}.$$
Then, by Lemma 2, the corresponding marginal distribution function can be expressed as
$$\begin{aligned}
F^{2}_{X_2}(x_2) ={}& K_2 \sum_{k=0}^{m-2} (-1)^k \sum_{i_1 < \cdots < i_k} \frac{\big(1 - a_{3,m}^{\top} 1_k^{m-1}\big)_+^{m-1}\, I\big(1 - a_2 x_2 - a_{3,m}^{\top} 1_k^{m-1} \ge 0\big)}{\Gamma(m) \prod_{j=1}^{m-1} a_{j+1}}\\
&+ K_2 \sum_{k=0}^{m-2} (-1)^k \sum_{i_1 < \cdots < i_k} \frac{\big(1 - a_1 - a_2 x_2 - a_{3,m}^{\top} 1_k^{m-1}\big)_+^{m-1}}{\Gamma(m) \prod_{j=1}^{m-1} a_{j+1}}\\
&- K_2 \sum_{k=0}^{m-2} (-1)^k \sum_{i_1 < \cdots < i_k} \frac{\big(1 - a_2 x_2 - a_{3,m}^{\top} 1_k^{m-1}\big)_+^{m-1}}{\Gamma(m) \prod_{j=1}^{m-1} a_{j+1}}\\
&- K_2 \sum_{k=0}^{m-2} (-1)^k \sum_{i_1 < \cdots < i_k} \frac{\big(1 - a_1 - a_{3,m}^{\top} 1_k^{m-1}\big)_+^{m-1}\, I\big(1 - a_1 - a_2 x_2 - a_{3,m}^{\top} 1_k^{m-1} \ge 0\big)}{\Gamma(m) \prod_{j=1}^{m-1} a_{j+1}},
\end{aligned}$$
where
$$K_2 = \frac{I(0 < a_2 x_2 < 1)}{V(X_{02})},$$
and the conditional distribution functions
$$\begin{aligned}
F^{2}_{i|2,\ldots,i-1}(x_i) ={}& K_3 \sum_{k=0}^{m-i} (-1)^k \sum_{i_1 < \cdots < i_k} \frac{\big(1 - \sum_{j=2}^{i-1} a_j x_j - a_{i+1,m}^{\top} 1_k^{m-i+1}\big)_+^{m-i+1}\, I\big(1 - \sum_{j=2}^{i} a_j x_j - a_{i+1,m}^{\top} 1_k^{m-i+1} \ge 0\big)}{\Gamma(m-i+2) \prod_{j=i-1}^{m-1} a_{j+1}}\\
&+ K_3 \sum_{k=0}^{m-i} (-1)^k \sum_{i_1 < \cdots < i_k} \frac{\big(1 - a_1 - \sum_{j=2}^{i} a_j x_j - a_{i+1,m}^{\top} 1_k^{m-i+1}\big)_+^{m-i+1}}{\Gamma(m-i+2) \prod_{j=i-1}^{m-1} a_{j+1}}\\
&- K_3 \sum_{k=0}^{m-i} (-1)^k \sum_{i_1 < \cdots < i_k} \frac{\big(1 - \sum_{j=2}^{i} a_j x_j - a_{i+1,m}^{\top} 1_k^{m-i+1}\big)_+^{m-i+1}}{\Gamma(m-i+2) \prod_{j=i-1}^{m-1} a_{j+1}}\\
&- K_3 \sum_{k=0}^{m-i} (-1)^k \sum_{i_1 < \cdots < i_k} \frac{\big(1 - a_1 - \sum_{j=2}^{i-1} a_j x_j - a_{i+1,m}^{\top} 1_k^{m-i+1}\big)_+^{m-i+1}\, I\big(1 - a_1 - \sum_{j=2}^{i} a_j x_j - a_{i+1,m}^{\top} 1_k^{m-i+1} \ge 0\big)}{\Gamma(m-i+2) \prod_{j=i-1}^{m-1} a_{j+1}},
\end{aligned}$$
where
$$K_3 = \frac{I\big(0 < \sum_{j=2}^{i} a_j x_j < 1\big)}{h(m-1,\, i-1,\, 1-a_1,\, 1)}.$$
Building on the concept from Section 2, let each component of the uniformly distributed random vector $(u_1, \ldots, u_{s-1})$ in the unit hypercube correspond to the distribution function and conditional distribution functions of $(X_2, \ldots, X_s)$. The third equation below follows from the constraint condition; combining (4) and (5), we can derive the transformation formula
$$F^{2}_{X_2}(x_2) = u_{i_1},\qquad F^{2}_{j|2,\ldots,j-1}(x_j) = u_{i_{j-1}},\ j = 3, \ldots, m,\qquad x_1 = \frac{1}{a_1}\Big(1 - \sum_{i=2}^{m} a_i x_i\Big),\qquad x_k = u_{i_{k-1}},\ k = m+1, \ldots, s,$$
where $u = (u_{i_1}, \ldots, u_{i_{s-1}})$ are the test points uniformly distributed on $C^{s-1}$. We now turn our attention to the uniformity of the distribution of the experimental points obtained by the transformation (6). Based on the previous results, we have the following conclusion.
Theorem 2.
In the non-empty bounded space $X_{02} = \{x = (x_1, \ldots, x_s) \mid 0 < x_i < 1,\ \sum_{i=1}^{m} a_i x_i = 1,\ 0 < a_1 < 1\}$ with $a_1 < \cdots < a_m$, let $u = (u_1, \ldots, u_{s-1})$ be a uniformly distributed random vector over $(0,1)^{s-1}$. Then, the random vector obtained through transformation (6) is also uniformly distributed over $X_{02}$.
Thus, by Theorem 2, it is established that the transformed test points are also uniformly distributed over X 02 .
When $a_1 \ge 1$, we have $0 < \sum_{i=2}^{m} a_i x_i = 1 - a_1 x_1 < 1$. In this scenario, we can treat $x_1$ as a slack variable, and the new experimental region can be represented as $X_{03} = \{x = (x_2, \ldots, x_s) \mid 0 < x_i < 1,\ 0 < \sum_{i=2}^{m} a_i x_i < 1,\ a_i \ge 1\}$. Let $X = (X_2, \ldots, X_s)$ be a uniformly distributed random vector in $X_{03}$. The density function can then be expressed as $f(x_2, \ldots, x_s) = I\big(0 < x_i < 1,\ 0 < \sum_{i=2}^{m} a_i x_i < 1,\ a_i \ge 1\big)/V(X_{03})$, where $V(X_{03}) = 1/[\Gamma(m) \prod_{k=2}^{m} a_k]$. By Lemma 2, the corresponding marginal distribution function can then be expressed as
$$F^{3}_{X_2}(x_2) = \big[1 - (1 - a_2 x_2)^{m-1}\big]\, I(0 < x_2 < 1/a_2).$$
Additionally, the conditional distribution functions for i = 3 , , m can be expressed as
$$F^{3}_{i|2,\ldots,i-1}(x_i) = \bigg\{1 - \Big[1 - \frac{a_i x_i}{1 - \sum_{k=2}^{i-1} a_k x_k}\Big]^{m-i+1}\bigg\}\, I\bigg(0 < x_i < \frac{1 - \sum_{k=2}^{i-1} a_k x_k}{a_i} < \frac{1}{a_i}\bigg).$$
Similarly, we can derive the design using the transformation
$$x_1 = a_1^{-1} \prod_{j=0}^{m-1} (1 - u_{i_j})^{1/(m-j)},\qquad x_k = a_k^{-1}\Big[1 - (1 - u_{i_{k-1}})^{1/(m-k+1)}\Big] \prod_{j=0}^{k-2} (1 - u_{i_j})^{1/(m-j)},\ k = 2, \ldots, m,$$
$$x_k = u_{i_{k-1}},\ k = m+1, \ldots, s,\qquad 0 < u_{i_j} < 1,\ j = 1, \ldots, s-1,$$
where u = ( u i 1 , , u i s 1 ) are the test points uniformly distributed on C s 1 . We now turn our attention to the uniformity of the distribution of the experimental points obtained by the transformation (9).
Theorem 3.
In the non-empty bounded space $X_{03} = \{x = (x_1, \ldots, x_s) \mid 0 < x_i < 1,\ \sum_{i=1}^{m} a_i x_i = 1,\ 1 \le a_1 \le \cdots \le a_m\}$, let $u = (u_1, \ldots, u_{s-1})$ be a random vector uniformly distributed over $(0,1)^{s-1}$, and adopt the convention $(1 - u_{i_0})^{1/m} = 1$. Then, the random vector obtained through the transformation (9) is also uniformly distributed over $X_{03}$.
From Theorem 3, we can conclude that the transformed test points are uniformly distributed over X 03 . When all coefficients are equal to 1, this conclusion degenerates into the result of uniform designs over the standard simplex presented in Fang and Wang [7].
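As a concrete illustration, transformation (9) can be computed directly, since each coordinate only needs a running product of the factors $(1-u_{i_j})^{1/(m-j)}$. The sketch below does this for the case $a_i \ge 1$; the function name and the sample coefficients are ours for illustration, not the paper's.

```python
def irt_design_point(u, a, m):
    """Transformation (9): map u in (0,1)^(s-1) to x in (0,1)^s with
    sum_{k=1}^{m} a_k * x_k = 1, for coefficients a_1, ..., a_m >= 1.
    The j = 0 factor of the products is 1 by the theorem's convention."""
    s = len(u) + 1
    # P[j] = prod_{l=1}^{j} (1 - u_l)^(1/(m-l)), with P[0] = 1
    P = [1.0]
    for j in range(1, m):
        P.append(P[-1] * (1.0 - u[j - 1]) ** (1.0 / (m - j)))
    x = [0.0] * s
    x[0] = P[m - 1] / a[0]                      # x_1 absorbs the slack
    for k in range(2, m + 1):                   # x_k = (P_{k-2} - P_{k-1}) / a_k
        x[k - 1] = (P[k - 2] - P[k - 1]) / a[k - 1]
    for k in range(m + 1, s + 1):               # unconstrained coordinates
        x[k - 1] = u[k - 2]
    return x
```

The products telescope, so $\sum_{k=1}^m a_k x_k = P_0 = 1$ holds exactly for every input point, which is an easy sanity check on an implementation.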
For the third case, consider the condition $1 < \sum_{i=1}^{m-1} a_i x_i = 1 - a_m x_m < 1 - a_m$. Treat $x_m$ as a slack variable; the new experimental region can be represented as $X_{04} = \{x = (x_1, \ldots, x_{m-1}, x_{m+1}, \ldots, x_s) \mid 0 < x_i < 1,\ 1 < \sum_{i=1}^{m-1} a_i x_i < 1 - a_m,\ a_1, \ldots, a_t > 0,\ a_{t+1}, \ldots, a_{m-1} < 0\}$. Let $X = (X_1, \ldots, X_{m-1}, X_{m+1}, \ldots, X_s)$ be a uniformly distributed random vector in $X_{04}$. The density function $f(x)$ can then be obtained, and the measure of the region $X_{04}$ is given by
$$V(X_{04}) = \sum_{k=0}^{m-1} (-1)^k \sum_{1 \le i_1 < \cdots < i_k \le m-1} \frac{\big(1 - a_m - a_{1,m-1}^{\top} 1_k^{m}\big)^{m-1} - \big(1 - a_{1,m-1}^{\top} 1_k^{m}\big)^{m-1}}{\Gamma(m) \prod_{j=1}^{m} a_j}.$$
By Lemma 2, the corresponding marginal distribution function can be expressed as
$$\begin{aligned}
F^{4}_{X_1}(x_1) ={}& \sum_{k=0}^{m-2} (-1)^k \sum_{i_1 < \cdots < i_k} \frac{\big(1 - a_m - a_{2,m-1}^{\top} 1_k^{m-1}\big)_+^{m-1}\, I\big(1 - a_m - a_1 x_1 - a_{2,m-1}^{\top} 1_k^{m-1} \ge 0\big)}{V(X_{04})\, \Gamma(m) \prod_{j=1}^{m-1} a_j}\\
&+ \sum_{k=0}^{m-2} (-1)^k \sum_{i_1 < \cdots < i_k} \frac{\big(1 - a_1 x_1 - a_{2,m-1}^{\top} 1_k^{m-1}\big)_+^{m-1}}{V(X_{04})\, \Gamma(m) \prod_{j=1}^{m-1} a_j}\\
&- \sum_{k=0}^{m-2} (-1)^k \sum_{i_1 < \cdots < i_k} \frac{\big(1 - a_m - a_1 x_1 - a_{2,m-1}^{\top} 1_k^{m-1}\big)_+^{m-1}}{V(X_{04})\, \Gamma(m) \prod_{j=1}^{m-1} a_j}\\
&- \sum_{k=0}^{m-2} (-1)^k \sum_{i_1 < \cdots < i_k} \frac{\big(1 - a_{2,m-1}^{\top} 1_k^{m-1}\big)_+^{m-1}\, I\big(1 - a_1 x_1 - a_{2,m-1}^{\top} 1_k^{m-1} \ge 0\big)}{V(X_{04})\, \Gamma(m) \prod_{j=1}^{m-1} a_j},
\end{aligned}$$
and the conditional distribution functions
$$\begin{aligned}
F^{4}_{i|1,\ldots,i-1}(x_i) ={}& K_4 \sum_{k=0}^{m-i-1} (-1)^k \sum_{i_1 < \cdots < i_k} \frac{\big(1 - a_m - \sum_{j=1}^{i-1} a_j x_j - a_{i+1,m-1}^{\top} 1_k^{m-i}\big)_+^{m-i}\, I\big(1 - a_m - \sum_{j=1}^{i} a_j x_j - a_{i+1,m-1}^{\top} 1_k^{m-i} \ge 0\big)}{\Gamma(m-i+1) \prod_{j=i}^{m-1} a_j}\\
&+ K_4 \sum_{k=0}^{m-i-1} (-1)^k \sum_{i_1 < \cdots < i_k} \frac{\big(1 - \sum_{j=1}^{i} a_j x_j - a_{i+1,m-1}^{\top} 1_k^{m-i}\big)_+^{m-i}}{\Gamma(m-i+1) \prod_{j=i}^{m-1} a_j}\\
&- K_4 \sum_{k=0}^{m-i-1} (-1)^k \sum_{i_1 < \cdots < i_k} \frac{\big(1 - a_m - \sum_{j=1}^{i} a_j x_j - a_{i+1,m-1}^{\top} 1_k^{m-i}\big)_+^{m-i}}{\Gamma(m-i+1) \prod_{j=i}^{m-1} a_j}\\
&- K_4 \sum_{k=0}^{m-i-1} (-1)^k \sum_{i_1 < \cdots < i_k} \frac{\big(1 - \sum_{j=1}^{i-1} a_j x_j - a_{i+1,m-1}^{\top} 1_k^{m-i}\big)_+^{m-i}\, I\big(1 - \sum_{j=1}^{i} a_j x_j - a_{i+1,m-1}^{\top} 1_k^{m-i} \ge 0\big)}{\Gamma(m-i+1) \prod_{j=i}^{m-1} a_j},
\end{aligned}$$
where
$$K_4 = \frac{I\big(\sum_{k=1}^{i} a_k x_k < 0\big) + I(m - t < i \le m-1)}{h(m-1,\, i,\, 0,\, a_m)}.$$
Similarly, we can derive the design using the transformation
$$F^{4}_{X_1}(x_1) = u_{i_1},\qquad F^{4}_{j|1,\ldots,j-1}(x_j) = u_{i_j},\ j = 2, \ldots, m-1,\qquad x_m = \frac{1}{a_m}\Big(1 - \sum_{i=1}^{m-1} a_i x_i\Big),\qquad x_k = u_{i_{k-1}},\ k = m+1, \ldots, s.$$
Based on the previous results, we can derive the following result.
Theorem 4.
In the non-empty bounded space $X_{04} = \{x = (x_1, \ldots, x_s) \mid 0 < x_i < 1,\ \sum_{i=1}^{m} a_i x_i = 1,\ a_1, \ldots, a_t > 0,\ a_{t+1}, \ldots, a_m < 0\}$, let $u = (u_1, \ldots, u_{s-1})$ be a random vector uniformly distributed over $(0,1)^{s-1}$. Then, the random vector obtained through transformation (12) is also uniformly distributed over $X_{04}$.
The proofs of Theorems 2–4 can be established in a manner similar to that of Theorem 1, and we omit them. Thus, by Theorem 4, we have established that the transformed test points are also uniformly distributed over X 04 .
Based on the aforementioned results, the steps for identifying a uniform design over the experimental region X 0 , subject to an arbitrary linear constraint, can be summarized in Algorithm 1.
Algorithm 1 General construction algorithm for one arbitrary linear constraint
1: Input: the experimental domain $X_0$;
2: Step 1: Compare $b$ with 0. If $b = 0$, proceed to Step 2. If $b \ne 0$, divide both sides of the constraint equation by $b$ to obtain a new constraint $a_1' x_1 + \cdots + a_s' x_s = 1$, where $a_i' = a_i / b$, and proceed to Step 3;
3: Step 2: If all $a_i > 0$ (or all $a_i \le 0$) for $i = 1, \ldots, s$, then $X_0$ is empty. Otherwise, reorder $a_1, \ldots, a_s$ such that $a_1, \ldots, a_t > 0$ and $a_{t+1}, \ldots, a_m < 0$, with $0 < t < m \le s$, and transform $X_0$ into $X_{01}$;
4: Step 3: If all $a_i > 0$ for $i = 1, \ldots, s$, arrange them in ascending order: if $a_1 < 1$, transform $X_0$ into $X_{02}$; if $a_1 \ge 1$, transform $X_0$ into $X_{03}$. If a rearrangement exists such that $a_1, \ldots, a_t > 0$ and $a_{t+1}, \ldots, a_m < 0$, with $0 < t < m \le s$, transform $X_0$ into $X_{04}$; otherwise, $X_0$ is empty;
5: Step 4: Calculate the corresponding marginal distribution function (1) (or (4), (7), (10)) and the conditional distribution functions (2) (or (5), (8), (11)) on $X_{01}$ (or $X_{02}$, $X_{03}$, $X_{04}$);
6: Step 5: Given a set of uniformly distributed test points in $(0,1)^{s-1}$ generated using existing methods (such as the good lattice point method), randomly permute the coordinates of each point and denote the result by $(u_{i_1}, \ldots, u_{i_{s-1}})$. The $(s-1)!$ coordinate orderings then yield $(s-1)!$ distinct designs uniformly distributed over $X_0$ via transformation (3) (or (6), (9), (12));
7: Output: Compare the CCD values of the $(s-1)!$ designs and select the one with the smallest CCD as the final design points, denoted by $P_n$.
In Step 2, based on the idea of slack variable models, as discussed by Schneider et al. [18] and Schneider et al. [19], we can choose x i as a slack variable in a linear programming problem. Using the existing constraints, the new experimental domain can be expressed as X 01 . Similarly, in Step 3, the new experimental domains can be denoted as X 02 , X 03 and X 04 . After identifying the uniformly distributed test points within these experimental domains, it remains to prove that these design points are also uniformly distributed in the original experimental domain.
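Step 5 of Algorithm 1 is a simple enumerate-and-select loop over the $(s-1)!$ coordinate orderings. A minimal sketch follows; since the CCD formula is not reproduced in the text, a maximin-distance score stands in for it here, and all names are ours for illustration.

```python
from itertools import permutations

def best_design(base_points, transform, criterion):
    """Push every coordinate ordering of a base design in (0,1)^(s-1)
    through the region transformation and keep the design with the
    smallest criterion value (Step 5 of Algorithm 1)."""
    s1 = len(base_points[0])
    best, best_score = None, float("inf")
    for perm in permutations(range(s1)):
        design = [transform([p[j] for j in perm]) for p in base_points]
        score = criterion(design)
        if score < best_score:
            best, best_score = design, score
    return best, best_score

def maximin_score(design):
    """Stand-in uniformity score: negated minimum pairwise squared
    distance. The paper ranks the candidate designs by CCD instead."""
    return -min(sum((a - b) ** 2 for a, b in zip(p, q))
                for i, p in enumerate(design) for q in design[i + 1:])
```

Swapping `maximin_score` for a CCD routine recovers the algorithm as stated; the enumeration is feasible only for moderate $s$, since the number of orderings grows factorially.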
In the following, we provide an example to facilitate a better understanding of Algorithm 1.
Example 1.
Consider
$$D_1 = \{x = (x_1, x_2, x_3)^{\top} \mid 0 < x_i < 1,\ 2x_1 + \tfrac{1}{2}x_2 - x_3 = 1,\ i = 1, 2, 3\}.$$
By Algorithm 1, we obtain
$$u_1 = F(x_1) = \big[(2 - 2x_1)^2 - 4\big] I(0 < x_1 < 1) + \big[(1 - 2x_1)^2 - 1\big] I(0 < x_1 \le 1/2) + \big[(3/2 - 2x_1)^2 - 9/4\big] I(0 < x_1 \le 3/4) - \big[(1/2 - 2x_1)^2 - 1/4\big] I(0 < x_1 < 1/4),$$
and
$$u_2 = F(x_2 \mid x_1) = (4 - 4x_1)\, I(0 < x_1 \le 1)\, I(2 - 2x_1 - x_2/2 \ge 0) + (2 - 4x_1 - x_2)\, I(2 - 4x_1 - x_2 \ge 0) - (4 - 4x_1 - x_2)\, I(4 - 4x_1 - x_2 \ge 0) - (2 - 4x_1)\, I(2 - 4x_1 \ge 0)\, I(2 - 4x_1 - x_2 \ge 0).$$
We can solve Equations (13) and (14) to obtain
$$x_1 = 0.25\big(1 + 2\sqrt{u_1}\big)\, I(0 \le u_1 < 0.25) + (0.5u_1 + 0.375)\, I(0.25 \le u_1 < 0.75) + 0.5\big(2 - \sqrt{1 - u_1}\big)\, I(0.75 \le u_1 < 1)$$
and
$$x_2 = \big[1 + 2(u_2 - 1)\sqrt{u_1}\big]\, I(0 \le u_1 < 0.25) + u_2\, I(0.25 \le u_1 < 0.75) + 2u_2\sqrt{1 - u_1}\, I(0.75 \le u_1 < 1).$$
Thus, based on the constraint $x_3 = 2x_1 + x_2/2 - 1$, we have
$$x_3 = u_2\sqrt{u_1}\, I(0 \le u_1 < 0.25) + (u_1 + 0.5u_2 - 0.25)\, I(0.25 \le u_1 < 0.75) + \big[1 - (1 - u_2)\sqrt{1 - u_1}\big]\, I(0.75 \le u_1 < 1).$$
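The closed-form maps (15)-(17) are easy to implement and to check against the constraint. A sketch follows; the function name is ours for illustration.

```python
import math

def d1_point(u1, u2):
    """Equations (15)-(17): map (u1, u2) in (0,1)^2 onto
    D1 = {0 < x_i < 1, 2*x1 + x2/2 - x3 = 1}."""
    if u1 < 0.25:
        r = math.sqrt(u1)
        x1 = 0.25 * (1.0 + 2.0 * r)
        x2 = 1.0 + 2.0 * (u2 - 1.0) * r
        x3 = u2 * r
    elif u1 < 0.75:
        x1 = 0.5 * u1 + 0.375
        x2 = u2
        x3 = u1 + 0.5 * u2 - 0.25
    else:
        r = math.sqrt(1.0 - u1)
        x1 = 0.5 * (2.0 - r)
        x2 = 2.0 * u2 * r
        x3 = 1.0 - (1.0 - u2) * r
    return x1, x2, x3
```

Every output satisfies $2x_1 + x_2/2 - x_3 = 1$ exactly (up to floating-point error), so the constraint itself serves as a unit test for the transformation.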
Using the modified good lattice point method to generate design points in ( 0 , 1 ) 2 , the corresponding design points on D 1 are obtained from Equations (15)–(17). The construction results are visualized in Figure 1, which shows the results of the IRT, AR, and SR methods for constructing a 10, 50, and 100-run uniform design on D 1 .
In Figure 1, the plus sign represents the design points obtained using the IRT-based method, the circle represents the design points obtained using the AR method, and the asterisk represents the design points obtained using the SR method. Based on Figure 1, we can directly observe that the design points obtained using the method proposed in this paper, based on the IRT method, are more widely and evenly distributed in the experimental space. In practical experiments, we can find the nearest design points to arrange the experiments.
Moreover, we run experiments for n = 10, 20, …, 100 to compare the IRT, AR, and SR methods, with numerical results presented in Table 1. We observe that the IRT method consistently outperforms the other two methods. Additionally, the AR method takes the longest time. Both the method proposed in this paper and the SR method consume relatively little computational time. However, the designs generated using the proposed method have smaller CCD scores in the experimental region, indicating better uniformity.

3.2. Multiple Arbitrary Linear Constraints

Regarding the case where the number of distinct linear constraints $t$ (with $1 < t < s$) exceeds one, the feasible region for selecting experimental points is the intersection of multiple $(s-t)$-dimensional hyperplane segments. In this case, algebraic methods can be used to compute the normal vector of the intersecting hyperplanes, thereby representing the general solution of the intersecting parts, and the problem is reduced to the case of a single linear constraint.
The experimental region $X_1 = \{x = (x_1, \ldots, x_s) \mid 0 < x_i < 1,\ a_{j1} x_1 + \cdots + a_{js} x_s = b_j,\ j = 1, \ldots, t,\ t < s\}$ is considered, where $a_{j1}, \ldots, a_{js}, b_j \in \mathbb{R}$. Given the number of non-redundant linear constraints in this experimental region, we can directly determine the experimental points $x$ within the region $X_1$, assuming that $X_1$ is non-empty. Alternatively, we can express the components of $x$ that are unconstrained by the linear conditions and then use the IRT method to construct a uniform design on $X_1$. However, due to the arbitrary values of $a_{j1}, \ldots, a_{js}$, $b_j$, and $t$, it is challenging to intuitively determine the number of non-redundant linear constraints within the experimental region, making it difficult to establish the dimension of $X_1$ and identify which components of $x$ are constrained. A natural approach is to view the constraints in the experimental region as a system of linear equations over the real number field, denoted as (I): $Ax = b$, where $A$ is the coefficient matrix of the system, expressed as
$$A_{t \times s} = \begin{pmatrix} a_{11} & \cdots & a_{1s} \\ \vdots & & \vdots \\ a_{t1} & \cdots & a_{ts} \end{pmatrix},$$
$x = (x_1, \ldots, x_s)^{\top}$ is the unknown vector in the linear system (I), and $b = (b_1, \ldots, b_t)^{\top}$ is the constant vector. The augmented matrix of the system is denoted by $\bar{A}$, and the dimension of the solution space of system (I) corresponds to the dimension of the experimental space $X_1$. The dimension $d$ of the experimental space $X_1$ can be determined from the rank $r_1$ (where $r_1 \le \min\{t, s\}$) of the coefficient matrix $A$, the rank $r_2$ (where $r_2 \ge r_1$) of the augmented matrix $\bar{A}$, the number of linear constraints $t$, and the total dimensionality $s$ of the space. Specifically, if the rank $r_1$ of the coefficient matrix $A$ is smaller than the dimension $s$ of $C^s$, the dimension of the experimental region is the absolute difference between $s$ and $r_1$; that is, $|r_1 - s| = s - r_1$.
When $b = 0$, system (I) becomes a homogeneous system of linear equations, where $r_1 = r_2 = r$. Based on the solvability of homogeneous linear systems, if $r < s$, the system has non-zero solutions; if $r = s$, it has only the trivial zero solution. When $b \ne 0$, system (I) is a non-homogeneous system of linear equations. Based on the solvability of non-homogeneous systems, if $r_1 = r_2 < s$, the system has infinitely many solutions; if $r_1 = r_2 = s$, the system has a unique solution; and if $r_1 < r_2$, the system has no solution.
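The rank test above can be sketched in a few lines; the helper names are ours, and the rank is computed by plain Gaussian elimination rather than any particular library routine.

```python
def _rank(rows, tol=1e-9):
    """Matrix rank via Gaussian elimination with partial pivoting."""
    M = [list(map(float, r)) for r in rows]
    m, n = len(M), len(M[0])
    r = 0
    for c in range(n):
        if r == m:
            break
        piv = max(range(r, m), key=lambda i: abs(M[i][c]))
        if abs(M[piv][c]) < tol:
            continue                      # no pivot in this column
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, m):
            f = M[i][c] / M[r][c]
            for j in range(c, n):
                M[i][j] -= f * M[r][j]
        r += 1
    return r

def feasible_dimension(A, b):
    """Dimension s - r1 of the solution space of A x = b when the system
    is consistent (r1 = r2); None when r1 < r2, i.e. X1 is empty."""
    r1 = _rank(A)
    r2 = _rank([row + [bi] for row, bi in zip(A, b)])
    return None if r1 < r2 else len(A[0]) - r1
```

A return value of 0 corresponds to a unique solution (a single feasible point), and a positive value gives the dimension of the hyperplane segment to which Algorithm 1 is then applied.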
This construction process can be summarized in Algorithm 2.
Algorithm 2 General construction algorithm for multiple arbitrary linear constraints
1: Input: the experimental domain $X_1 = \{x = (x_1, \ldots, x_s) \mid 0 < x_i < 1,\ a_{j1} x_1 + \cdots + a_{js} x_s = b_j,\ j = 1, \ldots, t,\ t < s\}$, where $a_{j1}, \ldots, a_{js}, b_j \in \mathbb{R}$;
2: Step 1: Express the constraints as a system of linear equations and determine whether solutions exist for the system;
3: Step 2: Find the general solution to the system and use it to determine the normal vector of the hyperplane segment;
4: Step 3: Use the obtained normal vector to represent the experimental region with a single linear equality constraint, then apply Algorithm 1 to obtain the design;
5: Output: The final design points $P_n$.
In Step 1, it is essential to first analyze whether $X_1$ is non-empty under the conditions $b = (b_1, \ldots, b_t)^{\top} = 0$ and $b \ne 0$, as well as to derive the expressions for the experimental points in the space. Subsequently, the uniformly distributed experimental points can be determined for each case accordingly.

4. Uniform Design of Experiments for Quadratic Constraints

Next, we consider a more general case where the experimental region is the hyperspherical surface of an arbitrary hypersphere within $(0,1)^s$: $X_2 = \{x = (x_1, \ldots, x_s) \mid 0 < x_i < 1,\ a_1 x_1^2 + \cdots + a_s x_s^2 = 1,\ i = 1, \ldots, s,\ a_1, \ldots, a_s \ge 1\}$. By treating $x_s$ as a slack variable, the new experimental region can be represented as $X_2^{*} = \{x = (x_1, \ldots, x_{s-1}) \mid 0 < x_i < 1,\ 0 < a_1 x_1^2 + \cdots + a_{s-1} x_{s-1}^2 < 1,\ a_i \ge 1\}$. Assume that $X = (X_1, \ldots, X_{s-1})$ is a uniformly distributed random vector in $X_2^{*}$. Following the construction principles of the IRT method, our aim is to derive the marginal distribution function of $X_1$ and the corresponding conditional distribution functions. Based on the Dirichlet distribution [24], Dirichlet integrals [25], and the results from the previous section, we can derive the density function as follows:
$$f(x_1, \ldots, x_{s-1}) = \frac{2^{s-1}\, \Gamma((s+1)/2) \prod_{i=1}^{s-1} \sqrt{a_i}}{\pi^{(s-1)/2}}\; I\Big(0 < x_i < 1,\ 0 < \sum_{k=1}^{s-1} a_k x_k^2 < 1,\ i = 1, \ldots, s-1\Big).$$
Then, the corresponding marginal distribution function can be expressed as
$$F^{5}_{X_1}(x_1) = \frac{2\, \Gamma((s+1)/2)}{\sqrt{\pi}\, \Gamma(s/2)}\, J_{s-1}\big(\sqrt{a_1}\, x_1\big)\, I(0 < x_1 < 1),$$
where
$$J_n(x) = \int_0^{\arcsin x} \cos^n t \, \mathrm{d}t = \frac{x (1 - x^2)^{(n-1)/2}}{n} + \Big(1 - \frac{1}{n}\Big) J_{n-2}(x),$$
$$J_0(x) = \arcsin x,\qquad J_1(x) = x,\qquad J_2(x) = \frac{\arcsin x}{2} + \frac{x\sqrt{1 - x^2}}{2}.$$
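The recursion for $J_n$ is straightforward to evaluate from the two base cases; a short sketch (function name ours):

```python
import math

def J(n, x):
    """J_n(x) = integral of cos^n(t) from 0 to arcsin(x), via the
    two-step recursion with base cases J_0 = arcsin(x), J_1 = x."""
    if n == 0:
        return math.asin(x)
    if n == 1:
        return x
    return x * (1.0 - x * x) ** ((n - 1) / 2) / n + (1.0 - 1.0 / n) * J(n - 2, x)
```

Because the recursion drops $n$ by two each step, evaluating $J_{s-1}$ costs only $O(s)$ operations, which keeps the marginal and conditional distribution functions cheap even in moderate dimension.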
And, the conditional distribution functions are given by
$$F^{5}_{i|1,\ldots,i-1}(x_i) = \frac{2\, \Gamma((s-i+2)/2)}{\sqrt{\pi}\, \Gamma((s-i+1)/2)}\, J_{s-i}\!\Bigg(\frac{\sqrt{a_i}\, x_i}{\sqrt{1 - \sum_{j=1}^{i-1} a_j x_j^2}}\Bigg)\, I\Big(0 < \sqrt{a_i}\, x_i < \sqrt{1 - \textstyle\sum_{j=1}^{i-1} a_j x_j^2}\Big).$$
Similarly, we can derive the design as follows:
$$F^{5}_{X_1}(x_1) = u_{i_1},\qquad F^{5}_{j|1,\ldots,j-1}(x_j) = u_{i_j},\ j = 2, \ldots, s-1,\qquad x_s = \sqrt{\frac{1}{a_s}\Big(1 - \sum_{i=1}^{s-1} a_i x_i^2\Big)}.$$
Based on the previous results, we can derive the following conclusion.
Theorem 5.
In the non-empty bounded space $X_2 = \{x = (x_1, \ldots, x_s) \mid 0 < x_i < 1,\ \sum_{i=1}^{s} a_i x_i^2 = 1,\ a_1, \ldots, a_s \ge 1\}$, let $u = (u_1, \ldots, u_{s-1})$ be a uniformly distributed random vector over $(0,1)^{s-1}$. Then, the random vector obtained through the transformation in (20) is also uniformly distributed over $X_2$.
Proof. 
Let $X = (X_1, \ldots, X_s)$ and $C = (C_1, \ldots, C_{s-1})$ represent random vectors uniformly distributed over $X_2$ and $X_2^{*}$, respectively. Let $U = (U_1, \ldots, U_{s-1})$ represent a random vector uniformly distributed over $(0,1)^{s-1}$ such that $X$ and $U$ satisfy (20). The distribution function of $X$ is given by
$$F_X(x) = P\Big(X_1 \le x_1, \ldots, X_{s-1} \le x_{s-1},\ \tfrac{1}{a_s}\big(1 - \textstyle\sum_{i=1}^{s-1} a_i X_i^2\big) \le x_s^2\Big).$$
From the proof of Theorem 1, it follows that
$$\begin{aligned}
F_X(x) &= P\Big(\tfrac{1}{a_s}\big(1 - \textstyle\sum_{i=1}^{s-1} a_i X_i^2\big) \le x_s^2 \,\Big|\, X_1 \le x_1, \ldots, X_{s-1} \le x_{s-1}\Big)\, \frac{x_1 \cdots x_{s-1}}{V(X_2^{*})}\, I\big\{(x_1, \ldots, x_{s-1}) \in X_2^{*}\big\}\\
&= \frac{V(X_2^{*})\, P\big(\tfrac{1}{a_s}\big(1 - \sum_{i=1}^{s-1} a_i X_i^2\big) \le x_s^2 \,\big|\, X_1 \le x_1, \ldots, X_{s-1} \le x_{s-1}\big)}{V(X_2^{*})} \times \frac{x_1 \cdots x_{s-1}}{V(X_2^{*})}\, I\big\{(x_1, \ldots, x_{s-1}) \in X_2^{*}\big\}\\
&= \frac{x_1 \cdots x_s}{V(X_2)}\, I\big\{(x_1, \ldots, x_s) \in X_2\big\}.
\end{aligned}$$
Therefore, X = ( X 1 , , X s ) , obtained through the transformation (20), is uniformly distributed over X 2 . □
Theorem 5 implies that the transformed test points are also uniformly distributed over $X_2$. When all coefficients are equal to 1, this conclusion reduces to the result for uniform designs over the standard sphere. Fang and Wang [7] and Hardin and Saff [26] have investigated the construction of uniform designs on hyperspheres using number-theoretic methods and minimum-energy approaches, respectively. In the following, we provide an example to facilitate a better understanding of the above conclusion.
Example 2.
Consider the following experimental region:
$$D_2 = \{x = (x_1, x_2, x_3)^{\top} \mid 0 < x_i < 1,\ 4x_1^2 + 9x_2^2 + x_3^2 = 1,\ i = 1, 2, 3\}.$$
By treating x 3 as a slack variable, the new experimental region can be represented as
$$D_2^{*} = \{x = (x_1, x_2)^{\top} \mid 0 < x_i < 1,\ 0 < 4x_1^2 + 9x_2^2 < 1,\ i = 1, 2\}.$$
Let $X = (X_1, X_2)$ be a uniformly distributed random vector in $D_2^{*}$. By Equations (18) and (19), and applying a Taylor expansion, we can derive the marginal distribution function $F(x_1)$ and the corresponding conditional distribution function $F(x_2 \mid x_1)$:
$$F(x_1) = \frac{2\arcsin(2x_1)}{\pi} + \frac{4x_1\sqrt{1 - 4x_1^2}}{\pi} \approx 2.5\, x_1 = u_1,$$
$$F(x_2 \mid x_1) = \frac{3x_2\, I\big(0 < x_2 < \sqrt{1 - 4x_1^2}/3\big)}{\sqrt{1 - 4x_1^2}} = u_2.$$
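Equations (21) and (22) can also be inverted without the Taylor approximation, since $F(x_1)$ is strictly increasing on $(0, 1/2)$. The sketch below uses bisection for the marginal and the closed-form inverse for the conditional; the function name is ours.

```python
import math

def d2_point(u1, u2):
    """Map (u1, u2) in (0,1)^2 onto D2 = {0 < x_i < 1,
    4x1^2 + 9x2^2 + x3^2 = 1}. F(x1) from (21) is inverted numerically
    here instead of via the Taylor approximation x1 ~ u1/2.5."""
    F = lambda x: (2 * math.asin(2 * x) + 4 * x * math.sqrt(1 - 4 * x * x)) / math.pi
    lo, hi = 0.0, 0.5
    for _ in range(60):              # bisection: F is increasing on (0, 1/2)
        mid = 0.5 * (lo + hi)
        if F(mid) < u1:
            lo = mid
        else:
            hi = mid
    x1 = 0.5 * (lo + hi)
    r = math.sqrt(1.0 - 4.0 * x1 * x1)
    x2 = u2 * r / 3.0                # inverse of F(x2 | x1) = 3 x2 / r = u2
    x3 = math.sqrt(max(0.0, 1.0 - 4.0 * x1 * x1 - 9.0 * x2 * x2))
    return x1, x2, x3
```

The quadratic constraint is satisfied by construction through the slack coordinate $x_3$, so checking $4x_1^2 + 9x_2^2 + x_3^2 = 1$ and $F(x_1) \approx u_1$ on a few points validates the inversion.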
Using the modified good lattice point method to generate design points in ( 0 , 1 ) 2 , the corresponding design points on D 2 are obtained from Equations (21) and (22). The construction results are visualized in Figure 2, which shows the results of the IRT, AR, and SR methods for constructing a 10, 50, and 100-run uniform design on D 2 .
In Figure 2, the plus sign represents the design points obtained using the IRT-based method, the circle represents the design points obtained using the AR method, and the asterisk represents the design points obtained using the SR method. Based on Figure 2, we can directly observe that the design points obtained using the method proposed in this paper, based on the IRT method, are more widely and evenly distributed in the experimental space. In practical experiments, we can find the nearest design points to arrange the experiments.
Moreover, we run the experiments for n = 10, 20, …, 100 to compare the IRT, AR, and SR methods, with the numerical results presented in Table 2. It is evident that the IRT method consistently outperforms the other two methods. Additionally, the AR method takes the longest time. Both the method proposed in this paper and the SR method consume relatively little computational time. However, the designs generated using the proposed method have lower CCD scores in the experimental region, indicating better uniformity.

5. Conclusions

This paper applies the IRT method to experimental regions with linear and quadratic constraints, which offers simplified calculations for practitioners and provides a practical solution applicable to a wide range of experimental scenarios. By introducing the concept of slack variables, general formulas for constructing uniform designs within these constrained regions are derived. Compared to the existing AR and SR methods, this approach offers faster computational speed. Although the formulas become more complex with increasing dimensionality, numerical methods and Taylor expansions can approximate them, simplifying computations. This method is versatile and effectively addresses the challenge of finding global optimal solutions within constrained experimental regions. Based on the CCD criterion, the designs obtained using this method exhibit superior uniformity compared to those derived from the AR and SR methods, thus facilitating the development of more complex models. However, there are still some limitations that warrant further research:
  • From the perspective of uniformity, the uniformity measurement standard is not sufficiently refined. The uniformity of the design constructed in this paper depends on the uniformity of the corresponding design in the unit hypercube in the experimental space. Additionally, due to the asymmetry of the experimental region, the uniformity evaluation using the CCD requires repeated computations. Therefore, finding a more appropriate uniformity measure for such experimental regions and determining how to quickly identify the best experimental points with the most uniformity in the target experimental region are open problems for further research.
  • From the perspective of computational complexity, the computation is not simple enough. This paper provides analytical expressions of the marginal distribution function and conditional distribution function for uniformly distributed random vectors in the experimental region, but the formulas are still not concise enough. It is possible to find simpler functions to approximate these expressions, thus obtaining transformation formulas that can be computed more quickly.
  • From the perspective of the applicability of the conclusions, the scope of applicability is relatively limited. This paper studies experimental regions with linear equality constraints. When the constraints are linear inequality constraints, they can be transformed into equality constraints by adding slack and residual variables. However, the formulas provided in this paper for constructing uniform designs are only applicable to such experimental regions, while, in practical problems, experimental regions may be more complex.

Author Contributions

Conceptualization, L.Y. and Y.Z.; methodology, L.Y.; software, L.Y.; validation, L.Y. and X.Y.; formal analysis, L.Y.; investigation, L.Y.; resources, Y.Z. and X.Y.; writing—original draft preparation, L.Y.; writing—review and editing, Y.Z. and X.Y.; visualization, L.Y.; supervision, Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China of 12131001, and the Fundamental Research Funds for the Central Universities.

Data Availability Statement

The codes for generating the figures and tables in this paper are available at the following website: https://github.com/Ylj-2001 (accessed on 22 January 2025).

Acknowledgments

The authorship is listed in alphabetical order.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
CCDCentral composite discrepancy
IRTInverse Rosenblatt transformation
ARAcceptance–rejection
SRStochastic representation

References

  1. Hua, L.K.; Wang, Y. Applications of Number Theory to Numerical Analysis; Springer: Berlin/Heidelberg, Germany, 1981. [Google Scholar]
  2. Huang, Y.; Zhou, Y. Convergence of uniformity criteria and the application in numerical integration. Mathematics 2022, 10, 3717. [Google Scholar] [CrossRef]
  3. Fang, K.; Liu, M.Q.; Qin, H.; Zhou, Y.D. Theory and Application of Uniform Experimental Designs; Springer: Singapore, 2018; Volume 221. [Google Scholar]
  4. Feng, J.; Wu, Y.; Sun, H.; Zhang, S.; Liu, D. Panther: Practical Secure 2-Party Neural Network Inference. IEEE Trans. Inf. Forensics Secur. 2025, 20, 1149–1162. [Google Scholar] [CrossRef]
  5. Ma, Z.; Yang, X.; Shang, W.; Wu, J.; Sun, H. Resilience analysis of an urban rail transit for the passenger travel service. Transp. Res. Part D Transp. Environ. 2024, 128, 104085. [Google Scholar] [CrossRef]
  6. Tan, M.T.; Fang, H.B. Drug combination studies, uniform experimental design and extensions. In Contemporary Experimental Design, Multivariate Analysis and Data Mining: Festschrift in Honour of Professor Kai-Tai Fang; Springer: Cham, Switzerland, 2020; pp. 127–144. [Google Scholar]
  7. Fang, K.T.; Wang, Y. Number-Theoretic Methods in Statistics; CRC Press: Boca Raton, FL, USA, 1993; Volume 51. [Google Scholar]
  8. Chen, R.B.; Hsu, Y.W.; Hung, Y.; Wang, W. Discrete particle swarm optimization for constructing uniform design on irregular regions. Comput. Stat. Data Anal. 2014, 10, 282–297. [Google Scholar] [CrossRef]
  9. Qi, Z.F.; Yang, J.F.; Liu, Y.; Liu, M.Q. Construction of nearly uniform designs on irregular regions. Commun. Stat. Theory Methods 2017, 46, 8318–8327. [Google Scholar] [CrossRef]
  10. Wang, Y.; Fang, K.T. Uniform design of experiments with mixtures. Sci. China Ser. A 1996, 39, 264–275. [Google Scholar]
  11. Borkowski, J.J.; Piepel, G.F. Uniform designs for highly constrained mixture experiments. J. Qual. Technol. 2009, 41, 35–47. [Google Scholar] [CrossRef]
  12. Lin, D.K.; Sharpe, C.; Winker, P. Optimized U-type designs on flexible regions. Comput. Stat. Data Anal. 2010, 54, 1505–1515. [Google Scholar] [CrossRef]
  13. Liu, Y.; Liu, M.Q. Construction of uniform designs for mixture experiments with complex constraints. Commun. Stat. Theory Methods 2016, 45, 2172–2180. [Google Scholar] [CrossRef]
  14. Zhang, M.; Zhang, A.; Zhou, Y. Construction of uniform designs on arbitrary domains by inverse Rosenblatt transformation. In Contemporary Experimental Design, Multivariate Analysis and Data Mining: Festschrift in Honour of Professor Kai-Tai Fang; Springer: Cham, Switzerland, 2020; pp. 111–126. [Google Scholar]
  15. Joseph, V.R.; Dasgupta, T.; Tuo, R.; Wu, C.J. Sequential exploration of complex surfaces using minimum energy designs. Technometrics 2015, 57, 64–74. [Google Scholar] [CrossRef]
  16. Joseph, V.R.; Wang, D.; Gu, L.; Lyu, S.; Tuo, R. Deterministic Sampling of Expensive Posteriors Using Minimum Energy Designs. Technometrics 2019, 61, 297–308. [Google Scholar] [CrossRef]
  17. Huang, C.; Joseph, V.R.; Ray, D.M. Constrained minimum energy designs. Stat. Comput. 2021, 31, 80. [Google Scholar] [CrossRef]
  18. Schneider, F.; Schüssler, M.; Hellmig, R.; Nelles, O. Constrained design of experiments for data-driven models. In Proceedings of the Workshop Computational Intelligence, Berlin, Germany, 1–2 December 2022; Volume 1, p. 193. [Google Scholar]
  19. Schneider, F.; Hellmig, R.J.; Nelles, O. Uniform Design of Experiments for Equality Constraints. In Proceedings of the Intelligent Data Engineering and Automated Learning, Évora, Portugal, 22–24 November 2023; Volume 14404, pp. 311–322. [Google Scholar]
  20. Jourdan, A. Space-filling designs with a Dirichlet distribution for mixture experiments. Stat. Pap. 2024, 65, 2667–2686. [Google Scholar] [CrossRef]
  21. Javier, C.S. Selecting the slack variable in mixture experiment. Ing. Investig. Tecnol. 2015, 16, 613–623. [Google Scholar] [CrossRef]
  22. Azikiwe, H.; Bello, A. Families of multivariate distributions involving the Rosenblatt construction. J. Am. Stat. Assoc. 2006, 101, 1652–1662. [Google Scholar]
  23. Lasserre, J.B. Volume of slices and sections of the simplex in closed form. Optim. Lett. 2015, 9, 1263–1269. [Google Scholar] [CrossRef]
  24. Sobel, M. Selected Tables in Mathematical Statistics; American Mathematical Society: Providence, RI, USA, 1970; Volume 4. [Google Scholar]
  25. Sobel, M.; Uppuluri, V.R.R.; Frankowski, K. Dirichlet Integrals of Type 2 and Their Applications; American Mathematical Society: Providence, RI, USA, 1985; Volume 9. [Google Scholar]
  26. Hardin, D.P.; Saff, E.B. Discretizing manifolds via minimum energy points. Not. Am. Math. Soc. 2004, 51, 1186–1194. [Google Scholar]
Figure 1. (a) Scatter plot of design points on D 1 for n = 10 . (b) Scatter plot of design points on D 1 for n = 50 . (c) Scatter plot of design points on D 1 for n = 100 .
Figure 2. (a) Scatter plot of design points on D 2 for n = 10 . (b) Scatter plot of design points on D 2 for n = 50 . (c) Scatter plot of design points on D 2 for n = 100 .
Table 1. CCD scores and CPU time for uniform designs constructed on D1.

         IRT                 AR                  SR
n        CCD      Time       CCD      Time       CCD      Time
10       0.0866   0.0040     0.0866   2.3678     0.1118   0.0082
20       0.0353   0.0048     0.0499   5.4143     0.1224   0.0015
30       0.0288   0.0089     0.0833   11.385     0.0552   0.0009
40       0.0216   0.0143     0.0728   16.768     0.0684   0.0009
50       0.0173   0.0149     0.0608   15.527     0.0818   0.0010
60       0.0363   0.0553     0.0656   19.796     0.0964   0.0016
70       0.0071   0.0840     0.0294   23.717     0.0468   0.0009
80       0.0062   0.0659     0.0905   24.910     0.0250   0.0009
90       0.0055   0.1449     0.0242   31.059     0.0372   0.0014
100      0.3192   0.0070     0.0489   32.049     0.0273   0.0010
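The growth of the AR method's CPU time with n in Table 1 is what one would expect if, as the column label suggests, AR denotes an acceptance–rejection sampler: candidates are drawn in the full unit cube and discarded unless they satisfy the linear constraints, so most draws are wasted when the constrained region is a small fraction of the cube. The following is a minimal sketch of that baseline, using a hypothetical constraint x1 + x2 + x3 ≤ 1 for illustration (the actual domains D1 and D2 are defined in the body of the paper):

```python
import random

def ar_sample(n, dim=3, bound=1.0, seed=0):
    """Acceptance-rejection: draw uniform points in [0,1]^dim and keep
    only those satisfying the linear constraint sum(x) <= bound."""
    rng = random.Random(seed)
    points, tries = [], 0
    while len(points) < n:
        x = [rng.random() for _ in range(dim)]
        tries += 1
        if sum(x) <= bound:
            points.append(x)
    return points, tries

pts, tries = ar_sample(100)
# The simplex sum(x) <= 1 occupies volume 1/6 of [0,1]^3, so on average
# about 6 candidates are drawn per accepted point; the cost per point
# grows further as the constrained region shrinks.
```

This also explains why the per-point cost is roughly constant in n for AR while the total time grows steadily, whereas the transformation-based IRT column stays cheap at every sample size.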
Table 2. CCD scores and CPU time for uniform designs constructed on D2.

         IRT                 AR                  SR
n        CCD      Time       CCD      Time       CCD      Time
10       0.0500   0.0082     0.0591   9.7024     0.0670   0.0073
20       0.0335   0.0105     0.0500   17.367     0.0474   0.0009
30       0.0268   0.0262     0.0387   40.335     0.0372   0.0010
40       0.0268   0.0373     0.0335   66.832     0.0387   0.0013
50       0.0256   0.0371     0.0286   80.310     0.0366   0.0020
60       0.0255   0.0752     0.0288   75.884     0.0349   0.0024
70       0.0172   0.0859     0.0308   95.633     0.0241   0.0034
80       0.0201   0.0678     0.0233   110.38     0.0268   0.0019
90       0.0166   0.2001     0.0231   113.10     0.0277   0.0019
100      0.0164   0.4554     0.0187   152.15     0.0277   0.0041
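The low IRT times in both tables reflect the structure of the inverse Rosenblatt transformation: each point of a uniform design on the unit cube is mapped directly onto the constrained domain by inverting a sequence of marginal and conditional distribution functions, with no rejection step. As a toy instance of the idea (not the paper's D1 or D2), uniform points on the triangle {x, y ≥ 0, x + y ≤ 1} are obtained by inverting the marginal CDF of x, F(x) = 1 − (1 − x)², and then the conditional CDF of y given x:

```python
import math

def irt_triangle(u1, u2):
    """Inverse Rosenblatt map from the unit square onto {x,y >= 0, x+y <= 1}.

    On the triangle, x has marginal density 2(1-x), so F(x) = 1-(1-x)^2
    inverts to x = 1 - sqrt(1-u1); given x, y is uniform on [0, 1-x].
    """
    x = 1.0 - math.sqrt(1.0 - u1)
    y = u2 * (1.0 - x)
    return x, y

# Push a centered one-dimensional grid through the map; every image
# lands inside the triangle, so no candidate is ever discarded.
n = 1000
pts = [irt_triangle((i + 0.5) / n, (i * 0.618) % 1.0) for i in range(n)]
```

Because the map is a fixed closed-form formula evaluated once per point, its cost is O(n), which matches the near-constant per-point IRT times above; the exact transformations for hyperplane and hypersphere slices of the cube are derived in the paper itself.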
Cite this article as: Yang, L.; Yang, X.; Zhou, Y. Construction of Uniform Designs over a Domain with Linear Constraints. Mathematics 2025, 13, 438. https://doi.org/10.3390/math13030438
