Article

Selectivity Estimation of Inequality Joins in Databases

1 Data Science Lab, Université libre de Bruxelles (ULB), 1050 Brussels, Belgium
2 Faculty of Computer and Information Sciences, Ain Shams University, Cairo 11566, Egypt
* Authors to whom correspondence should be addressed.
Mathematics 2023, 11(6), 1383; https://doi.org/10.3390/math11061383
Submission received: 1 February 2023 / Revised: 6 March 2023 / Accepted: 8 March 2023 / Published: 13 March 2023
(This article belongs to the Special Issue Numerical Methods for Approximation of Functions and Data)

Abstract
Selectivity estimation refers to the ability of the SQL query optimizer to estimate the size of the results of a predicate in the query. It is the main calculation based on which the optimizer can select the least expensive plan to execute. While the problem has been known since the mid-1970s, we were surprised to find no solutions in the literature for the selectivity estimation of inequality joins. By testing four common database systems, Oracle, SQL-Server, PostgreSQL, and MySQL, we found that the open-source systems PostgreSQL and MySQL lack this estimation altogether. Oracle and SQL-Server make fairly accurate estimations, yet their algorithms are secret. This paper, thus, proposes an algorithm for inequality join selectivity estimation. The proposed algorithm was implemented in PostgreSQL and submitted as a patch for inclusion in the next releases. We compared this implementation with the above DBMS for three different data distributions (uniform, normal, and Zipfian) and showed that our algorithm provides extremely accurate estimations (below 0.1% average error), outperforming the other systems by an order of magnitude.

1. Introduction

Query optimization is the overall process of generating the most-efficient query plan given a SQL statement. The query optimizer, responsible for this process, applies equivalence rules to reorganize and merge the operations in the query to find the fastest execution plan and feeds it to the executor. It examines multiple access methods, such as sequential table scans or index scans, different join methods, such as nested loops and hash joins, different join orders, sub-query normalization, materialized views, and other possible transformations. Almost all existing systems adopt a cost-based optimization approach, with roots back to the architecture of System R [1] and Volcano/Cascades [2,3].
In cost-based query optimization, the optimizer estimates the cost of the alternative query plans and chooses the plan with the minimum cost. The cost is estimated in terms of the CPU and I/O resources that the query plan will use. A central component in cost estimation is the Selectivity Estimation (SE). SE collects statistics for all attributes in a relation, such as data distribution histograms, the most-common values, the null percentage, etc. These statistics are then used during planning time to estimate the number of tuples generated by a predicate in the query. A smaller selectivity value means a smaller size for intermediate results, which is favorable for a more efficient execution. The cost-based optimizer, thus, reorders the selection and join predicates to quickly reduce the sizes of intermediate results.
Since, in general, the cost of each operator depends on the size of its input relations, it is important to provide good estimations of their selectivity, that is of their result size, to the query optimizer [4,5]. Inaccurate selectivity estimations can lead to inefficient query plans being chosen, sometimes leading to orders of magnitude longer execution times [6].
There is a trade-off between the size of the stored statistics and the complexity of the estimation algorithm, on the one hand, and the estimation accuracy, on the other. Recent research thus focuses on using machine learning methods to capture the data distribution into compact models. While there are good results in this research direction [7], common relational database systems continue to use traditional statistics structures, mostly based on histograms. A histogram can be used as a discrete approximation of the probability density function of an attribute.
Despite the popularity of histograms, there is a lack of a theory of how to use them in estimating inequality join selectivity. In addition, we found that common database systems lack accurate implementations for such an estimator. This paper thus aimed to fill this gap and makes the following main contributions:
  • A formal model for describing a selectivity estimation problem using probability theory;
  • A novel algorithm for join selectivity estimation of inequality operators using histogram statistics;
  • An extension of the algorithm that also takes advantage of additional statistics (e.g., MCVs), when available;
  • An implementation of the proposed algorithm in PostgreSQL, submitted as a patch for inclusion in a future release: https://github.com/DRepas/postgres/tree/rangejoinsel (accessed on 7 March 2023).
Section 2 starts by describing the research problem, together with a running example, which will be used throughout the paper. In the same section, we also introduce some definitions, notations, and terminology. Section 3 then reviews the existing related work in selectivity estimation. The algorithm presented in this paper consists of mapping the problem of selectivity estimation to a probability theory problem, as described in Section 4. The algorithm of join selectivity estimation is developed in Section 5. This section also develops a way to incorporate the null values and Most-Common Values (MCVs) statistics in the estimation model. A mapping of this model for several range operators is also presented. Finally, an implementation in PostgreSQL and the experimental evaluation are provided in Section 6.

2. Preliminaries

This paper presents a formal model to compute two different selectivity estimation types:
  • Restriction selectivity estimation: when one of the sides of the operator is an attribute of a relation and the other is a constant value:
    Example: SELECT * FROM R1 WHERE R1.X < 100.
  • Join selectivity estimation: when both sides of the operator are attributes of different relations:
    Example: SELECT * FROM R1, R2 WHERE R1.X < R2.Y.
The selectivity estimation of operations where both sides of the operator are attributes of the same relation, with no join or Cartesian product involved (example: SELECT * FROM R1 WHERE R1.x < R1.y), is not addressed by this paper.
The selectivity of an operator is the fraction of values kept in the result after selection. In the case of restriction selectivity, the denominator is the input relation’s size. In the case of join selectivity, the denominator is the input relations’ Cartesian product size (their sizes multiplied). This fraction can be interpreted as the probability that a given randomly selected tuple from the input relation, or from the Cartesian product of input relations in the case of joins, is selected by the operator being considered.

2.1. Problem Definition

In this paper, we developed an algorithm to perform the join selectivity estimation of inequality joins described above. The restriction selectivity estimation is already implemented by all common database systems and is, thus, not a novel contribution of this work. We, however, formulated it as a probability problem and developed the join selectivity estimation on top of it, to maximize code reuse in these systems.
The focus of the next sections is on the restriction selectivity estimation of the less than (<) operator. The restriction selectivity estimation of all the other scalar inequality operators is derived from this initial estimation. We then generalized this estimation to develop the join selectivity estimation.
The attributes being restricted or joined were treated as random variables that follow a distribution modeled by a Probability Density Function (PDF) and/or a Cumulative Distribution Function (CDF).

2.2. Running Example

For demonstration purposes, relations R1 and R2 are used throughout this paper. For each relation, 12 integers were manually selected to cover as many corner cases as possible when using equi-depth histograms (introduced in Section 2.3), such as skew and common bin boundaries.
$R_1.X = \{10, 11, 12, 20, 21, 22, 24, 25, 30, 35, 38, 45\}$
$R_2.Y = \{15, 16, 17, 20, 30, 35, 38, 39, 40, 42, 45, 50\}$

2.3. Histogram Statistics

Histograms are commonly used to approximate the PDF of an attribute by grouping the values into bins. Each bin is an interval of the form $B_j = \,]hist_j, hist_{j+1}]$, where the $hist_j$ are values from the domain of the attribute. It is important to note that the side on which the interval is open or closed is immaterial for the purposes of this paper, as all estimations correspond to integration over a continuous domain, where singular points do not affect the final result. By defining bins this way (using intervals), this paper restricts itself to domains where a total order exists. This excludes categorical data, for which the idea of an equi-depth histogram does not apply.
Let $H_j(X) = P(X \in B_j)$ be the fraction of values of the attribute X that is represented by the histogram bin $B_j$, i.e., the height/depth of the histogram bin:
  • Singleton histograms are such that each bin refers to the frequency of a single element; they are typically used to collect most-common values statistics (introduced in Section 5.3).
  • Equi-width histograms are such that each bin has the same width, that is, $hist_{j+1} - hist_j$ is constant.
  • Equi-depth histograms are such that each bin has the same depth (height) but a varying width, that is, $\forall j,\ P(X \in B_j) = H_j(X) = \frac{1}{n}$, where n is the total number of bins.
For selectivity estimation, equi-depth histograms are favored because the resolution of the histogram adapts to skewed value distributions. Typically, the histogram is smaller than the original data. Thus, it cannot represent the true distribution entirely, and some assumptions are induced, e.g., uniformity on a single attribute, with the independence assumption among different attributes. The use of equi-depth histograms is also motivated by their prevalence in current RDBMS. These are usually constructed through the use of random sampling of the original relations.
For demonstration purposes, Figure 1 shows a graphical representation of equi-depth histograms for the attributes $R_1.X$ and $R_2.Y$ of the running example. Both histograms have three bins, so the fraction accounted for by each bin is $\frac{1}{3}$. For attribute $R_1.X$, $hist_X = [10, 20, 25, 45]$, which means that $B_{X0} = [10, 20]$, $B_{X1} = \,]20, 25]$, and $B_{X2} = \,]25, 45]$. For attribute $R_2.Y$, $hist_Y = [15, 20, 39, 50]$, which means that $B_{Y0} = [15, 20]$, $B_{Y1} = \,]20, 39]$, and $B_{Y2} = \,]39, 50]$.
Using histograms as a statistical representation of attributes involves the following implicit assumptions:
  • The data are distributed uniformly inside each bin;
  • The histograms are complete (they account for all the data points), that is, $hist_0 = \min(X)$ and $hist_n = \max(X)$.
In practice, these two assumptions do not strictly hold. The data are usually not uniformly distributed inside each bin. The more bins used in the histograms, the smaller the error introduced by this assumption is. Database systems, e.g., PostgreSQL, typically create the histogram using a random sample of the attribute values, especially when the number of tuples is too large. The assumption of the completeness of the histogram might be broken in the presence of sampling. When the sample is representative of the underlying data, the estimation is still fairly accurate.
Given the equi-depth histogram of an attribute X, with n bins, one can derive its approximate PDF and CDF as shown next. Let $f_X(c)$ and $F_X(c)$ denote the PDF and the CDF of X, respectively, at a given value c; then
$$f_X(c) = \begin{cases} 0 & c < hist_0 \\ \dfrac{1}{n} \cdot \dfrac{1}{hist_{j+1} - hist_j} & c \in B_j,\ 0 \le j < n \\ 0 & c \ge hist_n \end{cases} \quad (1)$$
$$F_X(c) = \begin{cases} 0 & c < hist_0 \\ \dfrac{1}{n} \left( j + \dfrac{c - hist_j}{hist_{j+1} - hist_j} \right) & c \in B_j,\ 0 \le j < n \\ 1 & c \ge hist_n \end{cases} \quad (2)$$
When $c \in B_j,\ 0 \le j < n$, Formula (1) is derived from the definition of the equi-depth histogram, where each bin represents $H_j(X) = \frac{1}{n}$ of the data, spread over a width of $hist_{j+1} - hist_j$.
Formula (2) is derived from the following:
$$F_X(c) = \int_{-\infty}^{c} f_X(x)\,dx = \int_{-\infty}^{hist_0} f_X(x)\,dx + \sum_{i=0}^{j-1} \int_{hist_i}^{hist_{i+1}} f_X(x)\,dx + \int_{hist_j}^{c} f_X(x)\,dx$$
where
$$\int_{-\infty}^{hist_0} f_X(x)\,dx = 0 \qquad \sum_{i=0}^{j-1} \int_{hist_i}^{hist_{i+1}} f_X(x)\,dx = \sum_{i=0}^{j-1} H_i(X) = \sum_{i=0}^{j-1} \frac{1}{n} = \frac{j}{n} \qquad \int_{hist_j}^{c} f_X(x)\,dx = H_j(X) \int_{hist_j}^{c} \frac{dx}{hist_{j+1} - hist_j} = \frac{1}{n} \cdot \frac{c - hist_j}{hist_{j+1} - hist_j}$$
This last integral performs linear interpolation within the bin that contains c, thus assuming a uniform distribution of values within the bin. The assumption that the histogram is complete is reflected in substituting the infinite bounds by $hist_0$ and $hist_n$. In the running example, the PDF and CDF of $R_1.X$ can be derived from the formulas presented in this section as follows:
$$f_X(c) = \begin{cases} 0 & c < 10 \\ \frac{1}{30} & 10 \le c < 20 \\ \frac{1}{15} & 20 \le c < 25 \\ \frac{1}{60} & 25 \le c < 45 \\ 0 & 45 \le c \end{cases} \qquad F_X(c) = \begin{cases} 0 & c < 10 \\ \frac{1}{30}c - \frac{1}{3} & 10 \le c < 20 \\ \frac{1}{15}c - 1 & 20 \le c < 25 \\ \frac{1}{60}c + \frac{1}{4} & 25 \le c < 45 \\ 1 & 45 \le c \end{cases}$$
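To make the histogram-to-density mapping concrete, the following Python sketch (our illustration, not code from the paper or any DBMS; the helper name pdf and the list-based histogram representation are assumptions) evaluates Equation (1) for a histogram stored as an ordered array of bin boundaries:

    from bisect import bisect_right

    def pdf(hist, c):
        """Approximate PDF of Equation (1); hist is the ordered array of
        bin boundaries of an equi-depth histogram with len(hist)-1 bins."""
        n = len(hist) - 1
        if c < hist[0] or c >= hist[-1]:
            return 0.0                         # outside the histogram bounds
        j = bisect_right(hist, c) - 1          # bin B_j containing c
        return 1.0 / (n * (hist[j + 1] - hist[j]))

    hist_x = [10, 20, 25, 45]                  # running example, R1.X
    print(pdf(hist_x, 12))                     # 1/30, matching the piecewise PDF above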

3. Review of Related Work

A survey on DBMS query optimizers published in 2021 [6] categorizes cardinality estimation methods into synopsis-based methods, sampling-based methods, and learning-based methods. Many learning-based methods [8,9,10,11,12] have been proposed in recent years and show better accuracy than traditional methods. However, many issues remain to be solved before they can be adopted in real systems, such as the cost of model training and updating and the black-box nature of learning algorithms [7]. Sampling-based methods estimate selectivity by executing a (sub)query on samples collected from the tables; their accuracy depends on the degree to which the samples fit the original data distribution [6]. These methods, however, suffer from a high cost of storage and retrieval time, especially when the tables are very large. Another limitation of sampling-based methods is that they currently only support equality join selectivity estimation [6].
A third class of methods collects data summaries in the form of statistics, to then estimate the size of the query results [13]. Histograms, as a form of synopsis-based methods, have been extensively studied [6,8] and are widely adopted in common database systems [14] for the purpose of selectivity estimation, including MySQL, PostgreSQL, Oracle, and SQL-Server [15,16,17,18]. Histograms are also commonly used in databases for other tasks, including, for instance, data security and watermarking [19,20].
MySQL uses two histogram types for selectivity estimation [15]. One is the singleton histogram, which stores the distinct values and their cumulative frequency. Another is the equi-depth histogram, called equi-height in the MySQL documentation. This histogram stores the lower and upper bounds, the cumulative frequency, and the number of distinct values for each bucket. However, the usage of histograms is limited to restriction selectivity estimation [15], i.e., the selection operator. For join selectivity estimation, MySQL naively returns a constant: 0.1 for equality joins and 0.3333 for inequality joins [21]. Listing 1 displays an excerpt of the MySQL source code in which these constants are defined [22]:
Listing 1. MySQL Default Selectivity Values.
  • /// Filtering effect for equalities: col1 = col2
  • #define COND_FILTER_EQUALITY 0.1f
  • /// Filtering effect for inequalities: col1 > col2
  • #define COND_FILTER_INEQUALITY 0.3333f
PostgreSQL also uses histograms as optimizer statistics [16]. Its manual [23] and source code show that it uses equi-depth histograms. In contrast to MySQL, the number of distinct values in each bucket is not stored. For this reason, PostgreSQL does not use these histogram statistics for estimating equality join selectivity; it rather uses a singleton histogram of the Most-Common Values (MCVs) [23]. As for inequality join selectivity estimation (<, ≤, >, ≥), a default constant value of 0.3333 is returned [24].
Listing 2 displays an excerpt of the PostgreSQL source code in which these constants are defined [25]:
Listing 2. PostgreSQL Default Selectivity Values.
  • /* default selectivity estimate for equalities such as "A = b" */
  • #define DEFAULT_EQ_SEL 0.005
  • /* default selectivity estimate for inequalities such as "A < b" */
  • #define DEFAULT_INEQ_SEL 0.3333333333333333
Oracle Database uses three types of histograms to capture the data distribution of a relation's attribute [17]: singleton histograms (referred to as frequency histograms and top frequency histograms in the official documentation), equi-depth histograms (referred to as height-balanced histograms), and hybrid histograms (a combination of equi-depth and frequency histograms). The type of histogram is determined based on specific criteria to fit different situations. The official documentation [17] also states some factors behind the selectivity estimation algorithms, such as endpoint numbers (the unique identifier of a bucket, e.g., the cumulative frequency of buckets in frequency and hybrid histograms), endpoint values (the highest value in a bucket), and whether column values are popular (an endpoint value that appears multiple times in a histogram) or non-popular (every column value that is not popular). However, the details of these estimation algorithms are not published. A few online articles, in the form of hacker blogs, made experimental analyses to guess how selectivity estimation works in Oracle Database, but did not yield a clear algorithm [26,27].
SQL-Server is another popular closed-source DBMS. Due to its proprietary nature, implementation details are scarce. According to the official documentation [18], a proprietary kind of histogram with an associated density vector is built in three steps for each attribute. The official documentation [28,29] describes four core assumptions for the selectivity estimation: independence when no correlation information is available, uniformity within histogram bins, inclusion when filtering a column with a constant, and containment when joining distinct values from two histograms [30]. Although the white paper [29] is a publication from SQL-Server that deals with the problem of selectivity estimation, it does not explain the algorithm used for join selectivity. Similar to Oracle, the implemented algorithm is a secret.
To determine whether any informed selectivity estimation takes place when performing inequality joins in SQL-Server and Oracle, we performed the following experiment. Two relations, T1 and T2, with 1000 and 200 rows, respectively, were generated by sampling the range [0, 100] uniformly. They were then joined using the < (less than) operator. Both databases made fairly accurate selectivity estimations for this inequality join: Oracle Database had an estimation error of 3%, and SQL-Server a smaller error of just 0.29%. As such, we know that both systems implement informed estimation algorithms.
In conclusion, although learning-based methods have become a popular research direction for selectivity estimation in recent years, histograms are still the most-commonly used statistics for this purpose in existing DBMS. The recurring types of histogram statistics are equi-depth histograms approximating the distribution of values and singleton histograms of the most-common values. As our investigation indicates, MySQL and PostgreSQL do not implement algorithms for inequality join selectivity estimation and use predefined constants instead. On the other hand, popular commercial DBMS (SQL-Server and Oracle) have implemented algorithms based on histograms, but we could not find any source describing them. This paper addresses this gap by proposing such an algorithm.

4. A Formal Model for Selectivity Estimation

We first start by formalizing the problem of restriction selectivity estimation for the less than (<) operator. Suppose that the goal is to estimate the selectivity of the following operation (expressed in SQL):
  • SELECT *
  • FROM R1
  • WHERE R1.X < c
where c is a constant. Treating the attribute $R_1.X$ as a random variable X, estimating the selectivity of the above operation is equivalent to finding $P(X < c)$. Given the PDF or the CDF of X, $f_X$ or $F_X$, respectively, the selectivity of the operation above can be formalized as
$$P(X < c) = \int_{-\infty}^{c} f_X(x)\,dx = F_X(c) \quad (3)$$
Suppose now that the goal is to estimate the join selectivity for the less than (<) operator. That is, we want to estimate the selectivity of the following operation (expressed in SQL):
  • SELECT *
  • FROM R1, R2
  • WHERE R1.X < R2.Y
Treating the attributes $R_1.X$ and $R_2.Y$ as random variables X and Y, respectively, estimating the selectivity of the above operation can be formulated as finding $P(X < Y)$.
Consider the joint distribution of X and Y, $P(X, Y)$. The probability that a sample (a, b), taken at random from the Cartesian product of the values in $R_1.X$ and $R_2.Y$, equals a given pair of values can be written as follows:
$$\forall a \in R_1.X,\ \forall b \in R_2.Y:\quad P(X = a, Y = b) = P(X = a) \times P(Y = b)$$
or, equivalently,
$$P(X, Y) = P(X)\,P(Y)$$
which is the definition of independent random variables. Note that, when a Cartesian product is involved, either explicitly as in the SQL statement above or implicitly through a join clause, the two variables are independent.
Let $f_{X,Y}$ be the joint PDF of X and Y. With X and Y being independent, it is known that $f_{X,Y}(x, y) = f_X(x)\,f_Y(y)$, with $f_X$ and $f_Y$ the PDFs of X and Y, respectively. Considering $F_X$ to be the CDF of X, the selectivity of the less than (<) operator can be formalized as follows:
$$P(X < Y) = \int_{-\infty}^{+\infty} \int_{-\infty}^{y} f_{X,Y}(x, y)\,dx\,dy = \int_{-\infty}^{+\infty} \int_{-\infty}^{y} f_X(x)\,f_Y(y)\,dx\,dy = \int_{-\infty}^{+\infty} \left( \int_{-\infty}^{y} f_X(x)\,dx \right) f_Y(y)\,dy = \int_{-\infty}^{+\infty} F_X(y)\,f_Y(y)\,dy \quad (4)$$
This formula, thus, presents a solution for estimating the join selectivity. Next, we discuss how to translate it into an algorithm.

5. Implementation in a Database System

In RDBMS implementations, histograms are used as a discrete approximation of the PDF and CDF of attributes. This section maps the theory above into an implementable solution in databases using equi-depth histograms.

5.1. Selectivity Estimation

5.1.1. Restriction Selectivity Estimation

Recall that restriction selectivity estimation is about estimating the selectivity of a predicate in the following form:
  • SELECT *
  • FROM R1
  • WHERE R1.X < c
As described in Section 4, restriction selectivity estimation can be calculated using the CDF of X (Equation (3)). Deriving the CDF from an equi-depth histogram is presented in Equation (2). To find the bin $B_j$ that contains c, one can perform a binary search over the histogram boundaries.
Algorithm 1 illustrates the estimation of restriction selectivity. The $hist$ array represents the equi-depth histogram, stored as an ordered array of bin boundaries. The function binary_search returns the greatest index in this array whose value is less than or equal to a given constant, effectively finding the bin where the constant falls. The rest of the algorithm computes the CDF at c using Equation (2).
Algorithm 1: Restriction selectivity estimation for the expression R1.X < c.
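The following Python sketch illustrates Algorithm 1 as described above (a minimal sketch under the paper's assumptions, not the PostgreSQL implementation; the function name restriction_selectivity and the list-based histogram representation are ours):

    from bisect import bisect_right

    def restriction_selectivity(hist, c):
        """Estimate P(X < c) = F_X(c), Equation (2)."""
        n = len(hist) - 1                      # number of bins
        if c < hist[0]:
            return 0.0                         # c below the histogram: nothing selected
        if c >= hist[-1]:
            return 1.0                         # c above the histogram: everything selected
        j = bisect_right(hist, c) - 1          # binary search for the bin containing c
        # linear interpolation inside bin B_j (uniformity assumption)
        return (j + (c - hist[j]) / (hist[j + 1] - hist[j])) / n

For the running example below, restriction_selectivity([10, 20, 25, 45], 30) returns 0.75, i.e., the 9/12 derived next.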
Using the running example and the following query:
  • SELECT *
  • FROM R1
  • WHERE R1.X < 30
This query yields eight rows, which corresponds to a selectivity of $\frac{8}{12} = \frac{2}{3}$. Using Algorithm 1 with the histograms presented in the running example, the value 30 is found in $B_{X2}$, meaning that the estimated selectivity is $\frac{2}{3} + \frac{1}{3} \cdot \frac{30 - 25}{45 - 25} = \frac{9}{12}$. After multiplying by the attribute's cardinality (12), we obtain an estimated row count of nine, which is a close estimate of the actual result size.
Figure 2 shows a graphical depiction of the PDF of $R_1.X$, which is directly obtainable from its equi-depth histogram. The integral in Equation (2), as well as the estimation calculated above using Algorithm 1, correspond to the highlighted area in the figure.

5.1.2. Join Selectivity Estimation

Join selectivity estimation is mapped into a double integral involving the two PDFs. Equation (4) illustrates that join selectivity can be estimated by using the CDF of X and the PDF of Y. These can be calculated using Equations (1) and (2).
The CDF of X is piecewise linear, each piece being defined on a bin of X's histogram. The PDF of Y is a step function, i.e., piecewise constant, where each piece is defined on a bin of Y's histogram. This leads to the conclusion that their product, which is needed in Equation (4), is a piecewise linear function, with every piece defined on an intersection of X's and Y's bins (see Figure 3 for a graphical depiction using the running example introduced in Section 2.2). By merging the bounds of the two histograms of X and Y into a single sorted array, $sync$, the intersections of X's and Y's bins are of the form $[sync_j, sync_{j+1}[$.
From Equation (1), we know that $f_Y$ is 0 for all values before the first value of $hist_Y$ and 0 after the last value of $hist_Y$. From Equation (2), we know that $F_X$ is 0 for all values before the first value of $hist_X$ and 1 after the last value of $hist_X$.
Analyzing the product $F_X \cdot f_Y$ piecewise, where $hist_{X,0}$ and $hist_{X,n_X}$ denote the first and last boundaries of X's histogram (and similarly for Y):
$$(F_X \cdot f_Y)(c) = \begin{cases} 0 & c \le \max(hist_{X,0}, hist_{Y,0}) \\ F_X(c) \cdot f_Y(c) & \max(hist_{X,0}, hist_{Y,0}) < c < \min(hist_{X,n_X}, hist_{Y,n_Y}) \\ f_Y(c) & hist_{X,n_X} \le c < hist_{Y,n_Y} \\ 0 & c \ge \max(hist_{X,n_X}, hist_{Y,n_Y}) \end{cases} \quad (5)$$
Since the product of $F_X$ and $f_Y$ is linear in each interval of the form $[sync_k, sync_{k+1}[$, Equation (4) can be discretized by the trapezoidal rule, which is exact here because the integrand is linear on each interval:
$$P(X < Y) = \int_{-\infty}^{+\infty} (F_X \cdot f_Y)(y)\,dy = \sum_{k=0}^{n_X + n_Y - 1} \int_{sync_k}^{sync_{k+1}} (F_X \cdot f_Y)(y)\,dy = \sum_{k=0}^{n_X + n_Y - 1} \frac{(F_X \cdot f_Y)(sync_k) + (F_X \cdot f_Y)(sync_{k+1})}{2} (sync_{k+1} - sync_k) \quad (6)$$
To maximize code re-use, it is possible to reorganize this equation into a sum of the CDFs of both X and Y, i.e., so that we can reuse Algorithm 1. The following is derived directly from Equation (3):
$$f_Y(sync_k)(sync_{k+1} - sync_k) = f_Y(sync_{k+1})(sync_{k+1} - sync_k) = \int_{sync_k}^{sync_{k+1}} f_Y(y)\,dy = F_Y(sync_{k+1}) - F_Y(sync_k) \quad (7)$$
The first two steps rely on the fact that $f_Y$ is constant in the interval $[sync_k, sync_{k+1}[$.
Equation (6) can now be re-written using only the CDFs of X and Y as follows:
$$P(X < Y) = \sum_{k=0}^{n_X + n_Y - 1} \frac{(F_X \cdot f_Y)(sync_k) + (F_X \cdot f_Y)(sync_{k+1})}{2}(sync_{k+1} - sync_k) = \frac{1}{2} \sum_{k=0}^{n_X + n_Y - 1} \big( F_X(sync_k) + F_X(sync_{k+1}) \big) \big( F_Y(sync_{k+1}) - F_Y(sync_k) \big) \quad (8)$$
Algorithm 2 illustrates the estimation of the join selectivity of the less than (<) operator using this equation. It, thus, has the advantage of re-using Algorithm 1 to compute $F_X$ and $F_Y$. The creation of the $sync$ array in Algorithm 2 comes at the expense of $O(n_X + n_Y)$ time and space, for duplicating and merging the two sorted histograms. To optimize this, the two histograms can be scanned in parallel, without materializing the sync array. This also allows a further optimization: the algorithm only needs to iterate over the overlapping region of both histograms. All the partial sums before that region are zero, as can be verified in Figure 4. After the overlapping region, the remaining partial sums are equal to what is left of the histogram of Y, $1 - curF_Y$, because all remaining values of Y are greater than the maximum value in X. We adopted these optimizations in our implementation, yet we omit them here for clarity of presentation.
Algorithm 2: Join selectivity estimation algorithm for the less than (<) operator, re-using Algorithm 1 as $F_X$ and $F_Y$.
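A compact Python sketch of Algorithm 2 follows (our illustration, not the submitted C patch), reusing restriction_selectivity from the previous sketch as both F_X and F_Y; it materializes the sync array and omits the optimizations discussed above for clarity:

    def join_selectivity_lt(hist_x, hist_y):
        """Estimate P(X < Y) from two equi-depth histograms, Equation (8)."""
        sync = sorted(set(hist_x) | set(hist_y))   # merged bin boundaries
        sel = 0.0
        for k in range(len(sync) - 1):
            fx_sum = (restriction_selectivity(hist_x, sync[k])
                      + restriction_selectivity(hist_x, sync[k + 1]))
            fy_diff = (restriction_selectivity(hist_y, sync[k + 1])
                       - restriction_selectivity(hist_y, sync[k]))
            sel += fx_sum * fy_diff
        return sel / 2.0

With the running example histograms, join_selectivity_lt([10, 20, 25, 45], [15, 20, 39, 50]) returns approximately 0.643833, matching the worked computation below.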
The goal of Algorithm 2 is to calculate the area under the curve of the product $F_X \cdot f_Y$, represented in Figure 4. Taking the three-bin histograms calculated in Section 2.3 from the running example attributes $R_1.X$ and $R_2.Y$, the materialized $sync$ array has the values [10, 15, 20, 25, 39, 45, 50] after merging, sorting, and removing duplicates. These correspond to the boundaries of the pieces in which the product $F_X \cdot f_Y$ is linear. Stepping through Algorithm 2 with k from 0 to 5, corresponding to the six pieces delimited by the sync array, we arrive at the following sum:
$$P(R_1.X < R_2.Y) = \frac{1}{2} \Big( \big(0 + \tfrac{1}{6}\big)(0 - 0) + \big(\tfrac{1}{6} + \tfrac{1}{3}\big)\big(\tfrac{1}{3} - 0\big) + \big(\tfrac{1}{3} + \tfrac{2}{3}\big)\big(\tfrac{8}{19} - \tfrac{1}{3}\big) + \big(\tfrac{2}{3} + \tfrac{9}{10}\big)\big(\tfrac{2}{3} - \tfrac{8}{19}\big) + \big(\tfrac{9}{10} + 1\big)\big(\tfrac{28}{33} - \tfrac{2}{3}\big) + (1 + 1)\big(1 - \tfrac{28}{33}\big) \Big) = \frac{24221}{37620} \approx 0.643833$$
The correct result has 95 rows, which corresponds to a selectivity of $\frac{95}{144} \approx 0.659722$. After multiplying the estimate by the cardinality of the Cartesian product of both attributes (144) and rounding to the nearest integer, we obtain an estimated row count of $92.712 \approx 93$, which is close to the correct result size.
Figure 4 shows a graphical depiction of the product of the CDF of $R_1.X$ and the PDF of $R_2.Y$, which is directly obtainable from their equi-depth histograms. The integral in Equation (6), as well as the estimation calculated above using Algorithm 2, correspond to the area under the curve in the figure.
Note that the code re-use in Algorithm 2 has a small performance impact. The algorithm has a time complexity of $O((n_X + n_Y)\log(n_X + n_Y))$, since it performs a binary search (twice) for each element of each histogram. This binary search is not necessary, since the two histograms are scanned sequentially and the current indices are known at each iteration. One way to avoid this overhead is to optionally pass j as an input parameter to Algorithm 1, thus reducing the time complexity to $O(n_X + n_Y)$.

5.2. Extending to All Scalar Inequality Operators

Given the restriction and join selectivity estimators for the less than inequality, all scalar inequality operators can be implemented by noting the following equivalences:
Restriction selectivity:
  • $P(X \ge c) = 1 - P(X < c)$
  • $P(X > c) = 1 - P(X < c) - P(X = c)$
  • $P(X \le c) = P(X < c) + P(X = c)$
Join selectivity:
  • $P(X \ge Y) = 1 - P(X < Y)$
  • $P(X > Y) = P(Y < X)$
  • $P(X \le Y) = P(X < Y) + P(X = Y)$
Estimators for equality selections and joins are already implemented by almost all common systems. In case they are missing, one could assume $P(X = c)$ and $P(X = Y)$ to be zero, accepting the resulting slight under- or over-estimation of the selectivity. A sketch of how these equivalences wrap the base estimator is given below.
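As an illustration only (hypothetical helper names; eq_sel stands for the equality selectivity P(X = c), taken as 0 when no equality statistics are available):

    def inequality_selectivity(op, hist, c, eq_sel=0.0):
        """Derive all scalar inequality restrictions from P(X < c)."""
        lt = restriction_selectivity(hist, c)  # P(X < c), Algorithm 1
        if op == '<':
            return lt
        if op == '<=':
            return lt + eq_sel
        if op == '>':
            return 1.0 - lt - eq_sel
        if op == '>=':
            return 1.0 - lt
        raise ValueError('unsupported operator: ' + op)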

5.3. Making Use of Other Statistics

Typically, an RDBMS will collect statistics about nulls, in the form of a fraction of null values, and about Most-Common Values (MCVs), in the form of a singleton histogram. Histograms are then constructed for the remaining part of the data. When the histogram statistics only refer to a fraction of the data, the methods described up to this point only provide an estimation for that fraction. The final estimation must, thus, take nulls and MCVs into account.
As a general way to integrate such additional statistics into the estimation besides the histograms, we note the following: given a non-overlapping partitioning of the data, if each partition j corresponds to a fraction $p_j$ of the original data and the selectivity within that partition is $s_j$, the final selectivity can be calculated by the inner product $p \cdot s$.
Since a value is either null, a most-common value, or accounted for by the histogram, the overall restriction selectivity can be calculated by the following formula:
$$p \cdot s = p_{null}\,s_{null} + p_{mcv}\,s_{mcv} + p_{hist}\,s_{hist} \quad (9)$$

5.3.1. Null Values

All inequality operators are strict; this means that the selectivity of null values is 0. For this reason, the first term in Equation (9) is also 0.

5.3.2. Most-Common Values

MCV statistics maintain pairs of values and their frequencies in the table. They are maintained for the top k frequent values, where k is a statistics collection parameter. Since MCVs represent the data in their original form, it is possible to accurately compute the selectivity for these values.
To estimate the restriction selectivity of an operator using the most-common values, Algorithm 3 can be used. This algorithm computes the selectivity of a Boolean operator on a list of most-common values by adding the frequencies of the most-common values satisfying this Boolean condition.
Algorithm 3: mcv_selectivity(values, fractions, n, op) estimates the restriction selectivity, using only the MCV statistics, for a given Boolean operator op.
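A Python sketch of Algorithm 3 as described (our naming; the paper's version also receives the number of MCV entries, n, which a Python list carries implicitly, and we pass the condition as a one-argument predicate with the constant already bound):

    def mcv_selectivity(values, fractions, op):
        """Restriction selectivity over MCV statistics: sum the fractions
        of the most-common values that satisfy the Boolean condition op."""
        return sum(f for v, f in zip(values, fractions) if op(v))

    # e.g., P(X < 30) restricted to X's most-common values:
    # mcv_selectivity(mcv_vals_x, mcv_fracs_x, lambda v: v < 30)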
For join selectivity estimation, since there is the need to combine statistics from null values, MCVs, and equi-depth histograms for both X and Y, there are nine cases that need consideration, depending on the combination of the values of X and Y, as shown in Figure 5.
For strict operators, such as inequalities, only Cases 5, 6, 8, and 9 need to be calculated. This is because nulls result in empty joins. Case 9 was already handled in Algorithm 2. For Case 5, we iterate over the values in the MCV of Y. For each value, we multiply the fraction represented by that value in Y times the MCV restriction selectivity of the operator in question with the current value of the MCV of Y as the constant. This process is described in Algorithm 4.
Algorithm 4: Join selectivity estimation algorithm for any binary Boolean operator re-using Algorithm 3 as mcv_selectivity(values, fractions, n, op).
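A sketch of Algorithm 4 for the case where both sides are represented by MCVs, reusing mcv_selectivity from the previous sketch (names are ours):

    import operator

    def mcv_mcv_join_selectivity(x_vals, x_fracs, y_vals, y_fracs, op):
        """Join selectivity when both X and Y are represented by MCVs."""
        sel = 0.0
        for b, f_b in zip(y_vals, y_fracs):
            # restriction selectivity over X's MCVs, with b as the constant
            sel += f_b * mcv_selectivity(x_vals, x_fracs, lambda v, b=b: op(v, b))
        return sel

    # e.g., for X < Y: mcv_mcv_join_selectivity(xv, xf, yv, yf, operator.lt)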
    For Cases 6 and 8, Algorithm 5 is used, by swapping the arguments. For each common value in the statistics of X, multiply its fraction by the histogram restriction selectivity of Y using the current value of X as the constant.
Algorithm 5: Join selectivity estimation algorithm for the less than (<) operator, re-using Algorithm 1 as $F_Y$.
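A sketch of Algorithm 5 under the less than (<) operator, for the case where X is summarized by MCVs and Y by a histogram (the other case is obtained by swapping the arguments, as noted above; restriction_selectivity plays the role of F_Y):

    def mcv_hist_join_selectivity_lt(x_vals, x_fracs, hist_y):
        """Join selectivity of X < Y, with X as MCVs and Y as a histogram."""
        sel = 0.0
        for a, f_a in zip(x_vals, x_fracs):
            # P(a < Y) = 1 - F_Y(a), with F_Y computed by Algorithm 1
            sel += f_a * (1.0 - restriction_selectivity(hist_y, a))
        return sel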
In terms of Algorithms 4 and 5, the selectivity of the less than (<) operator involving histograms and most-common values can be estimated by the following formula:
$$hist\_mcv\_selectivity = p_{hist_X}\,p_{hist_Y}\,s_{hist_X \times hist_Y} + p_{hist_X}\,p_{mcv_Y}\,s_{hist_X \times mcv_Y} + p_{mcv_X}\,p_{hist_Y}\,s_{mcv_X \times hist_Y} + p_{mcv_X}\,p_{mcv_Y}\,s_{mcv_X \times mcv_Y} \quad (10)$$
The final selectivity taking null values into account can be estimated as follows:
$$selectivity = (1 - p_{null}) \times hist\_mcv\_selectivity \quad (11)$$

5.4. Implementation for Ranges and Multiple Ranges

The algorithms described above are for scalar types. An advanced type, implemented by many database systems, is the range type. A range type is a tuple (left, right, lc, rc), where left ≤ right are two values of a domain with a total order, and lc and rc specify whether the left and right bounds, respectively, are included in the range. The range type can be parameterized by the type of its bounds, e.g., range(float), range(timestamp), etc. In this section, we describe how the selectivity estimation of the previous sections can be applied to the range type and the respective operators.
PostgreSQL, as an example of a DBMS that has range types, collects the statistics for range attributes in the form of two equi-depth histograms: one for the lower bounds of the ranges and one for the upper bounds. In the following, let X and Y be attributes of the same range type. Furthermore, let X.lower be the variable that represents all the lower bounds of X and X.upper the variable that represents all the upper bounds of X, and similarly for Y. Then, it is possible to estimate the selectivity of the different range operators as follows:
  • P(X << Y) = P(X.upper < Y.lower), where <<, read "strictly left of", yields true when X ends before Y starts;
  • P(X >> Y) = P(X.lower > Y.upper), where >>, read "strictly right of", yields true when X starts after Y ends;
  • P(X &< Y) = P(X.upper < Y.upper), where &<, read "X does not extend to the right of Y", yields true when X ends before Y ends;
  • P(X &> Y) = P(X.lower > Y.lower), where &>, read "X does not extend to the left of Y", yields true when X starts after Y starts;
  • P(X && Y) = 1 − P(X << Y) − P(X >> Y), where && indicates that X and Y overlap;
  • And so on.
It is, however, not possible to accurately estimate the join selectivity of the operators that express total or partial containment. This is mainly because the lower and upper histograms assume independence between the range bounds. For containment operators, we need to relate the two bounds, which explicitly breaks this assumption.
Another consideration in estimating the selectivity of range operators is the fraction of empty ranges since these are not accounted for by the histograms. Depending on the operator, empty ranges are either always included or always excluded when compared to non-empty ranges and, similarly, when compared to other empty ranges.
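To illustrate the mapping of this section, the following sketch (our hypothetical helpers, assuming bound histograms collected on the lower and upper bounds of each range attribute, and ignoring the empty-range fractions discussed above) expresses two of the range operators in terms of the scalar estimator:

    def strictly_left_of_selectivity(x_upper_hist, y_lower_hist):
        # P(X << Y) = P(X.upper < Y.lower)
        return join_selectivity_lt(x_upper_hist, y_lower_hist)

    def overlaps_selectivity(x_lower_hist, x_upper_hist, y_lower_hist, y_upper_hist):
        # P(X && Y) = 1 - P(X << Y) - P(X >> Y)
        return (1.0
                - join_selectivity_lt(x_upper_hist, y_lower_hist)    # P(X << Y)
                - join_selectivity_lt(y_upper_hist, x_lower_hist))   # P(X >> Y)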

6. Experiments

This section evaluates the proposed algorithm in two ways. Firstly, we evaluated the join selectivity estimation accuracy of the algorithm and its relation to the size of the histogram statistics, i.e., the number of bins. In a second step, the accuracy of the proposed solution was compared to the existing solutions in Oracle, SQL-Server, MySQL, and PostgreSQL. The algorithm was implemented in PostgreSQL 15, including support for range operators as described in Section 5.4. The code has been submitted as a patch, and it is currently under review for inclusion in the next release of PostgreSQL. The patch is also included as an artifact with this paper.
The experiments described in this section were run using our implementation in PostgreSQL 15-develop on a Debian virtual machine with 32 GB of disk and 8 GB of RAM. For the first set of experiments, we created two relations, R1 and R2, with range attributes R1.X and R2.Y and cardinalities of 20,390 and 20,060 rows, respectively. Note that, for range types, we used the same algorithm as for scalar types, so the results of this experiment hold for both.
The range values in the two relations were generated to cover a mixture of short, medium, and long ranges. We also included corner cases such as ranges with infinite bounds, empty ranges, and null values in the two relations. The following query was executed varying the number of histogram bins:
  • SELECT *
  • FROM R1, R2
  • WHERE x << y
Note that << denotes the “strictly left of” operator, which returns true if and only if the upper bound of x is less than the lower bound of y. PostgreSQL collects bounds’ histogram statistics for range attributes. To estimate the selectivity of the query above, the histograms of the upper bound of X and lower bound of Y were used as the input for Algorithm 2.
The optimizer statistics collector of PostgreSQL has a parameter called the statistics target, which controls the number of bins in the equi-depth histograms. The default value is 100, and it can be increased up to 10,000. The experiments described in this section were run by incrementing the statistics target in steps of 100, from the default value up to the maximum.
In the experiment, we observed two quantities: (1) the planning time, which is the time taken by the query optimizer to enumerate the alternative query plans and estimate their costs, and (2) the selectivity estimation error, defined as the absolute difference between the estimated and the actual number of rows returned by the query, divided by the cardinality of the Cartesian product of the two relations, which is 409,013,259.
Figure 6 shows the change in planning time (in milliseconds) as the statistics target increases. Apart from two outliers, the planning time showed approximately linear behavior, indicating that the binary search does not have a significant impact in the allowed range of statistics targets. Recall that the analytical complexity is $O((n_X + n_Y)\log(n_X + n_Y))$.
Figure 7 shows the estimation error (on a logarithmic scale) against the number of histogram bins, i.e., by varying the statistics target. As expected, using more bins leads to a lower estimation error. Even with a small histogram of 100 bins, the estimation error was insignificant, amounting to about 1.112%. This error dropped rapidly to less than 0.002% at 900 bins, which corresponds to less than 5% of each relation's size. The error stayed consistently around this value for histograms bigger than 900 bins.
Figure 8 plots the selectivity estimation error against the planning time in milliseconds. The significance of this figure is to illustrate the relation of the expenditure in terms of planning time versus the gain in terms of reduced error. This figure shows that, for the relations used, the planning time did not need to exceed 2 milliseconds to obtain extremely accurate estimations.
Next, we compared our algorithm with the existing solutions used in Oracle, SQL-Server, MySQL, and PostgreSQL. For this experiment, we generated three relations, U, N, and Z, with 1000 rows each. Relation U contains 10 columns (U.A to U.J) with data sampled uniformly from ranges starting at 0, 100, 200, 300, 400, 500, 600, 700, 800, and 900, respectively, each range being 1000 units wide. Relations N and Z each contain 10 columns (N.A to N.J and Z.A to Z.J) with data sampled from a normal distribution with a mean of 1000 and a standard deviation of 300, and from a Zipfian distribution with coefficient s = 1.3, respectively. The following three queries were then run for each pair of columns:
  • Q1: SELECT * FROM U AS U1, U AS U2 WHERE U1.A < U2.column
  • Q2: SELECT * FROM U, N WHERE U.column < N.column
  • Q3: SELECT * FROM U, Z WHERE U.column < Z.column
Figure 9 shows the average estimation error for each query, as returned by the tested DBMS. Our solution is denoted as PostgreSQL Patched. Note that we used a logarithmic scale for the y-axis; otherwise, our solution would not be visible on the graph. As we can see, the results for MySQL and the existing PostgreSQL solution were equally bad. This was expected, as both systems return a constant selectivity of 1/3 for all these queries. Oracle and SQL-Server returned better estimates for Query 1 (below 0.5% error), but struggled with the normal and Zipfian distributions. Our estimator consistently computed accurate estimates for all data distributions, with average estimation errors of 0.02%, 0.05%, and 0.04% for Q1, Q2, and Q3, respectively.
As explained in the Introduction, providing accurate selectivity estimations is crucial for the database optimizer to choose the best execution plan for a specific query. To illustrate the impact of choosing a good or a bad plan, the following simple query will be used:
  • SELECT * FROM U, N, Z WHERE U.A < N.A AND U.A < Z.A
Since this query combines two inequality joins, it is beneficial to first compute the join with the lowest selectivity, which in this case is the join between U and Z. However, the execution plan returned by the current PostgreSQL optimizer computes the join between U and N first, because the returned selectivity estimates are not accurate enough for the optimizer to apply the reordering. With our implementation, the optimizer receives accurate selectivity estimates and is able to reorder the joins. This reordering decreases the query duration from 89 s to 48 s, a 1.8× speedup. This shows how important accurate selectivity estimations can be when combining multiple joins in the same query.

7. Conclusions and Possible Future Work

This paper proposed an algorithm for estimating the selectivity of inequality join predicates, a fundamental problem in databases that, to our knowledge, had not been solved before. This work covered this theoretical gap and further implemented the solution in the open-source database PostgreSQL.
We estimated the inequality join selectivity of two relations using the joint probability density function of the join attributes. In database systems, the integral is discretized using equi-depth histogram synopses of the join attributes. We further structured the join selectivity calculations to reuse the selectivity estimation for the case when one of the arguments is a constant. Since the latter is already implemented in common database systems, the join selectivity estimator builds on existing implementations in a modular fashion. The experimental results validated the high accuracy of the estimation, while the query planning time remained in the acceptable range of a few milliseconds.
This paper generalized the selectivity estimation algorithm to support scalar and 1D range types. As an extension, it would be interesting to support multi-dimensional ranges. This is of particular interest to spatial (2D, 3D) and spatiotemporal (3D, 4D) databases.

Author Contributions

Conceptualization, D.R. and Z.L.; Methodology, D.R. and Z.L.; Software, D.R. and Z.L.; Validation, D.R. and Z.L.; Formal analysis, D.R., Z.L., M.S. (Maxime Schoemans) and M.S. (Mahmoud Sakr); Writing—original draft, D.R. and Z.L.; Writing—review & editing, M.S. (Maxime Schoemans) and M.S. (Mahmoud Sakr); Supervision, M.S. (Maxime Schoemans) and M.S. (Mahmoud Sakr). All authors have read and agreed to the published version of the manuscript.

Funding

Maxime Schoemans is a Research Fellow of the Fonds de la Recherche Scientifique—FNRS.

Data Availability Statement

The datasets and queries used in the experiments section of this study are available through email upon request to any of the corresponding authors.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chamberlin, D.D.; Astrahan, M.M.; Blasgen, M.W.; Gray, J.N.; King, W.F.; Lindsay, B.G.; Lorie, R.; Mehl, J.W.; Price, T.G.; Putzolu, F.; et al. A History and Evaluation of System R. Commun. ACM 1981, 24, 632–646.
  2. Graefe, G. The Cascades Framework for Query Optimization. IEEE Data Eng. Bull. 1995, 18, 19–29.
  3. Graefe, G.; McKenna, W.J. The Volcano optimizer generator: Extensibility and efficient search. In Proceedings of the IEEE 9th International Conference on Data Engineering, Vienna, Austria, 19–23 April 1993; pp. 209–218.
  4. Pitoura, E. Selectivity Estimation. In Encyclopedia of Database Systems; Springer: New York, NY, USA, 2009; p. 2548.
  5. Park, Y.; Zhong, S.; Mozafari, B. QuickSel: Quick Selectivity Learning with Mixture Models. In Proceedings of the 2020 ACM SIGMOD International Conference on Management of Data, Portland, OR, USA, 14–19 June 2020; pp. 1017–1033.
  6. Lan, H.; Bao, Z.; Peng, Y. A Survey on Advancing the DBMS Query Optimizer: Cardinality Estimation, Cost Model, and Plan Enumeration. Data Sci. Eng. 2021, 6, 86–101.
  7. Wang, X.; Qu, C.; Wu, W.; Wang, J.; Zhou, Q. Are We Ready for Learned Cardinality Estimation? Proc. VLDB Endow. 2021, 14, 1640–1654.
  8. Hasan, S.; Thirumuruganathan, S.; Augustine, J.; Koudas, N.; Das, G. Deep Learning Models for Selectivity Estimation of Multi-Attribute Queries. In Proceedings of the 2020 ACM SIGMOD International Conference on Management of Data, Portland, OR, USA, 14–19 June 2020; pp. 1035–1050.
  9. Kipf, A.; Kipf, T.; Radke, B.; Leis, V.; Boncz, P.A.; Kemper, A. Learned Cardinalities: Estimating Correlated Joins with Deep Learning. In Proceedings of the 9th Biennial Conference on Innovative Data Systems Research, Asilomar, CA, USA, 13–16 January 2019.
  10. Yang, Z.; Liang, E.; Kamsetty, A.; Wu, C.; Duan, Y.; Chen, X.; Abbeel, P.; Hellerstein, J.M.; Krishnan, S.; Stoica, I. Deep Unsupervised Cardinality Estimation. Proc. VLDB Endow. 2019, 13, 279–292.
  11. Azhir, E.; Jafari Navimipour, N.; Hosseinzadeh, M.; Sharifi, A.; Darwesh, A. Deterministic and non-deterministic query optimization techniques in the cloud computing. Concurr. Comput. Pract. Exp. 2019, 31, e5240.
  12. Hu, X.; Liu, Y.; Xiu, H.; Agarwal, P.K.; Panigrahi, D.; Roy, S.; Yang, J. Selectivity Functions of Range Queries Are Learnable. In Proceedings of the 2022 International Conference on Management of Data, Philadelphia, PA, USA, 12–17 June 2022; pp. 959–972.
  13. Papenbrock, T.; Bergmann, T.; Finke, M.; Zwiener, J.; Naumann, F. Data Profiling with Metanome. Proc. VLDB Endow. 2015, 8, 1860–1863.
  14. Cormode, G.; Garofalakis, M.; Haas, P.J.; Jermaine, C. Synopses for Massive Data: Samples, Histograms, Wavelets, Sketches. Found. Trends Databases 2012, 4, 1–294.
  15. MySQL Team at Oracle. MySQL 8.0 Reference Manual. Available online: https://dev.mysql.com/doc/refman/8.0/en/optimizer-statistics.html (accessed on 7 March 2023).
  16. PostgreSQL Global Development Group. Statistics Used by the Planner. 1996–2022. Available online: https://www.postgresql.org/docs/14/planner-stats.html (accessed on 7 March 2023).
  17. Oracle. SQL Tuning Guide. 2021. Available online: https://docs.oracle.com/en/database/oracle/oracle-database/21/tgsql/histograms.html (accessed on 7 March 2023).
  18. SQL-Server. Statistics—SQL-Server. Microsoft. 2022. Available online: https://docs.microsoft.com/en-us/sql/relational-databases/statistics/statistics?view=sql-server-ver15 (accessed on 7 March 2023).
  19. Fragoso-Navarro, E.; Cedillo-Hernandez, M.; Garcia-Ugalde, F.; Morelos-Zaragoza, R. Reversible Data Hiding with a New Local Contrast Enhancement Approach. Mathematics 2022, 10, 841.
  20. Juarez-Sandoval, O.U.; Garcia-Ugalde, F.J.; Cedillo-Hernandez, M.; Ramirez-Hernandez, J.; Hernandez-Gonzalez, L. Imperceptible–Visible Watermarking to Information Security Tasks in Color Imaging. Mathematics 2021, 9, 2374.
  21. MySQL Team at Oracle. MySQL-Server. 2000. Available online: https://github.com/mysql/mysql-server/tree/mysql-8.0.29 (accessed on 7 March 2023).
  22. MySQL Team at Oracle. Source Code File Item.h. 2011. Available online: https://github.com/mysql/mysql-server/blob/8d8c986e5716e38cb776b627a8eee9e92241b4ce/sql/item.h#L100 (accessed on 7 March 2023).
  23. PostgreSQL Global Development Group. Row Estimation Examples. 1996–2022. Available online: https://www.postgresql.org/docs/14/row-estimation-examples.html (accessed on 7 March 2023).
  24. PostgreSQL Global Development Group. Selectivity Functions for Standard Operators. 1996–2022. Available online: https://github.com/postgres/postgres/blob/REL_14_STABLE/src/include/utils/selfuncs.h (accessed on 7 March 2023).
  25. PostgreSQL Global Development Group. Source Code File Selfuncs.h. 1996–2022. Available online: https://github.com/postgres/postgres/blob/349803b18fee2476c7c9c84039d04c900ce8d499/src/include/utils/selfuncs.h#L33 (accessed on 7 March 2023).
  26. Lewis, J. Join Cardinality. In Cost-Based Oracle Fundamentals; Apress: Berkeley, CA, USA, 2006; pp. 265–305.
  27. Dell’Era, A. Join Over Histograms. 2008. Available online: http://www.adellera.it/static_html/investigations/join_over_histograms/JoinCardinalityEstimationWithHistogramsExplained.pdf (accessed on 7 March 2023).
  28. SQL-Server. Cardinality Estimation (SQL-Server)—SQL-Server. Microsoft. 2021. Available online: https://docs.microsoft.com/en-us/sql/relational-databases/performance/cardinality-estimation-sql-server?view=sql-server-ver15 (accessed on 7 March 2023).
  29. Sack, J. Optimizing Your Query Plans with the SQL-Server 2014 Cardinality Estimator. Microsoft. 2014. Available online: https://learn.microsoft.com/en-us/previous-versions/dn673537(v=msdn.10) (accessed on 7 March 2023).
  30. Bruno, N.; Chaudhuri, S. Exploiting statistics on query expressions for optimization. In Proceedings of the ACM SIGMOD International Conference on Management of Data, Madison, WI, USA, 3–6 June 2002; pp. 263–274.
Figure 1. Equi-depth histograms of $R_1.X$ and $R_2.Y$ with 3 bins each.
Figure 2. Restriction selectivity estimation of $R_1.X < 30$.
Figure 3. Piece of the product $F_X \cdot f_Y$ used for join selectivity estimation.
Figure 4. Product $F_X \cdot f_Y$ used for join selectivity estimation.
Figure 5. Nine cases for join selectivity estimation.
Figure 6. Query planning time (ms) vs. the number of histogram bins in Algorithm 2.
Figure 7. Log selectivity estimation error vs. the number of histogram bins.
Figure 8. Log selectivity estimation error vs. planning time (ms).
Figure 9. Comparison of join selectivity estimation errors of common DBMS and a patched version of PostgreSQL implementing the proposed algorithm.