Article

Lightweight Implicit Approximation of the Minkowski Sum of an N-Dimensional Ellipsoid and Hyperrectangle

by Martijn Courteaux, Bert Ramlot, Peter Lambert and Glenn Van Wallendael *

IDLab, Ghent University—imec, 9052 Ghent, Belgium

* Author to whom correspondence should be addressed.
Mathematics 2025, 13(8), 1326; https://doi.org/10.3390/math13081326
Submission received: 11 March 2025 / Revised: 11 April 2025 / Accepted: 16 April 2025 / Published: 18 April 2025
(This article belongs to the Section B: Geometry and Topology)

Abstract

This work considers the Minkowski sum of an N-dimensional ellipsoid and hyperrectangle, a combination that is highly relevant due to the use of ellipsoid-adjacent primitives in computer graphics, for example in 3D Gaussian splatting. While parametric representations of this Minkowski sum are available, they are often difficult or too computationally intensive to work with when, for example, performing an inclusion test. For performance-critical applications, a lightweight approximation of this Minkowski sum is preferred over its exact form. To this end, we propose a fast, computationally lightweight, non-iterative algorithm that approximates the Minkowski sum through the intersection of two carefully constructed bounding boxes. Our approximation is a superset that completely envelops the exact Minkowski sum. This approach yields an implicit representation that is defined by a logical conjunction of linear inequalities. For applications where a tight superset of the Minkowski sum is acceptable, the proposed algorithm can substantially improve the performance of common operations such as intersection testing.

1. Introduction

The Minkowski sum, a binary operation in geometry, combines two subsets of a Euclidean space to form a new set. Formally, the Minkowski sum $A \oplus B$ of two sets of vectors $A$ and $B$ is defined as follows:
$A \oplus B = \{ \mathbf{a} + \mathbf{b} \mid \mathbf{a} \in A,\ \mathbf{b} \in B \}.$
Additionally, the Minkowski difference $A \ominus B$ of two sets of vectors $A$ and $B$ is defined as follows:
$A \ominus B = A \oplus (-B) = \{ \mathbf{a} - \mathbf{b} \mid \mathbf{a} \in A,\ \mathbf{b} \in B \}.$
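For finite point sets, these definitions can be evaluated directly by pairwise addition and subtraction; the short NumPy sketch below is our own illustration (the example arrays are arbitrary):

import numpy as np

# Two finite point sets in R^2 (arbitrary example data)
A = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
B = np.array([[0.0, 0.0], [0.5, 0.5]])

# Minkowski sum: every pairwise sum a + b
mink_sum = (A[:, None, :] + B[None, :, :]).reshape(-1, 2)

# Minkowski difference A - B = A + (-B): every pairwise difference a - b
mink_diff = (A[:, None, :] - B[None, :, :]).reshape(-1, 2)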
The Minkowski sum and difference play a critical role in both theoretical research and practical applications across a diverse range of fields, including robotics [1,2], energy [3,4,5], algebraic geometry [6], and computational biology [7], to name a few. In mathematical morphology, the Minkowski sum is better known as dilation, one of the four basic operations. Mathematical morphology has its roots in 2D image processing but has since expanded to related fields, such as computer graphics [8]. The Minkowski sum is also used more broadly in computer graphics for tasks such as constraint enforcement for vector graphics [9] and collision detection [10]. These works typically focus on traditional representations, e.g., vector graphics [9] and meshes [10], with little attention being paid to newer representations. One class of non-traditional representations that has become popular comprises those based on multivariate Gaussian primitives, e.g., the Steered Mixture of Experts [11] and 2D/3D/4D Gaussian splatting [12,13,14]. Multivariate normal distributions are closely related to ellipsoids, as the level sets of their probability density function (or Mahalanobis distance) form concentric ellipsoids that are uniformly scaled versions of one another. Ellipsoids have therefore become of increasing interest in computer graphics.
When an origin-centered ellipsoid is the second operand in a Minkowski difference, the difference becomes equivalent to a Minkowski sum due to the point symmetry of ellipsoids. Numerous methods for computing the boundary of a Minkowski sum involving ellipsoids exist, but most are parametric [15,16,17,18]. Although parametric representations provide precise solutions, they are poorly suited to tests that must be run extremely quickly. This is especially relevant in computer graphics applications, where we want to conduct millions to billions of tests every second [19]. To achieve such speeds, we are willing to accept a cruder approximation of the Minkowski sum, provided that the approximation is guaranteed to envelop the entire Minkowski sum. Furthermore, we are especially interested in the Minkowski sum of an ellipsoid and a hyperrectangle (commonly referred to as a box), as this operation naturally occurs during common computer graphics operations, such as the frustum culling and block-based rasterization of 3D Gaussian splatting [13]. The construction of the exact Minkowski sum used in this scenario is visualized in Figure 1. An ellipsoid and a box intersect only when the origin (i.e., the zero vector) lies within their Minkowski difference; note that this is true for any pair of shapes [20]. More intuitively, taking advantage of the ellipsoid's point symmetry, the ellipsoid intersects the box only if the ellipsoid's center lies within the Minkowski sum of the box and the origin-centered ellipsoid. An example of how this overlap is detected using the Minkowski sum is demonstrated in Figure 2.
To this end, we propose a non-iterative, fast, and computationally lightweight test to determine whether an N-dimensional ellipsoid overlaps with an N-dimensional box's domain. Our method creates an implicit representation by defining the approximated sum as a logical conjunction of $2N$ linear inequalities. This approach not only guarantees a 'tightly fitted' approximation that fully contains the exact solution, but it also offers a simple-to-evaluate and computationally efficient solution. One significant practical implication of this method is its efficiency in identifying, within a specified tolerance, the multivariate normal distributions that intersect a given hyperrectangular subdomain from a large set.

2. Background and Related Work

In this section, we provide an overview of the fundamental mathematical concepts that underlie our work and review relevant existing methods for computing Minkowski sums.
The Minkowski sum, named after Hermann Minkowski, is a binary operation that combines two subsets of a Euclidean space by adding each vector in the first set to each vector in the second set. While this operation has closed-form solutions for certain combinations of shapes, other cases are non-trivial. Examples of cases where the Minkowski sum has straightforward solutions and for which algorithms are available include the following:
  • Convex polygons [21];
  • Non-convex polygons (through their decomposition into convex polygons) [21,22];
  • Three-dimensional triangle meshes [21];
  • Algebraic planar curves [23].
Existing methods for computing Minkowski sums often involve complex mathematical procedures or algorithms. Some methods yield results in a parametric form [15,16,17,23,24,25], while others result in a set of vertices with connectivity information [21,22]. While these solutions are mathematically precise, their form or algorithmic complexity limits their practical application. In general, implicit formulations are preferred for tasks such as intersection testing.
This work focuses on the Minkowski sum of a box and an ellipsoid. To our knowledge, there is no prior work directly addressing this specific combination that has led to a practically efficient and implicit representation. A key insight involves leveraging the properties of zonohedra. A box is a zonohedron and can be expressed as the Minkowski sum of its N orthogonal generator line segments, where N is the dimensionality of the space. Let the i-th generator line segment, centered at the origin, be aligned with the i-th axis and have a half-length of $s_i$. We can represent this line segment as a 'collapsed' ellipsoid $E_i$ whose covariance matrix $R_i$ has exactly one non-zero eigenvalue:
$R_i = \mathbf{e}_i \mathbf{e}_i^T s_i^2, \quad i = 1, \ldots, N,$
where $\mathbf{e}_i$ is the i-th standard basis vector. This transforms the problem of computing the Minkowski sum of an ellipsoid $E$ (with covariance matrix $R$) and a box into computing the Minkowski sum of $m = N + 1$ ellipsoids: $E \oplus E_1 \oplus \cdots \oplus E_N$.
The boundary of the Minkowski sum of m ellipsoids $E_i$ (defined by the covariance matrices $R_i$) can be parameterized exactly [15] as follows:
$S(\mathbf{n}) = \sum_{i=1}^{m} \frac{R_i \mathbf{n}}{\sqrt{\mathbf{n}^T R_i \mathbf{n}}}, \quad \forall \mathbf{n} : \|\mathbf{n}\| = 1,$
where the parameter $\mathbf{n}$ represents the normal vector to the boundary surface pointing outward [15]. However, this parametric form is generally impractical for efficiently determining point containment (i.e., whether a point lies inside the sum). Furthermore, the presence of collapsed ellipsoids (with rank-deficient $R_i$) requires careful handling, as $\mathbf{n}^T R_i \mathbf{n}$ can be zero for normals that are orthogonal to the i-th standard basis vector.
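For illustration, Equation (2) can be evaluated directly for a given normal direction. The sketch below (an illustrative helper of our own, not code from this work) assumes every $\mathbf{n}^T R_i \mathbf{n}$ is strictly positive, which is precisely the condition that fails for collapsed ellipsoids when $\mathbf{n}$ is orthogonal to their generator:

import numpy as np

def minkowski_boundary_point(covariances, n):
    # Evaluate S(n) = sum_i R_i n / sqrt(n^T R_i n) for a unit outward normal n.
    # Assumes n^T R_i n > 0 for every covariance matrix R_i.
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    return sum((R @ n) / np.sqrt(n @ R @ n) for R in covariances)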
Given the limitations of the exact parametric form, approximation techniques are often employed, particularly for constructing an outer ellipsoidal bound. A fundamental tool for constructing the outer bound of a Minkowski sum of convex shapes is the support function $h_K$ of the convex body $K$:
$h_K(\mathbf{u}) = \sup_{\mathbf{x} \in K} \mathbf{x} \cdot \mathbf{u}, \quad \mathbf{u} \in \mathbb{R}^N.$
We restrict ourselves to normalized vectors, i.e., $\mathbf{u} \in S^{N-1}$. As a result, the support function describes the signed distance from the origin to the supporting hyperplane of $K$ with outward normal $\mathbf{u}$, i.e., the farthest extent of $K$ in direction $\mathbf{u}$. It possesses the additivity property for Minkowski sums [26,27]:
$h_{K_1 \oplus K_2} = h_{K_1} + h_{K_2}.$
For an ellipsoid $E$ defined by a positive definite covariance matrix $R$, the support function is as follows [15]:
$h_E(\mathbf{u}) = \sqrt{\mathbf{u}^T R \mathbf{u}}.$
As for the origin-centered collapsed ellipsoid $E_i$ with the covariance matrix from Equation (1), we interpret it as an origin-centered line segment extending from $-s_i \mathbf{e}_i$ to $s_i \mathbf{e}_i$, which allows us to apply the known support function for a line segment [28]:
$h_{E_i}(\mathbf{u}) = \max\{ (s_i \mathbf{e}_i) \cdot \mathbf{u},\ (-s_i \mathbf{e}_i) \cdot \mathbf{u} \} = |s_i u_i|.$
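The support functions of Equations (4)–(6) translate directly into code. The following minimal sketch (function names are our own) evaluates the support of an ellipsoid, of an origin-centered axis-aligned box given by its half-sizes, and of their Minkowski sum via additivity:

import numpy as np

def support_ellipsoid(R, u):
    # h_E(u) = sqrt(u^T R u), Equation (5)
    return np.sqrt(u @ R @ u)

def support_box(half_sizes, u):
    # Sum of the segment supports |s_i u_i| from Equation (6); equals s_box . |u|
    return np.dot(half_sizes, np.abs(u))

def support_sum(R, half_sizes, u):
    # Additivity of support functions for Minkowski sums, Equation (4)
    return support_ellipsoid(R, u) + support_box(half_sizes, u)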
Leveraging the additivity of support functions, a commonly studied family of ellipsoids guaranteed to contain the Minkowski sum of a collection of ellipsoids with covariance matrices $R_1, \ldots, R_m$ can be constructed. An ellipsoid $E(\gamma)$ within this family is parameterized by a vector $\gamma = (\gamma_1, \ldots, \gamma_m) \in \mathbb{R}_{>0}^m$, with its covariance matrix $R(\gamma)$ given by [29,30,31]
$R(\gamma) = \sum_{i=1}^{m} \gamma_i R_i,$
where the parameters must satisfy
$\sum_{i=1}^{m} \frac{1}{\gamma_i} = 1.$
Note that this constraint implies that $\gamma_i \geq 1$. Durieu et al. [31] show that this ensures that the support function of the resulting ellipsoid, $h_{E(\gamma)}(\mathbf{u}) = \sqrt{\mathbf{u}^T R(\gamma) \mathbf{u}}$, bounds the sum of the individual support functions $h_{E_i}(\mathbf{u})$, thus guaranteeing containment: $E_1 \oplus \cdots \oplus E_m \subseteq E(\gamma)$ [31].
The primary objective is often to find the tightest bound within this family, specifically the Minimum-Volume Outer Ellipsoid (MVOE), which is obtained by minimizing the determinant of $R(\gamma)$:
$\gamma_{\mathrm{MVOE}} = \arg\min_{\gamma} \det(R(\gamma)) \quad \text{subject to Equation (8)}.$
While this is a convex optimization problem [31], $\gamma_{\mathrm{MVOE}}$ generally lacks a closed-form solution. It can be found using numerical optimization algorithms, such as Sequential Least Squares Programming (SLSQP) [32], but this can be computationally demanding. For the specific case of $m = 2$, a fast fixed-point algorithm exists [33]. Halder proposed using this fixed-point algorithm as the basis for a heuristic for $m > 2$ by recursively applying this pairwise optimal algorithm to achieve an outer ellipsoid bound ($\gamma_{\mathrm{halder}}$) [33]. While this fixed-point algorithm simplifies for collapsed ellipsoids, the recursive nature of the heuristic means that it remains iterative.
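As an illustration of the optimization in Equation (9), the sketch below formulates it for SciPy's SLSQP solver, minimizing the log-determinant (which has the same minimizer as the determinant). This is an assumed, illustrative formulation of our own; it is not the fixed-point algorithm of [33]:

import numpy as np
from scipy.optimize import minimize

def gamma_mvoe(covariances):
    # Minimize log det(R(gamma)) subject to sum_i 1/gamma_i = 1 and gamma_i >= 1.
    m = len(covariances)
    def objective(gamma):
        R = sum(g * Ri for g, Ri in zip(gamma, covariances))
        sign, logdet = np.linalg.slogdet(R)
        return logdet
    constraint = {"type": "eq", "fun": lambda gamma: np.sum(1.0 / gamma) - 1.0}
    gamma0 = np.full(m, float(m))  # feasible start: sum_i 1/m = 1
    result = minimize(objective, gamma0, method="SLSQP",
                      bounds=[(1.0, None)] * m, constraints=[constraint])
    return result.x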
An alternative approach involves minimizing the trace of the covariance matrix, $\operatorname{tr}(R(\gamma))$, instead of its determinant. This alternative optimization problem does have a known closed-form solution for the $m = 2$ case [30]:
$\gamma_1 = 1 + \sqrt{\frac{\operatorname{tr}(R_2)}{\operatorname{tr}(R_1)}} \quad \text{and} \quad \gamma_2 = 1 + \sqrt{\frac{\operatorname{tr}(R_1)}{\operatorname{tr}(R_2)}}.$
Inspired by this, a computationally tractable heuristic choice for the general m case is as follows [15]:
$\gamma_{\mathrm{chirikjian},i} = \frac{\sum_{j=1}^{m} \sqrt{\operatorname{tr}(R_j)}}{\sqrt{\operatorname{tr}(R_i)}}, \quad i \in \{1, \ldots, m\}.$
This provides a non-iterative method for obtaining the $\gamma$ of an outer ellipsoid, potentially sacrificing tightness compared to the MVOE in exchange for computational speed.
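A minimal sketch of this heuristic (Equation (11)) together with the corresponding outer-ellipsoid covariance from Equation (7); the square-root-of-trace weights satisfy the constraint of Equation (8) by construction (helper names are our own):

import numpy as np

def gamma_chirikjian(covariances):
    # gamma_i = (sum_j sqrt(tr(R_j))) / sqrt(tr(R_i)); satisfies sum_i 1/gamma_i = 1.
    sqrt_traces = np.array([np.sqrt(np.trace(R)) for R in covariances])
    return sqrt_traces.sum() / sqrt_traces

def outer_ellipsoid(covariances, gamma):
    # Covariance of the bounding ellipsoid E(gamma) from Equation (7).
    return sum(g * R for g, R in zip(gamma, covariances))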

3. Proposed Approximation Algorithm

This section details our proposed algorithm for efficiently approximating the Minkowski sum of an N-dimensional ellipsoid and an axis-aligned box, $\Omega_M = \Omega_{\mathrm{ellipsoid}} \oplus \Omega_{\mathrm{box}}$. The core strategy of this algorithm is to construct two distinct oriented bounding boxes, $\Omega_{\mathrm{bbA}}$ and $\Omega_{\mathrm{bbB}}$, each of which is guaranteed to contain the Minkowski sum $\Omega_M$. The intersection of these two boxes, $\Omega_M' = \Omega_{\mathrm{bbA}} \cap \Omega_{\mathrm{bbB}}$, yields a tighter approximation of $\Omega_M$ than either box alone while retaining a simple implicit representation suitable for efficient queries.
We begin by formally defining the input shapes, which are centered at the origin. We assume, without loss of generality, that the box is axis-aligned. If the initial box were arbitrarily oriented by an invertible linear transformation $B \in \mathbb{R}^{N \times N}$, the problem could be transformed to the axis-aligned case by applying the inverse transformation to the ellipsoid, resulting in a modified covariance matrix $R' = B^{-1} R (B^{-1})^T$.
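In code, this reduction is a single transform of the covariance matrix (a brief illustrative sketch, assuming B is the invertible transform that orients the box):

import numpy as np

def to_axis_aligned_frame(R, B):
    # Map the ellipsoid covariance into the frame where the box is axis-aligned:
    # R' = B^{-1} R (B^{-1})^T.
    B_inv = np.linalg.inv(B)
    return B_inv @ R @ B_inv.T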
The axis-aligned box is defined by its per-dimension half-sizes, $\mathbf{s}_{\mathrm{box}} \in \mathbb{R}_{>0}^N$. The region $\Omega_{\mathrm{box}}$ can then be defined as
$\Omega_{\mathrm{box}} = \left\{ \mathbf{x} \in \mathbb{R}^N : \bigwedge_{i=1}^{N} |x_i| \leq s_{\mathrm{box},i} \right\}.$
The ellipsoid, defined by its covariance matrix $R$, occupies the region $\Omega_{\mathrm{ellipsoid}}$:
$\Omega_{\mathrm{ellipsoid}} = \left\{ \mathbf{x} \in \mathbb{R}^N : \mathbf{x}^T R^{-1} \mathbf{x} \leq 1 \right\}.$
Our approximation $\Omega_M'$ relies on constructing two tight bounding boxes, $\Omega_{\mathrm{bbA}}$ and $\Omega_{\mathrm{bbB}}$. The key tool for determining the minimal size of a bounding box that encloses $\Omega_M$ in any given direction is the support function of the Minkowski sum, $h_{\Omega_M}$. Using the additivity property (Equation (4)), the support function of the ellipsoid (Equation (5)), and the support function of the box (Equation (6)), we derive the following:
$h_{\Omega_M}(\mathbf{u}) = h_{\Omega_{\mathrm{ellipsoid}}}(\mathbf{u}) + h_{\Omega_{\mathrm{box}}}(\mathbf{u}) = h_{\Omega_{\mathrm{ellipsoid}}}(\mathbf{u}) + \sum_{i=1}^{N} h_{E_i}(\mathbf{u}) = \sqrt{\mathbf{u}^T R \mathbf{u}} + \sum_{i=1}^{N} |s_{\mathrm{box},i} u_i| = \sqrt{\mathbf{u}^T R \mathbf{u}} + \mathbf{s}_{\mathrm{box}} \cdot |\mathbf{u}|,$
where $|\mathbf{u}|$ denotes the element-wise absolute value of $\mathbf{u}$.
An oriented bounding box $\Omega_{\mathrm{bb}}$, aligned with an orthonormal basis $\{\mathbf{b}_1, \ldots, \mathbf{b}_N\}$ and tightly enclosing $\Omega_M$, can be constructed using the support function $h_{\Omega_M}$. Recall that $h_{\Omega_M}(\mathbf{u})$ gives us the maximum projection of $\Omega_M$ onto the direction $\mathbf{u}$. For the box to tightly contain $\Omega_M$, its boundary planes orthogonal to $\mathbf{b}_i$ must align with the maximum extents of $\Omega_M$ in the $\pm\mathbf{b}_i$ directions, which are given by $h_{\Omega_M}(\mathbf{b}_i)$ and $h_{\Omega_M}(-\mathbf{b}_i)$. Thus, the smallest bounding box $\Omega_{\mathrm{bb}}$ is given by
$\Omega_{\mathrm{bb}} = \left\{ \mathbf{x} \in \mathbb{R}^N : \bigwedge_{i=1}^{N} \big( \mathbf{b}_i \cdot \mathbf{x} \leq h_{\Omega_M}(\mathbf{b}_i) \big) \wedge \big( (-\mathbf{b}_i) \cdot \mathbf{x} \leq h_{\Omega_M}(-\mathbf{b}_i) \big) \right\} = \left\{ \mathbf{x} \in \mathbb{R}^N : \bigwedge_{i=1}^{N} |\mathbf{b}_i \cdot \mathbf{x}| \leq h_{\Omega_M}(\mathbf{b}_i) \right\},$
since $h_{\Omega_M}(\mathbf{u}) = h_{\Omega_M}(-\mathbf{u})$ due to the symmetry of the box and ellipsoid about the origin. We choose two specific orientations for the basis vectors $\mathbf{b}_i$ to construct $\Omega_{\mathrm{bbA}}$ and $\Omega_{\mathrm{bbB}}$, aiming to capture different geometric features of $\Omega_M$.
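Both bounding boxes constructed in Sections 3.1 and 3.2 follow from Equation (15) by substituting a particular orthonormal basis. The following sketch (illustrative helpers of our own, with the basis vectors as columns of `basis`) captures this generic construction:

import numpy as np

def obb_half_sizes(R, half_box, basis):
    # Half-size along each basis vector b_i: h(b_i) = sqrt(b_i^T R b_i) + s_box . |b_i|
    # (Equation (14) evaluated at the orthonormal columns of `basis`).
    return np.array([np.sqrt(b @ R @ b) + half_box @ np.abs(b) for b in basis.T])

def in_obb(x, half_sizes, basis):
    # Point-in-OBB test from Equation (15): |b_i . x| <= h(b_i) for all i.
    return np.all(np.abs(basis.T @ x) <= half_sizes)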

3.1. The Axis-Aligned Bounding Box (bbA)

The first bounding box, $\Omega_{\mathrm{bbA}}$, is aligned with the standard coordinate axes, meaning that its basis vectors are $\mathbf{b}_i = \mathbf{e}_i$. Its half-sizes, $\mathbf{s}_{\mathrm{bbA}}$, are determined by evaluating the support function (Equation (14)) along these axes:
$s_{\mathrm{bbA},i} = h_{\Omega_M}(\mathbf{e}_i) = \sqrt{R_{ii}} + s_{\mathrm{box},i}, \quad i = 1, \ldots, N.$
Intuitively, the half-size along the i-th axis is the sum of the ellipsoid's projected extent onto that axis ($\sqrt{R_{ii}}$) and the box's half-size along that axis ($s_{\mathrm{box},i}$). The region $\Omega_{\mathrm{bbA}}$ is then implicitly defined according to Equation (15):
$\Omega_{\mathrm{bbA}} = \left\{ \mathbf{x} \in \mathbb{R}^N : \bigwedge_{i=1}^{N} |x_i| \leq s_{\mathrm{bbA},i} \right\}.$
This construction is illustrated for N = 2 in Figure 3.

3.2. The Covariance-Oriented Bounding Box (bbB)

The second bounding box, $\Omega_{\mathrm{bbB}}$, is aligned with the principal axes of the ellipsoid. These axes are defined by the normalized eigenvectors $\{\mathbf{v}_1, \ldots, \mathbf{v}_N\}$ of the ellipsoid's covariance matrix $R$ and correspond to the eigenvalues $\{\lambda_1, \ldots, \lambda_N\}$. These are obtained via an eigendecomposition: $R = V \Lambda V^T$, where $V = [\mathbf{v}_1 | \cdots | \mathbf{v}_N]$ and $\Lambda = \operatorname{diag}(\lambda_1, \ldots, \lambda_N)$. (We note that in applications like 3D Gaussian splatting [13], $R$ is often stored in terms of its eigensystem, avoiding the need for an explicit decomposition.)
The half-sizes of $\Omega_{\mathrm{bbB}}$, denoted as $\mathbf{s}_{\mathrm{bbB}}$, are determined by evaluating the support function (Equation (14)) along the eigenvectors $\mathbf{b}_i = \mathbf{v}_i$:
$s_{\mathrm{bbB},i} = h_{\Omega_M}(\mathbf{v}_i) = \sqrt{\lambda_i} + \mathbf{s}_{\mathrm{box}} \cdot |\mathbf{v}_i|, \quad i = 1, \ldots, N,$
for which we used the definition of an eigenvector: $R \mathbf{v}_i = \lambda_i \mathbf{v}_i$. Intuitively, this is the sum of the ellipsoid's semi-axis length along $\mathbf{v}_i$ and the maximal extent of the box projected onto the direction $\mathbf{v}_i$. The region $\Omega_{\mathrm{bbB}}$ is implicitly defined using the basis $V$ according to Equation (15):
$\Omega_{\mathrm{bbB}} = \left\{ \mathbf{x} \in \mathbb{R}^N : \bigwedge_{i=1}^{N} |(V^T \mathbf{x})_i| \leq s_{\mathrm{bbB},i} \right\}.$
The construction of this bounding box is visualized in Figure 4.

3.3. Intersecting the Bounding Boxes

To compute the approximated Minkowski sum $\Omega_M'$, we intersect the two bounding boxes:
$\Omega_M' = \Omega_{\mathrm{bbA}} \cap \Omega_{\mathrm{bbB}} = \left\{ \mathbf{x} \in \mathbb{R}^N : \bigwedge_{i=1}^{N} \big( |x_i| \leq s_{\mathrm{bbA},i} \big) \wedge \big( |(V^T \mathbf{x})_i| \leq s_{\mathrm{bbB},i} \big) \right\}.$
The resulting intersection is illustrated in Figure 5.
Testing whether a point lies within the approximated Minkowski sum region, as shown in Equation (20), is straightforward and computationally efficient. Specifically, the required operations are as follows:
  • 1 eigendecomposition (*);
  • 2N square root computations (*);
  • 1 matrix–vector multiplication;
  • 2N absolute value computations;
  • 2N comparisons.
Operations marked with (*) can be precomputed when the box and ellipsoid are static. Additionally, depending on the use case, the eigenvectors and corresponding eigenvalues might already be known, making the eigendecomposition unnecessary.
In NumPy [34], this logic can be implemented as in Listing 1:
Listing 1. Reference implementation of our proposed algorithm in Python 3 using NumPy 2.
import numpy as np

# Input definitions
cov = np.array([...])              # NxN covariance matrix of the ellipsoid
half_block_size = np.array([...])  # N-vector of box half-sizes
x = np.array([...])                # point to test, N-vector

# Compute bounding box sizes (precomputable):
evals, evecs = np.linalg.eigh(cov)
bbA = np.sqrt(np.diag(cov)) + half_block_size
bbB = np.sqrt(evals) + np.sum(half_block_size[:, None] * np.abs(evecs), axis=0)

# Testing the point:
in_bbA = np.all(np.abs(x) <= bbA)
in_bbB = np.all(np.abs(np.dot(evecs.T, x)) <= bbB)
in_minkowski_approx = np.logical_and(in_bbA, in_bbB)
The above implementation can be trivially extended to efficiently test an array of points.
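For example, M points stored in an array X of shape (M, N) can be tested at once, reusing the precomputed bbA, bbB, and evecs from Listing 1 (X is an assumed input; this vectorized variant is our own sketch):

# X has shape (M, N); bbA, bbB, and evecs are precomputed as in Listing 1.
in_bbA = np.all(np.abs(X) <= bbA, axis=1)          # (M,) boolean mask
in_bbB = np.all(np.abs(X @ evecs) <= bbB, axis=1)  # row-wise V^T x via X @ V
in_minkowski_approx = in_bbA & in_bbB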

4. Results and Discussion

This section uses Monte Carlo estimation to evaluate the performance of the proposed Minkowski sum approximation, $\Omega_M' = \Omega_{\mathrm{bbA}} \cap \Omega_{\mathrm{bbB}}$, and to compare it with alternative approaches. It also discusses the approximation's characteristics and identifies areas for future investigation.

4.1. Evaluation Methodology

To quantify the tightness of the approximation, we use the Volume Ratio (VR), defined as $\mathrm{VR}(A) = \mathrm{Vol}(A) / \mathrm{Vol}(\Omega_M)$, where $A$ is a fully encompassing approximation of the exact Minkowski sum $\Omega_M$. A perfect approximation has a VR of 1.
A direct comparison with prior work is challenging due to the lack of specific methods for approximating the box–ellipsoid Minkowski sum. To address this, we establish comparisons by viewing the box as a sum of degenerate ellipsoids (as discussed in Section 2) and by comparing against approximations from the literature on multi-ellipsoid summation. Here, we note that the computational complexity of the point containment test for a single box is roughly comparable to that of a single ellipsoid. Therefore, we report VR values not only for the proposed intersection $\Omega_M'$ but also for its components, $\Omega_{\mathrm{bbA}}$ and $\Omega_{\mathrm{bbB}}$.
The evaluation was performed via Monte Carlo simulations for dimensions N = 2 to N = 6. In each trial, the lengths of the ellipsoid semi-axes ($\sqrt{\lambda_i}$) and the half-sizes of the box ($s_{\mathrm{box},i}$) were independently sampled from $U(0.2, 1.8)$, simulating a balanced scenario. The ellipsoids were also given a random orientation. All VRs were computed using 100,000 trials.
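One possible way to obtain such a VR estimate (our own illustrative setup, not necessarily the evaluation code used for Table 1) is to sample points uniformly in $\Omega_{\mathrm{bbA}}$, which contains both $\Omega_M$ and $\Omega_M'$, and to classify each point against the approximation and against the exact sum; exact membership reduces to minimizing the squared Mahalanobis distance over the box, a small box-constrained convex problem:

import numpy as np
from scipy.optimize import minimize

def in_exact_sum(x, R_inv, half_box):
    # x lies in box (+) ellipsoid iff some b with |b_i| <= half_box_i
    # satisfies (x - b)^T R^{-1} (x - b) <= 1.
    objective = lambda b: (x - b) @ R_inv @ (x - b)
    b0 = np.clip(x, -half_box, half_box)
    res = minimize(objective, b0, method="L-BFGS-B",
                   bounds=list(zip(-half_box, half_box)))
    return res.fun <= 1.0

def estimate_volume_ratio(R, half_box, n_samples=100_000, seed=0):
    rng = np.random.default_rng(seed)
    evals, evecs = np.linalg.eigh(R)
    bbA = np.sqrt(np.diag(R)) + half_box
    bbB = np.sqrt(evals) + np.sum(half_box[:, None] * np.abs(evecs), axis=0)
    R_inv = np.linalg.inv(R)
    X = rng.uniform(-bbA, bbA, size=(n_samples, len(half_box)))  # uniform in bbA
    # The bbA condition holds by construction of the samples, so only bbB is checked.
    in_approx = np.all(np.abs(X @ evecs) <= bbB, axis=1)
    in_exact = np.array([in_exact_sum(x, R_inv, half_box) for x in X])
    return in_approx.mean() / in_exact.mean()  # Vol(approximation) / Vol(exact sum)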

4.2. Results and Analysis

The estimated average VRs are presented in Table 1. The results clearly show that the proposed intersection $\Omega_M'$ yields the lowest average VR across the tested dimensionalities in this balanced scenario, outperforming all discussed outer ellipsoid bounds (even without considering $\Omega_{\mathrm{bbB}}$), validating this intersection approach.
While generally effective, the performance of $\Omega_M'$ exhibits sensitivity to the specific geometric configuration considered. Firstly, if the ellipsoid's principal axes align closely with the coordinate axes, $\Omega_{\mathrm{bbA}}$ and $\Omega_{\mathrm{bbB}}$ become similar, reducing the benefit gained from their intersection. Secondly, their relative scales matter. If the size of the ellipsoid is negligible compared to that of the box, $\Omega_{\mathrm{bbA}}$ is a near-perfect bound by itself. Conversely, if the size of the box is negligible, the true sum is nearly an ellipsoid, and single-ellipsoid approximations, such as $E(\gamma_{\mathrm{halder}})$ and $E(\gamma_{\mathrm{chirikjian}})$, will offer a tighter fit than our box-based intersection.
Furthermore, Table 1 reveals that the VR tends to increase with the number of dimensions N. We hypothesize that this trend arises from two main factors. First, as N increases, a larger portion of the Minkowski sum's volume lies near the edges, where approximations introduce errors. Second, the boundary of the Minkowski sum involves interactions between the ellipsoid and all k-faces of the N-dimensional box. Since the number of these faces grows exponentially with N, this significantly increases geometric complexity. Our approximation only accounts for the box's vertices (0-faces) and facets ($(N-1)$-faces), intentionally simplifying the structure to maintain computational efficiency. This simplification becomes increasingly limiting in higher dimensions, where intermediate k-faces contribute more substantially to the true boundary.

4.3. Future Directions

The observed performance characteristics of our method, and particularly its degradation in higher dimensions, suggest avenues for future work. A promising direction would be to enhance the approximation by intersecting more than two bounding volumes. Investigating systematic methods for selecting additional oriented bounding boxes, potentially targeting different k-faces or even incorporating other shapes like ellipsoids into the intersection, could yield tighter bounds. The challenge lies in balancing the improved accuracy with the added computational cost of constructing and querying a more complex intersection region.

5. Conclusions

This paper presents a novel, non-iterative, computationally efficient algorithm for approximating the Minkowski sum of an N-dimensional ellipsoid and box. The proposed method defines the approximate sum as the logical conjunction of the conditions defining two carefully constructed bounding boxes: one axis-aligned and the other oriented along the ellipsoid's eigenvectors. These bounding boxes are each designed to fully contain the exact Minkowski sum, ensuring that the approximation fully envelops the exact solution. By considering the overlap between the bounding boxes, the algorithm provides an approximation that is closely fitted to the exact solution. Its computational simplicity and non-iterative nature make it suitable for performance-critical applications, such as the efficient culling of multivariate normal distributions in computer graphics.

Author Contributions

Conceptualization, M.C., P.L. and G.V.W.; methodology, M.C.; software, M.C.; validation, M.C. and B.R.; formal analysis, M.C., B.R., P.L. and G.V.W.; investigation, M.C., B.R., P.L. and G.V.W.; resources, M.C., P.L. and G.V.W.; data curation, M.C.; writing—original draft preparation, M.C. and B.R.; writing—review and editing, B.R., P.L. and G.V.W.; visualization, M.C.; supervision, P.L. and G.V.W.; project administration, P.L. and G.V.W.; funding acquisition, M.C., P.L. and G.V.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded in part by the Research Foundation–Flanders (FWO) under Grant 1SA7919N, IDLab (Ghent University–imec), Flanders Innovation & Entrepreneurship (VLAIO), and the European Union.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Ruan, S.; Poblete, K.L.; Wu, H.; Ma, Q.; Chirikjian, G.S. Efficient Path Planning in Narrow Passages for Robots With Ellipsoidal Components. IEEE Trans. Robot. 2023, 39, 110–127. [Google Scholar] [CrossRef]
  2. Liao, Y.; Wu, Y.; Zhao, S.; Zhang, D. Unmanned Aerial Vehicle Obstacle Avoidance Based Custom Elliptic Domain. Drones 2024, 8, 397. [Google Scholar] [CrossRef]
  3. Wu, Z.; Zou, Y.; Zheng, F.; Liang, N. Research on Optimal Scheduling Strategy of Microgrid Considering Electric Vehicle Access. Symmetry 2023, 15, 1993. [Google Scholar] [CrossRef]
  4. Wang, X.; Wang, X.; Liu, Y.; Xiao, C.; Zhao, R.; Yang, Y.; Liu, Z. A Sustainability Improvement Strategy of Interconnected Data Centers Based on Dispatching Potential of Electric Vehicle Charging Stations. Sustainability 2022, 14, 6814. [Google Scholar] [CrossRef]
  5. Öztürk, E.; Rheinberger, K.; Faulwasser, T.; Worthmann, K.; Preißinger, M. Aggregation of Demand-Side Flexibilities: A Comparative Study of Approximation Algorithms. Energies 2022, 15, 2501. [Google Scholar] [CrossRef]
  6. Ðapić, P.; Pavkov, I.; Crvenković, S.; Tanackov, I. Generating Integrally Indecomposable Newton Polygons with Arbitrary Many Vertices. Mathematics 2022, 10, 2389. [Google Scholar] [CrossRef]
  7. Bernholt, T.; Eisenbrand, F.; Hofmeister, T. Constrained Minkowski sums: A geometric framework for solving interval problems in computational biology efficiently. Discret. Comput. Geom. 2009, 42, 22–36. [Google Scholar] [CrossRef]
  8. Suriyababu, V.K.; Vuik, C.; Möller, M. Towards a High Quality Shrink Wrap Mesh Generation Algorithm Using Mathematical Morphology. Comput.-Aided Des. 2023, 164, 103608. [Google Scholar] [CrossRef]
  9. Minarčík, J.; Estep, S.; Ni, W.; Crane, K. Minkowski Penalties: Robust Differentiable Constraint Enforcement for Vector Graphics. In Proceedings of the SIGGRAPH’24: ACM SIGGRAPH 2024 Conference Papers, Denver, CO, USA, 27 July–1 August 2024; Association for Computing Machinery: New York, NY, USA, 2024. [Google Scholar] [CrossRef]
  10. Govindaraju, N.K.; Lin, M.C.; Manocha, D. Fast and reliable collision culling using graphics hardware. In Proceedings of the ACM Symposium on Virtual Reality Software and Technology, New York, NY, USA, 10–12 November 2004; VRST’04. pp. 2–9. [Google Scholar] [CrossRef]
  11. Verhack, R.; Sikora, T.; Van Wallendael, G.; Lambert, P. Steered Mixture-of-Experts for Light Field Images and Video: Representation and Coding. IEEE Trans. Multimed. 2020, 22, 579–593. [Google Scholar] [CrossRef]
  12. Huang, B.; Yu, Z.; Chen, A.; Geiger, A.; Gao, S. 2D Gaussian Splatting for Geometrically Accurate Radiance Fields. In Proceedings of the SIGGRAPH’24: ACM SIGGRAPH 2024 Conference Papers, Denver, CO, USA, 27 July–1 August 2024; Association for Computing Machinery: New York, NY, USA, 2024. [Google Scholar] [CrossRef]
  13. Kerbl, B.; Kopanas, G.; Leimkuehler, T.; Drettakis, G. 3D Gaussian Splatting for Real-Time Radiance Field Rendering. ACM Trans. Graph. 2023, 42, 4. [Google Scholar] [CrossRef]
  14. Yang, Z.; Yang, H.; Pan, Z.; Zhang, L. Real-time Photorealistic Dynamic Scene Representation and Rendering with 4D Gaussian Splatting. arXiv 2024, arXiv:2310.10642. [Google Scholar] [CrossRef]
  15. Chirikjian, G.S.; Shiffman, B. Applications of convex geometry to Minkowski sums of m ellipsoids in R^N: Closed-form parametric equations and volume bounds. Int. J. Math. 2021, 32, 2140009. [Google Scholar] [CrossRef]
  16. Ruan, S.; Chirikjian, G.S. Closed-form Minkowski sums of convex bodies with smooth positively curved boundaries. Comput.-Aided Des. 2022, 143, 103133. [Google Scholar] [CrossRef]
  17. Lee, I.K.; Kim, M.S.; Elber, G. Polynomial/Rational Approximation of Minkowski Sum Boundary Curves. Graph. Model. Image Process. 1998, 60, 136–165. [Google Scholar] [CrossRef]
  18. Ruan, S.; Poblete, K.L.; Li, Y.; Lin, Q.; Ma, Q.; Chirikjian, G.S. Efficient Exact Collision Detection between Ellipsoids and Superquadrics via Closed-form Minkowski Sums. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 1765–1771. [Google Scholar] [CrossRef]
  19. Courteaux, M.; Mareen, H.; Ramlot, B.; Lambert, P.; Van Wallendael, G. Dimensionality Reduction for the Real-Time Light-Field View Synthesis of Kernel-Based Models. Electronics 2024, 13, 4062. [Google Scholar] [CrossRef]
  20. Gilbert, E.; Johnson, D.; Keerthi, S. A fast procedure for computing the distance between complex objects in three-dimensional space. IEEE J. Robot. Autom. 1988, 4, 193–203. [Google Scholar] [CrossRef]
  21. Ghosh, P.K. A unified computational framework for Minkowski operations. Comput. Graph. 1993, 17, 357–378. [Google Scholar] [CrossRef]
  22. Agarwal, P.K.; Flato, E.; Halperin, D. Polygon decomposition for efficient construction of Minkowski sums. Comput. Geom. 2002, 21, 39–61. [Google Scholar] [CrossRef]
  23. Kaul, A.; Farouki, R.T. Computing Minkowski Sums of Plane Curves. Int. J. Comput. Geom. Appl. 1995, 05, 413–432. [Google Scholar] [CrossRef]
  24. Yan, Y.; Chirikjian, G.S. Closed-form characterization of the Minkowski sum and difference of two ellipsoids. Geom. Dedicata 2015, 177, 103–128. [Google Scholar] [CrossRef]
  25. Fahim Golestaneh, A. A Closed-Form Parametrization and an Alternative Computational Algorithm for Approximating Slices of Minkowski Sums of Ellipsoids in R3. Mathematics 2023, 11, 137. [Google Scholar] [CrossRef]
  26. Gruber, P.M. Convex and Discrete Geometry; Springer: Berlin/Heidelberg, Germany, 2007; Volume 336. [Google Scholar] [CrossRef]
  27. Schneider, R. Convex Bodies: The Brunn–Minkowski Theory; Cambridge University Press: Cambridge, UK, 2013; Volume 151. [Google Scholar] [CrossRef]
  28. Ghosh, P.K.; Kumar, K.V. Support function representation of convex bodies, its application in geometric computing, and some related representations. Comput. Vis. Image Underst. 1998, 72, 379–403. [Google Scholar] [CrossRef]
  29. Sholokhov, O. Minimum-volume ellipsoidal approximation of the sum of two ellipsoids. Cybern. Syst. Anal. 2011, 47, 954–960. [Google Scholar] [CrossRef]
  30. Kurzhanski, A.; Vályi, I. Ellipsoidal Calculus for Estimation and Control; Springer: Berlin/Heidelberg, Germany, 1997. [Google Scholar]
  31. Durieu, C.; Walter, E.; Polyak, B. Multi-input multi-output ellipsoidal state bounding. J. Optim. Theory Appl. 2001, 111, 273–303. [Google Scholar] [CrossRef]
  32. Kraft, D. A software package for sequential quadratic programming. In Forschungsbericht- Deutsche Forschungs- und Versuchsanstalt fur Luft- und Raumfahrt; DLR German Aerospace Center—Institute for Flight Mechanics: Koln, Germany, 1988. [Google Scholar]
  33. Halder, A. On the Parameterized Computation of Minimum Volume Outer Ellipsoid of Minkowski Sum of Ellipsoids. In Proceedings of the 2018 IEEE Conference on Decision and Control (CDC), Miami Beach, FL, USA, 17–19 December 2018; pp. 4040–4045. [Google Scholar] [CrossRef]
  34. Harris, C.R.; Millman, K.J.; van der Walt, S.J.; Gommers, R.; Virtanen, P.; Cournapeau, D.; Wieser, E.; Taylor, J.; Berg, S.; Smith, N.J.; et al. Array programming with NumPy. Nature 2020, 585, 357–362. [Google Scholar] [CrossRef]
Figure 1. Illustration of the exact Minkowski sum boundary (solid green) constructed for a rectangle (blue) and an ellipse. For visual clarity, only the ellipses positioned at the rectangle's corner points (dashed green outlines) are shown, as they define the boundary's extremes.
Figure 2. Illustration showing how the Minkowski sum of an ellipse and a rectangle can be used to determine their intersection. When the center point of the ellipse (shown in gray) lies within the Minkowski sum (depicted in green), it indicates that the ellipse intersects or touches the rectangle (shown in blue).
Figure 3. Example construction of the 2D axis-aligned bounding box $\Omega_{\mathrm{bbA}}$.
Figure 4. Example construction of the 2D eigenvector-aligned bounding box $\Omega_{\mathrm{bbB}}$.
Figure 5. Example construction of the 2D intersection of $\Omega_{\mathrm{bbA}}$ and $\Omega_{\mathrm{bbB}}$.
Table 1. Monte Carlo estimates (100,000 samples per N) of the average Volume Ratio (VR), with 99.9% confidence intervals, for N = 2 to 6 dimensions. The proposed intersection method ($\Omega_M'$) and its components ($\Omega_{\mathrm{bbA}}$ and $\Omega_{\mathrm{bbB}}$) are compared against several of the outer ellipsoid approximations discussed in Section 2. Lower VR values indicate tighter approximations.

Method                              | Iterative Creation? | N = 2         | N = 3         | N = 4         | N = 5         | N = 6
Ellipsoid bounds E(γ): γ_MVOE       | Yes (Slow)          | 1.231 ± 0.001 | 1.597 ± 0.002 | 2.134 ± 0.003 | 2.913 ± 0.005 | 4.034 ± 0.008
Ellipsoid bounds E(γ): γ_halder     | Yes (Fast)          | 1.255 ± 0.001 | 1.684 ± 0.002 | 2.350 ± 0.005 | 3.382 ± 0.009 | 4.972 ± 0.015
Ellipsoid bounds E(γ): γ_chirikjian | No                  | 1.256 ± 0.001 | 1.656 ± 0.002 | 2.250 ± 0.003 | 3.126 ± 0.006 | 4.413 ± 0.009
Ours: Ω_bbA                         | No                  | 1.094 ± 0.001 | 1.261 ± 0.001 | 1.513 ± 0.002 | 1.872 ± 0.003 | 2.371 ± 0.005
Ours: Ω_bbB                         | No                  | 1.390 ± 0.002 | 2.362 ± 0.004 | 4.669 ± 0.010 | 10.40 ± 0.025 | 25.54 ± 0.068
Ours: Ω_M′                          | No                  | 1.028 ± 0.000 | 1.160 ± 0.001 | 1.388 ± 0.001 | 1.725 ± 0.002 | 2.205 ± 0.004
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
