Article

Mesh Clustering and Reordering Based on Normal Locality for Efficient Rendering

School of Computer Science and Engineering, Chung-Ang University, 84 Heukseok-ro, Dongjak-gu, Seoul 06974, Korea
* Author to whom correspondence should be addressed.
Symmetry 2022, 14(3), 466; https://doi.org/10.3390/sym14030466
Submission received: 17 January 2022 / Revised: 16 February 2022 / Accepted: 24 February 2022 / Published: 25 February 2022
(This article belongs to the Section Computer)

Abstract:
Recently, the size of models for real-time rendering has been increasing significantly for realism, and many graphics applications are being developed on mobile devices with relatively limited hardware power. Therefore, improving rendering speed remains important in graphics. Back-face culling is a core speed-up technique that removes back-facing polygons, which do not contribute to the final image. In this paper, we present a mesh clustering and reordering method based on normal coherence for back-face culling at an earlier stage than the current approach, which removes back faces after the vertex shader on the GPU. In the pre-computation, our method first vertically clusters the mesh into multiple stripes based on the latitude of the face normal vectors and sorts each stripe in ascending order of longitude. At runtime, our method computes a potentially visible set of faces for the current camera view by excluding back faces from the clustered and reordered face list, and draws only that set. Experiments show that rendering with our method is more efficient than with traditional methods, especially for large, static models.

1. Introduction

Hidden surface removal, or hidden surface determination, has been studied since the very early days of computer graphics and has played a major role in improving performance. It is especially important for interactive or real-time applications, as the size of models has been drastically increasing for the realistic representation of objects. Hidden surface removal can be largely divided into view-frustum culling, occlusion culling, and back-face culling. View-frustum culling excludes polygons that lie outside the view frustum, and occlusion culling excludes pixels or polygons obscured by other objects depending on the depth. In back-face culling, applicable when rendering closed surface objects, polygons whose normal vectors face away from the camera are excluded. The general idea of these visibility culling methods is to save the computational resources spent processing geometric primitives that do not contribute to the final image by excluding those primitives at an early stage of the graphics pipeline [1,2]. Many visibility-culling methods use spatial and temporal coherence to improve efficiency, since adjacent geometric primitives or frames show similar visibilities [3,4,5]. Geometric clustering is also useful for visibility culling by processing adjacent primitives together [6,7,8,9,10,11]. Mesh clustering is widely used for other applications as well, such as reducing mesh transmission time [12], simplification [13], segmentation [14], etc. [15,16,17]. Geometric primitives can be reordered [3,18], structured [19], or processed on the GPU [3,20] to improve the efficiency of visibility culling.
Among these techniques, back-face culling is a basic technique used in most real-time graphics applications. When the vertices of every polygon are oriented, the polygon normal vector can be computed. A polygon is back facing when its normal faces away from the view position. By avoiding rendering these back-facing polygons, the renderer can save roughly half of the computing resources for polygon operations. In modern graphics libraries, such as OpenGL, the rendering pipeline on the GPU is roughly divided into three main stages: vertex processing, rasterization, and fragment processing. The back-facing polygons are excluded in the primitive assembly stage, where vertex data are collected and composed into primitives, such as triangles, after vertex processing. Since this back-face culling usually happens on the GPU, back faces go through many processes before they are culled [1]. In this paper, we propose a method to reduce the amount of geometry computation on the GPU by excluding most of the back faces on the CPU based on normal coherence. Our method is a conservative visibility culling, which includes all the visible primitives from a specific view point and may include some invisible primitives as well. Such a set is frequently called a potentially visible set (PVS) [21].
In this paper, we present a mesh clustering and reordering method based on normal vector coherence for high-performance back-face culling. Our method pre-computes the mesh clustering and reordering, in which the faces of the original mesh are reordered based on the coherence of their normal vectors. This is used to compute a potentially visible set for a given view point that contains all the front faces and may include some back faces as well. This operation is performed at a low cost on the CPU, and the potentially visible set is determined at an early stage and transferred to the GPU pipeline, saving substantial computational resources. The method can be easily implemented and is expected to be useful for efficient rendering in real-time applications, such as games and virtual reality. This paper presents the following contributions.
  • We propose a mesh clustering and reordering method based on the latitudes and longitudes of the face normal vectors.
  • We conduct experiments to determine the mesh clustering sizes for efficient rendering.
  • We present a simple and efficient back-face culling method by determining potentially visible sub-lists of faces from the clustered and reordered mesh.

2. Previous Works

There has been much research on rendering 3D scenes efficiently, as the sizes of 3D modeling datasets used in modern real-time graphics applications keep increasing. In particular, visibility culling, which excludes at an early stage geometric data that will not contribute to the final image, is one of the representative rendering speed-up techniques, and it has been studied in various forms. Cohen-Or et al. [2] surveyed methods of visibility determination, such as frustum culling, back-face culling, and occlusion culling. Yoon et al. [22] proposed a method for interactive rendering of massive models. They presented a model with a clustered hierarchy of progressive meshes and used it for occlusion culling and out-of-core rendering by fetching and drawing new visible patches from disk. Serpa et al. [23] performed tests analyzing the number of triangles and draw calls with space partitioning and visibility culling methods, and proposed a method based on draw-call-wise optimization. They later extended their work to dynamic scenes, focusing on the trade-off between the visibility culling ratio and the number of draw calls by using a spatial partitioning structure to reduce the number of draw calls [24].
Xue et al. [20] presented an efficient rendering method for cloud computing environments using GPU-based visibility culling and parallel rendering with a cluster of virtual machines. Dong et al. [25] presented a crowd rendering system that integrates level-of-detail and visibility culling techniques for efficiently rendering an animated crowd. Anglada et al. [4] proposed a method to estimate the visibility at two different levels for animated scenes. Koch and Wimmer [21] presented a visibility computation method that samples locations and determines a potentially visible set of triangles using ray casting. Lee et al. [19] also used ray casting for fine-grained visibility. They first found coarse groups of potential occludees in the bounding volume hierarchy and then refined them with ray casting. Gonakhchyan [5] presented an occlusion culling method considering spatial and temporal coherence of visibility. His method checks occlusion on the CPU instead of checking a depth buffer downloaded from the GPU. He also proposed a rendering method for dynamic scenes, which estimates the number of hardware occlusion queries required for efficient rendering [26]. Zhang et al. [27] regularly partitioned the terrain scene and calculated its potentially visible patch set using a visibility analysis algorithm for efficient terrain rendering. Ibrahim et al. [28] presented probabilistic occlusion culling for molecular dynamics simulations, using confidence maps to estimate probabilistic visibility.
Methods that rearrange vertex or polygon data in a 3D model, or restructure them into new forms, are also widely used. Sander et al. [29] proposed a performance improvement technique that minimizes the cache miss ratio for vertex operations inside the GPU by rearranging the polygon order. Han et al. [30] proposed a triangle reordering approach for efficient rendering that reduces overdraw while maintaining cache efficiency for animated meshes. Abstracting and organizing the normal vector information of a 3D model is mainly used in areas such as silhouette extraction [31] and visibility culling [6,7,8,9,10]. In particular, Kumar et al. [6] and Pastor [9] used mesh patches that cluster geometrically close polygons for efficient rendering.
Kumar et al. [6,7] proposed a method that divides the polygon model into layered clusters; back-face culling is then performed for each cluster based on frame-to-frame coherence. Zhang et al. [8] proposed a back-face culling method using normal masks, where each bit is associated with a cluster of normals. Their method determines the back faces using bit operations on the polygons' normal masks by finding clusters whose normals are close to the view direction. Pastor [9] proposed a visibility culling method that generates polygonal patches and takes images of the model from spherically sampled viewpoints. The model is rendered by rebuilding the visible parts from the visible patches at sampled viewpoints. Diaz-Gutierrez et al. [10] presented a method to represent a mesh with a single triangle strip for high-performance rendering. Their method generates a single strip under various constraints for efficient visibility culling or vertex cache coherence. The visibility culling method proposed by de Lucas et al. [3] reorders objects in depth order using temporal coherence and then discards occluded surfaces using the early depth test of the GPU. Unterguggenberger et al. [11] presented an efficient rendering method using small geometry clusters, called meshlets, enabling visibility culling for animated meshlets.

3. Method

In this section, we describe efficient rendering that excludes back faces using our mesh clustering and reordering method. Our method receives a triangle mesh with face normal vectors as input. It first goes through pre-computations to cluster and reorder the faces in the mesh based on their normal vectors. At runtime, the range of normal vector directions that can face front is computed according to the direction of the viewpoint, and the set of faces that can be front faces is defined as a potentially visible set (PVS). The PVS contains all the front faces. By rendering only the potentially visible set, most of the back faces are eliminated at the beginning of the rendering process, resulting in improved performance. Figure 1 shows the overall process of our method.
Section 3.1 presents the basic idea of our method, Section 3.2 describes the mesh clustering and reordering method, and Section 3.3 describes how we compute a potentially visible set (PVS) at runtime and efficiently render the scene using the PVS.

3.1. Overview

Back-face culling is a method to improve rendering speed by excluding the invisible back faces among the polygons of a mesh model. The normal vector of each polygon determines this: a polygon is a back face if the angle between its normal vector and the view direction is greater than 90 degrees or less than −90 degrees. In three-dimensional space, the direction of a normal vector can be expressed as a 3D unit vector or as two angles, latitude and longitude.
Let us first consider a simple two-dimensional scene with an orthographic projection. In this case, a normal vector can be represented by a single angle $\theta$. Let us define a potentially visible set (PVS) based on the range of normal vectors of faces that can face front. Then, the potentially visible set of normal vectors is the set of $\theta$ whose angle with the view direction is between −90 and 90 degrees. For example, Figure 2a shows faces in a rendering scene for a given viewpoint (camera). The arrows represent the normal vectors of the faces, and the faces with red normal vectors are front faces. Since the faces with gray arrows are back faces, we do not need to draw them. Figure 2b shows the normal vectors of the faces in (a) and clearly shows that the red arrows face the view direction V. Here, if the faces are sorted in increasing order of normal direction $\theta$, we can easily obtain a sub-list of front faces. In Figure 2b, the gray bar shows the list of faces sorted by their normal directions, and the red bar shows the front faces, which form a sub-list whose angle with the view direction is between −90 and 90 degrees (in the gray right semicircle). Since we sorted the faces by normal direction, all the front faces are adjacent. Therefore, we obtain one or two lists of front faces, which can be drawn with one or two draw calls.
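To make this 2D case concrete, the following is a minimal sketch (our illustration in Python, not code from the paper): faces are sorted once by normal angle in pre-computation, and at runtime the front faces for an orthographic view direction are recovered as one or two contiguous index ranges by binary search.

```python
import bisect
import math

# Sketch of the 2D idea above: sort faces by normal angle once, then the
# front faces for a view direction form one or two contiguous sub-lists.

def sort_faces_by_normal_angle(normals):
    """normals: list of (nx, ny) unit vectors; returns (face order, sorted angles)."""
    angles = [math.atan2(ny, nx) % (2 * math.pi) for nx, ny in normals]
    order = sorted(range(len(normals)), key=lambda i: angles[i])
    return order, [angles[i] for i in order]

def front_face_sublists(sorted_angles, view_dir):
    """Front faces satisfy n . (-view_dir) > 0, i.e. their normal angle lies
    within 90 degrees of the direction toward the viewer; returns index ranges
    into the sorted order."""
    t = math.atan2(-view_dir[1], -view_dir[0]) % (2 * math.pi)
    lo = (t - math.pi / 2) % (2 * math.pi)
    hi = (t + math.pi / 2) % (2 * math.pi)

    def cut(a, b):
        return (bisect.bisect_left(sorted_angles, a),
                bisect.bisect_right(sorted_angles, b))

    if lo < hi:
        return [cut(lo, hi)]                     # one sub-list, one draw call
    return [cut(0.0, hi), cut(lo, 2 * math.pi)]  # wraps around 0: two sub-lists
```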
A potentially visible set (PVS) is determined differently depending on the projection method. In orthographic projection, the view direction is the same at all positions; therefore, the potentially visible normal vectors are the same for all surface points, as shown in Figure 3a. In perspective projection, however, the view directions differ between surface points, resulting in different potentially visible normal vectors at different positions. In Figure 3b, the yellow shaded region shows the potentially visible normal vectors at point A, and the blue shaded region shows those at point B. Since we want to obtain a visible subset of the sorted face list, the PVS must include all the possibly visible faces based on their normal vectors, regardless of their positions. Therefore, the PVS of a mesh model must include the potentially visible normal vectors at all points, which is the whole shaded region in the figure.
Let the potentially visible set (PVS) of faces for a mesh model M be P, the PVS of normal vectors for M be $P_n$, and the bounding sphere of M be S. Additionally, let the PVS of normal vectors at a point $\mathbf{x}$ be $P_n(\mathbf{x})$. In orthographic projection, a global view direction $\mathbf{v}$ is defined, which is the same for all $\mathbf{x}$. Therefore, all $P_n(\mathbf{x})$ are the same, and the PVS of normal vectors $P_n$ is defined as follows:
$$P_n = \{\, \mathbf{n} \mid \mathbf{n} \cdot \mathbf{v} \ge 0 \,\} \tag{1}$$
In perspective projection, a global view position $\mathbf{v}$ is defined, and the view direction at each face becomes $\mathbf{v} - \mathbf{x}$. The PVS of normal vectors $P_n(\mathbf{x})$ at each point is defined as follows:
$$P_n(\mathbf{x}) = \{\, \mathbf{n} \mid \mathbf{n} \cdot (\mathbf{v} - \mathbf{x}) \ge 0 \,\} \tag{2}$$
Since the view direction changes with the position of the face in perspective projection, the PVS of normal vectors also changes. To pre-compute the sorted list of faces and compute the PVS of faces at runtime, the PVS of normal vectors $P_n$ for a mesh model is defined as the union of all $P_n(\mathbf{x})$:
$$P_n = \bigcup_{\mathbf{x} \in S} P_n(\mathbf{x}) \tag{3}$$
In two-dimensional cases, the PVS of normal vectors $P_n$ is defined as a range of the normal direction $\theta$. Finally, the PVS of faces P is defined as follows, and it is a sub-list of the pre-computed sorted face list:
$$P = \{\, f_i \mid \mathbf{n}_i \in P_n, \text{ where } \mathbf{n}_i \text{ is the normal vector of } f_i \in M \,\} \tag{4}$$
The conventional back-face culling method determines back faces independently for each face, and it is executed after the vertex shader on the GPU. In our method, the CPU can quickly determine the PVS of faces for the model through mesh reordering, and only the PVS is transferred to the GPU, improving the rendering speed.
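The set definitions above reduce to two constant-time membership tests, sketched below in our own code using the text's symbols. The closed-form reduction of Equation (3) follows from maximizing $\mathbf{n} \cdot (\mathbf{v} - \mathbf{x})$ over the bounding sphere; this is our derivation, not spelled out in the paper.

```python
# PVS membership tests for Equations (1)-(3). v is the view direction
# (orthographic) or the view position (perspective); O and r are the center
# and radius of the bounding sphere S; all normals are unit length.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def in_pvs_orthographic(n, v):
    # Eq. (1): n has a non-negative component along the view direction v
    # (v points from the surface toward the viewer).
    return dot(n, v) >= 0.0

def in_pvs_perspective(n, v, O, r):
    # Eq. (3): n . (v - x) >= 0 for SOME x in S. The maximum of n . (v - x)
    # over the sphere is n . (v - O) + r, so the union over all x collapses
    # into a single closed-form test.
    return dot(n, [vi - oi for vi, oi in zip(v, O)]) + r >= 0.0
```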

3.2. Mesh Clustering and Reordering

In three-dimensional cases, normal vectors can be represented by two angles, latitude $\theta$ and longitude $\phi$. The latitude $\theta$ is defined as the angle between the Y axis and the normal vector, in the range $0 \le \theta \le \pi$. The longitude $\phi$ is the angle between the Z axis and the normal vector, in the range $0 \le \phi \le 2\pi$. Here, we assume that Y is the upward direction.
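As a concrete reading of these conventions, a unit normal can be converted to $(\theta, \phi)$ as follows. This is a minimal sketch of ours; in particular, the winding direction chosen for $\phi$ is our assumption, and any fixed convention works as long as it is applied consistently.

```python
import math

# Convert a unit face normal to (latitude, longitude): latitude theta is
# measured from the +Y axis, longitude phi from the +Z axis around Y.

def normal_to_lat_lon(n):
    nx, ny, nz = n
    theta = math.acos(max(-1.0, min(1.0, ny)))   # 0 <= theta <= pi
    phi = math.atan2(nx, nz) % (2.0 * math.pi)   # 0 <= phi < 2*pi
    return theta, phi
```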
First, suppose that we reorder the mesh using only the longitude. Then, the faces of the mesh are sorted in ascending order of their longitude in the pre-computation stage. Computing and rendering the potentially visible set is then the same as in the two-dimensional case described in Section 3.1.
If we also consider the latitude, one sorted list of faces is not enough. In this paper, we generate multiple vertical stripes by clustering the mesh based on the latitudes of the face normal vectors, as shown in Figure 4. The faces in each stripe have similar latitudes and are sorted in increasing order of their longitudes. Computing and rendering the potentially visible set of each stripe is the same as in the two-dimensional case. Figure 4 shows the potentially visible set in green for a given view point, and the three circles on the right show the different potentially visible sets for different stripes.
For the vertical clustering, in the range $[0, \pi]$ of latitude $\theta$, $V + 1$ uniformly spaced samples $\{\Theta_0, \Theta_1, \Theta_2, \ldots, \Theta_V\}$ are computed, where $\Theta_0 = 0$ and $\Theta_V = \pi$. Using these samples, we generate V stripes with a uniform range of latitude, vertically clustering all faces in the mesh model. Let the set of all faces be F. The vertically clustered stripe $S_i$, $0 \le i \le V - 1$, is defined as follows:
$$S_i = \{\, f_k \mid \Theta_i \le \theta_k < \Theta_{i+1} \,\}, \quad \theta_k = \text{latitude of } \mathbf{n}_k, \ \mathbf{n}_k = \text{normal vector of } f_k \in F \tag{5}$$
Here, the faces with latitude $\pi$ are included in stripe $S_{V-1}$. These stripes disjointly contain all faces in the mesh. The faces of each stripe are sorted in increasing order of longitude $\phi$, and the stripes are stored in one array in increasing order of latitude. Suppose that the number of faces in stripe $S_i$ is $n_i$ and the sorted faces are $f_0^i, f_1^i, \ldots, f_{n_i}^i$. Then, all the faces of the mesh model are stored in the following order. This clustering and ordering is pre-computed once for each model.
$$f_0^0,\ f_1^0,\ \ldots,\ f_{n_0}^0,\ f_0^1,\ f_1^1,\ \ldots,\ f_{n_1}^1,\ \ldots,\ f_0^{V-1},\ f_1^{V-1},\ \ldots,\ f_{n_{V-1}}^{V-1}$$
At runtime, we compute the potentially visible set and generate zero to two sub-arrays of visible faces for each stripe. Figure 5 shows the vertical clustering with three stripes for the hand and the Venus models. Different colors represent faces in different stripes.
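A compact sketch of this pre-computation follows (our code, not the authors'; it assumes the normal_to_lat_lon helper above and represents faces as (id, θ, ϕ) triples).

```python
import math

# Cluster faces into V latitude stripes (Eq. (5)) and sort each stripe by
# longitude, then concatenate the stripes into one reordered face array.

def cluster_and_reorder(faces, V):
    """faces: list of (face_id, theta, phi) triples."""
    stripes = [[] for _ in range(V)]
    for face_id, theta, phi in faces:
        i = min(int(theta / math.pi * V), V - 1)  # latitude == pi goes to S_{V-1}
        stripes[i].append((phi, face_id))
    for s in stripes:
        s.sort()                                  # ascending longitude per stripe
    reordered, offsets = [], []
    for s in stripes:
        offsets.append(len(reordered))            # start offset of each stripe
        reordered.extend(face_id for _, face_id in s)
    return reordered, offsets
```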

3.3. Runtime Rendering

For runtime rendering, the potentially visible set (PVS) of each stripe is computed based on the current view point and drawn in real time, using the reordered mesh model. To generate the PVS of faces for a stripe, we compute the range of potentially visible longitudes for the stripe. Then, the PVS of each stripe, which is represented as zero to two sub-lists of the face list, is rendered. Each sub-list can be rendered with one draw call.
In orthographic projection, we can easily compute the visible range of longitudes for each stripe from the global view direction. To compute the PVS in perspective projection, we use the bounding sphere of the mesh. For each stripe of the mesh, we can also compute a bounding cylinder, as shown in Figure 4. Figure 6 shows the potentially visible sets for different view points in perspective projection. The green areas represent the potentially visible sets, and the red line shows the view direction at the center. Each stripe works the same as in the 2D case described in Section 3.1. Since we determine the potentially visible set based only on the normal vectors, we must assume that a face could be at any point in the model in order to handle a dynamic camera view. Therefore, the potentially visible set is the union of the potentially visible sets at all points, similar to the 2D case shown in Figure 3b.
To compute the range of longitude angles for the potentially visible set, we draw a cone from the viewpoint V tangent to the bounding sphere of the mesh. This tangent cone is shown as dotted lines in Figure 7. Let $I_c$ be the plane containing the circle of intersections between the tangent cone and the bounding sphere, and let $I_c'$ be the plane intersecting the sphere symmetrically in the other hemisphere. Then $I_c'$ is perpendicular to $\overline{OV}$, and its distance to V is $|OV| + r^2/|OV|$, where r is the radius of the bounding sphere. In orthographic projection, $I_c' = I_c$ and the distance to V is $|OV|$. If we assume that all faces lie on the surface of the bounding sphere, the parallel lines of latitude represent the vertically clustered stripes, and all faces in front of the intersection plane $I_c'$ are included in the potentially visible set. By computing the intersections between $I_c'$ and the bounding cylinder of each stripe, the range of longitude angles for the stripe's potentially visible set is obtained. If the half space bounded by the plane $I_c'$ does not intersect the bounding cylinder, the potentially visible set of the stripe is empty, and the stripe is not drawn. If the half space bounded by $I_c'$ includes the whole top or bottom circle of the bounding cylinder, all faces in the stripe are included in the PVS, as shown in Figure 6c. In practice, we compute the intersections between $I_c'$ and the top and bottom circles of the bounding cylinder and take the circle whose intersections are closer to the viewpoint, so that all front faces are included, as shown in Figure 6b. Let A and B be the intersections between $I_c'$ and the closer circle of a stripe, as shown in Figure 8. Additionally, let O be the center of the bounding cylinder, and $\alpha$ and $\beta$ be the angles from the Z axis to $\overline{OA}$ and $\overline{OB}$, respectively. The faces in a stripe are sorted in increasing order of longitude from 0 to $2\pi$. When $\alpha < \beta$, the PVS is one sub-list of faces between $\alpha$ and $\beta$ (see Figure 8a). When $\alpha > \beta$, the PVS cannot be represented with one sub-list because 0 and $2\pi$ are not adjacent in the array; it is therefore represented as two sub-lists of faces, 0 to $\beta$ and $\alpha$ to $2\pi$ (see Figure 8b). Our method draws only these sub-lists of faces, and any back faces remaining in this PVS are culled by the conventional back-face culling method. In orthographic projection, all back faces are culled by the PVS.
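The per-stripe longitude range admits a closed form. The sketch below is our geometric reconstruction under a simplification: the stripe is modeled as a single circle of radius ρ at height h around the Y axis, with the bounding-sphere center O taken as the origin (the paper instead intersects the plane with the nearer of the bounding cylinder's top and bottom circles, which follows the same pattern). A point q on the circle is potentially visible when it lies in front of $I_c'$, i.e. $q \cdot u \ge -r^2/d$.

```python
import math

# Compute the longitude arc [alpha, beta] of one stripe's PVS. u is the unit
# vector from the sphere center O (origin) toward the viewpoint V at distance
# d; r is the bounding-sphere radius. Points on the stripe circle are
# q(phi) = (rho*sin(phi), h, rho*cos(phi)), with longitude measured from +Z.

def stripe_pvs_arc(rho, h, u, r, d):
    A, B = rho * u[0], rho * u[2]      # q . u = A*sin(phi) + B*cos(phi) + h*u[1]
    C = -r * r / d - h * u[1]          # keep q . u >= -r^2/d  (in front of I_c')
    R = math.hypot(A, B)
    if R < 1e-12:                      # viewing straight along Y: all or nothing
        return "all" if C <= 0.0 else "empty"
    c = C / R
    if c <= -1.0:
        return "all"                   # whole stripe is potentially visible
    if c > 1.0:
        return "empty"                 # stripe is entirely back-facing
    phi0 = math.atan2(A, B)            # longitude closest to the viewer
    delta = math.acos(c)
    alpha = (phi0 - delta) % (2 * math.pi)
    beta = (phi0 + delta) % (2 * math.pi)
    return alpha, beta                 # one sub-list if alpha < beta, else two
```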
To efficiently find faces within the PVS range in the sorted list, we use indices of uniformly sampled angles in $[0, 2\pi]$. The indexing divides the longitude into N ranges, $[0, 2\pi/N), [2\pi/N, 2 \cdot 2\pi/N), [2 \cdot 2\pi/N, 3 \cdot 2\pi/N), \ldots, [(N-1) \cdot 2\pi/N, 2\pi]$, and stores the starting index of the face list for each range. In the pre-computation stage, we add the faces of each stripe into a list for one of these ranges and then concatenate the lists in increasing order of longitude. When $\alpha$ and $\beta$ are computed at runtime, the sub-lists for the PVS are efficiently generated by finding their start and end positions using this index. Algorithm 1 shows the pseudocode for our mesh clustering and reordering as well as the runtime rendering.
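A sketch of this uniform longitude index follows (our code). Bucket boundaries are rounded conservatively, so the resulting sub-lists may admit a few extra faces near α and β but never drop a front face.

```python
import math

# Per-stripe longitude index: starts[b] is the first position in the stripe's
# sorted face list whose longitude is >= b*2*pi/N; starts[N] is a sentinel.

def build_longitude_index(sorted_phis, N):
    starts, j = [], 0
    for b in range(N):
        while j < len(sorted_phis) and sorted_phis[j] < b * 2 * math.pi / N:
            j += 1
        starts.append(j)
    starts.append(len(sorted_phis))
    return starts

def pvs_sublists(starts, N, alpha, beta):
    lo = starts[int(alpha / (2 * math.pi) * N)]            # round alpha down
    hi = starts[min(int(beta / (2 * math.pi) * N) + 1, N)] # round beta up
    if alpha < beta:
        return [(lo, hi)]                 # one contiguous sub-list
    return [(0, hi), (lo, starts[N])]     # wraps past 2*pi: two sub-lists
```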
Algorithm 1: Pseudocode for rendering with mesh clustering and reordering.
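Algorithm 1 appears as an image in the published version. As a rough paraphrase of its structure, the sketch below stitches together the helpers from the previous sections; the mesh and camera interfaces (face_normals, longitude, stripe_radius, stripe_height, center, draw, and so on) are hypothetical placeholders of ours, not an actual API.

```python
# Our paraphrase of Algorithm 1: pre-computation (clustering, reordering,
# longitude indexing) and runtime rendering of per-stripe PVS sub-lists.

def precompute(mesh, V, N):
    faces = [(fid, *normal_to_lat_lon(n)) for fid, n in mesh.face_normals()]
    reordered, stripe_offsets = cluster_and_reorder(faces, V)
    indices = []
    for i in range(V):
        lo = stripe_offsets[i]
        hi = stripe_offsets[i + 1] if i + 1 < V else len(reordered)
        phis = [mesh.longitude(fid) for fid in reordered[lo:hi]]
        indices.append(build_longitude_index(phis, N))
    return reordered, stripe_offsets, indices

def render(mesh, camera, reordered, stripe_offsets, indices, V, N, r):
    d = camera.distance_to(mesh.center)
    u = camera.direction_from(mesh.center)   # unit vector toward the viewpoint
    for i in range(V):
        arc = stripe_pvs_arc(mesh.stripe_radius(i), mesh.stripe_height(i), u, r, d)
        if arc == "empty":
            continue                         # whole stripe is back-facing
        if arc == "all":
            spans = [(0, indices[i][N])]     # whole stripe is in the PVS
        else:
            spans = pvs_sublists(indices[i], N, *arc)
        for lo, hi in spans:                 # one draw call per contiguous span
            camera.draw(reordered, stripe_offsets[i] + lo, stripe_offsets[i] + hi)
```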

4. Results

Our method renders 3D models efficiently by drawing only the potentially visible set (PVS), which excludes most of the back faces on the CPU. The PVS can still contain some back faces in perspective projection, and these are culled at the GPU stage using the conventional method. The mesh clustering and reordering is pre-computed once for each model. The pre-computation time is approximately proportional to the number of faces, as shown in Figure 9; it took a few seconds for models with hundreds of thousands of faces. Table 1 shows the pre-computation time required for each model.
In orthographic projection, all back faces are culled by the PVS. In perspective projection, the PVS may include some back faces, which are culled by the conventional method. The ratio of back faces culled by the PVS depends on the size of the model and the distance to the view point. For a sphere model, this ratio can be calculated analytically, and we can approximate the ratio for general models with this formula. Let r be the radius of the bounding sphere of the model and d be the distance from the center of the sphere to the view point. Then, the ratio of back faces culled by the PVS is as follows:
$$\frac{d - r}{d + r} \tag{6}$$
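This formula follows from a spherical-cap argument; the short derivation below is our own reasoning, consistent with the values quoted from Figure 10 next.

```latex
% For a unit normal n on a sphere of radius r centered at O, viewed from a
% point V at distance d, with u the unit vector from O toward V:
%   back-facing:        n . (V - x) < 0 at x = O + r n    <=>   n.u < r/d
%   culled by the PVS:  n . (V - x) < 0 for all x in S    <=>   n.u < -r/d
% The cap n.u < t covers a fraction (1 + t)/2 of the sphere's area, hence
\[
\frac{\text{back faces culled by the PVS}}{\text{all back faces}}
  = \frac{(1 - r/d)/2}{(1 + r/d)/2}
  = \frac{d - r}{d + r}.
\]
```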
Figure 10 shows this ratio of back faces culled by the PVS according to the distance to the view point, in perspective projection. The horizontal axis is the distance from the center of the model to the view point, and the vertical axis is the culling ratio, which equals the ratio of front faces in the PVS. If the culling ratio is 1, as in orthographic projection, all the back faces are culled by the PVS. According to Equation (6) and Figure 10, the smaller the model and the greater the distance to the view point, the greater the ratio of back faces culled by the PVS. When the distance between the view point and the center of the sphere is r, which means the camera touches the bounding sphere, no back faces are culled by the PVS. However, as the distance increases, the culling ratio increases rapidly: at a distance of 2r, 33% of the back faces are culled, and 50% and 60% are culled at distances 3r and 4r, respectively.
Figure 11 shows how many back faces of the Venus model the PVS culls according to the distance to the view point. The blue areas represent the back faces culled by the PVS. The distances from the center of the model to the view point are 2r, 4r, and 6r, respectively, where r is the radius of the bounding sphere. The greater the distance to the view point, the more back faces our method culls. The back faces not culled by the PVS are culled at the GPU stage using the conventional method. Figure 12 shows the difference between the back faces culled by the PVS and all the back faces, in perspective projection. Figure 12a,c show the back faces culled by the PVS in blue; the grey areas thus show the faces in the PVS. Figure 12b,d show all the back faces in blue.
We conducted an experiment on how well our proposed method filters the back faces of various models compared to the conventional back-face culling method, using models with various shapes and polygon counts. Table 2 shows the results. The ratio of back faces differs with the shape of the model and its normal vector distribution.
The larger the number of vertical clusters V, the more finely the latitudes of the normal vectors are divided and the fewer back faces the PVS of each stripe contains. However, as the number of vertical clusters increases, the number of draw calls sent to the GPU increases, which incurs overhead and can decrease performance. Therefore, it is necessary to find the number of vertical clusters that yields the maximum efficiency for a given model size. To measure the performance difference according to the face count of the model and the number of vertical clusters, we conducted an experiment with spheres, for which the desired face counts are easy to set. We measured the rendering time for drawing 125 spheres at various resolutions. Table 3 shows the time for rendering spheres with our method under various vertical clusterings compared to the conventional rendering method. Our method does not obtain a performance improvement with low-resolution models: the time spent on the draw-call overhead exceeds the time saved by our method because the absolute number of back faces is too small. From the 65K resolution upward, our method outperforms the conventional method, and the improvement increases with the resolution. Moreover, as the number of faces in the model increases, our method performs better with more vertical clusters. Rendering time was improved by 53.8% for 65K, 63.0% for 261K, and 83.8% for 1056K faces at the vertical clustering showing the maximum performance. The method thus performs better for bigger models, because the time it saves grows much larger than the overhead of the additional draw calls.
Figure 13 shows the results of our experiment on the number of longitude ranges, N. The x-axis is the number of longitude ranges (N), and the y-axis is the time difference as a percentage of the minimum rendering time for each model. The rendering time decreases as N increases and converges, especially for large models. For the experiments in this paper, we use N = 512. For the number of vertical clusters, V, we use the results of the sphere experiment shown in Table 3: for each model, we find the sphere whose face count is closest to the model's face count and use that sphere's best V value, as encoded in the sketch below. If the face count of the model is larger than 1056K, we use V = 256 in this paper.
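The helper below (ours, not from the paper) encodes that lookup. The dictionary keys are approximate face counts transcribed from Table 3, and models below the 65K threshold fall back to conventional rendering, where our method did not pay off.

```python
# Best-performing V per sphere resolution, read off the bold entries of
# Table 3; exact key values for 261K and 1056K are our rounded placeholders.
BEST_V_BY_FACES = {65_024: 16, 261_000: 32, 1_056_000: 256}

def choose_vertical_clusters(face_count):
    if face_count > 1_056_000:
        return 256                       # per the text, V = 256 above 1056K faces
    if face_count < 65_024:
        return None                      # fall back to conventional rendering
    nearest = min(BEST_V_BY_FACES, key=lambda f: abs(f - face_count))
    return BEST_V_BY_FACES[nearest]
```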
We also conducted an experiment to test the rendering time for various 3D models. Models with various shapes and polygon counts were used to represent the environments of various graphics applications. Table 4 shows the results in perspective projection. The number of vertical clusters was set to the value showing the best performance for each model. For the Spot and Vase models, which have few polygons, performance decreases when our method is applied, because the overhead of draw calls exceeds the gain from our method, as in the previous experiment with spheres. On the other hand, from the Venus model upward, the rendering time is drastically reduced. For the Armadillo model, the rendering time was reduced by more than half, showing the best result. Figure 14 shows the back faces culled by our method for various models at a distance of 4r in perspective projection; many back faces are culled at an early stage. In orthographic projection, our method shows the same performance regardless of the distance from the object center to the viewpoint. On average, rendering in orthographic projection is 3.46% faster than rendering in perspective projection when the object is at 10r from the viewpoint, where r is the radius of the bounding sphere.
Our method, however, has some limitations. Since the algorithm is based on pre-computation, it is difficult to apply in situations where polygons are dynamic, such as deformations or animations. However, when applied to static objects such as trees and buildings, a significant performance improvement can be expected. In particular, since our method has an advantage for distant objects, it will be useful in applications, such as virtual reality and flight simulation, where there are many remote background models. In addition, it is difficult to combine our method with instanced rendering, since the same object can be rendered differently depending on its position and the viewpoint. However, since our method shows good performance on models above a certain size, it is reasonable not to combine it with instanced rendering, which is generally used to draw small objects.

5. Conclusions

We presented a mesh clustering and reordering method for efficient back-face culling. In the pre-computation step, our method vertically clusters the input triangle mesh into multiple stripes and reorders the faces in each stripe based on the horizontal angles (longitudes) of the face normal vectors. At runtime, our method generates potentially visible sub-lists of faces from the clustered and reordered mesh based on the current viewpoint. We conducted experiments to test the performance of our method on various models with various clustering sizes. Our method shows a large performance improvement, especially for large static models, and it will be useful for real-time applications. The proposed method can be easily implemented and applied to existing systems. Since 3D applications on mobile devices, which have relatively low hardware power, are being actively developed, we expect our method to be especially useful in such environments. For future research, we would like to develop vertical clustering with variable sizes, or adaptive vertical clustering that analyzes the shape and normal vector distribution of a model. We will also study how to optimize performance in perspective projection by using a bounding volume more compact than a bounding sphere. Finally, we would like to extend our method to dynamic scenes.

Author Contributions

Conceptualization, C.H.L. and S.K.; methodology, C.H.L. and S.K.; software, S.K.; validation, C.H.L. and S.K.; formal analysis, S.K.; investigation, S.K.; resources, S.K.; writing—original draft preparation, S.K.; writing—review and editing, C.H.L.; visualization, S.K.; supervision, C.H.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported in part by the National Research Foundation of Korea (NRF) funded by the Ministry of Education under Grant NRF-2013R1A1A2060582.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Cohen-Or, D. Visibility, Problems, Techniques, and Applications. In Course Notes: SIGGRAPH 2001; ACM: New York, NY, USA, 2001; p. 30.
  2. Cohen-Or, D.; Chrysanthou, Y.L.; Silva, C.T.; Durand, F. A survey of visibility for walkthrough applications. IEEE Trans. Vis. Comput. Graph. 2003, 9, 412–431.
  3. de Lucas, E.; Marcuello, P.; Parcerisa, J.-M.; Gonzalez, A. Visibility Rendering Order: Improving Energy Efficiency on Mobile GPUs through Frame Coherence. IEEE Trans. Parallel Distrib. Syst. 2019, 30, 473–485.
  4. Anglada, M.; de Lucas, E.; Parcerisa, J.-M.; Aragon, J.L.; Gonzalez, A. Early Visibility Resolution for Removing Ineffectual Computations in the Graphics Pipeline. In Proceedings of the IEEE International Symposium on High Performance Computer Architecture, Washington, DC, USA, 16–20 February 2019; pp. 635–646.
  5. Gonakhchyan, V.I. Occlusion Culling Algorithm Based on Software Visibility Checks. Program. Comput. Soft. 2020, 46, 454–462.
  6. Kumar, S.; Manocha, D.; Garrett, W.; Lin, M. Hierarchical back-face culling. In Proceedings of the Eurographics Workshop on Rendering, Porto, Portugal, 17–19 June 1996; pp. 231–240.
  7. Kumar, S.; Manocha, D.; Garrett, W.; Lin, M. Hierarchical back-face computation. Comput. Graph. 1999, 23, 681–692.
  8. Zhang, H.; Hoff, K.E., III. Fast backface culling using normal masks. In Proceedings of the ACM Symposium on Interactive 3D Graphics, Providence, RI, USA, 27–30 April 1997; p. 103.
  9. Pastor, O.E.M. Visibility Preprocessing Using Spherical Sampling of Polygonal Patches. In Proceedings of the Eurographics, Saarbrücken, Germany, 2–6 September 2002.
  10. Diaz-Gutierrez, P.; Bhushan, A.; Gopi, M.; Pajarola, R. Single-strips for fast interactive rendering. Vis. Comput. 2006, 22, 372–386.
  11. Unterguggenberger, J.; Kerbl, B.; Pernsteiner, J.; Wimmer, M. Conservative Meshlet Bounds for Robust Culling of Skinned Meshes. Comput. Graph. Forum 2021, 40, 57–69.
  12. Sun, Y.; Ma, J.; She, J.; Zhao, Q.; He, L. View-Dependent Progressive Transmission Method for 3D Building Models. ISPRS Int. J. Geo-Inf. 2021, 10, 228.
  13. Lyu, W.; Wu, W.; Zhang, L.; Wu, Z.; Zhou, Z. Laplacian-based 3D mesh simplification with feature preservation. Int. J. Model. Simul. 2019, 10, 1950002.
  14. Nguyen, T.T.; Dahl, V.A.; Barentzen, J.A.; Dahl, A.B. Deformable Mesh Evolved by Similarity of Image Patches. In Proceedings of the IEEE International Conference on Image Processing, Taipei, Taiwan, 22–25 September 2019; pp. 2731–2735.
  15. Li, R.; Peng, Q. Surface Quality Improvement and Support Material Reduction in 3D Printed Shell Products Based on Efficient Spectral Clustering. Int. J. Adv. Manuf. Technol. 2020, 107, 4273–4286.
  16. Yang, X.; Jia, X. Simple Primitive Recognition via Hierarchical Face Clustering. Comp. Vis. Media 2020, 6, 431–443.
  17. Pütz, S.; Wiemann, T.; Hertzberg, J. The Mesh Tools Package-Introducing Annotated 3D Triangle Maps in ROS. Rob. Auton. Syst. 2021, 138, 103688.
  18. Choi, J.; Kim, H.; Sastry, S.P.; Kim, J. A Deviation-Based Dynamic Vertex Reordering Technique for 2D Mesh Quality Improvement. Symmetry 2019, 11, 895.
  19. Lee, G.B.; Jeong, M.; Seok, Y.; Lee, S. Hierarchical Raster Occlusion Culling. Comput. Graph. Forum 2021, 40, 489–495.
  20. Xue, J.; Zhai, X.; Qu, H. Efficient Rendering of Large-Scale CAD Models on a GPU Virtualization Architecture with Model Geometry Metrics. In Proceedings of the IEEE International Conference on Service-Oriented System Engineering, San Francisco, CA, USA, 4–9 April 2019; pp. 251–2515.
  21. Koch, T.; Wimmer, M. Guided Visibility Sampling++. Proc. ACM Comput. Graph. Interact. Tech. 2021, 4, 1–16.
  22. Yoon, S.E.; Salomon, B.; Gayle, R.; Manocha, D. Quick-VDR: Interactive view-dependent rendering of massive models. In Proceedings of the IEEE Visualization, Austin, TX, USA, 10–15 October 2004; pp. 131–138.
  23. Serpa, Y.R.; Rodrigues, M. A comparative study on a novel drawcall-wise visibility culling and space-partitioning data structures. In Proceedings of the XV SBGames, São Paulo, Brazil, 8–10 September 2016; pp. 36–43.
  24. Serpa, Y.R.; Rodrigues, M.A.F. A Draw Call-Oriented Approach for Visibility of Static and Dynamic Scenes with Large Number of Triangles. Vis. Comput. 2019, 35, 549–563.
  25. Dong, Y.; Peng, C. Real-Time Large Crowd Rendering with Efficient Character and Instance Management on GPU. Int. J. Comput. Games Tech. 2019, 2019, 1792304.
  26. Gonakhchyan, V.I. Performance Model of Graphics Pipeline for a One-Pass Rendering of 3D Dynamic Scenes. Program. Comput. Soft. 2021, 47, 522–533.
  27. Zhang, L.; Wang, P.; Huang, C.; Ai, B.; Feng, W. A Method of Optimizing Terrain Rendering Using Digital Terrain Analysis. ISPRS Int. J. Geo-Inf. 2021, 10, 666.
  28. Ibrahim, M.; Rautek, P.; Reina, G.; Agus, M.; Hadwiger, M. Probabilistic Occlusion Culling Using Confidence Maps for High-Quality Rendering of Large Particle Data. IEEE Trans. Vis. Comput. Graph. 2022, 28, 573–582.
  29. Sander, P.V.; Nehab, D.; Barczak, J. Fast triangle reordering for vertex locality and reduced overdraw. ACM Trans. Graph. 2007, 26, 89–97.
  30. Han, S.; Sander, P.V. Triangle reordering for reduced overdraw in animated scenes. In Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, Redmond, WA, USA, 27–28 February 2016; pp. 23–27.
  31. Johnson, D.; Cohen, E. Spatialized Normal Cone Hierarchies. In Proceedings of the ACM Symposium on Interactive 3D Graphics, Research Triangle Park, NC, USA, 19–21 March 2001; pp. 129–134.
Figure 1. The overall process of our method. In the pre-computation step, the input mesh is vertically clustered, and each cluster is reordered based on the face normals. At runtime, a PVS (potentially visible set) is determined and rendered.
Figure 2. (a) Faces in a rendering scene. The faces with red arrows are front faces, and the faces with gray arrows are back faces. (b) Normal vectors of faces in the left scene. When faces are sorted by the normal directions, which is an angle in two-dimensional space, front faces can be represented as a sub-list.
Figure 3. (a) A potentially visible set is the same for all surface points in orthographic projection. (b) In perspective projection, potentially visible sets are different for different positions since the view direction changes with position.
Figure 4. Vertical clustering in a three-dimensional space.
Figure 5. Vertical clustering with three stripes for (a) the hand model and (b) the Venus model. Different colors show different stripes.
Figure 6. The potentially visible sets for different view points. The blue faces represent back faces and the green faces represent the potentially visible sets including all the front faces. (a) A horizontal view; (b) An arbitrary view; (c) A vertical view.
Figure 7. Computing the potentially visible set for a given view point. The parallel lines of latitude represent vertical clusters, and all faces in front of the intersection plane $I_c'$ are included in the potentially visible set.
Figure 8. Computing the potentially visible set of each stripe. All faces in front of the intersection plane $I_c'$ are included in the potentially visible set, which is shaded in green. The circle in each figure is the top or bottom circle of the stripe's bounding cylinder whose intersections are closer to the viewpoint. We compute the intersections A and B between $I_c'$ and the circle, and let the angles of the intersections from the Z axis be $\alpha$ and $\beta$. (a) When $\alpha < \beta$, the PVS is one sub-list of faces between $\alpha$ and $\beta$. (b) When $\alpha > \beta$, the PVS is represented as two sub-lists of faces, 0 to $\beta$ and $\alpha$ to $2\pi$.
Figure 9. The pre-computation time versus the number of faces.
Figure 10. The PVS culling ratio according to the distance of the object from the view point, in perspective projection.
Figure 11. The back faces culled by the PVS, represented in blue, at different distances to the view point, in perspective projection. The figures show the Venus model at distances (a) 2r, (b) 4r, and (c) 6r.
Figure 12. The back faces culled by our method for the Bunny and Venus models, in perspective projection. (a,c) show the back faces culled by the PVS in blue; the grey areas thus show the faces in the PVS. (b,d) show all the back faces in blue.
Figure 13. The rendering time according to the number of longitude ranges, N, for stripes. We tested N = 32, 64, 128, 256, 512. The y-axis shows the time difference as a percentage of the minimum rendering time.
Figure 14. The back faces culled by our method for various models, in perspective projection. The viewpoint is at the left of the models, and the distance from the object center to the viewpoint is 4r. The back faces culled by the PVS are colored in blue.
Table 1. Pre-computation time for mesh models.
Model | # Faces | Pre-Computation Time (s)
--- | --- | ---
Spot | 5856 | 0.356
Vase | 21,312 | 0.381
Venus | 39,569 | 0.436
Sphere | 65,024 | 0.533
Bunny | 69,630 | 0.515
Armadillo | 212,574 | 0.879
Hand | 654,666 | 2.046
Dragon | 871,414 | 2.589
Venus & Cupid | 960,026 | 2.878
Buddha | 1,087,451 | 3.110
Demon | 1,363,152 | 3.967
Table 2. The ratio of back faces culled by our method at a distance 4r from the viewpoint, where r is the radius of the bounding sphere.
Model | # Faces | All Back Faces | Back Faces Culled by PVS
--- | --- | --- | ---
Spot | 5856 | 56% | 38%
Vase | 21,312 | 50% | 42%
Venus | 39,569 | 58% | 42%
Sphere | 65,024 | 54% | 33%
Bunny | 69,630 | 49% | 47%
Armadillo | 212,574 | 52% | 40%
Hand | 654,666 | 52% | 40%
Dragon | 871,414 | 47% | 35%
Venus & Cupid | 960,026 | 49% | 38%
Buddha | 1,087,451 | 46% | 35%
Demon | 1,363,152 | 54% | 44%
Table 3. Rendering time in milliseconds of our method with various vertical clusterings compared to the conventional method. The best performance for each resolution is indicated in bold face.
# Faces | Conv. Method | V = 2 | V = 4 | V = 8 | V = 16 | V = 32 | V = 64 | V = 128 | V = 256
--- | --- | --- | --- | --- | --- | --- | --- | --- | ---
1080 | **3.59** | 4.0 | 4.35 | 5.11 | 6.43 | - | - | - | -
3968 | **3.6** | 4.04 | 4.34 | 4.98 | 6.28 | - | - | - | -
16K | **3.6** | 3.96 | 4.27 | 4.93 | 6.24 | 8.86 | - | - | -
65K | 9.94 | 7.40 | 7.21 | 6.70 | **6.46** | 8.92 | - | - | -
261K | 37.1 | 28.15 | 27.77 | 26.88 | 25.19 | **22.76** | 25.47 | - | -
1056K | 148.54 | 110.2 | 102.6 | 104.3 | 100.45 | 95.18 | 86.72 | 83.94 | **80.8**
Table 4. Rendering time for mesh models with and without our method in perspective projection, when the distance from the object center to the viewpoint is 4r.
Model | # Faces | Time w/o Our Method (ms) | Time w/ Our Method (ms) | Improvement
--- | --- | --- | --- | ---
Spot | 5856 | 2.67 | 2.96 | −9.8%
Vase | 21,312 | 2.77 | 2.97 | −6.7%
Venus | 39,569 | 6.97 | 3.84 | 81.5%
Sphere | 65,024 | 7.8 | 5.14 | 51.8%
Bunny | 69,630 | 11.81 | 6.31 | 87.6%
Armadillo | 212,574 | 34.14 | 16.63 | 105.3%
Hand | 654,666 | 107.53 | 64.95 | 65.6%
Dragon | 871,414 | 139.69 | 86.01 | 62.4%
Venus & Cupid | 960,026 | 149.00 | 86.39 | 72.5%
Buddha | 1,087,451 | 167.79 | 102.76 | 63.3%
Demon | 1,363,152 | 216.60 | 115.10 | 88.2%
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
