
Tsallis Entropy for Geometry Simplification

Pascual Castelló, Carlos González, Miguel Chover, Mateu Sbert and Miquel Feixas
1 Departamento de Lenguajes y Sistemas Informáticos, Institute of New Imaging Technologies, Universitat Jaume I, Campus de Riu Sec, Castellón E-12071, Spain
2 Institut d’Informàtica i Aplicacions, Universitat de Girona, Campus Montilivi, Girona E-17071, Spain
* Author to whom correspondence should be addressed.
Entropy 2011, 13(10), 1805-1828; https://doi.org/10.3390/e13101805
Submission received: 1 August 2011 / Revised: 20 September 2011 / Accepted: 27 September 2011 / Published: 29 September 2011
(This article belongs to the Special Issue Tsallis Entropy)

Abstract

This paper presents a study and a comparison of the use of different information-theoretic measures for polygonal mesh simplification. Generalized measures from Information Theory such as Havrda–Charvát–Tsallis entropy and mutual information have been applied. These measures have been used in the error metric of a surface simplification algorithm. We demonstrate that these measures are useful for simplifying three-dimensional polygonal meshes. We have also compared these metrics with the error metrics used in a geometry-based method and in an image-driven method. Quantitative results are presented in the comparison using the root-mean-square error (RMSE).

1. Introduction

Mesh simplification is the process of reducing the number of polygons in a surface while preserving its overall shape, volume and boundaries as much as possible (see Figure 1). This reduction is necessary in complex three-dimensional scenes, where high geometric complexity can become a rendering bottleneck. Simplification techniques reduce the load on the graphics processing unit (GPU).
Figure 1. Simplification example. (a) Original model. 49694 triangles. (b) Model simplified to 12% with our algorithm using TVMI (α = 1). 6285 triangles.
Simplification methods make use of a simplification operation and an error metric. The simplification operation determines how geometry will be removed, while the error metric establishes the order in which the simplification steps are performed.
Different criteria have been used to simplify polygonal meshes. The first simplification methods considered only the geometric information of the objects to establish the simplification order. These methods are fast and normally produce simplified objects with good geometric closeness, which is suitable for applications such as CAD/CAM, collision detection, and mesh signal processing. However, the most common use of simplification is to produce a model that is visually similar to the original. This is crucial for applications such as vehicle simulations, architectural walkthroughs, virtual reality and visualization. In recent years, methods that take visual information into account have also appeared. These methods increase simplification in those parts of the objects that are not visible to the user; they consider not only geometric properties of the models, but also visual aspects of the scene. Visual methods are usually slower than geometry-based methods, and most of them work by locating several cameras around the object in order to preserve the most visually relevant areas. Many articles about simplification techniques have appeared in the literature; some surveys can be found in [1,2]. Recently, information-theoretic measures, such as entropy and mutual information, have been applied to simplify polygonal meshes [3,4]. These measures capture information about the structure of the mesh, and this approach has proven to give very good results in visual simplification. Other visual methods in the literature, such as [5,6], also indirectly preserve some structural information, but that is not their main goal. More accurate measures designed specifically to quantify the structure of the mesh could therefore improve the results of existing visual methods.
In this paper, we present a study and a comparison of the use of different information-theoretic measures for polygonal mesh simplification. The discrete entropy and mutual information [7] and their Havrda–Charvát–Tsallis generalizations [8,9] have been applied to mesh simplification. We demonstrate that these measures are useful for simplifying three-dimensional polygonal meshes. In addition, we compare the results obtained with different parameters of these generalized formulas, and with simplifications produced by the geometry-based method presented in [10] and by the image-driven method presented in [5] using its own metric. We also present a quantitative comparison using the root-mean-square error (RMSE). We must emphasize that the information-theoretic measures analyzed in this paper produce a better preservation of the structural appearance of the models. The visual simplification approach by Lindstrom and Turk [5] only considers a pixel-to-pixel error, which is sensitive to translation, rotation or scaling of the images. That method preserves some structural information, but we show in this work that measures designed specifically to capture that information are able to outperform its results.
We have extended the viewpoint-driven simplification approach [4,11] to include these generalized information-theoretic measures. This simplification method uses the edge collapse as the decimation operation. To measure the error committed by a decimation operation, the approach uses a set of cameras regularly distributed around the object. This error gives the cost associated with each edge of the object and thus establishes the simplification order. The edge collapse is a simplification operation that removes an edge, replacing it with a vertex at each simplification step. All the neighboring triangles must be retriangulated after this operation in order to avoid the appearance of holes.
The remainder of this paper is organized as follows. Section 2 reviews some related work in mesh simplification and in viewpoint information measures. In Section 3, we review the viewpoint-driven simplification approach and extend the framework with new viewpoint information measures based on Tsallis entropy. Section 4 shows the results of our experiments and a comparison with a geometry-based simplification method. Finally, in Section 5 we summarize our work and propose some future work.

2. Background

2.1. Related Work

Polygonal mesh simplification is a well-studied problem in the field of computer graphics. Many good geometry-oriented proposals have been presented for this problem so far. Today, many researchers accept the quadric error metric developed by Garland and Heckbert [10] as one of the most important measures of geometric fidelity and the edge collapse operation introduced by Hoppe [12] as the preferred decimation mechanism used in many simplification algorithms.
Lindstrom and Turk [13,14] developed a purely geometric method that defines a memoryless quadric error metric to maintain the volume of the model and the surface area near the boundaries. However, this method does not distinguish between visible and hidden geometry. Simplified models do not always show high visual quality. To improve visual similarity, internal parts of the models must be more simplified than the visible surfaces.
Some improvements in geometry-oriented approaches to simplification have been the incorporation of mesh attributes such as color, normals and textures. Hoppe extended his initial work [15] by proposing a new quadric metric that includes colors and texture coordinates [16]. The QSlim algorithm [10] was also extended with those attributes [17]. Cohen et al. [18] developed an algorithm based on edge collapses that samples the vertex position, normal and color attributes of the initial mesh and then converts them into normal and texture maps. Their approach is based on a texture deviation metric.
One of the first attempts to address the problem of visual similarity was carried out by Lindstrom and Turk [5]. They developed a purely image-based metric. The cost of an edge collapse operation was determined by comparing the images rendered after the edge collapse to the initial ones, measuring the deviation as the mean-square error in luminance across the pixels of the images. Lindstrom and Turk used 20 viewpoints in their implementation. Their metric provides a natural way to balance the geometric and shading properties without requiring the user to perform an arbitrary weighting of them, but has a high temporal cost. Lindstrom and Turk’s image metric [5], based on a pixel-by-pixel comparison between pairs of images, is very sensitive to translations, rotations and scaling of the image contents.
Luebke and Hallen [19] presented a method to perform a view-dependent polygonal simplification using perceptual metrics. These metrics derive from a measure of low-level perceptibility of visual stimuli in humans. Later on, Williams et al. [20] extended this work for lit and textured meshes. The approach with perceptual metrics [19,20] could be inadequate to accomplish a drastic simplification. The contrast sensitivity function, CSF, can tell us whether a simplification operation is perceptible, but not which of two perceptible simplifications is better. The authors use distance to viewer to order the simplification operations to attempt to overcome this difficulty. Zhang and Turk [6] proposed a new algorithm that takes visibility into account. Visibility is defined as a function between the surfaces of the model and a surrounding sphere of cameras. The number of cameras increases both accuracy and calculation time. Zhang and Turk used up to 258 cameras. In the simplification process, their visibility measure is combined with the quadric error metric [10]. Lee et al. [21] introduced the idea of mesh saliency as a measure of regional importance for graphics meshes. This measure was incorporated into the QSlim algorithm [10].
Another approach to the problem of visual similarity in mesh simplification was addressed in [4,11]. Information Theory was applied to define new error metrics for mesh simplification. The concepts of viewpoint entropy and viewpoint mutual information were used to propose metrics that measure the deviation introduced by a decimation operation. The main idea is to preserve the entropy or mutual information of a given set of viewpoints during the simplification process. The authors show that this framework leads to approximations that preserve the visual appearance of the models.
Recently, Qu and Meyer [22] took the properties of the human visual system into consideration, together with the geometric aspects of a model, to develop a perceptually guided simplification algorithm. First, an importance map that indicates the visual masking of the visual patterns on the surface is computed. Then, the importance map is used to guide the simplification process using a geometry-based simplification algorithm (QSlim [10]). Visual approaches to the simplification problem such as [6,21,22] might present some practical difficulties, because the weight of the geometric measure relative to the visibility information must be tuned.

2.2. Viewpoint Information Measures

In this section, we review both viewpoint entropy [23] and viewpoint mutual information [24]. To introduce these measures, we firstly present an information channel between a set of viewpoints and the set of polygons of an object.
The information channel V → Z between the random variables V (input) and Z (output), which represent, respectively, a set of viewpoints V and the set of polygons Z of an object [24], is defined by a conditional probability matrix obtained from the projected areas of the polygons at each viewpoint. These conditional probabilities represent the probability of “seeing” a given polygon from a given viewpoint. Viewpoints will be indexed by v and polygons by z. The capital letters V and Z as arguments of p(·) will be used to denote probability distributions. For instance, while p(v) denotes the probability of a single viewpoint v, p(V) represents the input distribution of the set of viewpoints. The three basic elements of the viewpoint channel are:
  • Conditional probability matrix p(Z|V), where each element p(z|v) = a_z(v)/a_t is the normalized projected area of polygon z over the sphere of directions centered at viewpoint v; a_z(v) is the projected area of polygon z at viewpoint v and a_t is the total projected area of all polygons over the sphere of directions. Conditional probabilities fulfill $\sum_{z \in \mathcal{Z}} p(z|v) = 1$. The background can be taken into account as any other polygon.
  • Input distribution p ( V ) , which represents the probability of selecting each viewpoint. This probability can be obtained, for instance, from the normalization of the projected area of the object at each viewpoint or assigning the same probability to each viewpoint. In this paper we have adopted the second alternative, that is, we have assigned the same importance to each viewpoint v.
  • Output distribution p(Z), given by
    $$p(z) = \sum_{v \in \mathcal{V}} p(v)\, p(z|v)$$
    which represents the average projected area of polygon z.
From the previous definitions, viewpoint entropy [23] and viewpoint mutual information [24] can be defined. The viewpoint entropy (VE) of viewpoint v is defined by
$$H(Z|v) = -\sum_{z \in \mathcal{Z}} p(z|v) \log p(z|v)$$
VE measures the degree of uniformity of the projected area distribution at viewpoint v. The maximum viewpoint entropy is obtained when a certain viewpoint can see all the polygons with the same projected area. In [23], the best viewpoint is defined as the one that has maximum VE. The conditional entropy H ( Z | V ) of the channel is given by the average of all viewpoint entropies.
The mutual information of the channel V → Z, which expresses the degree of dependence or correlation between the set of viewpoints and the object [24], is defined by
$$I(V;Z) = \sum_{v \in \mathcal{V}} p(v) \sum_{z \in \mathcal{Z}} p(z|v) \log \frac{p(z|v)}{p(z)} = \sum_{v \in \mathcal{V}} p(v)\, I(v;Z)$$
where I ( v ; Z ) is the viewpoint mutual information (VMI) given by
$$I(v;Z) = \sum_{z \in \mathcal{Z}} p(z|v) \log \frac{p(z|v)}{p(z)}$$
VMI gives us the degree of dependence between the viewpoint v and the set of polygons, and it is interpreted as a measure of the quality of viewpoint v. Consequently, mutual information I ( V ; Z ) gives us the average quality of the set of viewpoints. Quality is considered here equivalent to representativeness. In [24], the best viewpoint is defined as the one that has minimum VMI, that is, the lowest values of VMI correspond to the most representative or relevant views. High values of the measure mean a high dependence between viewpoint v and the object, indicating a highly coupled view (for instance, between the viewpoint and a small number of polygons with low average visibility).
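To make these definitions concrete, the following sketch (ours, not part of the original implementation) computes VE and VMI for every viewpoint from a matrix of conditional probabilities; the function and variable names are illustrative, and the conditional probabilities are assumed to be the normalized projected areas described above.

```python
import numpy as np

def viewpoint_measures(p_z_given_v, p_v=None, eps=1e-12):
    """Viewpoint entropy (VE) and viewpoint mutual information (VMI).

    p_z_given_v: (Nv, Nz) matrix whose row v holds p(z|v), i.e., the normalized
                 projected areas of the polygons (plus background) seen from viewpoint v.
    p_v:         (Nv,) input distribution over viewpoints; uniform if omitted.
    """
    n_v = p_z_given_v.shape[0]
    if p_v is None:
        p_v = np.full(n_v, 1.0 / n_v)            # same importance for every viewpoint
    p_z = p_v @ p_z_given_v                       # output distribution: average projected areas

    # VE: H(Z|v) = -sum_z p(z|v) log p(z|v)
    ve = -np.sum(p_z_given_v * np.log(p_z_given_v + eps), axis=1)

    # VMI: I(v;Z) = sum_z p(z|v) log(p(z|v) / p(z))
    vmi = np.sum(p_z_given_v * np.log((p_z_given_v + eps) / (p_z + eps)), axis=1)

    # I(V;Z) is the average of the per-viewpoint VMI values weighted by p(v)
    return ve, vmi, float(p_v @ vmi)
```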

3. Simplification Algorithm

In this section, we review the viewpoint-driven approach [4,11] and extend its simplification algorithm to include new measures based on Tsallis Entropy [9,25]. The main idea of our approach is to define a function that measures the difference of information captured by a set of viewpoints before and after a simplification operation. The simplification operation reduces the complexity of the model by removing an item of the mesh. This difference of information will give us the cost of the simplification operation.
To reduce the complexity of a model, the algorithm uses the edge collapse operation. There are two slightly different edge collapse operations: one is known as the edge collapse, the other as the half edge collapse. Given an edge e joining vertices u and v, the edge collapse operation replaces e, u and v with a new vertex r, while the half edge collapse operation pulls v into u, removing e and leaving u in place. In both cases the operation removes the edge e along with the two polygons adjacent to it (see Figure 2).
Figure 2. The two possible half edge collapses for the edge highlighted with a thicker line. Triangles in grey will be removed.
Naturally, the surface that results from an edge collapse deviates from the initial surface by some amount, and since the goal of simplification is to reduce the number of polygons while retaining the overall look of the surface as much as possible, it is necessary to measure such a deviation. Some methods attempt to measure the total deviation from the initial surface to the completely simplified surface, for example, by tracking an accumulated error while keeping a history of the simplification changes. Other methods attempt to measure only the cost of each individual edge collapse (the local deviation introduced by a single simplification step) and plan the entire process as a sequence of steps of increasing cost [4,10,11,13,14,17].
This iterative algorithm proceeds in two stages. In the first stage, an initial collapse cost is assigned to every edge in the surface. In the second stage, edges are processed in order of increasing cost. Each collapsed edge is replaced by a vertex, and the collapse cost of all the edges now incident on the replacement vertex is recalculated, affecting the order of the remaining unprocessed edges. Not all edges selected for processing are collapsed; some are discarded right away if they do not satisfy certain topological and geometric conditions, since brute-force selection of edges can introduce mesh inconsistencies. To avoid artifacts, we only consider 2-manifold edges, i.e., edges that have at most two adjacent polygons, and boundary edges, i.e., edges having a single adjacent polygon. At each step, all remaining edges are potential candidates for collapsing and the one with the lowest cost is selected. The algorithm maintains an internal data structure (a priority queue) which allows the edges to be processed in increasing cost order. After applying an edge collapse, the remaining edges whose cost should be recalculated are simply marked as dirty. In a later iteration, if the edge extracted from the heap is dirty, it is not collapsed immediately; instead, its cost is recomputed and it is reinserted into the heap. This lazy heuristic was introduced in [26] to avoid unnecessary edge collapse cost evaluations.
Given an edge collapse e(u,v), note that the cost of collapsing vertex u to v may be different from the cost of collapsing v to u. To determine the best half edge collapse, the algorithm would need to render both possibilities and compute their error, but this penalizes the temporal cost due to the number of framebuffer readings needed to obtain the projected areas of the polygons of the model. To avoid this, the approach by Melax [27], which considers polygon normals, is followed. Before rendering the model, the two different half edge collapses, e(u,v) and e(v,u), are performed and the change in curvature around the local region is measured. The half edge collapse that produces the smaller change in curvature is then applied, and the simplification deviation is computed only for that half edge collapse. The algorithm ends when it reaches the desired number of polygons.
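The control flow described above can be summarized with the following sketch. It is our own illustration rather than the authors' implementation: the mesh methods (edges, is_valid, satisfies_conditions, edges_around_vertex, num_triangles) and the edge_cost and collapse callbacks are hypothetical placeholders for the machinery described in the text.

```python
import heapq
from itertools import count

def simplify(mesh, target_triangles, edge_cost, collapse):
    """Greedy half-edge-collapse simplification with a lazy ("dirty") priority queue."""
    tie = count()                                            # tie-breaker so edges are never compared directly
    heap = [(edge_cost(mesh, e), next(tie), e) for e in mesh.edges()]   # stage 1: initial costs
    heapq.heapify(heap)
    dirty = set()

    while mesh.num_triangles() > target_triangles and heap:  # stage 2: increasing-cost order
        cost, _, e = heapq.heappop(heap)
        if not mesh.is_valid(e):                              # edge removed by an earlier collapse
            continue
        if e in dirty:                                        # stale cost: recompute and reinsert
            dirty.discard(e)
            heapq.heappush(heap, (edge_cost(mesh, e), next(tie), e))
            continue
        if not mesh.satisfies_conditions(e):                  # topological/geometric conditions
            continue

        v = collapse(mesh, e)                                 # apply the cheaper half edge collapse
        for n in mesh.edges_around_vertex(v):                 # only surrounding edges need new costs
            dirty.add(n)
    return mesh
```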
The cost of a simplification operation in the algorithm is determined by the chosen error metric. In the next section, we introduce two new error metrics used to measure the cost of a simplification operation.

3.1. Error Metrics

Recently, viewpoint entropy and viewpoint mutual information have been applied to polygonal simplification [3,11]. As has been shown, both measures are sensitive to variations in the projected area distribution of the polygons and are thus useful to evaluate the visual distortion caused by the simplification process. We now introduce two new generalizations of viewpoint entropy and viewpoint mutual information based on Tsallis entropy [9,25]: viewpoint T-entropy and viewpoint T-mutual information.

3.1.1. Viewpoint Generalized Entropy

Rényi [28] and Havrda and Charvát [8] introduced, respectively, two different forms of generalized entropies. In both cases, the Shannon entropy is a particular case of these generalized entropies when the entropic index is equal to 1. We consider here the so-called Tsallis entropy to define the viewpoint generalized entropy.
Given a random variable X with alphabet $\mathcal{X}$ and probability distribution {p(x)}, the T-entropy is defined by
$$H_\alpha^T(X) = \frac{1 - \sum_{x \in \mathcal{X}} p(x)^\alpha}{\alpha - 1}$$
where the parameter α is called the entropic index. The T-entropy recovers the Shannon entropy, calculated using natural logarithms, when α → 1.
We define the viewpoint T-entropy (TVE) for a set of viewpoints V and the set of polygons Z as
$$H_\alpha^T(Z|v) = \frac{1 - \sum_{z \in \mathcal{Z}} p(z|v)^\alpha}{\alpha - 1} = \frac{1 - \sum_{i=0}^{N_z} \left(\frac{a_i}{a_t}\right)^\alpha}{\alpha - 1}$$
where Z is the random variable that represents the set of polygons $\mathcal{Z}$, $N_z$ is the number of polygons and {p(z|v)} = {a_i/a_t} (where a_i is the area of polygon i projected over the sphere of directions, a_0 represents the projected area of the background in open scenes, and $a_t = \sum_{i=0}^{N_z} a_i$ is the total projected area of all the polygons over the sphere of directions). The maximum entropy is again obtained when a certain viewpoint sees all the polygons with the same projected area. By extension, the best viewpoint can also be defined as the one that has maximum entropy.
As a conclusion, both Shannon and generalized viewpoint entropies quantify the uniformity of the distribution of the projected polygons. These measures essentially depend on both the number of polygons seen from a viewpoint and the balance of the projected distribution of these polygons. In general, entropy increases with both the number of polygons and the degree of uniformity of the projected distribution of polygons, and decreases in the contrary case.
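As a small illustrative sketch (ours, under the assumption that the p(z|v) matrix is available as in the earlier snippet), the viewpoint T-entropy can be evaluated for every viewpoint as follows; the α → 1 branch falls back to the Shannon form in natural logarithms.

```python
import numpy as np

def tsallis_viewpoint_entropy(p_z_given_v, alpha, eps=1e-12):
    """Viewpoint T-entropy H_alpha^T(Z|v), one value per row (viewpoint) of p_z_given_v."""
    p = np.asarray(p_z_given_v, dtype=np.float64)
    if abs(alpha - 1.0) < 1e-9:                            # limit case: Shannon viewpoint entropy
        return -np.sum(p * np.log(p + eps), axis=1)
    return (1.0 - np.sum(p ** alpha, axis=1)) / (alpha - 1.0)
```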

3.1.2. Viewpoint Generalized Mutual Information

From the Tsallis mutual information introduced by Taneja [29] and Tsallis [25], we will define the viewpoint generalized mutual information.
The Tsallis mutual information $I_\alpha^T(X;Y)$ between two discrete random variables X and Y (with alphabets $\mathcal{X}$, $\mathcal{Y}$, probability distributions {p(x)}, {p(y)}, and joint distribution {p(x,y)}) is defined as
$$I_\alpha^T(X;Y) = \frac{1}{1-\alpha}\left(1 - \sum_{x \in \mathcal{X}} \sum_{y \in \mathcal{Y}} \frac{p(x,y)^\alpha}{p(x)^{\alpha-1}\, p(y)^{\alpha-1}}\right)$$
where α is the entropic index.
From this equation, the T-mutual information between V and Z is defined by
$$I_\alpha^T(V;Z) = \sum_{v \in \mathcal{V}} p(v)\, \frac{1}{\alpha-1}\left(\sum_{z \in \mathcal{Z}} p(z|v)\, \frac{p(z|v)^{\alpha-1}}{p(z)^{\alpha-1}} - 1\right) = \frac{1}{N_v} \sum_{v \in \mathcal{V}} I_\alpha^T(v;Z)$$
where
$$I_\alpha^T(v;Z) = \frac{1}{\alpha-1}\left(\sum_{z \in \mathcal{Z}} p(z|v)\, \frac{p(z|v)^{\alpha-1}}{p(z)^{\alpha-1}} - 1\right)$$
is the viewpoint T-mutual information (TVMI). This definition recovers VMI when α → 1.
As we have seen, the Tsallis information measures introduced in this section represent an extension of the Shannon information measures, since these are included as particular cases. The sensitivity of the Tsallis information measures for polygonal simplification can be evaluated by modifying the parameter α. This parameter allows us to adjust the T-entropy and the T-mutual information so that these measures maximally capture the distortion produced by the polygonal simplification.
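The sketch below (again ours, with illustrative names) evaluates TVMI per viewpoint and the channel value $I_\alpha^T(V;Z)$; for α → 1 it reduces to the Shannon VMI computed earlier.

```python
import numpy as np

def tsallis_vmi(p_z_given_v, p_v=None, alpha=1.0, eps=1e-12):
    """Viewpoint T-mutual information I_alpha^T(v;Z) per viewpoint, and I_alpha^T(V;Z)."""
    p = np.asarray(p_z_given_v, dtype=np.float64)
    n_v = p.shape[0]
    if p_v is None:
        p_v = np.full(n_v, 1.0 / n_v)                      # uniform viewpoint importance
    p_z = p_v @ p                                          # average projected areas

    ratio = (p + eps) / (p_z + eps)
    if abs(alpha - 1.0) < 1e-9:                            # limit case: Shannon VMI
        vmi = np.sum(p * np.log(ratio), axis=1)
    else:
        vmi = (np.sum(p * ratio ** (alpha - 1.0), axis=1) - 1.0) / (alpha - 1.0)
    return vmi, float(p_v @ vmi)                           # I_alpha^T(V;Z) = sum_v p(v) I_alpha^T(v;Z)
```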

3.2. Edge Collapse Cost

Given an object and a set of viewpoints, TVMI and TVE express the information about the object that is accessible from a given viewpoint v. The variation of either of these measures at each viewpoint provides an error metric to guide the simplification process. Therefore, the simplification error deviation for an edge collapse e over all viewpoints V is defined by
$$C_e = \sum_{v \in \mathcal{V}} \left| I_v - I'_v \right|$$
where $I_v$ represents either the viewpoint T-mutual information (TVMI) or the viewpoint T-entropy (TVE) before the edge collapse e, and $I'_v$ the same measure afterward.
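A minimal sketch of this cost, assuming the per-viewpoint values before and after the candidate collapse have already been computed with one of the functions above (names are ours):

```python
import numpy as np

def collapse_cost(measure_before, measure_after):
    """C_e: sum over all viewpoints of |I_v - I'_v|, where I is either TVE or TVMI."""
    before = np.asarray(measure_before, dtype=np.float64)
    after = np.asarray(measure_after, dtype=np.float64)
    return float(np.sum(np.abs(before - after)))
```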

3.3. Implementation Issues

Both TVMI and TVE can be calculated incrementally, and we benefit from this property to speed up their calculation. Our measures are computed from the projected areas of the polygons of the model. In order to compute the projected areas of the polygons from a viewpoint v, we render the model and perform a pixel-by-pixel analysis; this means that the bottleneck of our algorithm mainly resides in the memory transfer cost. The model is rendered using OpenGL’s vertex buffer object and framebuffer object extensions. To reduce this overhead, instead of analyzing the whole image, we restrict the reading area to a small window that only includes the polygons adjacent to the edge collapse. To obtain this window, the bounding box of the edge collapse is first determined and then projected onto the screen. This method reduces the temporal cost, but may lead to a slight loss of quality, because after an edge collapse some hidden polygons might appear and their contribution to the formula is not measured [4,11]. The background is considered to be another polygon, and thus the total projected area is always the image resolution. Moreover, only a few polygons change after an edge collapse. Therefore, either TVMI or TVE can be computed at the beginning of the simplification process for the entire object and then its value can be updated successively. This feature is exploited in our current implementation.
The edge collapse e with the lowest deviation C_e is chosen at every simplification step. It is important to choose parameters such as the number of viewpoints and the image resolution carefully, since they affect the quality of the results. We performed measurements with 20 regularly distributed viewpoints and rendered 256 × 256 resolution images. More viewpoints can increase quality, but also significantly raise the temporal cost [4,11].
In principle, the collapse cost should be re-evaluated at each step for the entire set of remaining edges. However, not every edge collapse affects all the remaining edges, so we only recalculate the cost of a small group of edges surrounding the edge collapse: those adjacent to the vertices that are adjacent to the vertex v resulting from the half edge collapse. If the whole set of edges of the model is considered, the temporal cost increases considerably, but the results are not clearly better [4,11].

4. Results and Discussion

We show the results of our experiments carried out with four models: the Galo, the Skull, the Brush and the Junk. The Skull, Brush and Junk belong to the De Espona 3D Models Collection for 3DS Max [30]. We compared our results with those obtained with QSlim [10,17], a well-known surface simplification algorithm which is freely available at [31]. QSlim is based on a quadric error metric that measures vertex-to-plane distances; its error metric is related to curvature. In this paper we used the latest QSlim version, 2.1. In addition, we compared our results with our implementation of the image-driven simplification approach (IDS) [5]. This algorithm produces approximations according to visual similarity rather than geometric similarity, as QSlim does. The image-driven simplification approach guides the simplification process by the root-mean-square error (RMSE), a per-pixel-difference image metric. For IDS we used the same parameters as for TVMI and TVE, that is, 20 viewpoints, 256 × 256 images and the half edge collapse as the decimation operation. The images for IDS were rendered during the simplification with flat (per-triangle) shading. We did not presimplify the models with a geometric simplification technique (for instance, the geometry-driven “memoryless” method [13,14]) prior to using IDS, as the authors did in their experiments; instead, we directly simplified the original models using our own implementation of IDS without a previous simplification stage.

In order to determine the visual quality of the approximations we also used the RMSE between a set of images of the original and the simplified model taken from a sphere of camera positions. For this evaluation, we used the 24 vertices of the small rhombicuboctahedron as camera positions, 512 × 512 images and flat shading. Both the resolution and the position and number of cameras were deliberately different from the configuration used in our simplification algorithm, in order to perform a fairer comparison. The RMSE metric does not reflect well how the differences between two images are perceived; a metric that considers how the human visual system (HVS) works, that is, a perceptually motivated metric, would be more appropriate for mesh simplification, and we think this would be a relevant area for future research. In our simplification method (TVE and TVMI) we used the vertices of a regular dodecahedron (20 viewpoints) and 256 × 256 images. All tests were run on a Mac Pro with an Intel Xeon 2.8 GHz CPU, 4 GB of RAM and an NVIDIA 8800GT 512 MB GPU under Windows Vista SP2.
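For reference, the evaluation metric can be sketched as below. This is a generic luminance RMSE averaged over the set of views, under our assumption that the rendered images are available as grayscale arrays; it is not the exact evaluation harness used to produce the reported numbers.

```python
import numpy as np

def image_rmse(img_a, img_b):
    """Root-mean-square error between two equally sized luminance images."""
    diff = np.asarray(img_a, dtype=np.float64) - np.asarray(img_b, dtype=np.float64)
    return float(np.sqrt(np.mean(diff ** 2)))

def model_rmse(original_views, simplified_views):
    """Average per-view RMSE over corresponding renders of the original and simplified model."""
    return float(np.mean([image_rmse(a, b) for a, b in zip(original_views, simplified_views)]))
```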
Table 1 shows the visual error measured with RMSE and the simplification time for the models tested. Both TVMI and TVE clearly achieve better visual quality than the geometric approach (QSlim), and both also improve on the results of the IDS algorithm. In the case of the Junk model, we obtain an improvement of about 30% over QSlim and 21% over IDS. The QSlim algorithm is faster than our simplification method; in fact, QSlim is one of the fastest surface simplification techniques in the literature. However, QSlim’s metric is purely geometric and does not take the visibility of the models into account. In general, geometric simplification methods are considerably faster than those based on some kind of visibility measure. Nevertheless, simplification is an off-line preprocessing step often carried out only once, so simplification times are not as relevant as the final quality of the approximations. IDS shows simplification times similar to TVMI and TVE. The difference in visual quality between TVMI and TVE was very small in our tests: TVE accomplished the best results for the Skull and the Junk models, but the difference was around 1%. Viewpoint entropy tends to balance the area of the polygons that remain after an edge collapse operation; in principle, this may be more adequate for models that have polygons of similar size and very few flat regions. Viewpoint mutual information considers the mean visibility of the polygons from the set of viewpoints and usually improves on viewpoint entropy when a model has many flat regions, because it is capable of simplifying those regions more aggressively. In the Brush model, TVMI achieved better results than TVE because the model has many flat regions.
Table 1. Visual error (RMSE) and simplification time (seconds) for all models simplified with QSlim, IDS, TVE and TVMI.

| Model | Triangles (original) | Triangles (final) | QSlim RMSE | QSlim time | IDS RMSE | IDS time | TVE α | TVE RMSE | TVE time | TVMI α | TVMI RMSE | TVMI time |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Galo | 6592 | 600 | 11.82 | 0.07 | 8.65 | 162.44 | 1.5 | 7.53 | 271.26 | 0.5 | 7.88 | 296.97 |
| Skull | 9934 | 1784 | 11.06 | 0.08 | 11.05 | 360.78 | 1 | 10.31 | 285.80 | 0.5 | 10.37 | 343.74 |
| Brush | 20698 | 1200 | 15.49 | 0.11 | 14.86 | 863.13 | 1 | 13.56 | 683.75 | 0.5 | 13.47 | 822.36 |
| Junk | 61242 | 6212 | 13.66 | 0.50 | 11.73 | 2436.04 | 1 | 10.58 | 1929.76 | 0.5 | 10.73 | 2320.98 |
Figure 3 shows the results for the Galo model. As shown in this figure, TVE, TVMI and IDS achieve better visual results than QSlim. The comb and tail are clearly retained better with TVE, TVMI and IDS than with QSlim. Moreover, the silhouette of the model is preserved better with both TVE and TVMI than with IDS. TVE maintains the wattle better than TVMI, slightly reducing the visual error.
Figure 4a analyzes the RMSE for the Galo model simplified to 600 triangles using alpha values between 0 and 2. The best results in this model are obtained for TVMI when α = 0.5 and for TVE when α = 1.5. TVE slightly improves on the results of TVMI (around 4%). In the case of TVE, alpha values less than 1 considerably increase the visual error, whereas TVE with α = 1.5 accomplishes an improvement of around 10% over TVE with α = 1.
Figure 4b analyzes the RMSE for the Galo model while progressively changing the level of simplification. As depicted in this figure, the main improvement comes when the model complexity is reduced beyond the 20% level. For values from 90% to 30%, that is, when the model is not very simplified, IDS and QSlim show better visual similarity values than TVMI and TVE. However, from 20% downwards both TVE and TVMI clearly improve on the results of IDS and QSlim.
Figure 5 shows the results for the Skull model. As shown in this figure, TVMI, IDS and TVE achieve better visual results than QSlim. In particular, it can be appreciated (see Figure 6) that the teeth are retained better with TVMI, IDS and TVE than with QSlim. As pointed out earlier, our simplification technique considers the visibility of the polygons, and the mouth region of the Skull model has a great impact on visibility. We can also see that the silhouette of the model is preserved better with both TVMI and TVE than with IDS. In fact, this is one of the reasons why IDS shows a global quality similar to QSlim, although, as shown in Figure 6, IDS has a lower error in the teeth region than QSlim.
Figure 3. Galo model. (a) Original model. T = 6592. (b) QSlim. T = 600. (c) IDS. T = 600. (d) TVE (α = 1.5). T = 600. (e) TVMI (α = 0.5). T = 600. T indicates the number of triangles. Different approximations of the Galo model obtained with QSlim, IDS, TVE and TVMI. The original model is shown in (a). TVE and TVMI preserve the comb and tail better than QSlim and IDS.
Figure 4. RMSE for the Galo model. (a) RMSE vs. different alpha values. T = 600. (b) Decimation %. High percentage values indicate that the model has been simplified only slightly; low values correspond to a very coarse model.
Figure 5. Skull model. (a) Original model. T = 9934. (b) QSlim. T = 1784. (c) IDS. T = 1784. (d) TVE (α = 1). T = 1783. (e) TVMI (α = 0.5). T = 1784. Different approximations of the Skull model obtained with QSlim, IDS, TVE and TVMI. The original model is shown in (a).
Figure 7a analyzes the RMSE for the Skull model simplified to 1784 triangles using alpha values between 0 and 2. The best results in this model are obtained for TVMI when α = 0.5 and for TVE when α = 1. In the case of TVE, lower values of alpha considerably increase the visual error, whereas TVMI with α = 0.5 accomplishes an improvement of around 2% over TVMI with α = 1.
Figure 7b analyzes the RMSE for the Skull model while progressively changing the level of simplification. As depicted in this figure, the real improvement comes when the model complexity is reduced beyond the 40% level. For values from 90% to 40%, that is, when the model is not very simplified, QSlim shows visual similarity values similar to TVMI, TVE and IDS.
Figure 6. Close-ups of the Skull model. (a) Original model. (b) QSlim. T = 1784. RMSE = 47.58. (c) IDS. T = 1784. RMSE = 42.05. (d) TVE (α = 1). T = 1783. RMSE = 42.53. (e) TVMI (α = 0.5). T = 1784. RMSE = 40.24. These images show that the region around the mouth, especially the teeth in the lower jaw, is preserved better by TVMI, IDS and TVE than by QSlim. In the bottom row, difference images are shown. These difference images were produced by superimposing the simplified image over the original image; black signifies no difference, while red corresponds to the maximum difference.
Figure 7. RMSE for the Skull model. (a) RMSE vs. different alpha values. T = 1784. (b) Decimation %.
Figure 8 shows the results for the Brush model. As can be appreciated, TVMI, IDS and TVE achieve better visual results than QSlim, and are able to retain the brush pins better. As mentioned earlier, QSlim does not consider the visibility of the model, so it fails to preserve the pins. Conversely, regions with less visibility, such as the brush handle, are preserved better by QSlim than by IDS, TVE and TVMI. Both TVE and TVMI achieve better visual results than IDS because they maintain more pins in the brush.
Figure 8. Brush model. (a) Original model. T = 20698. (b) QSlim. T = 1200. (c) IDS. T = 1199. (d) TVE (α = 1). T = 1199. (e) TVMI (α = 0.5). T = 1199. Different approximations of the Brush model obtained with QSlim, IDS, TVE and TVMI. The original model is shown in (a). The results show that our measures (TVE and TVMI) are capable of retaining more polygons in the brush pins than IDS and QSlim.
Figure 9a depicts the RMSE for the Brush model simplified to 1200 triangles using alpha values between 0 and 2. The best visual results in this model are obtained when α = 0.5 for both TVE and TVMI. In the case of TVE, lower values of alpha considerably increase the visual error. In this model, TVMI achieves an improvement of around 1% over VMI (TVMI with α = 1), and TVE of around 1% over VE (TVE with α = 1). The difference in visual quality between TVE and TVMI is very small in this model (about 1%).
Figure 9b depicts the RMSE for the Brush model while progressively changing the level of simplification. The real improvement in visual similarity comes when the model complexity is reduced beyond the 30% level. When the model is not very simplified, QSlim shows better visual quality values than TVMI, TVE and IDS.
Figure 9. RMSE for the Brush model. (a) RMSE vs. different alpha values. T = 1200. (b) Decimation %.
Figure 10 shows the results for the Junk model. Both TVE and TVMI achieve the best visual results. TVMI, TVE and IDS were able to retain many ropes, whereas QSlim could not. Nevertheless, QSlim preserves the ship’s sails better than IDS. The RMSE we measured for Figure 10b (QSlim) was 16.10 and for Figure 10c (IDS) was 19.16. This means that for this particular view, QSlim accomplished better visual results than IDS, although the global RMSE for the model is lower for IDS than for QSlim (see Table 1). Figure 11 shows some close-ups of the ship’s bow, where we can clearly see that TVMI and TVE obtained the best results. QSlim and IDS present more red regions than TVMI and TVE; the red pixels indicate visual error in the image.
Figure 12a depicts the RMSE for the Junk model simplified to 6212 triangles using alpha values between 0 and 2. The best visual results in this model are obtained when α = 0.5 for TVMI and when α = 1 for TVE. As for the Skull and Brush models, lower values of alpha considerably increase the visual error for TVE. In this model, TVMI achieves an improvement of around 2% over VMI (TVMI with α = 1). The difference in visual quality between TVE and TVMI is very small in this model (around 1%). As shown in Table 1, TVE obtains the best visual similarity among the methods tested for this model.
Figure 12b depicts the RMSE for the Junk model while progressively changing the level of simplification. When the model complexity is reduced beyond the 50% level, all the visual methods tested (TVMI, TVE and IDS) obtain better visual quality than QSlim. However, as in our previous experiments, QSlim shows better results when the model is not very simplified.
Figure 10. Junk model. (a) Original model. T = 61242. (b) QSlim. T = 6212. (c) IDS. T = 6211. (d) TVE (α = 1). T = 6218. (e) TVMI (α = 0.5). T = 6219. Different approximations of the Junk model obtained with QSlim, IDS, TVE and TVMI. The original model is shown in (a). All the visual simplifications (IDS, TVE and TVMI) preserve the ropes better than the purely geometric simplification (QSlim). The silhouette of the model is retained better with TVE and TVMI than with IDS; see, for example, the sail at the ship’s stern in (c).
Figure 11. Close-ups of the Junk model. (a) Original model. (b) QSlim. T = 6212. RMSE = 30.15. (c) IDS. T = 6211. RMSE = 28.18. (d) TVE (α = 1). T = 6218. RMSE = 27.46. (e) TVMI (α = 0.5). T = 6219. RMSE = 26.59. The ropes, some masts and poles are retained better by TVE, TVMI and IDS than by QSlim. TVMI (see (e)) achieves an improvement of about 6% over IDS (see (c)) and of about 14% over QSlim (see (b)).
Figure 12. RMSE for the Junk model. (a) RMSE vs. different alpha values. T = 6212. (b) Decimation %.

5. Conclusions and Future Work

In this paper, we have presented an extension of the viewpoint-driven simplification approach [4,11], which belongs to those simplification methods whose main goal is to produce approximations according to visual appearance. We have incorporated two new generalized information-theoretic viewpoint selection measures, viewpoint T-entropy and viewpoint T-mutual information, into the viewpoint-driven simplification algorithm. These generalized measures include the Shannon measures as a special case: Tsallis measures have a parameter α, and when α = 1 they recover the usual Shannon entropy and mutual information. These measures capture information about the structure of the model and produce high-quality approximations when applied to mesh simplification, because they do not suffer from the sensitivity to image distortions and affine transformations that pixel-wise metrics have.
Furthermore, we have compared our results with two different simplification approaches that are well known in the computer graphics literature: the geometric approach QSlim [10,17], which is based on quadrics, a measure related to curvature, and the visual approach of image-driven simplification (IDS) [5], which uses a metric based on pixel differences among a set of rendered images, the root-mean-square error (RMSE). We also used RMSE for the quality evaluation because of its simplicity and efficiency and because it gives an idea of the visual similarity of the simplified models. For future work, it would be very interesting to develop metrics based on computational models of human vision and perceptual factors in order to measure the quality of the approximations. As shown in our tests, using the generalized measures (TVE and TVMI) in the viewpoint-driven method clearly improves the quality of the resulting approximations compared to geometric decimation methods, and even compared to visual simplification methods such as image-driven simplification. Our metrics are able to capture structural information from the models, which leads to better simplifications; this is especially noticeable when the complexity of the models is considerably reduced. In addition, our results show that the Tsallis-based measures (TVE and TVMI) slightly improve on the Shannon-based measures, VE and VMI, when applied to mesh decimation.
One important observation from our tests is that when models are not drastically simplified, the geometric approach gives superior quality. We think this is because purely geometry-based methods such as QSlim simplify all the very small polygons first, which has a very low impact on the visual quality of the mesh, whereas visual simplification methods such as IDS and ours remove hidden regions first, and removing these regions might have side effects on other parts of the models. The visual approach to mesh decimation is therefore most interesting when models are simplified drastically. A hybrid approach would be more adequate in terms of time efficiency, because visual simplification methods are quite slow compared to geometric ones.
Instead of selecting an entropic index α heuristically by experiments, an approach similar to [32] could be applied in which the entropic index α is obtained systematically by a least-squares estimation.
The current implementation of the viewpoint-driven simplification algorithm is slow; for future work it would be very useful to exploit the parallelism of the new GPU architectures in order to accelerate the rendering stages. With these architectures, even the metrics could be calculated on the GPU. This would avoid the traffic between the GPU and the CPU, which is where the bottleneck mainly resides.

Acknowledgments

This work was supported by the Spanish Ministry of Science and Innovation (Project TIN2010-21089-C03-03 and TIN2010-21089-C03-01) and Feder Funds, Bancaixa (Project P1.1B2010-08), Generalitat Valenciana (Project PROMETEO/2010/028) and Project 2009-SGR-643 of Generalitat de Catalunya (Catalan Government).

References

  1. Cignoni, P.; Montani, C.; Scopigno, R. A comparison of mesh simplification algorithms. Comput. Graph. 1998, 22, 37–54. [Google Scholar] [CrossRef]
  2. Luebke, D. A developer’s survey of polygonal simplification algorithms. IEEE Comput. Graph. Appl. 2001, 21, 24–35. [Google Scholar] [CrossRef]
  3. Castelló, P.; Sbert, M.; Chover, M.; Feixas, M. Viewpoint Entropy-Driven Simplification. In Proceedings of 15th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision, WSCG ’07, Plzen-Bory, Czech Republic, 29 January–1 February 2007; pp. 249–256.
  4. Castelló, P.; Sbert, M.; Chover, M.; Feixas, M. Viewpoint-based simplification using f-divergences. Inf. Sci. 2008, 178, 11, 2375–2388. [Google Scholar] [CrossRef]
  5. Lindstrom, P.; Turk, G. Image-driven simplification. ACM Trans. Graph. 2000, 19, 204–241. [Google Scholar] [CrossRef]
  6. Zhang, E.; Turk, G. Visibility-Guided Simplification. In Proceedings of IEEE Visualization ’02, Boston, MA, USA, 27 October–1 November 2002; Volume 31, pp. 267–274.
  7. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef]
  8. Havrda, J.; Charvát, F. Quantification method of classification processes. Concept of structural alpha-entropy. Kybernetika 1967, 3, 30–35. [Google Scholar]
  9. Tsallis, C. Possible generalization of Boltzmann–Gibbs statistics. J. Stat. Phys. 1988, 52, 479–487. [Google Scholar] [CrossRef]
  10. Garland, M.; Heckbert, P.S. Surface simplification using quadric error metrics. In Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH ’97, Los Angeles, CA, USA, 3–8 August 1997; pp. 209–216.
  11. Castelló, P.; Sbert, M.; Chover, M.; Feixas, M. Viewpoint-driven simplification using mutual information. Comput. Graph. 2008, 32, 451–463. [Google Scholar] [CrossRef]
  12. Hoppe, H.; DeRose, T.; Duchamp, T.; McDonald, J.; Stuetzle, W. Mesh optimization. In Proceedings of the 20th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH ’93, New York, NY, USA, 1993; pp. 19–26.
  13. Lindstrom, P.; Turk, G. Fast and memory efficient polygonal simplification. In Proceedings of the IEEE Visualization ’98, Research Triangle Park, NC, USA, 18–23 October 1998; pp. 279–286.
  14. Lindstrom, P.; Turk, G. Evaluation of memoryless simplification. IEEE Trans. Vis. Comput. Graph. 1999, 5, 98–115. [Google Scholar] [CrossRef]
  15. Hoppe, H. Progressive meshes. In Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH ’96, New Orleans, LA, USA, 4–9 August 1996; pp. 99–108.
  16. Hoppe, H. New quadric metric for simplifying meshes with appearance attributes. In Proceedings of the Conference on Visualization, VIS ’99, Alamitos, CA, USA, 24–29 October 1999; pp. 59–66.
  17. Garland, M.; Heckbert, P.S. Simplifying surfaces with color and texture using quadric error metrics. In Proceedings of the Conference on Visualization, VIS ’98, Los Alamitos, CA, USA, 18–23 October 1998; pp. 263–269.
  18. Cohen, J.; Olano, M.; Manocha, D. Appearance preserving simplification. In Proceedings of the SIGGRAPH ’98, Orlando, FL, USA, 19–24 July 1998; Volume 32, pp. 115–122.
  19. Luebke, D.; Hallen, B. Perceptually-driven simplification for interactive rendering. In Proceedings of the 12th Eurographics Workshop on Rendering Techniques, London, UK, 25–27 June 2001; pp. 223–234.
  20. Williams, N.; Luebke, D.; Cohen, J.D.; Kelley, M.; Schubert, B. Perceptually guided simplification of lit, textured meshes. In Proceedings of the 2003 symposium on Interactive 3D graphics, Monterey, CA, USA, 27–30 April 2003; pp. 113–121.
  21. Lee, C.H.; Varshney, A.; Jacobs, D.W. Mesh saliency. ACM Trans. Graph. 2004, 24, 659–666. [Google Scholar] [CrossRef]
  22. Qu, L.; Meyer, G.W. Perceptually guided polygon reduction. IEEE Trans. Vis. Comput. Graph. 2008, 14, 1015–1029. [Google Scholar] [PubMed]
  23. Vázquez, P.P.; Feixas, M.; Sbert, M.; Heidrich, W. Viewpoint selection using viewpoint entropy. In Proceedings of the Vision Modeling and Visualization Conference, VMV ’01, Stuttgart, Germany, 21–23 November 2001; pp. 273–280.
  24. Feixas, M.; Sbert, M.; González, F. A unified information-theoretic framework for viewpoint selection and mesh saliency. ACM Trans. Appl. Percept. 2009, 6, 1–23. [Google Scholar] [CrossRef]
  25. Tsallis, C. Generalized entropy-based criterion for consistent testing. Phys. Rev. E 1998, 58, 1442–1445. [Google Scholar] [CrossRef]
  26. Cohen, J.; Manocha, D.; Olano, M. Simplifying polygonal models using successive mappings. In Proceedings of IEEE Visualization ’97, Phoenix, AZ, USA, 19–24 October 1997; Yagel, R., Hagen, H., Eds.; pp. 395–402.
  27. Melax, S. A simple, fast, and effective polygon reduction algorithm. Game Dev. 1998, 44–48. [Google Scholar]
  28. Rényi, A. On measures of entropy and information. In Proceedings of the 4th Berkeley Symposium on Mathematical Statistics and Probability, Oakland, CA, USA, 20–30 July 1960; pp. 547–561.
  29. Taneja, I. Bivariate measures of type a and their applications. Tamkang J. Math. 1988, 19, 63–74. [Google Scholar]
  30. De Espona Infografica. 3D Enciclopedia. Available online: http://www.deespona.com/3denciclopedia/menu.html (accessed on 28 September 2011).
  31. QSlim Simplification Software. Available online: http://mgarland.org/software/qslim.html (accessed on 28 September 2011).
  32. Qing, X.; Sbert, M.; Lianping, X.; Jianfeng, Z. A novel adaptive sampling by Tsallis entropy. In Proceedings of the Computer Graphics, Imaging and Visualisation, CGIV ’07, Bangkok, Thailand, 14–17 August 2007; pp. 5–10.
